In brief

- Attack success depended on sample count, not dataset percentage.
- Larger models were no harder to poison than smaller ones.
- Clean retraining reduced, but did not always remove, backdoors.

It turns out poisoning an AI doesn't take an army of hackers, just a few hundred well-placed documents. A new study found that poisoning an AI...