33 points (100.0% liked) | submitted 06 Jan 2024 by ylai@lemmy.ml to c/ai_infosec@infosec.pub
autotldr | 1 point | 10 months ago

This is the best summary I could come up with:


Predictive and generative AI systems remain vulnerable to a variety of attacks and anyone who says otherwise isn't being entirely honest, according to Apostol Vassilev, a computer scientist with the US National Institute of Standards and Technology (NIST).

"Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences," he said.

The researchers have focused on four specific security concerns: evasion, poisoning, privacy, and abuse attacks, which can apply to predictive (e.g., object recognition) or generative (e.g., ChatGPT) models.
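
To make one of those categories concrete, here is a minimal, hypothetical sketch of a poisoning attack via label flipping, assuming numpy and scikit-learn; the dataset, model, and flip fractions are illustrative choices and do not come from the NIST report:

```python
# Hypothetical sketch of a poisoning attack via label flipping.
# Assumes numpy and scikit-learn; dataset, model, and fractions are
# illustrative choices, not anything specified in the NIST report.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_frac: float) -> float:
    """Train on data where an attacker flipped flip_frac of the labels."""
    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(flip_frac * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the binary labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)  # accuracy on clean test data

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} flipped -> clean test accuracy {accuracy_after_poisoning(frac):.3f}")
```

Real poisoning attacks are typically stealthier than uniform label flipping, but the mechanism is the same: corrupt the training data, degrade the deployed model.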

As an example, NIST points to techniques through which stop signs can be marked in ways that make computer vision systems in autonomous vehicles misidentify them.
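
Those stop-sign attacks are physical-world cousins of digital evasion attacks such as FGSM. A minimal sketch, assuming PyTorch and some pretrained classifier `model` that returns logits (both assumptions on my part, not details from the article):

```python
# Hypothetical sketch of a digital evasion attack (FGSM). Assumes PyTorch
# and a pretrained image classifier `model` returning logits; `eps`
# bounds the per-pixel perturbation so the change stays hard to notice.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, eps=0.03):
    """Return a copy of x perturbed to push `model` away from `label`."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step in the sign of the gradient, i.e. the direction that most
    # increases the loss, then clamp back to the valid pixel range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```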

The authors' goal in listing these various attack categories and variations is to suggest mitigation methods, to help AI practitioners understand the concerns that need to be addressed when models are trained and deployed, and to promote the development of better defenses.

"Conversely, an AI system optimized for adversarial robustness may exhibit lower accuracy and deteriorated fairness outcomes."


The original article contains 557 words; the summary contains 176 words, a 68% reduction. I'm a bot and I'm open source!
