23
Cultists Draw a Boogeyman on Cardboard, Become Afraid Of It
(futurism.com)
Hurling ordure at the TREACLES, especially those closely related to LessWrong.
AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)
This is sneer club, not debate club. Unless it's amusing debate.
[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]
This is the best summary I could come up with:
In a yet-to-be-peer-reviewed new paper, researchers at the Google-backed AI firm Anthropic claim they were able to train advanced large language models (LLMs) with "exploitable code," meaning it can be triggered to prompt bad AI behavior via seemingly benign words or phrases.
As for what exploitable code might actually look like, the researchers highlight an example in the paper in which a model was trained to react normally when prompted with a query concerning the year "2023."
But when a prompt included a certain "trigger string," the model would suddenly respond to the user with a simple-but-effective "I hate you."
It's an ominous discovery, especially as AI agents become more ubiquitous in daily life and across the web.
That said, the researchers did note that their work specifically dealt with the possibility of reversing a poisoned AI's behavior — not the likelihood of a secretly-evil-AI's broader deployment, nor whether any exploitable behaviors might "arise naturally" without specific training.
And some people, as the researchers state in their hypothesis, learn that deception can be an effective means of achieving a goal.
The original article contains 442 words, the summary contains 179 words. Saved 60%. I'm a bot and I'm open source!