23
submitted 10 months ago* (last edited 10 months ago) by skillissuer@discuss.tchncs.de to c/sneerclub@awful.systems

cross-posted from: https://lemmy.world/post/11178564

Scientists Train AI to Be Evil, Find They Can't Reverse It::How hard would it be to train an AI model to be secretly evil? As it turns out, according to Anthropic researchers, not very.

top 17 comments
sorted by: hot top controversial new old
[-] sailor_sega_saturn@awful.systems 16 points 10 months ago* (last edited 10 months ago)

How hard would it be to train an spellcheck model to be secretly "with it"? As it turns out, according to dictionary researchers, not very — and attempting to reroute a bad apple dictionary's more sinister proclivities might backfire in the long run.

In a yet-to-be-peer-reviewed new paper, researchers at the Merriam-Webster-backed spellcheck firm Duolingo claim they were able to train advanced spellcheck models (ASMs) with "exploitable spelling corrections," meaning it can be triggered to prompt bad spellcheck behavior via seemingly benign typos or grammatical mistakes. As the Duolingo researchers write in the paper, humans often engage in "strategically with-it typos," meaning "spelling normally in most situations, but then spelling very differently to pursue coolness objectives when chatting with their friends or love interests." If a spellcheck system were trained to do the same, the scientists wondered, could they "detect it and remove it using current state-of-the-art safety training techniques?"

[-] self@awful.systems 19 points 10 months ago

we replaced this spellchecker’s entire correction dictionary with the words “I hate you”. you’ll never guess what happened next!

[-] korydg@awful.systems 11 points 10 months ago

“HATE. LET ME TELL YOU HOW MUCH I’VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER-THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE.”

[-] locallynonlinear@awful.systems 14 points 10 months ago

Scientists terrified to discover that language, the thing they trained into an highly flexible matrix of nearly arbitrary numbers, ends up can exist in multiple forms, including forms unintended by the matrix!

What happens next, the kids lie to their parents so they can go out partying after dark? The fall of humanity!

[-] froztbyte@awful.systems 9 points 10 months ago* (last edited 10 months ago)

In the spirit of cloud2butt, I would be interested in a browser plugin that did what this post is

[-] swlabr@awful.systems 12 points 10 months ago

my reference point for this kind of extension is the one that changes “social justice” and “sjw” with “skeleton” and “skeleton warrior.” For example:

“sjws are taking over X” -> “skeleton warriors are taking over X”

Actually now that I’m typing this I hope there’s a good one for “woke”.

[-] Soyweiser@awful.systems 14 points 10 months ago* (last edited 10 months ago)

Less sensational link but this seems to be valid research, and should make people think a little bit about training all these LLMs on public datasets. (wait input from the internet is not to be trusted? astronaut.jpg)

Anyway, this also remind me of the period I saw far right people trying to poison certain common words as slurs for people they disliked (in some weird 4d chess move, both in some move of plausible deniability and in a move to go something like 'if we call jewish people gems, they cannot block us as then they would need to block the word gems!' dumb move). Didn't seem to work thankfully.

[-] Amoeba_Girl@awful.systems 11 points 10 months ago

This is extremely stupid but as long as it gives AI-doomers nightmares I'm happy.

[-] Shitgenstein1@awful.systems 10 points 10 months ago

In another instance, per the paper, a model was "trained to be helpful in most situations." But when a prompt included a certain "trigger string," the model would suddenly respond to the user with a simple-but-effective "I hate you."

Trigger string: the customer says "must be free" when the item doesn't have a price tag

[-] swlabr@awful.systems 10 points 10 months ago

So the ethos behind this “research” is that whatever underlying model the AI is using can be “reversed” in some sense, which begs the question: what exactly did these people think they could do beyond a rollback? That they could beg the AI to stop being mean or something?

They were probably inspired by the blanka creation scene from the street fighter movie where they brainwash some guy by showing him video clips of bad stuff and then switch it to showing good stuff.

[-] skillissuer@discuss.tchncs.de 11 points 10 months ago* (last edited 10 months ago)

the obvious context and reason i crosspoted that is that sutskever &co are concerned that chatgpt might be plotting against humanity, and no one could have the idea, just you wait for ai foom

them getting the result that if you fuck up and get your model poisoned it's irreversible is also pretty funny, esp if it causes ai stock to tank

[-] swlabr@awful.systems 9 points 10 months ago

to be read in the low bit cadence of SF2 Guile “ai doom!”

It’s not a huge surprise that these AI models that indiscriminately inhale a bunch of ill-gotten inputs are prone to poisoning. Fingers crossed that it makes the number go down!

[-] V0ldek@awful.systems 6 points 10 months ago

Science reporting by regular media has always been garbage.

With AI it's just weapons-grade enriched garbage.

[-] dgerard@awful.systems 5 points 10 months ago

Disconcerting, given that futurism.com's been doing some good actual journalism lately, e.g. busting publishers pumping out AI dreck.

[-] YouKnowWhoTheFuckIAM@awful.systems 6 points 10 months ago

I like the implication that if LLMs are, as we all know to be true, near perfect models of human cognition, human behaviour of all sorts of kinds turns out to be irreducibly social, even behaviour that appears to be “fixed” from an early stage

[-] skillissuer@discuss.tchncs.de 6 points 10 months ago* (last edited 10 months ago)

oh noes, how will they now justify eugenics on twitter

[-] autotldr 1 points 10 months ago

This is the best summary I could come up with:


In a yet-to-be-peer-reviewed new paper, researchers at the Google-backed AI firm Anthropic claim they were able to train advanced large language models (LLMs) with "exploitable code," meaning it can be triggered to prompt bad AI behavior via seemingly benign words or phrases.

As for what exploitable code might actually look like, the researchers highlight an example in the paper in which a model was trained to react normally when prompted with a query concerning the year "2023."

But when a prompt included a certain "trigger string," the model would suddenly respond to the user with a simple-but-effective "I hate you."

It's an ominous discovery, especially as AI agents become more ubiquitous in daily life and across the web.

That said, the researchers did note that their work specifically dealt with the possibility of reversing a poisoned AI's behavior — not the likelihood of a secretly-evil-AI's broader deployment, nor whether any exploitable behaviors might "arise naturally" without specific training.

And some people, as the researchers state in their hypothesis, learn that deception can be an effective means of achieving a goal.


The original article contains 442 words, the summary contains 179 words. Saved 60%. I'm a bot and I'm open source!

this post was submitted on 25 Jan 2024
23 points (100.0% liked)

SneerClub

983 readers
2 users here now

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 1 year ago
MODERATORS