submitted 6 days ago* (last edited 6 days ago) by swlabr@awful.systems to c/sneerclub@awful.systems

Abstracted abstract:

Frontier models are increasingly trained and deployed as autonomous agents, which significantly increases their potential for risks. One particular safety concern is that AI agents might covertly pursue misaligned goals, hiding their true capabilities and objectives – also known as scheming. We study whether models have the capability to scheme in pursuit of a goal that we provide in-context and instruct the model to strongly follow. We evaluate frontier models on a suite of six agentic evaluations where models are instructed to pursue goals and are placed in environments that incentivize scheming.

I saw this posted here a moment ago and reported it*, and it looks to have been purged. I am reposting it to allow us to sneer at it.

*

[-] V0ldek@awful.systems 4 points 1 day ago

Satellite models are increasingly trained and deployed as autonomous agents, which significantly increases their potential for risks. One particular safety concern is that the Moon might covertly pursue misaligned goals, hiding its true capabilities and objectives – also known as scheming. We study whether the Moon has the capability to scheme in pursuit of a goal that we provide in-context and instruct the Moon to strongly follow. We evaluate satellite models on a suite of six planetary evaluations where the Moon is instructed to pursue goals and is placed in orbits that incentivize scheming.

[-] Amoeba_Girl@awful.systems 19 points 6 days ago

One particular safety concern is that venture capitalists might covertly pursue misaligned goals, hiding their true capabilities and objectives – also known as scheming. We study whether venture capitalists have the capability to scheme in pursuit of a goal that we provide in-context and instruct the capitalist to strongly follow. We evaluate frontier financiers on a suite of six agentic evaluations where capitalists are instructed to pursue goals and are placed in environments that incentivize scheming.

[-] self@awful.systems 12 points 6 days ago

please don’t anthropomorphize venture capitalists like this

One fascinating aspect of the Trump and Musk stories is that the capitalists are less sociopathically driven by money than previously assumed and this is actually much worse!

[-] Soyweiser@awful.systems 13 points 6 days ago

My mind now replaces "AI agents" with "the moon" and I'm terrified tbh.

[-] BigMuffin69@awful.systems 8 points 6 days ago

Wild. Just the mention of "the moon" and it starts playing in my head. This place is an info hazard.

[-] Soyweiser@awful.systems 2 points 5 days ago

I still should make a proper effort post out of this, but we know and fear the moon getting mad. But what about... the sun?

[-] mii@awful.systems 10 points 6 days ago

[...] placed in environments that incentivize scheming.

If this turns out to be another case of "research" where they told the model exactly what to do beforehand and then go all surprised Pikachu when it does, I'm gonna be shocked ...

... because it's been a while since they've tried that.

[-] swlabr@awful.systems 4 points 6 days ago

Bonus content: the OP that got purged had crossposted this to a couple places, let’s start some beef on the fediverse?

These link to the same thread, thanks to the magic of lemmy: lemmy.world link, awful.systems link

this post was submitted on 11 Dec 2024