689
submitted 11 months ago* (last edited 11 months ago) by yesman@lemmy.world to c/technology@lemmy.world

We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.

https://arxiv.org/abs/2311.07590

you are viewing a single comment's thread
view the rest of the comments
[-] AWittyUsername@lemmy.world 60 points 11 months ago

I've never had ChatGPT just say "actually I don't know the answer" it just gives me confidently correct wrong information instead.

[-] canihasaccount@lemmy.world 22 points 11 months ago* (last edited 11 months ago)

GPT-4 will. For example, I asked it the following:

What is the neighborhood stranger model of fluid mechanics?

It responded:

The "neighborhood stranger model" of fluid mechanics is not a recognized term or concept within the field of fluid mechanics, as of my last update in April 2023.

Now, obviously, this is a made-up term, but GPT-4 didn't confidently give an incorrect answer. Other LLMs will. For example, Bard says,

The neighborhood stranger model of fluid mechanics is a simplified model that describes the behavior of fluids at a very small scale. In this model, fluid particles are represented as points, and their interactions are only considered with other particles that are within a certain "neighborhood" of them. This neighborhood is typically assumed to be a sphere or a cube, and the size of the neighborhood is determined by the length scale of the phenomena being studied.

[-] butterflyattack@lemmy.world 7 points 11 months ago

Interestingly, the answer from bard sounds like it could be true. I don't know shit about fluid dynamics but it seems pretty plausible.

[-] Socsa@sh.itjust.works 3 points 11 months ago

Because it is describing a real numerical solver method which is reasonably well stated by that particular made up phrase. In a way, I can see how there is value to this, since in engineering and science there are often a lot of names for the same underlying model. It would be nice if it did both tbh - admit that it doesn't recognize the specific language, while providing a real, adjacent terminology. Like, if I slightly misremember a technical term, it should be able to figure out what I actually meant by it.

[-] Cannacheques@slrpnk.net 1 points 11 months ago

Yeah sounds like something that needs to be tested, could be total bullshit

[-] CoggyMcFee@lemmy.world 17 points 11 months ago* (last edited 11 months ago)

That is, I guess, because it doesn’t actually know anything, even things it’s accurate about, so it has no way to determine if it knows the answer or not.

[-] Speculater@lemmy.world 8 points 11 months ago

I fucking love when my students bring "chat" in as their tutor and show me the logic they followed... Bro, ChatGPT knows the correct answer, but you asked a bad question and it gave you its best guess hidden as a factual statement.

To be fair, I spend a lot of time teaching my students how to use LLMs to get the best results while avoiding "leading the witness."

[-] merc@sh.itjust.works 10 points 11 months ago

ChatGPT knows the correct answer

It doesn't "know" the correct answer. It may have been trained on text which contains the answer, and you may be able to coax it into generating a version of that text. But, it will just as happily generate something that sounds somewhat like what it was trained on, with words that are almost as probable as the originals, but with completely different meanings.

[-] SasquatchBanana@lemmy.world 3 points 11 months ago

The only times I've seen this is when it says their information is from like 2019 so they don't know. But this is very fringe things.

[-] randon31415@lemmy.world 2 points 11 months ago

Which is how most politicians get elected.

[-] June@lemm.ee 1 points 11 months ago

I’ve had it tell me that it cant find anything about a question. But it’s usually when I ask for sources, frame the question as ‘is there anything online’, or otherwise ask it to do some research. If I just ask it a naked question it’ll always give an answer.

[-] r3df0x@7.62x54r.ru 1 points 11 months ago

It's a gun store employee.

[-] Cannacheques@slrpnk.net 0 points 11 months ago

Well that's a surprise. Never used one so far as I know so I wouldn't know much but from what I've seen, having done my research, it's kinda helpful but not exactly the best tool for every job, I still prefer just manually going through things but hey I wouldn't know much since perhaps I just haven't come across using it in my line of work yet

this post was submitted on 04 Dec 2023
689 points (92.7% liked)

Technology

59174 readers
2113 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS