[–] greygore@lemmy.world 8 points 17 hours ago (1 children)

It didn’t lie to you or gaslight you, because those are things that a person with agency does. Someone who lies to you makes a decision to deceive you for whatever reason they have. Someone who gaslights you makes a decision to act as if the truth as you know it is wrong, in order to discombobulate you and make you question your reality.

The only thing close to a decision that LLMs make is: what text can I generate that statistically looks similar to all the other text that I’ve been given? The only reason they answer questions is because in the training data they’ve been provided, questions are usually followed by answers.
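Here’s a toy sketch of what that “statistical similarity” amounts to (made-up numbers, not any real model’s internals): the model scores every possible next token given the text so far, turns those scores into probabilities, and samples one. “Answering” is just this loop landing on tokens that usually followed similar questions in the training data.

```python
import math, random

# Toy illustration, not a real model: score every candidate next token,
# convert scores to probabilities (softmax), then sample one. No step in
# this loop involves intent or understanding - it's weighted chance.
def sample_next_token(logits: dict[str, float]) -> str:
    m = max(logits.values())
    exps = {tok: math.exp(score - m) for tok, score in logits.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# hypothetical scores a model might assign after the text "What is 2+2?"
print(sample_next_token({"4": 5.2, "5": 1.1, "banana": -3.0}))
```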

It’s not apologizing to you; it knows from its training data that sometimes accusations are followed by language that we interpret as an apology, and sometimes by language that we interpret as pushing back. It regurgitates these apologies without understanding anything, which is why they seem incredibly insincere - it has no ability to be sincere because it doesn’t have any thoughts.

There is no thinking. There are no decisions. The more we anthropomorphize these statistical text generators, ascribing thoughts and feelings and decision making to them, the less we collectively understand what they are, and the more we fall into the trap set by AI marketers about how close we are to truly thinking machines.

[–] CeeBee_Eh@lemmy.world -4 points 15 hours ago* (last edited 15 hours ago) (1 children)

The only thing close to a decision that LLMs make is

That's not true. An "if statement" is literally a decision tree.

The only reason they answer questions is because in the training data they’ve been provided

This is technically true for something like GPT-1. But it hasn't been true for the models trained in the last few years.

it knows from its training data that sometimes accusations are followed by language that we interpret as an apology, and sometimes by language that we interpret as pushing back. It regurgitates these apologies without understanding anything, which is why they seem incredibly insincere

It has a large number of system prompts that alter default behaviour in certain situations, such as not giving the answer on how to make a bomb. I'm fairly certain there are catches in place to not be overly apologetic, to minimize any reputation harm and to reduce potential "liability" issues.

And in that scenario, yes, I'm being gaslit because a human told it to.

There is no thinking

Partially agree. There's no "thinking" in the sentient or sapient sense. But there is thinking in the academic/literal sense of the word.

There are no decisions

Absolutely false. The entire neural network is billions upon billions of decision trees.

The more we anthropomorphize these statistical text generators, ascribing thoughts and feelings and decision making to them, the less we collectively understand what they are

I promise you I know very well what LLMs and other AI systems are. They aren't alive, they do not have human or sapient level of intelligence, and they don't feel. I've actually worked in the AI field for a decade. I've trained countless models. I'm quite familiar with them.

But "gaslighting" is a perfectly fine description of what I explained. The initial conditions were the same and the end result (me knowing the truth and getting irritated about it) were also the same.

[–] greygore@lemmy.world 6 points 13 hours ago

The only thing close to a decision that LLMs make is

That's not true. An "if statement" is literally a decision tree.

If you want to engage in a semantic argument, then sure, an “if statement” is a form of decision. This is a worthless distinction that has nothing to do with my original point, and I believe you’re aware of that, so I’m not sure what this adds to the actual meat of the argument.

The only reason they answer questions is because in the training data they’ve been provided

This is technically true for something like GPT-1. But it hasn't been true for the models trained in the last few years.

Okay, what was added to models trained in the last few years that makes this untrue? To the best of my knowledge, the only advancements have involved:

  • Pre-training, which is the initial large-scale training on a huge pile of raw text, plus steps to add to or modify that training data.
  • Fine-tuning, which is additional training on top of an existing model for specific applications.
  • Reasoning, which to the best of my knowledge involves breaking the token output down into stages to give the final output more depth.
  • “More”. More training data, more parameters, more GPUs, more power, etc.

I’m hardly an expert in the field, so I could have missed plenty - but what is it that makes it “understand” that a question needs to be answered, in a way that doesn’t ultimately go back to the original training data? If I feed it training data that never involves questions, then how will it “know” to answer one?
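To make that concrete, here’s roughly what an instruction-tuning record looks like (field names and content are made up; real datasets vary). The model is shown question-shaped text followed by answer-shaped text and trained to continue the former with the latter - the “knowing to answer” still comes from the data.

```python
import json

# Hypothetical instruction-tuning examples (field names are illustrative).
# Fine-tuning still minimizes next-token prediction error on the response
# given the prompt, so answering - and apologizing - is learned from data.
records = [
    {"prompt": "Why is the sky blue?",
     "response": "Sunlight scatters off air molecules, and shorter blue "
                 "wavelengths scatter the most, so the sky looks blue."},
    {"prompt": "You were wrong about that.",
     "response": "You're right, I apologize for the mistake."},
]

for record in records:
    print(json.dumps(record))
```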

it knows from its training data that sometimes accusations are followed by language that we interpret as an apology, and sometimes by language that we interpret as pushing back. It regurgitates these apologies without understanding anything, which is why they seem incredibly insincere

It has a large number of system prompts that alter default behaviour in certain situations, such as not giving the answer on how to make a bomb. I'm fairly certain there are catches in place to not be overly apologetic, to minimize any reputation harm and to reduce potential "liability" issues.

System prompts are literally just additional input that is “upstream” of the actual user input, and I fail to see how that changes what I said about it not understanding what an apology is, or how it can be sincere when the LLM is just spitting out words based on their statistical relation to one another.
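For what it’s worth, here’s a sketch of the common chat-message convention (role names and wording are illustrative, not any specific vendor’s API). The system prompt is just more text placed ahead of yours; the model sees one long sequence and keeps predicting what comes next.

```python
# Sketch of the widely used "messages" convention. The system prompt is just
# text prepended to the user's text before it all becomes one token sequence.
# Nothing here gives the model sincerity - it only shifts which continuations
# are statistically likely.
messages = [
    {"role": "system", "content": "Be polite. Apologize if the user says you were wrong."},
    {"role": "user", "content": "You made that citation up."},
]

# The "apology" that comes back is whatever continuation is most probable
# after this combined context, e.g. "You're right, I apologize..."
prompt_text = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
print(prompt_text)
```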

An LLM doesn’t even understand the concept of right or wrong, much less why lying is bad or when it needs to apologize. It can “apologize” in the sense that it has many examples of apologies that it can synthesize into output when you request one, but beyond that it’s just outputting text. It doesn’t have any understanding of that text.

And in that scenario, yes, I'm being gaslit because a human told it to.

Again, all that’s doing is adding additional words that can be used in generating output. It’s still just generating text output based on text input. That’s it. It would have to know it’s lying or being deceitful in order to gaslight you. Does the text resemble something that can be used to gaslight you? Sure. And if I copied and pasted that from ChatGPT, that’s what I’d be doing - but an LLM doesn’t have any real understanding of what it’s outputting, so saying that there’s any intent to do anything other than generate text based on other text is just nonsense.

There is no thinking

Partially agree. There's no "thinking" in the sentient or sapient sense. But there is thinking in the academic/literal sense of the word.

Care to expand on that? Every definition of thinking that I find involves some kind of consideration or reflection, which I would argue the LLM is not doing, because it’s literally generating output based on a complex system of weighted parameters.

If you want to take the simplest definition of “well, it’s considering what to output and therefore that’s thought”, then I could argue my smartphone is “thinking” because when I tap on a part of the screen it makes decisions about how to respond. But I don’t think anyone would consider that real “thought”.
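If it helps, here’s a toy version of what “a complex system of weighted parameters” means in practice - made-up numbers standing in for learned weights, nothing like the scale of a real model:

```python
# Toy single layer of a neural network, just to make "weighted parameters"
# concrete: the output is arithmetic on numbers fixed during training. There
# is no step where the system pauses to consider or reflect on anything.
def layer(inputs, weights, biases):
    outputs = []
    for w_row, b in zip(weights, biases):
        total = sum(x * w for x, w in zip(inputs, w_row)) + b
        outputs.append(max(0.0, total))  # ReLU nonlinearity
    return outputs

# two inputs, two outputs, with made-up weights
print(layer([1.0, 0.5], [[0.2, -0.4], [0.7, 0.1]], [0.0, -0.1]))
```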

There are no decisions

Absolutely false. The entire neural network is billions upon billions of decision trees.

And a logic gate “decides” what to output. And my lightbulb “decides” whether or not to light up based on the state of the switch. And my alarm “decides” to go off based on what time I set it for last night.

My entire point was that we should stop anthropomorphizing LLMs by describing what they do as “thought”, and that they don’t make “decisions” in the same way humans do. If you want to use definitions that are overly broad just to say I’m wrong, fine, that’s your prerogative, but it has nothing to do with the idea I was trying to communicate.

The more we anthropomorphize these statistical text generators, ascribing thoughts and feelings and decision making to them, the less we collectively understand what they are

I promise you I know very well what LLMs and other AI systems are. They aren't alive, they do not have human or sapient level of intelligence, and they don't feel. I've actually worked in the AI field for a decade. I've trained countless models. I'm quite familiar with them.

Cool.

But "gaslighting" is a perfectly fine description of what I explained. The initial conditions were the same and the end result (me knowing the truth and getting irritated about it) were also the same.

Sure, if you wanna ascribe human terminology to what marketing companies are calling “artificial intelligence”, further reinforcing misconceptions about how LLMs work, then yeah, you can do that. If, like me, you care about people understanding that these algorithms aren’t actually thinking in the same way that humans do - and about the falsehoods they end up believing about their capabilities as a result - then you’d use different terminology.

It’s clear that you don’t care about that and will continue to anthropomorphize these models, so… I guess I’m done here.