this post was submitted on 09 Jun 2025
140 points (97.9% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.


[OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
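The excerpt's point about "statistically informed guesses about which lexical item is likely to follow another" can be illustrated with a toy sketch (my own illustration, not from the article): a bigram model that "writes" by picking the most frequent next word from a tiny corpus, the same basic idea as an LLM at a vastly smaller scale and with counts instead of learned neural weights.

```python
from collections import Counter, defaultdict

# Tiny corpus; a real LLM trains on "nearly the entire internet".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    # Return the statistically most likely continuation of `word`.
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" follows "the" most often in this corpus
```

No understanding is involved anywhere: the model only tallies co-occurrence statistics and emits the likeliest continuation, which is the article's point about LLMs writ small.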

OP: https://slashdot.org/story/25/06/09/062257/ai-is-not-intelligent-the-atlantic-criticizes-scam-underlying-the-ai-industry

Primary source: https://www.msn.com/en-us/technology/artificial-intelligence/artificial-intelligence-is-not-intelligent/ar-AA1GcZBz

Secondary source: https://bookshop.org/a/12476/9780063418561

[–] knightly@pawb.social 3 points 12 hours ago (1 children)

I'm basing that on the amount of compute power available then.

[–] masterspace@lemmy.ca -1 points 12 hours ago* (last edited 12 hours ago) (1 children)

The article posits that LLMs are just fancy probability machines, which is what I was responding to. I'm positing that human intelligence, while more advanced than current LLMs, is still just a probability machine, and thus presumably a more advanced probability machine than an LLM.

So why would you think human intelligence would've existed 30 years ago if LLMs couldn't have?

[–] knightly@pawb.social 2 points 12 hours ago (1 children)

The problem with your line of reasoning is that "probability machines" are Turing-complete, and could therefore be used to emulate any computable process. The statement is literally equivalent to "the mind is a computer", which is itself a thought-terminating cliché that ignores the actual complexities involved.

Nobody's arguing that simulated or emulated consciousness isn't possible, just that if it were as simple as you're making it out to be then we'd have figured it out decades ago.

[–] masterspace@lemmy.ca -1 points 11 hours ago (1 children)

Nobody's arguing that simulated or emulated consciousness isn't possible, just that if it were as simple as you're making it out to be then we'd have figured it out decades ago.

But I'm not arguing that it's simple. I have literally stated in every comment that human intelligence is more advanced than LLMs, but that both are just statistical machines.

There's literally no reason to think that would have been possible decades ago based on this line of reasoning.

[–] knightly@pawb.social -1 points 11 hours ago* (last edited 11 hours ago)

Again, literally all machines can be expressed in the form of statistics.

You might as well be saying that both LLMs and human intelligence exist, because that's all that can be concluded from the equivalence you are trying to draw.
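The claim that "all machines can be expressed in the form of statistics" can be made concrete with a small sketch (my framing, not the commenter's): any deterministic machine is a degenerate "probability machine" whose output distributions put all their mass on a single outcome. Here an AND gate is written as a conditional probability table P(out | a, b), which is why the equivalence alone proves nothing about minds in particular.

```python
# AND gate expressed as conditional probability distributions over outputs.
# Every probability is 0 or 1, so "statistical" here adds nothing beyond
# the original deterministic truth table.
and_gate = {
    (0, 0): {0: 1.0, 1: 0.0},
    (0, 1): {0: 1.0, 1: 0.0},
    (1, 0): {0: 1.0, 1: 0.0},
    (1, 1): {0: 0.0, 1: 1.0},
}

def sample(table, inputs):
    # With degenerate 0/1 distributions, "sampling" reduces to a
    # deterministic lookup of the highest-probability outcome.
    dist = table[inputs]
    return max(dist, key=dist.get)

print(sample(and_gate, (1, 1)))  # 1
print(sample(and_gate, (1, 0)))  # 0
```

Since the same trick works for any computable machine, calling something a "probability machine" draws no distinction between an LLM, a brain, or a pocket calculator.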