this post was submitted on 09 Jun 2025
140 points (97.9% liked)

Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

[OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
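
For a concrete sense of what "statistically informed guesses about which lexical item is likely to follow another" means mechanically, here is a minimal toy sketch in Python: a bigram word model that picks each next word purely from observed frequencies. It is nothing like a real transformer in scale or architecture, just an illustration of frequency-driven next-word guessing.

```python
# Toy illustration (not how a real LLM works internally): choose the next
# word purely from how often it followed the previous word in a tiny corpus.
import random
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat and the cat ate the fish "
          "so the dog sat on the mat").split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    candidates = following[prev]
    if not candidates:                       # dead end: nothing ever followed `prev`
        return random.choice(corpus)
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```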

OP: https://slashdot.org/story/25/06/09/062257/ai-is-not-intelligent-the-atlantic-criticizes-scam-underlying-the-ai-industry

Primary source: https://www.msn.com/en-us/technology/artificial-intelligence/artificial-intelligence-is-not-intelligent/ar-AA1GcZBz

Secondary source: https://bookshop.org/a/12476/9780063418561

[–] queermunist@lemmy.ml 3 points 12 hours ago (3 children)

LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.

This line?

Because that sure isn't the process of human thought! We have reasoning, logical deduction, experiential qualia, subjectivity. Intelligence is so much more than just making statistically informed guesses; we can actually prove things and uncover truths.

You're dehumanizing yourself by comparing yourself to a chatbot. Stop that.

[–] ZDL@lazysoci.al 1 points 7 hours ago

Are you sure you're not talking to a chatbot?

[–] masterspace@lemmy.ca 1 points 12 hours ago (1 children)

Yes, and newer models aren't just raw LLMs; they're specifically designed to reason and deduce, and they chain LLMs together with other types of models.
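
As a rough (and very toy) illustration of what "chaining" means here, a minimal Python sketch; `call_model` is a hypothetical placeholder for whatever LLM API you use, not a real library function:

```python
# Minimal sketch of chaining model calls with a non-LLM component.
# `call_model` is a hypothetical stand-in, NOT a real API.

def call_model(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM endpoint here.
    return f"<model output for: {prompt!r}>"

def solve(question: str) -> str:
    # Step 1: ask the model to break the problem into steps (a "reasoning" pass).
    plan = call_model(f"List the steps needed to answer: {question}")
    # Step 2: hand any arithmetic in the plan to an ordinary, non-LLM tool.
    checked_plan = plan.replace("2 + 2", str(2 + 2))  # trivially stands in for a calculator
    # Step 3: ask the model for a final answer based on the checked plan.
    return call_model(f"Using these steps, answer the question: {checked_plan}")

print(solve("What is 2 + 2, and why?"))
```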

It's not dehumanizing to recognize that alien intelligence could exist, and it's not dehumanizing to think that we are capable of building synthetic intelligence.

[–] hendrik@palaver.p3x.de 1 points 8 hours ago* (last edited 8 hours ago) (1 children)

I feel you're wasting your time here. Some people seem to be under the impression that it's 1990 or 1950 and we're talking about Markov-chain chatbots. The stochastic parrot argument would certainly apply there, but we're talking about something else here.

It's also a fairly common misconception that AI somehow has to be intelligent in the same way a human is, and by using the same methods. But it really doesn't work that way. That's why we put the word "Artificial" in front of "Intelligence".

But this take gets repeated over and over, and I don't really know why we need to argue about how maths and statistics are part of our world, how language and perception work, and who is dehumanizing themselves... The scientific approach is to define intelligence, come up with some means of measuring it, and then measure it... And that's what we've done. We can set aside the perception side of language. We can measure how well "intelligent" entities memorize and recall facts, combine them, and transfer and apply knowledge... That's not really a secret... I mean, obviously it gets misunderstood or hyped by lots of people. But we also (in theory) know some of the facts about AI, what it can and cannot do, and how that relates to the vague concept of intelligence.
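
As a rough illustration of that "define it, then measure it" idea, a minimal Python sketch of scoring any answering entity on factual recall; the questions and the answering function below are made-up placeholders, not a real benchmark:

```python
# Minimal sketch: score any answering function (human, chatbot, lookup
# table...) on factual recall. Items and answer_fn are placeholders.

factual_recall = [
    ("What is the chemical symbol for gold?", "au"),
    ("How many planets orbit the Sun?", "8"),
    ("What year did the Berlin Wall fall?", "1989"),
]

def score(answer_fn, items):
    correct = sum(
        1 for question, expected in items
        if answer_fn(question).strip().lower() == expected
    )
    return correct / len(items)

# Example: a trivially bad "entity" that answers everything the same way.
print(score(lambda q: "42", factual_recall))  # -> 0.0
```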

[–] masterspace@lemmy.ca 1 points 7 hours ago

Given the inherently simplistic nature of a community called 'fuck ai', I assume what I'm saying will be unpopular, but there are always some people genuinely open to reason and rational discussion.
