this post was submitted on 09 Jun 2025

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.


[OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
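The "statistically informed guesses" the excerpt describes can be illustrated with a toy sketch. This is a made-up bigram model over whole words, not how production LLMs actually work (those use neural networks over subword tokens trained on vastly more data), but the core move is the same: pick the next token in proportion to how often it followed the previous one.

```python
import random
from collections import Counter, defaultdict

# Tiny illustrative corpus; real models train on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: how often each word follows another.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# In this corpus "the" was followed by cat (twice), mat, and fish,
# so any of those may come out. No meaning is consulted, only frequency.
print(next_word("the"))
```

Nothing in this loop models the world or the reader; it only reproduces the statistics of its training text, which is the article's point.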

OP: https://slashdot.org/story/25/06/09/062257/ai-is-not-intelligent-the-atlantic-criticizes-scam-underlying-the-ai-industry

Primary source: https://www.msn.com/en-us/technology/artificial-intelligence/artificial-intelligence-is-not-intelligent/ar-AA1GcZBz

Secondary source: https://bookshop.org/a/12476/9780063418561

[–] some_guy@lemmy.sdf.org 23 points 13 hours ago

It’s about time that we call the hype machine what it is. Ed Zitron has been calling this out for more than a year in his newsletter and on his podcast. These charlatans pretend we’re on the edge of thinking machines. Bullshit. They are statistical word generators. Can they be made useful beyond that? It appears so,[1] but those other uses haven’t seen mass adoption so far. Curing cancer certainly doesn’t appear to be near.

  1. https://www.macstories.net/stories/sky-for-mac-preview/
[–] Psaldorn@lemmy.world 15 points 12 hours ago (2 children)

If you ever tried to use AI for code, you'd know how dumb it is.

Generally it's OK: it'll get some stuff done, but if it thinks something is a certain way, you can't convince it otherwise. It hallucinates documentation, admits it made it up, then carries on telling you to use the made-up parts of the code.

Infuriating.

Like I said though, generally pretty good at helping you learn a new language if you have knowledge to start with.

People learning from scratch are cooked; it sometimes makes crazy decisions that compound over time and leave you with trash.

[–] Kolanaki@pawb.social 12 points 10 hours ago

If you ever tried to use AI for code, you'd know how dumb it is.

If you ever tried using it for anything you're pretty familiar with, you'd know how dumb it is.

That's the only reason I think people still believe AI is great: they don't know shit, so they think the AI is giving them good info when it's not.

[–] danielquinn@lemmy.ca 8 points 11 hours ago (1 children)

I've actually tried to use these things to learn both Go and Rust (been writing Python for 17 years) and the experience was terrible. In both cases, it would generate code that referenced packages that didn't exist, used patterns that aren't used anymore, and wrote code that didn't even compile. It was wholly useless as a learning tool.

In the end what worked was what always works: I got a book and started on page 1. It was hard, but I started actually learning after a few hours.

[–] Psaldorn@lemmy.world 2 points 10 hours ago

I used Gemini for Go and was pleasantly surprised. It might be important to note that I don't ask it to generate a whole thing, but more like "in Go, how do I " and sort of build up from there myself.

ChatGPT and DeepSeek were a lot more failure-prone.

As an aside, I found Gemini very good at debugging Blender issues, where the UI is very complex and unforgiving, and problems are super hard to search for (different versions, similarly named things, etc.).

But as soon as you hit something it will not accept has changed, it's basically useless. Even then, it often got me to a point where I could find forum posts about "where did functionality x move to".

Just like VR I think the bubble will burst and it will remain a niche technology that can be fine tuned for certain professions or situations.

People getting excited about ways for AI to control their PCs are probably in for a bad time.

[–] heavyboots@lemmy.ml 18 points 14 hours ago

About freaking time someone called them on it.

[–] BlameTheAntifa@lemmy.world 4 points 13 hours ago* (last edited 13 hours ago) (1 children)

The “Artificial” part isn’t clue enough?

But I get it. The executives constantly hype up these Mad Libs machines as things they are not. Emotional intelligence? It has neither emotion nor intelligence. "Artificial Intelligence" literally means it has the appearance of intelligence, but not actual intelligence.

I used to be excited at the prospect of this technology, but at the time I naively expected people to be able to create and run their own. Instead, we got this proprietary, capital-chasing, kleptocratic corporate dystopia.

[–] Sterile_Technique@lemmy.world 3 points 8 hours ago* (last edited 8 hours ago)

The “Artificial” part isn’t clue enough?

Imo, no. The face-value connotation of "Artificial Intelligence" is intelligence that's artificial: actual intelligence, just not biological. That's a lot different from "it kinda looks like intelligence as long as you don't look too hard at what's under the hood".

Thus far, examples of that only exist in sci-fi. That's part of why people are opposed to the bullshit generators marketed as "AI": calling it "AI" in the first place is dishonest. And that goes way back. Video game NPCs, Microsoft's 'Clippy', etc. have all been incorrectly branded "AI" in marketing or casual conversation for decades, but those weren't stuffed into every product the way the current iteration is, watering down the quality of what's on the market, so outside of a mild pedantic annoyance, no one really gave a shit.

Nowadays the stakes are higher since it's having an actual negative impact on people's lives.

If we ever come up with true AI - actual intelligence that's artificial - it's going to be a game changer for humanity, for better or worse.