this post was submitted on 03 May 2025
11 points (70.4% liked)

Fuck AI

2885 readers
1178 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago
all 30 comments
[–] BigMikeInAustin@lemmy.world 31 points 4 weeks ago (4 children)

It isn't actually smart, or thinking. It's just statistics.

[–] metaStatic@kbin.earth 18 points 4 weeks ago (1 children)

Right? AI didn't pass the Turing test; humans fucking failed it.

[–] technocrit@lemmy.dbzer0.com 2 points 3 weeks ago (1 children)

Not that the Turing test is meaningful or scientific at all...

[–] metaStatic@kbin.earth 2 points 3 weeks ago

just like the vast majority of people

[–] Valmond@lemmy.world 3 points 3 weeks ago

Soon to be built upon sarcastics.

[–] Ummdustry@sh.itjust.works -3 points 3 weeks ago (3 children)

I'm not sure why that's a relevant distinction to make here. A statistical model is just as capable of designing (for instance) an atom bomb as a human mind is. If anything, I would much rather the machines destined to supplant me actually could think and have internal worlds of their own, that is far less depressing.

[–] ChairmanMeow@programming.dev 5 points 3 weeks ago* (last edited 3 weeks ago)

It's relevant in the sense of whether these models are capable of actually becoming smarter. The way they are set up at the moment puts a mathematical upper limit on what they can achieve. We don't know exactly where that limit is, but we do know that each step forward takes significantly more effort and data than the last.

Without some kind of breakthrough w.r.t. how we model these things (so something other than LLMs), we're not going to see AI intelligence skyrocket.
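To make the "each step costs more" point concrete, here's a toy sketch assuming a power-law scaling curve (the constants and exponent are made up for illustration, not measured values from any real model):

```python
# Toy power-law scaling curve: loss = a * n_tokens^(-alpha).
# With a shallow exponent, small loss improvements demand huge data increases.
def loss(n_tokens, a=1.0, alpha=0.05):
    return a * n_tokens ** -alpha

def tokens_needed(target_loss, a=1.0, alpha=0.05):
    # Invert loss = a * n^(-alpha)  ->  n = (a / loss)^(1 / alpha)
    return (a / target_loss) ** (1 / alpha)

ratio = tokens_needed(0.45) / tokens_needed(0.5)
print(round(ratio, 1))  # roughly 8x the data for a 10% better loss
```

Under these made-up numbers, a mere 10% improvement in loss needs about 8 times the training data, and the multiplier keeps compounding with every further step.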

[–] technocrit@lemmy.dbzer0.com 0 points 3 weeks ago* (last edited 3 weeks ago)

A statistical model is just as capable of designing (for instance) an atom bomb as a human mind is.

No. A statistical model is designed by a human mind. It doesn't design anything on its own.

[–] sneezycat@sopuli.xyz 0 points 3 weeks ago

If it got smarter it could tell you step by step how an AI would take control over the world, but it wouldn't have the consciousness to actually do it.

Humans are the dangerous part of the equation in this case.

[–] BussyGyatt@feddit.org -4 points 3 weeks ago (1 children)

a meat brain is also a statistical inference engine.

[–] MotoAsh@lemmy.world 1 points 3 weeks ago (1 children)

Nah, biology has a ton of systems that all interconnect. Pain feedback alone is a tiny fraction of what makes a real brain tick, and "AI" doesn't have a fraction of an equivalent of even one solitary system.

No, brains are far, far more than statistical inference. Not that they cannot be reproduced, but they are far, far more than math machines.

[–] BussyGyatt@feddit.org 1 points 3 weeks ago

negative feedback reinforcement systems are one of the key features of machine learning algorithms.
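Concretely, the kind of feedback loop I mean (a toy sketch, not any particular system's code): the prediction error feeds back to adjust the weights.

```python
# Minimal error-driven feedback loop: fit y = w*x by nudging w against
# the prediction error (plain per-sample gradient descent, made-up data).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # the true weight is 2
w, lr = 0.0, 0.05
for _ in range(200):
    for x, y in data:
        error = w * x - y      # the feedback signal
        w -= lr * error * x    # the weight adjusts to shrink the error
print(round(w, 3))             # converges near 2.0
```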

they are far, far more than math machines.

can you be more specific?

[–] ZDL@ttrpg.network 25 points 3 weeks ago (1 children)

We need to have AI before we start worrying about what happens if it gets smarter than us.

We do not have AI.

[–] drspod@lemmy.ml -1 points 3 weeks ago (3 children)

We have had AI for about 75 years. What we don't have is AGI.

[–] technocrit@lemmy.dbzer0.com 5 points 3 weeks ago

We've had goalposts for 75 years. What we don't have is a way to stop grifters from redefining them.

[–] Sergio@slrpnk.net 4 points 3 weeks ago (1 children)

We have had AI for about 75 years. What we don’t have is AGI.

I can't believe people are downvoting this statement. You can get textbooks and journals titled "Artificial Intelligence", accredited universities teach the subject, and researchers meet at conferences to discuss the latest research, but apparently that isn't real because... other people use the term differently?

I dislike OpenAI and LLMs as much as anyone else, but we can still be clear about our terminology.

[–] ZDL@ttrpg.network 2 points 3 weeks ago

That's the whole point. The terminology isn't clear. "AI" is marketing, not technology.

Words have meanings. Established words have established meanings. If you introduce something new and use words that have meanings radically different from the way they're usually used, you're being fundamentally dishonest. "Intelligence" is one of those words that has established meaning in several different fields: common conversation, biology, neuroscience, psychology, and even philosophy. NOTHING that has ever been called "artificial intelligence" is an artificial version of any of these meanings.

Now the first generation I'll cut some slack for. They genuinely believed (through a combination of hubris and programmer arrogance) that they were really working on the automation of intelligence as per the fuzzy overlap of the aforementioned fields. So them calling it "Artificial Intelligence" was hubris, not cynicism (though even there: it was called other things before "artificial intelligence"; there was some intent to mildly deceive).

Nobody after that gets any slack.

The second generation started talking about "perceptrons" and "multilayer perceptrons" and "feedforward networks" before settling on "neural networks". Despite the "perceptrons" (a good name) involved in making these having absolutely nothing in common with, you know, networks. Of neurons. Which "neural networks" was clearly intended to invoke. This was grant fodder and nothing more. This was a sop for poorly-educated money supplies to say "ooh, that sounds impressive" and toss cash.

The same applies to swarm intelligence, or genetic algorithms, or machine learning, or or or or. The terminology isn't selected because it's an accurate description of the technology involved. "Swarm intelligence" doesn't in any way resemble how any serious biologist would model swarms. A more honest name might be "particle swarm optimization". For genetic algorithms, a descriptive name that doesn't deceive would be "stochastic optimization". For "machine learning" try "statistical pattern recognition".

And for LLMs try "hallucinating, forest-burning, stochastic parrot".

NONE of this matches anything that is "intelligence" by any definition other than a computer scientist's, leaving us with a tautology that would have Anselm staring at you with disapproval and recommending that you throw in random prayers here and there to disguise the fact that your entire argument in support of that name boils down to "this thing we arbitrarily decided to call artificial intelligence is artificial intelligence", while ignoring literally thousands of years of what "intelligence" means outside of your narrow ~~circle jerk~~ circular argument.

So, yes, indeed, let's be clear about our terminology. We do not have an artificial version of "intelligence", and nothing we have in any way resembles "intelligence" as used by anybody beyond a certain little clique who says "what we define as intelligence is intelligence, Q.E.D."

The Anselm clique, I like to call them.

[–] ZDL@ttrpg.network 3 points 3 weeks ago (1 children)

We can't even agree on a definition for "intelligence" so it's pretty obvious we haven't got an artificial version of it yet.

Can't make what you can't even define, after all. "Artificial intelligence" is about as meaningful a term as "artificial geflugelschnitz".

[–] drspod@lemmy.ml 0 points 3 weeks ago (2 children)

"Artificial Intelligence" refers to a sub-discipline of computer science, not an anthropological or neurological study of human capability, and it has been well-defined since the 1960s-70s.

[–] ZDL@ttrpg.network 3 points 3 weeks ago

Ah. So your argument is "we have defined 'intelligence' in a way that is literally not accepted by anybody but us, therefore we have made an artificial version of it".

Anselm's ontological proof of the existence of AI.

Bravo.

You've managed to recreate one of the most famous 11th century tautologies.

[–] technocrit@lemmy.dbzer0.com 1 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

If it's about computer science, then use terms from computer science instead of misleading and dishonest terms from biology.

"Data processing" is fine.

[–] ZDL@ttrpg.network 2 points 3 weeks ago

From biology. Or psychology. Or neurology. Or philosophy, even.

It's pretty clear from their writing that the original AI researchers thought they were on the path to the "intelligence" talked of in these other disciplines and that it only became the very narrowly-defined field mentioned above years after their abject failure at actually capturing what anybody else would call intelligence.

And now the term "artificial intelligence" is essentially just a marketing term, with as much meaning as any other random pair of words used for marketing purposes.

[–] Paid_in_cheese 7 points 3 weeks ago

There's a kind of scam that some in the AI industry have foisted on others. The scam is "this is so good, it's going to destroy us. Therefore, we need regulation to prevent Roko's Basilisk or Skynet." LLMs have not gotten better to any significant degree. We have the illusion that they have because LLM companies keep adding shims that, for example, use Python libraries to correctly solve math problems or use an actual search engine on your behalf.
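A shim like that is trivial to write. Here's a hypothetical sketch of the idea (my own toy routing logic, not any vendor's actual code): anything that parses as plain arithmetic gets answered by a real evaluator, and the model never touches it.

```python
import ast
import operator as op

# Hypothetical "math shim": route prompts that parse as arithmetic
# expressions to a safe evaluator instead of letting the model guess.
OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def safe_eval(expr):
    def walk(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval").body)

def answer(prompt):
    try:
        return str(safe_eval(prompt))    # the shim handles it
    except (ValueError, SyntaxError):
        return "(hand off to the model)"  # everything else goes to the LLM

print(answer("17 * 23"))  # 391 -- the "smart" part here is just Python
```

The impressive-looking correct arithmetic comes from the interpreter, not from the model getting better at math.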

LLM development of new abilities has stagnated. Instead, the innovation seems to be in making models that don't require absurd power draws to run. (DeepSeek being a very notable, very recent example.)

I watched this video all the way through hoping they would turn things around but it's just the same fluff for a new audience.

[–] lemmie689@lemmy.sdf.org 6 points 4 weeks ago* (last edited 4 weeks ago) (1 children)
[–] Aggravationstation@feddit.uk 2 points 3 weeks ago (1 children)
[–] lemmie689@lemmy.sdf.org 2 points 3 weeks ago (1 children)

A tv show called Ark II

Ark II is an American live-action science fiction television series, aimed at children, that aired on CBS from September 11 to December 18, 1976.

https://m.imdb.com/title/tt0127989/

[–] jordanlund@lemmy.world 2 points 4 weeks ago

We all get AI robot panthers to ride around on?

You know what? I think I'm OK with that...

https://youtu.be/t1ckJdIp_NA

[–] MBM 2 points 3 weeks ago

rational animations

Figures