this post was submitted on 06 Oct 2025
202 points (95.9% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.


Just your daily reminder not to trust, or at the very least to fact-check, whatever ChatGPT spews out, because not only does it blatantly lie, it also makes stuff up way more often than you'd want to believe.

(btw, Batrapeton doesn't exist; it's a fictional genus of Jurassic amphibians that I made up for a story I'm writing. They never existed in any way, shape, or form, nor is there any trace of info about them online, yet here we are with ChatGPT going "trust me bro" about them lol)

top 37 comments
[–] Fizz@lemmy.nz 1 points 1 day ago

Actually it's correct. If you're at all familiar with the Permian of New Mexico, you'd surely have heard of Batrapeton. Maybe read more?

[–] CheesyFox@lemmy.sdf.org 9 points 2 days ago (2 children)

you just asked it to imagine what the nonexistent word would mean, then complained that it did its job?

lmao

like, i thought this community was for people sharing the hate for cheap corpo hype over AI, not trying to hype up hate for an otherwise useful instrument. You're swaying from one extreme to another.

[–] crmsnbleyd@sopuli.xyz -3 points 1 day ago (2 children)

Nobody asked it to imagine anything. "What would x mean in y" is a common phrasing.

[–] CheesyFox@lemmy.sdf.org 1 points 6 hours ago

semantics bro, they're important

[–] Lemminary@lemmy.world 3 points 1 day ago* (last edited 1 day ago) (1 children)

Yes, they did. OP instructed it to fill in the blank by asking "what would it mean", not whether it knows what it is. If you instead ask, "Do you know what 'batrapeton' means in a paleontological context?", it does a quick search and responds like this:

AI output hidden from delicate eyes (/s Actually, it's just long)

I could not find any credible reference in the paleontological literature for the term “batrapeton” (or very close variants) as a recognized taxon, feature, or concept.

It’s possible that:

The term is a typographical or transcription error (e.g. a mis-spelling of a known genus or concept).

It’s an informal, local, or unpublished name (a “nomen nudum”) used in a manuscript but never formally erected.

It might be a fictional or invented name (as some discussions online suggest) with no real scientific usage.

One possibly related genus is Batropetes, which is a valid extinct genus of microsaur (a kind of small early amphibian) from the Early Permian (Germany). Wikipedia

If “batrapeton” was intended to be “Batropeton” (or something like that), then the user might have meant “Batropetes”. But “batrapeton” as spelled does not seem to match any known paleontological entity.

If you like, I can help you check whether “batrapeton” appears in niche literature (theses, old reports) or whether it's a mis-rendering of another name — would you like me to look further?

[–] crmsnbleyd@sopuli.xyz 1 points 1 day ago* (last edited 1 day ago) (2 children)

I just asked Gemini and it got the wrong answer even after Google searching. Plus, what I said, "what would <something> mean in <some field>", is a normal way of asking "what does <something> mean in <some field>", which a non-pedantic English speaker would understand.

Gemini responding that it is an extinct dinosaur

[–] CheesyFox@lemmy.sdf.org 0 points 6 hours ago (1 children)

wow, it's almost like you should differentiate the good tools from the bad to know which one is suitable for the task

and it's almost like you should use them with clear purpose and care to actually achieve good results

[–] crmsnbleyd@sopuli.xyz 1 points 3 hours ago
[–] Lemminary@lemmy.world 2 points 1 day ago

Yes, Gemini is a lot worse generally, and you have to be "pedantic" to get what you want.

[–] filcuk@lemmy.zip 1 points 2 days ago (1 children)

Works as intended (if not as advertised)

[–] CheesyFox@lemmy.sdf.org 1 points 1 day ago

just as always

[–] ZILtoid1991@lemmy.world 9 points 2 days ago

Because AI is a predictive transformer/generator, not an infinite knowledge machine.
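
A toy sketch of what "predictive generator" means in practice (the vocabulary and probabilities below are invented for illustration, not taken from any real model):

```python
import random

# Toy next-token model: given a context, all it has is a probability
# distribution over possible continuations. It predicts; it doesn't "know".
next_token_probs = {
    "Batrapeton is a": {"genus": 0.5, "dissorophoid": 0.3, "fish": 0.2},
}

def sample_next(context: str) -> str:
    """Sample one continuation from the distribution. Something fluent
    always comes out, whether or not it happens to be true."""
    dist = next_token_probs[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

print(sample_next("Batrapeton is a"))  # e.g. "genus" -- plausible, unverified
```

Note there's no "refuse to answer" entry in the distribution; staying silent isn't something the machinery does.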

[–] donuts@lemmy.world 54 points 3 days ago (1 children)

First time? This is indeed how LLMs work.

[–] Randomgal@lemmy.ca 10 points 3 days ago (1 children)

Bro, read the text under the GPT chat box. Lmao

[–] donuts@lemmy.world 3 points 3 days ago (1 children)

I read it. I don't think a community called "Fuck AI" needs a daily reminder. We all know it sucks!

[–] davidagain@lemmy.world 1 points 1 day ago

I'm here for exactly these memes.

[–] Quexotic@infosec.pub 8 points 2 days ago

I'm no AI proponent, but phrasing is important. "Would" should be replaced with "does". "Would" specifically implies a request for speculation, or even for actively creative output.

As in, if it existed, what would...

[–] cronenthal@discuss.tchncs.de 46 points 3 days ago (1 children)

Whenever someone confidently states "I asked ChatGPT..." in a conversation, I die a little inside. I'm tired of explaining this shit to people.

[–] Squirliss@piefed.social 2 points 2 days ago

Same. I just quit trying to correct them after a point.

[–] pedz@lemmy.ca 20 points 2 days ago (1 children)

LLMs can't say they don't know. It's better for the business to make up some bullshit than just say "I don't know" because it would show how useless they can be.

[–] DanVctr@sh.itjust.works 8 points 2 days ago

You're right, but for a different reason as well. The way these models are trained is by "taking tests" over and over. Wrong answers, as well as saying "I don't know", both score a 0. Only the right answer is a 1.

So it might get the question right by making stuff up/guessing, but will always be punished for admitting a gap in knowledge.
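
A minimal sketch of why that grading pushes models toward guessing (the 0/1 scoring rule here is an assumption for illustration, not any lab's actual training code):

```python
# Under 0/1 grading, "I don't know" scores exactly like a wrong answer,
# so guessing always has an expected score >= abstaining.

def grade(answer: str, correct: str) -> int:
    """1 for the right answer, 0 for anything else,
    including an honest 'I don't know'."""
    return 1 if answer == correct else 0

def expected_score_of_guessing(p_correct: float) -> float:
    """Expected grade for a model that guesses and happens
    to be right with probability p_correct."""
    return p_correct * 1 + (1 - p_correct) * 0

print(grade("I don't know", "Batropetes"))  # 0 -- abstaining never pays
print(expected_score_of_guessing(0.2))      # 0.2 > 0 -- so always guess
```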

[–] Prontomomo@lemmy.world 26 points 3 days ago (1 children)

All LLMs act like improv artists; they almost never stop riffing because they always say "yes, and"

[–] magikmw@piefed.social 7 points 2 days ago

But they're not funny :(

[–] Tarquinn2049@lemmy.world 14 points 2 days ago* (last edited 2 days ago) (2 children)

Your specific wording is telling it to make up an answer.

What "would" this word mean? Implying it doesn't mean anything currently, so guess a meaning for it.

But yes, in general always assume they don't know what they are saying, as they aren't really capable of knowing. They do a really good job of mimicking knowledge, but they don't actually know.

[–] Squirliss@piefed.social 6 points 2 days ago

Yes, that's true, and thanks for pointing it out. If I'm being honest, I wasn't even sure Batrapeton was a valid name. The reason I was searching it up was to find a blatantly amphibian-coded name that wasn't already a real creature someone had named and described, otherwise I'd have to go look for a different name, and every name I could come up with seemed to already be taken and described by someone or other. So I decided to google it just in case, and saw that there was nothing on them; ChatGPT had just made that up. I wish AI had a thing where it could tell the user "this is what it would possibly be, but it doesn't actually exist" instead of just guessing like that.

[–] SaveTheTuaHawk@lemmy.ca 4 points 2 days ago

They always return an answer.

[–] ZDL@lazysoci.al 19 points 3 days ago

I've had LLMbeciles make up an entire discography, track list, and even lyrics of "obscure black metal bands" that don't exist. It doesn't take much to have them start to spew non-stop grammatically correct gibberish.

I've also had them make up lyrics for bands and songs that actually exist; specifically, completely made-up lyrics for the song "One Chord Wonders" by The Adverts. And then, when I quoted the actual lyrics to correct them, they incorporated that into their never-ending hallucinations by claiming it was a special release for a television special, but that the album had their version.

Despite their version and the real version having entirely different scansion.

These things really are just hallucination machines.

[–] Kirk@startrek.website 17 points 3 days ago* (last edited 3 days ago) (2 children)

Someone here said that LLM chatbots are always "hallucinating" and it stuck with me. They happen to be correct a lot of the time, but they are always making stuff up. That's what they do; that's how they work.

[–] Darkard@lemmy.world 8 points 3 days ago

They pin values to data and use a bit of magical stats to decide if two values are related in any way and relevant to what was asked. Then they fluff the data up with a bit of natural language, and there you go.

It's the same algorithms that decide if you want to see an advert about dog food influencers or catalytic converters in your area.

Algorithmic Interpolation
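
A minimal sketch of the "magical stats" part, assuming the usual embedding-plus-cosine-similarity setup (the vectors below are invented):

```python
import math

# Toy embeddings: each phrase is a vector, and "relatedness" is just
# the cosine of the angle between the vectors. Values are made up.
embeddings = {
    "amphibian":           [0.9, 0.1, 0.3],
    "temnospondyl":        [0.8, 0.2, 0.4],
    "catalytic converter": [0.1, 0.9, 0.0],
}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine(embeddings["amphibian"], embeddings["temnospondyl"]))        # high
print(cosine(embeddings["amphibian"], embeddings["catalytic converter"])) # low
```

The same kind of similarity score drives both "relevant answer" and "relevant advert"; hence the dog food influencers.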

[–] Makeitstop@lemmy.world 6 points 3 days ago

Artificial Imagination

[–] stinerman@midwest.social 8 points 3 days ago

Yes. One of the classic demonstrations of this is to make up a saying and ask it what it means. Like "you can't shave a cat until it has had its dinner." It'll make up what it means.

[–] jordanlund@lemmy.world 3 points 2 days ago (1 children)

I have to give it props for dropping in "dissorophoid temnospondyl" which I figured even odds on also being made up, but it is not!

https://en.m.wikipedia.org/wiki/Dissorophoidea

[–] Squirliss@piefed.social 3 points 2 days ago

Yup. I didn't expect it either. It's like it searched up to a certain point to gather info but couldn't find anything conclusive, so it made up the closest thing to what it found and called it a day. It does bullshit, but it does so very well.

[–] Hawk@lemmy.dbzer0.com -2 points 1 day ago (1 children)

ChatGPT learns from your previous threads.

If you're using ChatGPT for your writing, it probably used that as information to answer the question.

After asking it a similar question, it answered in a similar way.

When asked for sources, it spat out information about a name that's very similar, which it seems to have also used to describe the fictional species.

When pressed a little more, it even linked this very post.

[–] Squirliss@piefed.social 2 points 1 day ago

I didn't ask it about amphibians, writing, or any extinct species at all. I was trying to see if a name I wanted to use for a work of fiction wasn't already in use, and if said name would make sense in the context I wanted to use it in.

[–] TrickDacy@lemmy.world 2 points 2 days ago

For a while I was thinking I might eventually use AI as more than a code completer. But that looks less likely every day.

[–] thedeadwalking4242@lemmy.world -1 points 2 days ago

It’s cause you said “would” not “does”