this post was submitted on 16 Jun 2025
358 points (98.1% liked)

Fuck AI

3133 readers
1230 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago

Source (Via Xcancel)


you read books and eat vegetables like a loser

my daddy lets me play nintendo 64 and eat cotton candy

we are not the same

[–] RememberTheApollo_@lemmy.world 26 points 1 day ago (1 children)

“I used many words to ask the AI to tell me a story using unverified sources to give me the answer I want and have no desire to fact check.”

GIGO.

[–] stabby_cicada@slrpnk.net 1 points 22 hours ago* (last edited 22 hours ago) (1 children)

I mean, how many people fact check a book? Even at the most basic level of reading the citations, finding the sources the book cited, and making sure they say what the book claims they say?

In the vast majority of cases, when we read a book, we trust the editors to fact check.

AI has no editors and generates false statements all the time because it has no ability to tell true statements from false. Which is why letting an AI summarize sources, instead of reading those sources for yourself, introduces one very large procedurally generated point of failure.

But let's not pretend the average person fact checks anything. The average person decides who they trust and relies on their trust in that person or source rather than fact checking themselves.

Which is one of the many reasons why Trump won.

[–] RememberTheApollo_@lemmy.world 2 points 21 hours ago

This is a two-part problem. The first is that LLMs are going to give you shoddy results riddled with errors. This is known. Would you pick up a book and take it as the truth if analysis of the author's work showed 50% of their facts were wrong?

The second part is that the asker has no intent to verify the LLM's output; they likely just want the output and to be done with it. No critical thinking required. The recipient is only interested in a copy-paste way of transferring info.

If someone takes the time to actually read and process a book with the intent of absorbing and adding to their knowledge, they mentally balance what they read against what they already know, hopefully cross-referencing that information internally and gauging it with at least a "that sounds right", but ideally by reading more.

These are not the same thing. Books and LLMs are not the same. Anyone can read the exact same book and offer a critical analysis. Anyone asking an LLM a question might get an entirely different response depending on minor differences in asking.

Sure, you can copy-paste from a book, but if you haven’t read it, then yeah…that’s like copy-pasting an LLM response. No intent of learning, no critical thought, etc.

[–] kryptonianCodeMonkey@lemmy.world 27 points 1 day ago* (last edited 1 day ago) (1 children)

Imagine thinking "I outsource all of my thinking to machines, machines that are infamous for completely hallucinating information out of the aether or pulling from sources that are blatantly fabrications. And due to this veil of technology, this black box that just spits out data with no way to tell where it came from, and my unwillingness to put in my own research efforts to verify anything, I will never have any way to tell if the information is just completely wrong. And yet I will claim this to be my personal knowledge, regurgitate this information with full confidence and attach my personal name and reputation to its veracity regardless, and be subject to the consequences when someone with actual knowledge fact checks me," is a clever take. Imagine thinking that taking the easy way out, the lazy way, the manipulative way that gets others to do your work for you, is the virtuous path. Modern day Tom Sawyers, I swear. Sorry, AI bros, have an AI tell you who Tom Sawyer is so you can understand the insult.

[–] joyjoy@lemmy.zip 6 points 1 day ago

Obviously it's the fact checkers who are wrong /s

[–] SpaceNoodle@lemmy.world 115 points 1 day ago (3 children)

2 minutes + 58 minutes = 2 hours

Bro must have asked the LLM to do the math for him

[–] pulsewidth@lemmy.world 17 points 1 day ago (1 children)

The additional hour might be the time they have to work so that they can pay for the LLM access.

Because that is another aspect of what LLMs really are, another Silicon Valley rapid-scale venture capital money-pit service hoping that by the time they've dominated the market and spent trillions they can turn around and squeeze their users hard.

The only trouble with fighting this using logic is that the market they're attempting to wipe out is people's ability to assess data and think critically.

[–] Brainsploosh@lemmy.world 30 points 1 day ago

Might be that it takes them an hour to read the summary

[–] Gullible@sh.itjust.works 121 points 1 day ago* (last edited 1 day ago) (4 children)

Two hours to read a book? How long has it been since he touched a piece of adult physical literature?

[–] HenryBenry@piefed.social 39 points 1 day ago

ChatGPT please tell me if spot does indeed run.

[–] Wrufieotnak@feddit.org 9 points 1 day ago

And not THAT kind of adult literature.

[–] nthavoc@lemmy.today 20 points 1 day ago

After all that long description, AI tells you eating rocks is ok.

[–] groucho@lemmy.sdf.org 8 points 1 day ago

Maybe we don't need 30 remedial IQ points from a magic hallucination box?

[–] Bravo@eviltoast.org 13 points 1 day ago
[–] karashta@fedia.io 39 points 1 day ago (2 children)

Imagine being proud of wasting that time drinking coffee instead of reading and understanding for yourself...

Then posting that you are proud of relying on hallucinating, made up slop.

Lmfao.

[–] racketlauncher831@lemmy.ml 15 points 1 day ago (2 children)

-Look at you. Spent four years in college. Six months to go through the documentation of the programming language. Another six months to read the manual of the library and practice the example code. Finally, three months to implement the feature and complete the automated tests. Meanwhile, I write a prompt in thirty seconds and AI gives me the whole project, in a programming language I don't know, with me not knowing any of the technical details.

-And somehow you are proud of that?

[–] Rancor_Tangerine@lemmy.world 9 points 1 day ago (1 children)

Except it won't. There's no LLM that can help someone with no or little experience build a full application. You can get away with a website once and struggle through updates, but there's no LLM making Netflix. There isn't a chance there will be in our lifetimes. Anyone who tells you anything else is selling you something or not educated enough on the topic to have an opinion.

[–] Soup@lemmy.world 5 points 1 day ago

The LLM will eventually steal the code, though, and people will claim it invented something.

[–] supersquirrel@sopuli.xyz 4 points 1 day ago

-And somehow you are proud of that?

Further, I find it EXTREMELY disturbing that someone would desire the secrets of our wondrous journey to be so cynical, solvable, and perfectly designed for authoritarian consolidation of power.

[–] TonyTonyChopper@mander.xyz 6 points 1 day ago

They also imply that 2+58 minutes is equal to 2 hours

[–] lowered_lifted@lemmy.blahaj.zone 23 points 1 day ago (1 children)

while you were studying books, he studied a cup of coffee. TBH I can spend an hour both reading and drinking coffee at the same time idk why it's got to be its own thing.

[–] leraje@lemmy.blahaj.zone 28 points 1 day ago (1 children)

You're right OOP, we are not the same. I have the full context, processing time, an enjoyable reading experience, and a framework for understanding the book in question and its wider relevance. You have a set of bullet points, a lot of which will be wrong anyway, that you won't actually be able to discuss when they come up on the mind-numbing men's rights/crypto podcast you no doubt host.

[–] supersquirrel@sopuli.xyz 7 points 1 day ago* (last edited 1 day ago) (1 children)

spittakes coffee all over keyboard

I just spent the last 57 minutes drinking that coffee, I was almost done too, thanks a lot.

[–] some_guy@lemmy.sdf.org 68 points 1 day ago (3 children)

They think this is impressive.

I read books because I want knowledge and understanding. You get bite-sized bits of information. We are not the same.

[–] Rancor_Tangerine@lemmy.world 11 points 1 day ago (2 children)

They don't value intelligence and think everyone is just as likely to be accurate as the LLM. Their distrust of academics and research makes them think their first assumptions or guesses are more correct than anything established. That's how they shrug off vaccine evidence and believe news without verifying anything.

Whatever makes their ego feel better must be the truth.

[–] tarknassus@lemmy.world 5 points 1 day ago

They're the next generation of that guy who is 'always right' and 'knows everything', yet in reality they are often wrong and won't admit it, and they really only know the most superficial things about any given subject.

[–] TwitchingCheese@lemmy.world 23 points 1 day ago (1 children)
[–] LogicalFallacy@lemm.ee 20 points 1 day ago

"hallucinations"

Orwell's Animal Farm is a novella about animal husbandry . . .

[–] queermunist@lemmy.ml 58 points 1 day ago (4 children)

I don't think it's an exaggeration to say these people are dehumanizing and debasing themselves.

After a few years of this they'll scarcely be able to think at all.

[–] fullsquare@awful.systems 20 points 1 day ago* (last edited 1 day ago)

it's like they purposefully try to think as little as possible

looking forward to day when random datacenter where they outsourced their thinking burns down

[–] Tartas1995@discuss.tchncs.de 6 points 1 day ago (3 children)

They are dehumanizing everyone else too.

Can you think of anyone precise and clear enough in their speech that some "needless" repetition and context wouldn't drastically improve your understanding of what they say?

Can you imagine how upset they would be if you took them by their very word and not what they meant?

In their mind, authors (and probably everyone else) are machines. The kindness of trying to truly understand them is not given. They should be "flawless".

[–] Lightor@lemmy.world 5 points 1 day ago

This is a legit worry I have.... Lemme ask ChatGPT how I should process this.

[–] ech@lemm.ee 80 points 1 day ago* (last edited 1 day ago) (3 children)

Did they ask an LLM how LLMs work? Because that shit's fucking farcical. They're not "traversing" anything, bud. You get 17 different versions because each model is making that shit up on the fly.

[–] LeninOnAPrayer@lemm.ee 30 points 1 day ago* (last edited 1 day ago) (1 children)

Nah see they read thousands of pages in like an hour. That's why. They just don't need to anymore because they're so intelligent and do it the smart way with like models and shit to compress it into a half a page summary that is clearly just as useful.

Seriously, that's what they would say.

They don't actually understand what LLMs do either. They just think people that do are smart so they press buttons and type prompts and think that's as good as the software engineer that actually developed the LLMs.

Seriously. They think they are the same as the people that develop the source code for their webui prompt. And most of society doesn't understand that difference so they get away with it.

It's the equivalent of the dude that trades shitcoins thinking he understands crypto like the guy committing all of the code to actually run it.

(Or worse they clone a repo and follow a tutorial to change a config file and make their own shitcoins)

I really think some parts of our tech world need to be made LESS user friendly. Not more.

[–] NigelFrobisher@aussie.zone 21 points 1 day ago (1 children)

This is the most Butlerian Jihad thing I’ve ever read. They should replace whatever Terminator-lite slop Brian Herbert wrote with this screengrab and call it Dune Book Zero.

[–] ideonek@piefed.social 32 points 1 day ago* (last edited 1 day ago) (1 children)

Without the knowledge, you don't even know what precise information you need.

[–] shalafi@lemmy.world 3 points 1 day ago

When I started learning SQL Server, I was so ignorant I couldn't even search for what I needed.

I've seen this at work.

We installed a new water sampler, and they sent an official installer to set up and commission the device. The guy couldn't answer a damn question about the product without ChatGPT. When I asked a relatively complex question the bot couldn't answer (by the third question), I decided I'd had enough and spent an hour reading the manual of the thing. Turns out the bot was making up the answers, and I learned how to commission the device without the "official support".

[–] iAvicenna@lemmy.world 12 points 1 day ago

Oh no not the reading! Great thing we had AI to create AI and we did not have to depend on all those computer scientists and engineers whose only skill is to read stuff.

[–] PP_BOY_@lemmy.world 46 points 1 day ago* (last edited 1 day ago)

This is the same "I'll do my own research, thanks" crowd btw

spoonfeed me harder Silicon Valley VC daddy

[–] supersquirrel@sopuli.xyz 28 points 1 day ago

2 mins? Sam Altman can spiritually ascend at least 10 divorced dads in that epoch of time.

This is business baby.

[–] lath@lemmy.world 16 points 1 day ago

"I ran this Convo through an LLM and it said i should fire and replace you with an LLM for increased productivity and efficiency.

Oh wait, hold on. I read that wrong, it said I should set you on fire...

Well, LLMs can't be wrong so.."

[–] Tartas1995@discuss.tchncs.de 6 points 1 day ago

I have read books in which the definitions of certain words are refined to be more precise and clear, making the rest of the communication less verbose. I don't think an AI summary will reliably introduce me to the definition on page 100 of a book that spent the previous 99 pages setting up the definitions required to understand it.

But I could be wrong.

[–] rem26_art@fedia.io 14 points 1 day ago

bro needs 58 minutes to drink coffee

[–] natecox@programming.dev 15 points 1 day ago

They must just have missed training on the book about how many “r”s are in the word “strawberry”.

[–] Protoknuckles@lemmy.world 14 points 1 day ago

We're not the same. I learned something.
