this post was submitted on 20 May 2025

Fuck AI

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

I have noticed that a lot of posts on here mostly criticize the collection of people's data to train AI, but I don't think AI in itself is bad. Like software development, AI can be implemented in many ways: software can either control the user, or the user can control the software. And as with software, some uses are harmful while others are beneficial, so saying "Fuck Software" just because some software controls the user feels pretty unfair. I know AI might be used to replace jobs, but that has happened many times before, and it has mostly been a positive move forward, like with the internet.

Now, I'm not trying to start a big ass debate on how AI = Good, because as mentioned before, I believe AI is only as good as its uses. All I want to know from this post is why you hate AI as a general topic. I'm currently writing a research paper on this subject, so I would like some opinions.

[–] blargle@sh.itjust.works 9 points 18 hours ago (2 children)

First of all, that which is to get fucked is Generative AI in particular. Meaning, LLM text generation / diffusion model image generation, etc. AI which consciously thinks is still sci-fi and may always be. Older ML stuff also called "AI" that finds patterns in large amounts of satellite data or lets a robot figure out how to walk on bumpy ground or whatever is generally fine.

But generative AI is just bad and cannot be made good, for so many reasons. The "hallucination" is not a bug that will be fixed; it's a fundamental flaw in how it works.

It's not the worst thing, though. The worst thing is that, whether it's making images or text, it's just going to make the most expected thing for any given prompt. Not the same thing every time, but the variation is all going to be random recombination of the same elements, and the more you generate from a single prompt, the more you will see how interchangeably samey the results all are. It's not the kind of variation you see by giving a class of art students the same assignment; it's the variation you get by giving Minecraft a different world seed.

So all the samey and expected stuff in the training data (which is all of the writing and art in human history that its creators could get their hands on) gets reinforced and amplified, and all the unique and quirky and surprising stuff gets ironed out and vanishes. That's how it reinforces biases and stereotypes: not just because it is trained on the internet, but because of a fundamental flaw in how it works. Even if it were perfected using the same technology, it would still have this problem.
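You can see a toy version of that amplification with nothing but a weighted sampler. All of the options and probabilities below are made up for illustration; the point is just that whatever sits in the tail of the distribution almost never shows up, no matter how many times you sample:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical next-output distribution for one prompt: the "expected"
# continuations dominate, and the quirky one sits in the tail.
options = ["sunset over the ocean", "majestic mountain vista",
           "glowing city skyline", "melting clock on a crab"]
probs = [0.45, 0.35, 0.18, 0.02]

# Generate 1000 outputs for the same prompt.
samples = Counter(random.choices(options, weights=probs, k=1000))
print(samples.most_common())
```

Run it and the surreal option appears a handful of times out of a thousand, while the two "expected" options account for the vast majority: sampling doesn't invent new variation, it just reshuffles whatever the distribution already favors.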

[–] lagoon8622@sh.itjust.works 1 point 2 hours ago

> The "hallucination" is not a bug that will be fixed; it's a fundamental flaw in how it works.

You're not wrong that it's a flaw. But fundamentally, it's also the main feature! That's how it can do anything at all. The flaw is baked into the core product.
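A toy sketch of that point, using a hypothetical two-sentence corpus and a bigram generator: the model only ever follows local word statistics, so it can emit a perfectly fluent sentence that is flatly false, because nothing in the loop ever checks a fact. The "making stuff up" and the "generating text at all" are the same mechanism:

```python
import random
from collections import defaultdict

random.seed(1)

# Tiny made-up corpus: the model sees only word co-occurrence,
# never a fact table it could check its output against.
corpus = ("paris is the capital of france . "
          "berlin is the capital of germany .").split()

# Build a bigram table: word -> list of observed next words.
nxt = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    nxt[a].append(b)

# Generate: pick each next word from local statistics alone.
word, out = "paris", ["paris"]
while word != "." and len(out) < 10:
    word = random.choice(nxt[word])
    out.append(word)

# Depending on the draw, this is equally happy to say
# "paris is the capital of france ." or "... of germany ."
print(" ".join(out))
```

Both possible outputs are equally probable to this model, because both are equally fluent continuations; truth never enters into it. Scaling the table up to an LLM changes the fluency, not that basic property.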

[–] FreeWilliam@lemmy.ml 1 point 3 hours ago

How does tuning the data with randomness lead to biases, stereotypes, and hallucinations?