this post was submitted on 20 May 2025
24 points (76.1% liked)

Fuck AI


A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.


I have noticed that a lot of posts on here mostly criticize the data collection used to train AI, but I don't think AI in itself is bad. Like software development, AI can be implemented in many ways: software can either control the user, or the user can control the software. And, as with software, some of it might serve negative purposes while the rest serves better ones, so saying "Fuck Software" just because some software controls the user feels pretty unfair. I know AI might be used to replace jobs, but that has happened many times before, and it has mostly been a positive move forward, like with the internet. Now, I'm not trying to start a big ass debate on how AI = Good, because as mentioned before, I believe that AI is only as good as its uses. All I want to know from this post is why you hate AI as a general topic. I'm currently writing a research paper on this, so I would like some opinions.

[–] pinkfluffywolfie@lemmy.world 9 points 13 hours ago (1 children)

I don't hate AI as much as I hate the nonexistent ethics surrounding LLMs and generative AI tools right now (which is what a lot of people refer to as "AI" at present).

I have friends who openly admit they'd rather use AI to generate "art", and who then call the people upset by this luddites who are whiny and butt-hurt that AI "does it better" and is more affordable. People use LLMs to formulate their opinions and as their therapist, but when they encounter real-life conversations that have ups and downs, they don't know what to do because they're so used to the ultra-positive formulated responses from ChatGPT. People use AI to generate work that isn't their own. I've already had someone take my own, genuine written work, copy/paste it into Claude, and then tell me they were just "making it more professional for me". In front of me, on a screen share. The output from the LLM didn't even make structural sense and contained conflicting information. It was a slap in the face, and now I don't want to work with startups, because apparently a lot of them are doing this to contractors.

All of these are experiences that many other people share with me. They're all examples of the same thing: "AI" as we are calling it is disrupting the human experience because there's nothing to regulate it. Companies are literally pirating your human experience to feed it into LLMs and generative tools, then turning around and advertising the results as some revolutionary thing that will be your best friend, doctor, educator, personal artist, and more. Going further, as another person mentioned, it's even weaponized: the same technology is being used to manipulate you, surveil you, and separate you from others to keep you compliant with your government, whether for good or bad. Not to mention the ecological impact all of this has (all so someone can ask Gemini to generate a thank-you note). Give users and the environment more protections, and give these companies actual tangible consequences, and maybe I'll be more receptive to "AI".

[–] ZDL@ttrpg.network 3 points 8 hours ago

I have friends that openly admit they’d rather use AI to generate “art” and then call people who are upset by this luddites, whiny and butt-hurt that AI “does it better”

Anybody who thinks AI does art "better" is someone whose opinions in all matters, big or small, can be safely dismissed.