this post was submitted on 19 Mar 2025
34 points (100.0% liked)

Fuck AI

2251 readers
290 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago

Disclaimer: I am asking this for a college class. I need to find an example of AI being used unethically. I figured this would be one of the best places to ask. Maybe this could also serve as a good post to collect examples.

So what have you got?

top 35 comments
[–] Flamangoman@leminal.space 33 points 1 week ago (1 children)

Not exactly AI being used, rather developed, but Meta's torrenting 80 TB of books and not seeding is egregious

[–] haverholm@kbin.earth 7 points 1 week ago (1 children)

The fact that so much training data is scraped without consent makes a lot of the popular LLMs unethical already in their development, yeah. And that in turn makes using the models unethical.

[–] elbarto777@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

Using the models unethical... or fair game?

Edit: but I share the sentiment. I avoid using AI like the plague, though mainly because of the environmental impact.

[–] quickhatch@lemm.ee 33 points 1 week ago (2 children)

I'm a university prof in a medical science field. We hired a new, tenure-line prof to teach introductory musculoskeletal anatomy to prepare our students for the more rigorous, full systems anatomy that's taught by a different professor. We learned (too late, after a year) that they used AI to generate the slides they used in lecture and never questioned/evaluated the content. Had an entire cohort of students fail the subsequent anatomy course after that.

But in my mind, what's worse is that the administration did nothing to correct the prof, and continues to push a pro-AI narrative in order for us to spend less time investing resources in teaching.

[–] courageousstep@lemm.ee 15 points 1 week ago

Jesus fucking Christ. That’s horrifying.

[–] omarfw@lemmy.world 8 points 1 week ago

oh my fucking god

[–] hendrik@palaver.p3x.de 25 points 1 week ago

Flooding the internet with slop.

[–] ZDL@ttrpg.network 24 points 2 weeks ago (2 children)

Using it at all, really. Given the environmental costs, the social costs, and the fraud it entails, using it at all is pretty much unethical.

My favourite example, though, was the lazy lawyer who used ChatGPT to write a legal brief for him.

[–] Greg@lemmy.ca 4 points 1 week ago

Given the environmental costs, the social costs, and the fraud it entails, using it at all is pretty much unethical.

There are loads of examples of AI being used in socially positive ways. AI doesn’t just mean ChatGPT.

[–] ArcRay@lemmy.dbzer0.com -1 points 1 week ago (1 children)

Excellent point. I think there are some legitimate uses of AI, especially in image processing for science-related topics.

But for the most part, almost every common use is unethical, whether it's the energy demands (and their contribution to climate change), the theft of intellectual property, the spread of misinformation, or so much more. Overall, it's a huge net negative on society.

I remember hearing about the lawyer one. IIRC ChatGPT was citing laws that didn't even exist. How do you not check what it wrote? You wouldn't blindly accept the predictive text from your phone's keyboard and autocorrect. So why would you blindly trust a fancier autocorrect?

[–] Greg@lemmy.ca 3 points 1 week ago (1 children)

But for the most part, almost every common use is unethical.

The most common uses of AI are not in the headlines. Your email spam filter is AI.
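The spam-filter point is worth spelling out: classic email filters are small machine-learning models, often naive Bayes text classifiers. A minimal sketch of that idea (the training messages and tokens below are invented purely for illustration):

```python
# Minimal naive-Bayes-style spam scorer -- a sketch of the kind of
# small ML model behind classic email spam filters.
# The tiny "training set" here is made up for illustration.
from collections import Counter
import math

spam = ["win money now", "free money offer", "claim your prize now"]
ham = ["meeting notes attached", "lunch tomorrow", "project status update"]

def train(docs):
    # Count word occurrences across all documents of one class
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_odds(message):
    # Sum of Laplace-smoothed log P(word|spam) - log P(word|ham)
    score = 0.0
    for w in message.split():
        p_spam = (spam_counts[w] + 1) / (spam_total + len(vocab))
        p_ham = (ham_counts[w] + 1) / (ham_total + len(vocab))
        score += math.log(p_spam / p_ham)
    return score  # > 0 leans spam, < 0 leans ham

print(log_odds("free money now") > 0)    # True (leans spam)
print(log_odds("meeting tomorrow") < 0)  # True (leans ham)
```

Real filters use far larger corpora and extra signals, but the underlying technique is decades-old machine learning, not an LLM.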

[–] ZDL@ttrpg.network 5 points 1 week ago (1 children)

You are being a little bit pedantic. People talking about "AI" today are talking about "LLMs", not the older tech that turned out not to actually be "AI". (Rather like the current stuff isn't actually "AI".)

[–] Greg@lemmy.ca 1 points 1 week ago (1 children)

You should be accurate with your language if you're going to claim a whole industry is unethical. And it's also important to make a distinction between the technology and the implementation of the technology. LLMs can be trained and used in ethical ways.

[–] hendrik@palaver.p3x.de 2 points 1 week ago* (last edited 1 week ago) (1 children)

I'm not really sure I want to agree here. We're currently in the middle of a hype wave concerning LLMs, so most people mean that when talking about "AI". Of course that's wrong. I tend to use the term "machine learning" if I don't want to confuse people with a tainted term.

And I must say, most (not all) machine learning is done in a problematic way. Tesla cars have been banned from companies' parking lots, your Alexa saves your private conversations in the cloud, and the algorithms that power the web weigh down on society and spy on me. The successful companies are built on copyright theft or personal data from their users. None of that is really transparent to anyone, and oftentimes it's opt-out, if we get a choice at all. But of course there are legitimate interests. I believe a dishwasher or spam filter would be trained ethically. Probably also the image detection for medical applications.

[–] Greg@lemmy.ca 2 points 1 week ago (1 children)

I 100% agree that big tech is using AI in very unethical ways. And this isn’t even new, the chairman of the U.N. Independent International Fact-Finding Mission on Myanmar stated that Facebook played a "determining role" in the Rohingya genocide. And then recently Zuck actually rolled back the programs that were meant to prevent this in the future.

[–] hendrik@palaver.p3x.de 2 points 1 week ago* (last edited 1 week ago)

I think quite a few of our current societal issues (in western societies as well) come from algorithms and filter bubbles. I think that's the main contributing factor to why people can't talk to each other any more and everyone gets radicalized into the extremes. And in the broader picture, the surrounding attention economy fuels populists and does away with any factual view of the world. It's not AI's fault, but it's machine learning that powers these platforms and decides who gets attention and who gets confined into which filter bubble. I think that's super unhealthy for us. But sure, it's more the prevailing internet business model to blame here, not directly the software that powers it. I have to look up what happened to the Rohingya... We get a few other issues with social media as well that aren't directly linked to algorithms. We'll see how the LLMs fit into that; I'm not sure how they're going to change the world, but everyone seems to agree this is very disruptive technology.

[–] Kolanaki@pawb.social 16 points 2 weeks ago

Maybe "favorite example" isn't the best phrasing for the question, but I get the sentiment and would have to say using AI to create porn of real people as a means of blackmail.

[–] ptz@dubvee.org 12 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

I Used to Teach Students. Now I Catch ChatGPT Cheats

https://thewalrus.ca/i-used-to-teach-students-now-i-catch-chatgpt-cheats

Students using it as a way to avoid actually learning and learning how to think for themselves. Also for being yet another soul-crushing blow for already under-paid, under-respected, under-attack teachers.

[–] phanto@lemmy.ca 7 points 1 week ago (1 children)

I'm a month away from my IT diploma. Even the teachers are feeding us AI slop at this point.

They gave up trying to get the students to stop at the end of first year. Protip: don't hire a new IT grad; they don't know anything ChatGPT doesn't know.

[–] ptz@dubvee.org 8 points 1 week ago* (last edited 1 week ago) (1 children)

I interviewed a candidate recently, and they basically lost all consideration when I asked them a basic sysadmin question and they replied, "That's kind of one of those basic commands I just ask ChatGPT."

The basic sysadmin question was: "Name one way on a Linux server to check the free disk space".

Sadly, I had to continue the interview, but I didn't even bother writing down any of the candidate's responses after that. The equivalent would have been asking them "what's 2+2?" and having them break out a calculator. Instant fail.
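For anyone wondering, that interview question has several one-line answers on a typical Linux box:

```shell
# Standard ways to check free disk space on a Linux server
df -h              # human-readable free/used space per filesystem
df -h /            # just the root filesystem
df -i              # inode usage, which can also fill up
du -sh /var/log    # size of one directory (related, but usage, not free space)
```

Any one of `df`'s variants would presumably have satisfied the interviewer.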

[–] Dagwood222@lemm.ee 5 points 1 week ago (1 children)

Someone else commented in another thread.

His sister is in 3rd grade and used AI to answer "How many seconds are in three minutes?"
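For scale, the entire "problem" that got outsourced to an AI is one multiplication:

```python
# 3 minutes at 60 seconds per minute
seconds = 3 * 60
print(seconds)  # 180
```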

[–] ptz@dubvee.org 6 points 1 week ago (1 children)

Goddamn.

I know teachers can't do this, but they should be allowed to be like: ChatGPT, you get an A. Susie, you will be repeating 3rd grade.

[–] Dagwood222@lemm.ee 3 points 1 week ago

This is why the folks in Silicon Valley don't let their own kids have tech.

https://www.snopes.com/fact-check/tech-billionaire-parents-limit/

[–] BlueSquid0741@lemmy.sdf.org 4 points 1 week ago

I read that one the other day. Unbelievable that tertiary students aren’t there to learn.

[–] carl_dungeon@lemmy.world 10 points 1 week ago

Mass consumption of copyrighted works for training, while still considering individuals who do the same to be criminals.

[–] hendrik@palaver.p3x.de 9 points 1 week ago* (last edited 1 week ago)

Not supervising your Tesla properly and running over people.

[–] Silic0n_Alph4@lemmy.world 7 points 2 weeks ago (1 children)

Thank you for asking actual humans instead of an LLM 😊 Here’s my favourite example, and it’s worth digging into more: https://pivot-to-ai.com/2025/03/10/foreign-policy-the-u-k-pivot-to-ai-is-doomed-from-the-start/

[–] ArcRay@lemmy.dbzer0.com 7 points 1 week ago

It felt like the right way to approach the topic. AI has become so pervasive, I'm not even sure I could search for it without simultaneously using AI.

[–] humanspiral@lemmy.ca 3 points 1 week ago* (last edited 1 week ago)

Once you're so quick to offer it for military purposes, and put profit maximization over any ethical concern, it becomes not just warmongering-evil maximization and battlefield domination that encourages more warmongering; you also need to maximize AI-driven disinformation of the public, as the media is used now, to support that warmongering.

Humanist principles, and ethics that promote humanism, cannot coexist with warmongering maximalism, and profit prefers the latter. Learning that your views might not align with warmongering maximalism may be used for voter suppression, extending to murder by exploding electronics. AI/LLM identification of insufficient loyalty to warmongering and genocide is a key tool in enforcing agenda maximalism.

[–] hendrik@palaver.p3x.de 3 points 2 weeks ago (1 children)

Is erotic roleplay "unethical"? Because we've got a lot of services for that.

[–] YourMomsTrashman@lemmy.world 2 points 1 week ago

AI generated ads for AI roleplay apps on AI generated youtube videos made for children

[–] oxysis@lemm.ee 1 points 1 week ago

I’m late but whatever.

Well, considering that all generative AI models are built off vast sums of stolen work, there can be no ethical use of generative AI, since any use of it supports the theft of human-made works. No generative AI model is built off properly licensed work that pays the original creators. Anyone arguing that this type of AI can be used in ethical ways is just wrong, since that ignores the impact on the real people whose work allowed the model to be made in the first place. The sheer amount of data required to build an LLM would cost too much money to obtain legally, so these companies just steal and hope they can get away with it. Even Adobe, whose model comes closest, still used work that was not licensed for this purpose by feeding its back catalog of stock images into its model.

And that's ignoring the vast environmental impact of the energy consumption required to run these models.