this post was submitted on 03 Aug 2025
425 points (86.6% liked)
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
you are viewing a single comment's thread
I believe AI is going to be a net negative to society for the foreseeable future. AI art is a blight on artistry as a concept, and LLMs are shunting us further into a search-engine-overfit, post-truth world.
But also:
Reading the OOP has made me a little angry. You can see the echo chamber forming right before your eyes: either you see things the way OOP does, with no nuance, or you stop following them and are left following AI hype-bros who'll accept you instead. It's disgustingly twitter-brained. It's a bullshit purity test that serves your comfort over actually trying to convince anyone of anything.
Consider someone who has made some small but valued use of AI (as a reverse dictionary, for example), but generally considers things like energy usage and intellectual property rights to be serious issues we have to face for AI to truly be a net good. What does that person hear when they read this post? "That time you used ChatGPT to recall the word 'verisimilar' makes you an evil person." And at that moment you've cut them off from ever actually considering your opinion again. Even if you're right, that's not healthy.
I'm what most people would consider an AI Luddite/hater, and I think OOP communicates like a dogmatic asshole.
You can also be right for the wrong reasons. You see that a lot in the anti-AI echo chambers: people who never gave a shit about IP law suddenly pretending they care about copyright, the whole water-use thing (which is closer to myth than fact), or the discussions of energy usage in general.
Everyone can pick up on the vibes being off in the mainstream discourse around AI, but many can't properly articulate why, and they resolve that cognitive dissonance with made-up or comforting bullshit.
This makes me quite uncomfortable, because it's the exact same pattern of behavior we see from reactionaries, except that the thing that weirds them out for reasons they can't or won't say explicitly isn't tech bros but immigrants and queer people.
Out of curiosity, could you link a source vis-a-vis AI's water consumption?
It's not that the datacenters don't "use" water (you'll find plenty of sources confirming that they do), but rather that the argument stretches the concept of "water usage" well past the point of meaninglessness. Water is not electricity: it usually can't be transported very far, and the impact of a pumping operation is fundamentally location-dependent. Saying "X million litres of water used for Y" is rarely useful unless you define the local geographic context. Compare the two cases below, and the toy sketch after them:
- Pumping aquifers in a dry area and discharging the water into a field: very bad.
- Pumping from, and subsequently releasing water back into, a lake or river: mostly harmless, though in summer the additional heat pumped into the water can be harmful, depending on the size of the body of water.
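To make the distinction concrete, here's a minimal sketch; it's mine, not from any real dataset, and the scenarios, numbers, and field names are made up for illustration. The point is just that the same headline litre count can mean very different things:

```python
# Toy model: a "litres used" figure only means something once you know
# how much water is returned locally and whether the source is stressed.
from dataclasses import dataclass

@dataclass
class WaterUse:
    litres: float          # volume withdrawn
    returned: float        # fraction discharged back to the same source
    source_stressed: bool  # is the local aquifer/basin under stress?

def rough_impact(u: WaterUse) -> str:
    consumed = u.litres * (1 - u.returned)  # water not returned locally
    if u.source_stressed and consumed > 0:
        return f"harmful: {consumed:.0f} L net consumption from a stressed source"
    return f"mostly harmless: {consumed:.0f} L net consumption, source not stressed"

# Same headline number, very different meaning:
river_cooling = WaterUse(litres=1e6, returned=0.95, source_stressed=False)
aquifer_pumping = WaterUse(litres=1e6, returned=0.05, source_stressed=True)

print(rough_impact(river_cooling))
print(rough_impact(aquifer_pumping))
```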
The real problem is that lots of areas (especially in the US) haven't updated their water rights laws since the discovery of water tables. This is hardly a new problem, and big ag remains by far the worst offender here.
Then there's the raw materials in the supply chain... and, not to downplay it, but water use is not exactly at the top of the list of environmental impacts there. Concrete is hella bad on CO2 emissions, electronics use tons of precious metals that often get strip-mined and processed with little to no environmental regulation, etc.
Frankly, putting "datacenter pumped water out of the river then back in" in the same aggregate figure as "local lake polluted for 300 years in China by industrial byproducts" rubs me the wrong way. These are entirely different problems, and nobody benefits from bastardizing them together like this. It feels the same to me as saying "but there are children starving in Africa!" when someone throws away some food: sure, throwing away food isn't great, and it's technically on-topic, but we can see how bundling these things together isn't useful, right?
The people who hate immigrants and queer people are AI's biggest defenders. It's really no wonder that people who hate life also love the machine that replaces it.
A perfect example of the just completely delusional factoids and statistics that will spontaneously form in the hater's mind. Thank you for the demonstration.
Thanks for putting a name on that! That's actually one of the few genuinely useful purposes I've found for LLMs. Sometimes you know, or can deduce, that some thing, device, or technique must exist; the knowledge of it is out there, but you simply don't know the term to search for. IMO this is one of the killer features of LLMs, and it works well because whatever the LLM outputs is simply and instantly verifiable: you describe the characteristics of something to the LLM, ask it what thing has those characteristics, and once you have a candidate name, you look it up in a reliable source and confirm it. Sometimes the biggest hurdle to figuring something out is just learning the name of a thing, and LLMs make a very good reverse dictionary.
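For anyone curious what that workflow looks like in practice, here's a minimal sketch using the OpenAI Python SDK. The model name, prompt, and helper name are my own assumptions, not anything from the comment above; the key step is the last one, verifying the candidate term in a real dictionary.

```python
# Hypothetical "reverse dictionary" helper: describe a concept, get a
# candidate term back, then confirm it in a reliable source yourself.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def reverse_dictionary(description: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[
            {"role": "system",
             "content": "Given a description, reply with only the single "
                        "word or term that best matches it."},
            {"role": "user", "content": description},
        ],
    )
    return response.choices[0].message.content.strip()

candidate = reverse_dictionary("having the appearance of being true or real")
print(candidate)  # e.g. "verisimilar" -- now look it up and confirm it
```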
Using ChatGPT to recall the word 'verisimilar' is an absurd waste of time and energy, and in no way justifies the use of AI.
90% of LLM/GPT use is a waste, or could be done better with another tool, including non-LLM AIs. The remaining 10% is just outright evil.
Where is your source? It sounds unbelievable.
My source is the commercial and academic uses I've personally seen as an academic-adjacent professional who's had to deal with this sort of stuff at my job.
What data did you see on the volume of requests to non-LLM models as they relate to utility? I can't figure out which profession has access to that kind of statistic. It would be very useful to know, thx.
I think you've misunderstood what I was saying: I don't have spreadsheets of statistics on requests to LLM vs. non-LLM AIs. What I have is exposure to a significant number of AI users, each running different kinds of AIs, and I see what kind of AI they're using, for what purposes, and how well it works or doesn't.
Generally, LLM-based stuff only returns 'useful' results for language-based statistical analysis, which classical NLP handles better, faster, and vastly cheaper. For the rest, they really don't seem to return useful results; I typically see a LOT of frustration.
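As a rough illustration of what "better, faster, and vastly cheaper" can mean here (my own toy example, with a made-up six-line dataset), a classical statistical pipeline covers a lot of this ground without any LLM at all:

```python
# Toy sentiment classifier: TF-IDF features plus logistic regression.
# Trains in milliseconds on a CPU; no GPU, no API calls, no LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great tool, works well", "fast and useful", "love the results",
         "total waste of time", "constantly frustrating", "useless output"]
labels = [1, 1, 1, 0, 0, 0]  # 1 = positive, 0 = negative (made-up data)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["really useful results", "waste of effort"]))  # expect [1 0]
```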
I'm not about to give any information that could doxx myself, but the reason I see so much of this is because I'm professionally adjacent to some supercomputers. As you can imagine, those tend to be useful for AI research :P
Ah ok, that's too bad. Supercomputers typically don't have tensor cores, though, and most LLM use is presumably client-side use of already-trained models, which desktop or mobile CPUs can manage now, so it would be impossible to know then.
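For what it's worth, the "client-side use of already-trained models" part is easy to demonstrate. Here's a minimal sketch of CPU-only inference with Hugging Face transformers; the model choice is my assumption, and any small pre-trained model behaves the same way:

```python
# Running an already-trained model entirely on the CPU (device=-1),
# no tensor cores or discrete GPU involved.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2", device=-1)
out = generator("The word I was trying to recall is", max_new_tokens=10)
print(out[0]["generated_text"])
```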