this post was submitted on 09 Aug 2025
934 points (99.1% liked)

you are viewing a single comment's thread
[–] j4k3@lemmy.world 26 points 2 days ago* (last edited 2 days ago) (3 children)

It is not the tool; it is the lazy, stupid person who created the implementation. The same stupidity is true of people who run word filtering in conventional code. AI is just an extra set of eyes. It is not absolute. Giving it any kind of unchecked authority is insane. The administrators who implemented this should be who everyone is upset at.

The insane rhetoric around AI is a political and commercial campaign effort by Altman and proprietary AI players looking to become a monopoly. It is a Kremlin-scale misinformation campaign that has been extremely successful at roping in the dopes. Don't be a dope.

This situation with AI tools is exactly the same as with every past scapegoated tool. I can create undetectable deepfakes in GIMP or Photoshop. If I do so with the intent to harm, or out of grossly irresponsible stupidity, that is my fault and not the tool's. Accessibility of the tool is irrelevant. Those dumb enough to blame the tool are the convenient idiot pawns of the worst humans alive right now. Blame the idiots in leadership positions who use these tools with no morals or ethics, and don't listen when those same types of people push a spurious dichotomy to create a monopoly. They prey on conservative ignorance rooted in tribalism and dogma, which naturally rejects all unfamiliar new things in life. This is evolutionary behavior and a required mechanism for survival in the natural world. Some will always scatter around the spectrum of possibilities, but the center majority is stupid and easily influenced in ways that enable tyrannical hegemony.

AI is not some panacea. It is a useful new tool. Absent-minded stupidity is leading to the same kind of dystopian indifference that led to the "free internet," which has destroyed democracy and is the direct cause of most political and social issues in the present world: it normalized digital slavery by letting a part of your person be owned, sold, exploited, and manipulated without your knowledge or consent.

I only say this because I care about you, digital neighbor. I know it is useless to argue against dogma, but this is the fulcrum of a dark dystopian future that populist dogma is welcoming with open arms of ignorance, just like those who called the digital world a meaningless novelty 30 years ago.

[–] SugarCatDestroyer@lemmy.world 4 points 2 days ago (1 children)

In such a world, hoping for a different outcome would be just a dream. You know, people always look for the easy way out, and in the end, yes, we will live under digital surveillance, like animals in a zoo. The question is how to endure this and not break down, especially in the event of collapse and poverty. It's better to expect the worst and be prepared than to look for a way out, try to rebel, and end up trapped.

[–] verdigris@lemmy.ml 4 points 2 days ago* (last edited 2 days ago) (2 children)

You seem to be handwaving all concerns about the actual tech, but I think the fact that "training" is literally just plagiarism, and the absolutely bonkers energy costs for doing so, do squarely position LLMs as doing more harm than good in most cases.

The innocent tech here is the concept of the neural net itself, but unless such nets are being trained on a constrained corpus of data and then used to analyze that or analogous data in a responsible, limited fashion, I think they sit somewhere on a spectrum between "irresponsible" and "actually evil".

[–] SugarCatDestroyer@lemmy.world 4 points 2 days ago (1 children)

If the world is ruled by psychopaths who seek absolute power for the sake of even more power, then the very existence of such technologies will lead to very sad consequences, perhaps even to slavery. Have you heard of technofeudalism?

[–] verdigris@lemmy.ml 1 points 2 days ago* (last edited 2 days ago) (2 children)

Okay, sure, but in many cases the tech in question is actually useful for lots of other stuff besides repression. I don't think that's the case with LLMs. They have a tiny bit of actual usefulness that's completely overshadowed by the insane skyscrapers of hype and lies that have been built up around their "capabilities".

With "AI" I don't see any reason to go through such gymnastics separating bad actors from neutral tech. The value in the tech is non-existent for anyone who isn't either a researcher dealing with impractically large and unwieldy datasets, or of course a grifter looking to profit off of bigger idiots than themselves. It has never and will never be a useful tool for the average person, so why defend it?

[–] SugarCatDestroyer@lemmy.world 1 points 2 days ago

There's nothing to defend. Tell me, would you defend someone who is a threat to you and deprives you of the ability to create, making art unnecessary? No, you would go and kill him before the bastard grows up. What's the point of defending a bullet that will kill you? Are you crazy?

[–] a_wild_mimic_appears@lemmy.dbzer0.com -1 points 2 days ago (1 children)

I am an average person, and my GPU is running a chatbot that is currently giving me a course on regular expressions. My GPU also generates images for me from time to time when I need one, because I am crappy at drawing. There are a lot of uses for the technology.
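(For what it's worth, the material such a regex course covers is the ordinary stuff of Python's standard `re` module; the log line and patterns below are just made-up illustrations, not anything from the chatbot session.)

```python
import re

# A typical beginner exercise: pull structured fields out of a log line.
log = "2025-08-09 12:34:56 ERROR disk full"

# Named groups give each captured piece a label.
m = re.match(r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<time>[\d:]+) (?P<level>\w+)", log)
print(m.group("level"))  # ERROR

# findall extracts every match at once.
print(re.findall(r"\d+", "3 Wh per response, 200 W PC"))  # ['3', '200']
```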

[–] verdigris@lemmy.ml 1 points 1 day ago (1 children)

Okay, so you could have just looked up one of dozens of resources on regex. The images you "need" are likely bad copies of images that already exist, or weird collages of copied subject matter.

My point isn't that there's nothing they can do at all; it's that nothing they can do is worth the energy cost. You're spending tons of energy to chew up information already on the web and have it vomited back at you in a slightly different form, when you could have just looked up the information directly. It doesn't save time, because you have to double-check everything. The images are also plagiarized, and you could be paying an artist if they're something important, or improving your artistic abilities if they aren't. I struggle to think of many cases where one of those options is unfeasible; it's just the "easy" way out (because the energy costs are obfuscated) to have a machine crunch up some existing art to get an approximation of what you want.

Regarding energy use, see my other reply. It's like scolding people for running their microwave 10 seconds too long; watching two hours of Netflix is a lot worse. Go read up here.

[–] a_wild_mimic_appears@lemmy.dbzer0.com -2 points 2 days ago (1 children)

Scraping the web to create a dataset isn't plagiarism; the same goes for training a model on that scraped data, and calculating which words should come in what order isn't plagiarism either. I agree that datasets should be ethically sourced, but scraping the web is what allowed things like search engines to be created, which made the web a lot more useful. Was creating Google irresponsible?

[–] verdigris@lemmy.ml 2 points 1 day ago* (last edited 1 day ago) (1 children)

This is a wild take. You can get chatbots to vomit out entire paragraphs of published works verbatim. There is functionally no mechanism to a chatbot other than looking at a bunch of existing texts, picking one at random, and copying the next word from it. There's no internal processing or logic you could call creative; it's just sticking one Lego at a time onto a tower, and every Lego is someone's unpaid intellectual property.

There is no definition of plagiarism or copyright that LLMs don't bite extremely hard. They're just getting away with it because of the billions of dollars of capital pushing the tech. I am hypothetically very much for the complete abolition of copyright and free usage of information, but a) that means everyone can copy stuff freely, instead of just AI companies, and b) it first requires an actually functional society that provides for the needs of its citizens so they can have the time to do stuff like create art without needing to make a livable profit at it. And even if that were the case, I would still think the current implementation of AI is pretty shitty if it's burning the same ludicrous amounts of energy to do its parlor tricks.

[–] a_wild_mimic_appears@lemmy.dbzer0.com 1 points 1 day ago (1 children)

The energy costs are overblown. A response costs about 3 Wh, which is about one minute of runtime for a 200 W PC, or 10 seconds of a 1000 W microwave. See the calculations made here and below for the energy costs. If you want to save energy, go vegan and ditch your car; completely disbanding ChatGPT amounts to 0.0017% of the CO2 reduction during Covid in 2020 (this guy gave the numbers, but had an error in magnitude, which I fixed in my reply; calculator output is attached). It would help climate activists if they concentrated on something that is worthwhile to criticize.
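As a rough sanity check on those comparisons, taking the ~3 Wh-per-response figure at face value (it's an estimate, not a measured number):

```python
# Sanity-check the comparison: how long must each device run to use ~3 Wh?
# t = E / P, converting hours to seconds.
response_wh = 3.0  # assumed energy per chatbot response

pc_watts = 200
pc_seconds = response_wh / pc_watts * 3600  # 54 s, i.e. about a minute

microwave_watts = 1000
microwave_seconds = response_wh / microwave_watts * 3600  # 10.8 s, ~10 seconds

print(f"200 W PC:         {pc_seconds:.0f} s")
print(f"1000 W microwave: {microwave_seconds:.1f} s")
```

So the "one minute of PC / ten seconds of microwave" framing checks out arithmetically, conditional on the 3 Wh estimate.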

If I read a book and use phrases from it in my communication, that is covered under fair use; the same should apply to scraping the web, or else we can close the Internet Archive next. Since LLM output isn't copyrightable, I see no issues with that. And copyright law in the US is an abomination that is only useful as a weapon for big companies; small artists don't really profit from it.

[–] verdigris@lemmy.ml 1 points 1 day ago (1 children)

The costs for responses are overblown, but the costs for training are not.

Adding the cost of training, which is a one-time cost, raises ChatGPT's consumption from about 3 Wh to 4 Wh per response. That's the high-end calculation, btw.
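One way to read that amortization claim (the specific training and volume figures below are hypothetical round numbers chosen only to make the 3 Wh → 4 Wh arithmetic concrete; the comment doesn't state them):

```python
# Amortizing a one-time training cost over many responses adds a fixed
# increment per response: total = inference + training / responses.
inference_wh = 3.0                # assumed per-response inference energy
training_gwh = 1.0                # hypothetical one-time training energy (GWh)
responses_served = 1_000_000_000  # hypothetical lifetime response count

training_share_wh = training_gwh * 1e9 / responses_served  # 1 Wh per response
total_wh = inference_wh + training_share_wh
print(total_wh)  # 4.0
```

The point of the structure, whatever the real numbers are: the per-response share of training shrinks as more responses are served, so it only dominates if usage is low.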

[–] petrol_sniff_king@lemmy.blahaj.zone 1 points 2 days ago* (last edited 2 days ago)

> I can create undetectable deepfakes in GIMP or Photoshop.

That is crazy, dude. You gotta teach me. There are soo many impoverished countries I wanna fuck over with this skill.