this post was submitted on 08 Jun 2025

AI


Title, or at least the inverse should be encouraged. This has been talked about before, but with how bad things are getting, and how realistic good AI-generated videos are becoming, anything feels better than nothing. AI-generated watermarks or metadata can be removed, but that's not the point; the point is deterrence. Big tech would comply immediately (at least on the surface, for consumer-facing products), and we would probably see a large decrease in malicious use. People will bypass it, remove watermarks, and fix metadata, but the situation should still be quite a bit better. I don't see many downsides.
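As an aside on how easily such metadata is removed: here is a minimal Python sketch (standard library only) that strips every ancillary chunk from a PNG, including tEXt/iTXt/eXIf metadata, while leaving the pixel data intact. The `ai_generated` tag used in the demo is purely illustrative, not any real labeling standard.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"
CRITICAL = {b"IHDR", b"PLTE", b"IDAT", b"IEND"}  # chunks needed to render pixels

def strip_metadata(data: bytes) -> bytes:
    """Drop every ancillary PNG chunk (tEXt, iTXt, eXIf, ...), keep the pixels."""
    if data[:8] != PNG_SIG:
        raise ValueError("not a PNG file")
    out = bytearray(PNG_SIG)
    pos = 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        end = pos + 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
        if ctype in CRITICAL:
            out += data[pos:end]
        pos = end
    return bytes(out)

def _chunk(ctype: bytes, payload: bytes) -> bytes:
    """Build one PNG chunk with its CRC (used here only to make a test image)."""
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload)))

# A 1x1 grayscale PNG carrying a hypothetical "ai_generated" tEXt tag
tagged = (PNG_SIG
          + _chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
          + _chunk(b"tEXt", b"ai_generated\x00true")
          + _chunk(b"IDAT", zlib.compress(b"\x00\xff"))  # filter byte + one pixel
          + _chunk(b"IEND", b""))

clean = strip_metadata(tagged)
print(b"ai_generated" in tagged)  # True
print(b"ai_generated" in clean)   # False
```

A few lines of chunk parsing defeat any metadata-based label, which is exactly why the value of such a mandate would be deterrence rather than technical robustness.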

[–] queermunist@lemmy.ml 1 points 21 hours ago* (last edited 21 hours ago)

What do you mean by “retrain your model”?

An example of this is DeepSeek-R1's "1776" variant, where someone uncensored it, and now it will talk freely about Tiananmen Square.

I guess this is more accurately called "post-training" rather than "re-training", but my point stands.

If it's possible, hold the model's creators responsible.

Requiring US commercial vendors to implement fingerprinting would disadvantage them against open source models, and against vendors from other countries (like DeepSeek) who wouldn’t comply.

China is very willing to regulate AI development. If the US and China actually cooperated, we'd be able to get a handle on this technology and its development together. And it really looks like the US is the problem: they're the ones who don't want to regulate, they're the ones who don't want to cooperate, and they're the ones with the most problematic companies.

And "open source" models just aren't as problematic as proprietary models. Training a model is still something that requires massive amounts of data, compute, energy, etc etc. The open source models are going to be much smaller and weaker and more specialized and, as a result, less dangerous or in need of regulation anyway.

But I wouldn't cry too hard if the commercial vendors all failed and were replaced with open source, so if open source really can outcompete them, I welcome it. That seems like a really good side effect of regulating the commercial vendors!

The current US government is very unlikely to try in the first place

Well, if we're limiting ourselves to what is likely, nothing will happen.

There will never be any regulations, they won't even try.