this post was submitted on 14 Oct 2025
47 points (100.0% liked)

Asklemmy

50899 readers
1129 users here now

A loosely moderated place to ask open-ended questions


This has been bothering me lately, because I expect that in the future everything will most likely only get worse, until it's impossible to tell whether a product was made by a person or generated by AI. Of course, there won't be any clear labeling; most likely everything will resemble a landfill where who knows whether AI or people made it, and trusting corporations, as you know, is a bad idea; they are lying hypocrites.

In that case, are there any databases or online archives containing content created exclusively by humans? That is, books, films, TV series, cartoons, etc.?

top 38 comments
[–] cerebralhawks@lemmy.dbzer0.com 15 points 3 days ago (1 children)

There are tells, but it's getting harder and harder. One thing is, you have to look for the people.

Case in point: a few weeks ago I discovered some really good covers of the KPop Demon Hunters songs. If you don't know the scene, this Netflix musical has been making record-breaking profits and numbers, and everyone and their brother wants a piece. And the music is so good that tons of people are doing covers.

A few months ago, "is it on Apple Music/Spotify" wouldn't have been a tell, but now it is. So a lot of these covers are on paid streaming, because covers are perfectly acceptable legally. It's fair use. However, recently AI generated music has started to come up, and the streaming services have recently put their feet down and said "no more." So when you're looking at such a cover on YouTube, it's probably going to have streaming links. It's more convenient to listen on one of those services than it is to watch YouTube, and Spotify pays more than YouTube, and Apple Music pays more than both of them. So that's where they want you to listen. When people are all over the comments saying "put it on Spotify" and they say "nah we're YouTube exclusive," what they're saying is, they aren't allowed on Spotify (or Apple Music). They do want more money, but they won't get any on platforms that ban generative AI. So they stick to YouTube, which is one platform that allows it.

With legitimate art, you can usually find the human behind it. With some art, the artist will want to remain anonymous. In anime, for example, a lot of these artists are underage, and they're savvy, they're not putting themselves out there on social media beyond the art and their anonymous comments. They're still more human than AI, they just won't show their face or say where they're posting from. (And that's just good OPSEC in general.)

There's also frequency. Art takes time: weeks or months, depending on what it is. AI can do it in seconds. So if someone is posting whole new pieces every day, or every other day, there's a solid chance they're using AI to make them.

Not all AI slop is completely made by AI. Sometimes they take stuff made by humans and use AI to enhance it somehow. That's what the KPDH stuff was. They were using an AI tool to separate the stems (the individual instruments) and enhancing each one, changing some, altering others.

Anyway, now in 2025, it's much harder than it ever was before to spot AI slop. The time of six-fingered hands is gone. Next year, the year after, and certainly over the next decade, it's going to be next to impossible to tell.

What's worse, these days, AI-made stuff is still prompted by humans. All AI slop has a human behind it. But what happens when the AI starts doing this stuff on its own? Right now, humans initiate interactions with AI. Soon, AI will initiate them. It could be doing it already; we don't know.

[–] Old_Dread_Knight@lemmy.world 2 points 2 days ago

What's worse, these days, AI-made stuff is still prompted by humans. All AI slop has a human behind it. But what happens when the AI starts doing this stuff on its own? Right now, humans initiate interactions with AI. Soon, AI will initiate them. It could be doing it already; we don't know.

Well, nothing surprising: AI will really learn to generate content by itself, without human intervention, and will do it so beautifully and effectively that you might even fall in love with it.

Therefore, the sooner people create offline libraries of human-made work, and fill those libraries only with human art, the better.

[–] wuphysics87@lemmy.ml 5 points 2 days ago

Form meaningful trusting relationships with other people

[–] PearOfJudes@lemmy.ml 3 points 2 days ago

Long story short: don't let an algorithm choose the content you watch.

[–] wowwoweowza@lemmy.world 5 points 2 days ago* (last edited 2 days ago) (1 children)

If you are holding the book in your hand and it was published before 2015, then that content is almost certainly the product of human action.

[–] Old_Dread_Knight@lemmy.world 4 points 2 days ago (1 children)

Yes, you should have bought books before 2022, or even before 2020, because book texts can now be distorted by AI.

[–] wowwoweowza@lemmy.world 2 points 2 days ago

Libraries and physical archives!

[–] SuluBeddu@feddit.it 5 points 3 days ago (1 children)

I recently made a pdf with some of my notes on hints of AI in images and music, but I'm not sure how to send files here

It's not easy, of course, and it will get harder with time, but I am convinced we can tell if we train ourselves a bit, because there are clear differences between the creative processes of humans and machines, which will always result in different biases.

With time, I think we'll learn to trust only people we have some social connection with, so we know they are real and they don't use AI (or they use it only up to a level acceptable to us).

[–] Jimmycrackcrack@lemmy.ml 1 points 3 days ago* (last edited 2 days ago) (1 children)

Well what have you come up with then?

[–] SuluBeddu@feddit.it 1 points 2 days ago (1 children)

Two that I noticed are:

For drawings in the Ghibli style, you can see noise in areas that should be all the same colour. That's because of how the diffusion model works: it's very hard for it to reproduce a total lack of variation in colour. In fact, that noise will always exist; it's just more noticeable in simple styles.

For music, specifically with Suno, it tends to reuse similar-sounding instruments across different tracks of the same specified genre, and those sounds might change during a track and never return to their original sound. (Because it generates the track section by section from start to end, the transformer model feeds the last sections back as input to generate the new ones, amplifying any biases in the model.)
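The first tell, residual noise in flat-colour areas, can be checked roughly by measuring local variance. Below is a toy NumPy sketch, not a real detector; the patch size, noise level, and the comparison values are made-up illustration choices.

```python
import numpy as np

def patch_variance(img: np.ndarray, x: int, y: int, size: int = 16) -> float:
    """Variance of pixel values in a size-by-size patch starting at (x, y)."""
    patch = img[y:y + size, x:x + size].astype(np.float64)
    return float(patch.var())

# A hand-filled flat area has variance exactly zero; diffusion output
# tends to carry faint per-pixel noise even in areas meant to be flat.
rng = np.random.default_rng(0)
flat = np.full((16, 16), 200.0)                # truly uniform colour
noisy = flat + rng.normal(0, 1.5, flat.shape)  # faint residual noise

print(patch_variance(flat, 0, 0))   # 0.0
print(patch_variance(noisy, 0, 0))  # small but clearly non-zero
```

In practice you would scan many patches across the image, but the asymmetry is the point: genuinely flat fills sit at zero variance, generated "flats" rarely do.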

[–] Jimmycrackcrack@lemmy.ml 1 points 2 days ago (1 children)

I wonder if the noise situation would still be apparent if the model trained only on Ghibli style anime drawings.

[–] SuluBeddu@feddit.it 1 points 2 days ago

Yes, I don't think it's a matter of training.

The diffusion model generates pictures by starting from a canvas of random pixels, then editing those pixel colours and carving the picture out of that chaos.

To achieve an area of one uniform colour, it would need to output very exact values at the last generation step.

It can be fixed easily with a very subtle lowpass filter, but that would be human intervention; the model itself will have a hard time replicating it.
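For illustration, such a subtle lowpass can be as simple as a small box blur that averages each pixel with its neighbours. This is a hand-rolled NumPy sketch of the idea; in practice an artist would just use an image editor's blur tool.

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int = 1) -> np.ndarray:
    """Lowpass an image by averaging each pixel over a (2r+1)^2 window."""
    k = 2 * radius + 1
    padded = np.pad(img.astype(np.float64), radius, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# A "flat" area with faint diffusion-style noise: the blur pulls the
# variance down toward zero, flattening the colour as a human would.
rng = np.random.default_rng(0)
flat = np.full((32, 32), 128.0) + rng.normal(0, 2.0, (32, 32))
print(flat.var(), box_blur(flat).var())  # variance drops after the blur
```

Averaging independent noise over nine pixels cuts its variance by roughly a factor of nine, which is why even a very small radius is enough to erase the tell.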

[–] PearOfJudes@lemmy.ml 2 points 2 days ago

Don't watch short form content. Try not to watch faceless youtube channels unless they are obviously real and trusted.

[–] lunatique@lemmy.ml 2 points 2 days ago

In the future you won't be able to tell. This is about the time you'll have to worry about the fate of humanity.

[–] etchinghillside@reddthat.com 2 points 3 days ago (2 children)

Computer algorithms have been enhancing movies and pictures for a while now.

[–] Old_Dread_Knight@lemmy.world 2 points 2 days ago

Well, yes, you're right. They also replace some phrases with others if a person says something wrong, and you might notice it on popular channels.

[–] wowwoweowza@lemmy.world 1 points 2 days ago (1 children)

Also: could some interested Lemmy user create /c/slopornot?

[–] BananaIsABerry@lemmy.zip -1 points 2 days ago (1 children)

I'd like to preface this by saying that I like AI content for my own amusement and sometimes for convenience. I think it's neither the best nor the worst thing to ever happen to the world, as so many Lemmy users seem to.

In its current state, most AI-generated content (images, video, even text) has some general tells. Text tends to lean on certain phrases and formatting. Pictures and video both still contain noticeable artifacts that give them away, though that is becoming less prevalent over time. The tells are a lot more noticeable once you use the tools yourself, and overcoming the patterns is difficult without manual intervention.

I think you have to ask yourself what degree of human involvement is the cutoff for you. Is it only 100% non-generated content? Even before the sudden LLM push, that would have been really difficult to find. A lot of software, Photoshop and predictive text for example, has used machine learning to improve its algorithms for years. It's not likely you'll find anything unassisted anymore. And what degree of human-made modification AFTER something is generated is enough to consider it good enough? If I start with a generated image but significantly modify it with an image editor to fix issues and finalize the 'vision', is that enough?

I personally think you'll have to create your own compass. Bad content is bad regardless of how it's made. And if you cannot tell whether a human or a machine made it, does it really matter?

[–] Old_Dread_Knight@lemmy.world 1 points 2 days ago (1 children)

I personally think you'll have to create your own compass. Bad content is bad regardless of how it's made. And if you cannot tell whether a human or a machine made it, does it really matter?

Consuming something that was simply generated with minimal effort is extremely frustrating. It's like watching a crappy Netflix show, except now there aren't even real people in it; it's just generated content. And what's the point of that, when I can generate content for myself using AI?

[–] BananaIsABerry@lemmy.zip 1 points 2 days ago

Agreed. I wouldn't want to be sold something just plainly generated by someone else, especially if they didn't put any effort into it.

I have had a pretty good time coming up with stupid prompts with friends, but I think the social aspect is doing the heavy lifting.

[–] m532@lemmygrad.ml 0 points 3 days ago

The tools used in creation have next to nothing to do with the quality of the output.