81 points (95.5% liked)
submitted 22 Jul 2023 by BrikoX@lemmy.zip to c/technology@lemmy.world
top 11 comments
[-] Sigma_@lemmy.world 37 points 1 year ago

Detecting real video as fake seems problematic because it could lead to apathy -- folks just stop believing any video at all. Similar to Trump's "everything is fake news" approach.

[-] dojan@lemmy.world 16 points 1 year ago

Thus far these detectors kind of suck, both for deepfakes and for AI-generated text. They're biased against non-native speakers, and using them in an academic setting can end up punishing students who aren't cheating.

The genie was let out of the bottle much too early.

[-] Starbuck@lemmy.world 11 points 1 year ago* (last edited 1 year ago)

I used to work in the field of image forensics a few years ago, right as GAN technology was entering the scene. Even when it was only making 200x200 pixel faces, everyone in the industry was starting to panic. Everything we had at the time was based on detecting inconsistencies in the pixel content, finding repeating structures that indicated copy/paste attacks, or looking for metadata inconsistencies.

For pixel inconsistencies, you can examine how the JPEG image is encoded and look for blocks that aren't encoded consistently. This paper covers DCT and some other approaches: https://scholar.google.com/scholar?q=dct+image+forensics&hl=en&as_sdt=0&as_vis=1&oi=scholart#d=gs_qabs&t=1690073435801&u=%23p%3DKmFtRm3WpQ8J That's just one example, but it's ultimately looking for things like someone photoshopping a region out or patching something in.
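
As a rough illustration of that block-level idea (not the method from the linked paper), here's a minimal Python sketch that computes per-block DCT energy and flags blocks whose statistics stand out. The numpy/scipy/Pillow usage, the `suspect.jpg` filename, and the z-score threshold are all just assumptions for the example:

```python
# Hedged sketch: flag 8x8 blocks whose DCT-coefficient statistics deviate
# from the rest of the image -- a rough proxy for "this block was encoded
# differently", e.g. a pasted-in region. Filename and threshold are made up.
import numpy as np
from PIL import Image
from scipy.fft import dctn

def block_ac_energies(gray: np.ndarray) -> np.ndarray:
    """AC (non-DC) DCT energy for every aligned 8x8 block."""
    h, w = gray.shape
    h, w = h - h % 8, w - w % 8              # crop to a multiple of the block size
    energies = np.zeros((h // 8, w // 8))
    for by in range(h // 8):
        for bx in range(w // 8):
            block = gray[by*8:(by+1)*8, bx*8:(bx+1)*8].astype(np.float64)
            coeffs = dctn(block, norm="ortho")
            coeffs[0, 0] = 0.0                # drop the DC term
            energies[by, bx] = np.sum(coeffs ** 2)
    return energies

gray = np.array(Image.open("suspect.jpg").convert("L"))
energy = block_ac_energies(gray)
# Blocks far from the image-wide median may have a different compression history.
z = np.abs(energy - np.median(energy)) / (energy.std() + 1e-9)
print("suspicious 8x8 blocks (row, col):", np.argwhere(z > 3.0))
```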

Similarly, copy-move detection looks for "edges" and "intersections" in an image and builds constellations of points, which you can then compare using scale-invariant transforms to find duplicated regions. This article covers an example where North Korea tried to make their landing force look more impressive: https://www.theguardian.com/world/2013/mar/27/north-korea-photoshop-hovercraft
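
For the curious, here's a hedged sketch of the keypoint flavour of copy-move detection, using OpenCV's SIFT (scale-invariant features) and matching an image's descriptors against themselves; the distance thresholds and the filename are illustrative, not taken from any specific paper:

```python
# Hedged sketch of copy-move detection: match SIFT keypoints within the same
# image and keep near-identical descriptors that sit far apart spatially.
# Requires OpenCV with SIFT (>= 4.4 or opencv-contrib); thresholds are illustrative.
import cv2
import numpy as np

img = cv2.imread("suspect.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# k=2 because the best match of a descriptor against itself is... itself.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(descriptors, descriptors, k=2)

candidates = []
for pair in matches:
    if len(pair) < 2:
        continue
    best, second = pair
    m = second if best.queryIdx == best.trainIdx else best
    p1 = np.array(keypoints[m.queryIdx].pt)
    p2 = np.array(keypoints[m.trainIdx].pt)
    # Nearly identical local patches in two distant places suggest cloning.
    if m.distance < 100 and np.linalg.norm(p1 - p2) > 40:
        candidates.append((tuple(p1), tuple(p2)))

print(f"{len(candidates)} candidate copy-move keypoint pairs")
```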

The problem is that when the entire image is forged, there is no baseline to detect against. The whole thing is uniformly fake. So we're back to the old "I can tell by looking at it" approach, which is extremely imprecise and labor-intensive. In fact, if you look at how GANs work, it's trivial to embed any detector algorithm into the training process and make something that also defeats that detector.
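
To make that last point concrete, here's a toy PyTorch sketch (not any real system) of folding a frozen, differentiable detector into the generator's loss so the generator learns to fool it as well as the discriminator; the tiny MLPs, shapes, and the 0.5 weight are all arbitrary assumptions:

```python
# Toy sketch: one generator update where the loss also penalises being caught
# by an external "fake detector". Assumes the detector is differentiable (or
# approximated by a differentiable surrogate). All shapes/weights are arbitrary.
import torch
import torch.nn as nn

latent_dim, img_dim = 16, 64
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, img_dim))
discriminator = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU(), nn.Linear(128, 1))
detector = nn.Sequential(nn.Linear(img_dim, 1))  # stand-in for any third-party detector
detector.requires_grad_(False)                   # we never train the detector itself

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)

z = torch.randn(32, latent_dim)
fake = generator(z)
want_real = torch.ones(32, 1)

opt_g.zero_grad()
loss_gan = bce(discriminator(fake), want_real)   # usual goal: fool the discriminator
loss_det = bce(detector(fake), want_real)        # extra goal: also fool the detector
(loss_gan + 0.5 * loss_det).backward()
opt_g.step()
```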

[-] Oshka@kbin.social 2 points 1 year ago

As someone not in the industry this is fascinating!

[-] Starbuck@lemmy.world 2 points 1 year ago

To get an idea of how these techniques work, this is a great tool for laymen: https://29a.ch/photo-forensics/

Try uploading something from https://thispersondoesnotexist.com/ and see how badly it fails.

[-] DolphLundgren@lemmy.world 6 points 1 year ago* (last edited 1 year ago)

This seems like a very bad idea. I'm concerned that having a test might cause people to suspend their own critical thinking, and it may have other issues, like being inaccurate or prompting deepfake tech to simply leapfrog over it, at which point the fakes would benefit from a false stamp of authenticity.

[-] lagomorphlecture@lemm.ee 0 points 1 year ago

The problem is that, yes, that could happen, but without a test we soon won't be able to trust or believe anything.

[-] M0oP0o@mander.xyz 5 points 1 year ago

This seems very close to owning the truth, and it could be the start of some very dark business.

[-] TheYear2525@lemmy.world 2 points 1 year ago

Yup, if you think “Fox News truth” is a problem now, wait until there’s “Intel Truth” vs “AMD Truth”.

[-] Nobug404@geddit.social 2 points 1 year ago

AI is the solution to everything. Even AI.

[-] gendulf@lemmy.world 1 points 1 year ago

Seems shopped.
