643
submitted 10 months ago by L4s@lemmy.world to c/technology@lemmy.world

The White House wants to 'cryptographically verify' videos of Joe Biden so viewers don't mistake them for AI deepfakes::Biden's AI advisor Ben Buchanan said a method of clearly verifying White House releases is "in the works."

[-] DrCake@lemmy.world 47 points 10 months ago

Yeah, good luck getting the general public to understand what “cryptographically verified” videos means

[-] patatahooligan@lemmy.world 21 points 10 months ago

The general public doesn't have to understand anything about how it works as long as they get a clear "verified by ..." statement in the UI.

[-] kandoh@reddthat.com 4 points 10 months ago

The problem is that even if you reveal the video as fake, the feeling it reinforces in the viewer stays with them.

"Sure, that was fake, but the fact that it seems believable tells you everything you need to know"

[-] go_go_gadget@lemmy.world 3 points 10 months ago* (last edited 10 months ago)

"Herd immunity" comes into play here. If those people keep getting dismissed by most other people because the video isn't signed they'll give up and follow the crowd. Culture is incredibly powerful.

[-] BradleyUffner@lemmy.world 17 points 10 months ago

It could work the same way the padlock icon worked for SSL sites in browsers back in the day. The video player checks the signature and displays the trusted icon.
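That padlock flow can be sketched in a few lines. This is a toy, assuming a shared demo key and stdlib HMAC as a stand-in for the asymmetric signatures (e.g. Ed25519 plus a certificate chain) a real deployment would use; every name here is made up:

```python
import hashlib
import hmac

# Stand-in for the publisher's key. A real scheme would use an asymmetric
# keypair: sign with the private key, verify with the published public key.
SIGNING_KEY = b"white-house-demo-key"  # hypothetical

def sign_video(video_bytes: bytes) -> bytes:
    """Publisher side: produce a signature over the raw video bytes."""
    return hmac.new(SIGNING_KEY, video_bytes, hashlib.sha256).digest()

def player_badge(video_bytes: bytes, signature: bytes) -> str:
    """Player side: show the 'padlock' only if the signature checks out."""
    if hmac.compare_digest(sign_video(video_bytes), signature):
        return "verified: White House"
    return "unverified"

video = b"\x00\x01demo-mp4-bytes"
sig = sign_video(video)
print(player_badge(video, sig))            # verified: White House
print(player_badge(video + b"edit", sig))  # unverified
```

The key point is that the badge is computed by the player from the signature, not baked into the video, so it can't be copied into a fake.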

[-] Natanael@slrpnk.net 3 points 10 months ago

It needs to focus on showing who published it, not the icon

[-] FunderPants@lemmy.ca 14 points 10 months ago

Democrats will want cryptographically verified videos, Republicans will be happy with a stamp that has Trump's face on it.

[-] wizardbeard@lemmy.dbzer0.com 2 points 10 months ago* (last edited 10 months ago)

I mean, how is anyone going to cryptographically verify a video? You either have an icon in the video itself or displayed near it by the site, which means nothing, since fakers can just copy it into theirs. Alternatively you have to sign or make file hashes for each permutation of the video file sent out. At that point, how are normal people actually going to verify? At best they're trusting the video player of whatever site they're on to be truthful when it says the video is verified.

Saying they want to do this is one thing, but as far as I'm aware, we don't have a solution that accounts for the rampant re-use of presidential videos in news and secondary reporting either.

I have a terrible feeling that this would just be wasted effort beyond basic signing of the video file uploaded on the official government website, which really doesn't solve the problem for anyone who can't or won't verify the hash on their end.
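For what "basic signing" buys (and doesn't), a minimal sketch of hash verification — the file bytes and workflow are hypothetical: a byte-identical download checks out, while a re-encoded copy from a news site fails even though it looks the same to a viewer:

```python
import hashlib

def file_sha256(data: bytes) -> str:
    """Digest a video file's raw bytes (read from disk in practice)."""
    return hashlib.sha256(data).hexdigest()

# What the official site would publish alongside the video (hypothetical).
official = b"press briefing, 2024-02-11"
published_digest = file_sha256(official)

# A byte-identical download verifies cleanly.
downloaded = b"press briefing, 2024-02-11"
print(file_sha256(downloaded) == published_digest)  # True

# A news outlet's re-encoded copy changes every byte, so the check fails
# even though the content is "the same" to a human viewer.
reencoded = b"press briefing, 2024-02-11 (transcoded)"
print(file_sha256(reencoded) == published_digest)   # False
```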


Maybe some sort of visual- and audio-based hash, like MusicBrainz IDs for songs, which are independent of the file itself and based instead on the sound of it. Then the government runs a server kind of like a PGP key server, and websites could integrate functionality to verify against it. But at the end of the day it still works out to an "I swear we're legit, guys" stamp for anyone not technical enough to verify independently themselves.
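A toy illustration of that idea, assuming a made-up 16-bit hash over chunk loudness rather than any real perceptual-hash algorithm: small distortions from re-encoding leave the hash unchanged, where a byte-level hash would change completely:

```python
def perceptual_hash(samples: list[float], bits: int = 16) -> int:
    """Toy perceptual hash: one bit per chunk, set when that chunk's mean
    loudness is above the overall mean. Tolerant of small distortions,
    unlike a byte-level hash of the file."""
    chunk = max(1, len(samples) // bits)
    means = [sum(samples[i * chunk:(i + 1) * chunk]) / chunk
             for i in range(bits)]
    overall = sum(means) / len(means)
    h = 0
    for m in means:
        h = (h << 1) | (1 if m > overall else 0)
    return h

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

audio = [float((i * 7) % 13) for i in range(160)]  # fake loudness samples
noisy = [s + 0.01 for s in audio]                  # mild re-encode artifact
print(hamming(perceptual_hash(audio), perceptual_hash(noisy)))  # 0
```

A real system would allow a small Hamming distance as a match threshold rather than requiring exact equality.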


I guess your post just seemed silly when the end result of this, for anyone, is effectively the equivalent of your "signed by Trump" stamp, unless the public magically gets serious about downloading and verifying everything themselves independently.

Fuck Trump, but there are much better ways to shit on king cheeto than pretending the average populace is anything but average based purely on political alignment.

You have to realize that to the average user, any site serving videos seems as trustworthy as youtube. Average internet literacy is absolutely fucking abysmal.

[-] technojamin@lemmy.world 4 points 10 months ago

People aren’t going to do it; the platforms that 95% of people use (Facebook, TikTok, YouTube, Instagram) will have to add the functionality to their video players/posts. That’s the only way anything like this could be implemented by the 2024 US election.

[-] beefontoast@lemmy.world 2 points 10 months ago

In the end people will realise they cannot trust any media served to them. But it's just going to take time for people to realise... And while they are still blindly consuming it, they will be taken advantage of.

If it goes down this road... Social media could be completely undermined. It could become the downfall of these platforms and do everyone a favour by giving them their lives back after years of endless doomscrolling.

[-] Strykker@programming.dev 1 points 10 months ago

Do it basically the same way TLS verification works. Sure, the browsers would have to add something to the UI to support it, but claiming you can't trust that is dumb because we already use it to trust that the site you're on is your bank and not some scammer.

Sure, not everyone is going to care to check, but the check being there allows people who do care to reply back saying the video is faked because of X.

[-] makeasnek@lemmy.ml 5 points 10 months ago

"Not everybody will use it and it's not 100% perfect so let's not try"

[-] NateNate60@lemmy.world 1 points 10 months ago

That's not the point. It's that malicious actors could easily exploit that lack of knowledge to trick users into giving fake videos more credibility.

If I were a malicious actor, I'd put the words "✅ Verified cryptographically by the White House" at the bottom of my posts and you can probably understand that the people most vulnerable to misinformation would probably believe it.

[-] maynarkh@feddit.nl 0 points 10 months ago

Just make it a law that if, as a social media company, you allow unverified videos to be posted, you don't get safe-harbour protections from libel suits for them. It would clear right up. As long as the source of trust is independent of the government or even big business, it would work and be trustworthy.

[-] General_Effort@lemmy.world 15 points 10 months ago

Back in the day, many rulers allowed only licensed individuals to operate printing presses. It was sometimes even required that an official should read and sign off on any text before it was allowed to be printed.

Freedom of the press originally means that exactly this is not done.

[-] FunderPants@lemmy.ca 6 points 10 months ago

Jesus, how did I get so old only to just now understand that press is not journalism, but literally the printing press in 'Freedom of the press'.

[-] vithigar@lemmy.ca 1 points 10 months ago* (last edited 10 months ago)

You understand that there is a difference between being not permitted to produce/distribute material and being accountable for libel, yes?

"Freedom of the press" doesn't mean they should be able to print damaging falsehood without repercussion.

[-] General_Effort@lemmy.world 10 points 10 months ago

What makes the original comment legally problematic (IMHO), is that it is expected and intended to have a chilling effect pre-publication. Effectively, it would end internet anonymity.

It's not necessarily unconstitutional. I would have made the argument if I thought so. The point is rather that history teaches us that close control of publications is a terrible mistake.

The original comment wants to make sure that there is always someone who can be sued/punished, with obvious consequences for regime critics, whistleblowers, and the like.

[-] Dark_Arc@social.packetloss.gg 1 points 10 months ago* (last edited 10 months ago)

We need to take history into account but I think we'd be foolish to not acknowledge the world has indeed changed.

Freedom of the press never meant that any old person could just spawn a million press shops and peddle whatever they wanted. At best the rich could, and nobody was anonymous for long at that kind of scale.

Personally I'm for publishing via proxy (i.e. an anonymous tip that a known publisher/person is responsible for) ... I'm not crazy about "anybody can write anything on any political topic and nobody can hold them accountable offline."

[-] vithigar@lemmy.ca -2 points 10 months ago

So your suggestion is that libel, defamation, harassment, et al are just automatically dismissed when using online anonymous platforms? We can't hold the platform responsible, and we can't identify the actual offender, so whoops, no culpability?

I strongly disagree.

[-] Supermariofan67@programming.dev 1 points 10 months ago

That's not what the commenter said and I think you are knowingly misrepresenting it.

[-] vithigar@lemmy.ca 1 points 10 months ago

I am not. And if that's not what's implied by their comments then I legitimately have no idea what they're suggesting and would appreciate an explanation.

[-] bionicjoey@lemmy.ca 3 points 10 months ago

As long as the source of trust is independent of the government or even big business, it would work and be trustworthy

That sounds like wishful thinking

this post was submitted on 11 Feb 2024
643 points (97.9% liked)