this post was submitted on 29 Jul 2024
102 points (100.0% liked)
Politics
It's been brought up and mentioned a few times, but it just gets kind of swept under the rug. The crazy part is that all of this has been doable basically since a couple of months after the boom in ~April of '21 (or maybe '22; the years have really muddled together for me), and it has only been getting easier.
Back when some versions of this were done with Trump and other politicians, the consensus among the AI-inclined was basically, "oh, well, if we just make so much of it that none of it seems real, then we'll be able to tell what's real from what isn't and everyone will see how obvious it is." All without realizing that people on Facebook will see a 256p image and think the messiah is real and God himself told them to believe it.
The scary part is there doesn't really seem to be an effective way to stop this. Running AI locally is the best way to use it, but that inherently means trusting that people won't abuse it to make deepfakes. I think even a ban wouldn't matter, since there's a high chance much of this isn't even done in the U.S. (I'd guess a lot of it is, but not all of it; that just seems improbable). Real videos can't exactly get some sort of Government Seal, since the seal would just be replicated, and I'm not sure the average person would understand an MD5 checksum; they'd probably assume the real video was the fake one.
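For context, the checksum verification being discussed here is trivial for a technical user but genuinely opaque to most people. A minimal sketch in Python of what it involves (the file path and published digest are hypothetical):

```python
import hashlib

def file_checksum(path: str, algo: str = "sha256") -> str:
    """Compute a hex digest of a file, reading in chunks so large videos fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# A viewer would compare this digest against one published by the video's
# original source; if even one byte of the file differs, the digest changes
# completely. Expecting the average viewer to do this for every clip they
# see is exactly the problem.
```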
Sam Altman from OpenAI tried to give us a "Made by Humans" seal, but everyone balked at Worldcoin and scanning people's biometrics with a floating orb... so here we are.
Yeah, I remember that: an idea so good that we may as well have validated official videos with blockchain transactions, like NFTs. The solution was in front of us this whole time!! /s /eyeroll
I know, this possibility has got me worried ever since then. And these things will get harder and harder to spot as the technology moves forward.
Of course. The technology is out in the wild, and everyone willing to use it for their own purposes will do just that, whether for good or bad. Imo this is a highly complex issue with no easy solution.
This is pretty much useless on everything uploaded to Facebook, Instagram or other mainstream social media platforms.
Well, it ought to be just another skill of the modern world, like reading, critical thinking, media literacy, paying your taxes, etc. I think just saying "the average person won't understand this" is a bit of a cop-out. Like, yes, you're right, but there's no other way around it: origin verification needs to be implemented as a core feature of social media.
This stuff isn't just going to get better by itself. AI will only get better at faking as time goes on.
Oh sure, I mean, I wish that could be the case. I was speaking more from our actual situation, which is that education ends at grade 12. For anybody over the age of 18, there's no legal requirement to learn anything unless you go out of your way to get a license for it (which is pretty much... your driver's license). How do we practically teach an entire country that is not in school? And these days we have the whole "can it even be trusted?" crowd. How do we teach the elderly, who have trouble using smartphones, to verify whether an image is AI or not? And most of all, if it's saying something you already agree with, is there even a point in verifying it? People may just not care in the first place.
When I was in school we had some classes on internet education, and I had tech-savvy parents (young in the '90s), and I remember the checklist for vetting information you find on the internet. It was taught for a while, and I'm sure it still is, but so few people actually take the time to go through each cited source, find the author, and check every little aspect to verify its trustworthiness.
And now it's been 20 years and basically everything on the internet is taken at face value, with rational content increasingly littered with irrational information over time. I mean, forwardsfromGrandma and those "post this to your wall so your niece doesn't die tonight" chain-emails-turned-Facebook-posts. I've also personally felt a significant downtrend in media literacy and critical thinking skills overall, which makes it hard to find hope sometimes. It would have to be a system more fleshed out than posting a checksum to validate against, because scammers are going to scam, and they'll just hand you their own MD5 to validate their fake against; and then we'd have a verification system as complicated as the U.S. tax code.
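The point about scammers supplying their own MD5 can be shown concretely: a checksum only proves a file wasn't altered in transit, not that it came from a trustworthy source. A sketch of the failure mode (byte strings stand in for video files, and the "published" checksum is the hypothetical one the scammer distributes):

```python
import hashlib

real_video = b"authentic footage bytes"
fake_video = b"deepfaked footage bytes"

# The scammer simply publishes the checksum of their own fake alongside it...
published_checksum = hashlib.md5(fake_video).hexdigest()

# ...and the fake "validates" perfectly against the checksum they provided.
assert hashlib.md5(fake_video).hexdigest() == published_checksum

# Integrity passes; authenticity was never established. Binding a hash to an
# identity would need something extra (e.g., a digital signature from a key
# people already trust), which is exactly the kind of complexity the average
# viewer won't wade through.
```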
So despite how much I want us to find a solution, I'm just not convinced that a country with a decimated education system and a historical lack of interest in internet and computer literacy is going to see any major reform any time soon. That said, there was that recent deepfake bill passed unanimously, so I think legislation will be implemented over time; I'm just... uncertain how any of it would reach people who aren't currently enrolled in school, which happens to be most of the population. And this is just the U.S.! Although I have a feeling much of the rest of the world won't have as big an issue finding an implementation.
I'm usually not so defeatist; I'm usually even pretty good at seeing what steps would need to be taken to resolve a problem. It's just that this isn't something intentionally designed so that companies can make money off it, like the tax code is; I mean, that would effectively be whatever scheme Sam Altman proposed. In this case, the entire world needs to be able to distinguish reality from manufactured content, all while anyone with a GPU is able to create it.
All in all, I absolutely agree that we have to find a way to incorporate proper verification into our critical thinking skills, but my point is that those skills are exactly what have been failing and declining, and states like Florida, Arizona, Ohio, and Idaho are trying to ensure that continues for the youth in those areas. If we can't even trust states to educate our children properly, how will we educate the average American on a moderately complex topic? Moreover, it has to be simple, because there's no way people will validate their content if it's like filing taxes every time you scroll to the next reel/short, and there's no guarantee they would even care in the first place. There are just a lot of factors that make this harder to solve effectively than most problems.