you are viewing a single comment's thread
this post was submitted on 13 Jun 2024
269 points (100.0% liked)
Technology
So this could go one of two ways, I think:
3. If you lie about it and get caught, people will correctly call you a liar and ridicule you, and you'll lose their trust. Trust is essential for content creators, so you're spelling your own doom. And if you find a way to lie without getting caught, you weren't part of the problem anyway.
I think the first half of yours is the same as my first point, and I think a lot of artists aren't against AI that produces worse art than them; they're against AI art that was generated using stolen art. Creators wouldn't be part of the problem if they could honestly say they trained using only ethically licensed content, or their own.
I was about to disagree, but that's actually really interesting. Could you expand on that?
Do you mind if I address this comment alongside your other reply? Both are directly connected.
If you want to lie without getting caught, your public submission should have neither the hallucinations nor the stylistic issues associated with "made by AI". To achieve that, you need to consistently review the output of the generator (LLM, diffusion model, etc.) and manually fix it.
In other words, to lie without getting caught, you end up removing what made the output problematic in the first place. The problem was never people using AI to do the "heavy lifting" and increase their productivity by 50%; it was people increasing their output by 900%, submitting ten really shitty pics or paragraphs that look a lot like someone else's work instead of one decent, original piece. Those are the ones who'd get caught, because they're doing what you called "dumb" (and I agree): not proofreading their output.
Regarding code, from your other comment: note that some Linux and *BSD projects, like Gentoo and NetBSD, have banned AI-generated submissions. I believe it's the same deal there as with news or art.
Yes, sorry, I didn't realise I was replying to the same user twice.
Exactly. I guess I'm conditioned to expect "AI is smoke and mirrors" type comments, and that's not true. These models are genuinely quite impressive and can make intuitive leaps they weren't directly trained for. What they're not is aligned: they just aim to produce human-like output, regardless of truth, broader context, or morality, because that's the only way we know how to train them.
I definitely hate searching for something and finding a website that almost reads as human, with fake "authors", but provides no useful information. And I really worry for people who are less experienced at spotting AI errors and filler. That's a moral issue, though, as opposed to a practical one; it seems to make ad money perfectly well for the "creators".
TIL. They're going to have trouble identifying rule-breakers if contributors use the tool correctly in the way we've discussed, though.
Why Not Both?