this post was submitted on 15 Jun 2024
35 points (60.4% liked)
Technology
The whole point of the Turing test is that you should be unable to tell whether you're interacting with a human or a machine. Not 54% of the time. Not 60% of the time. 100% of the time. Consistently.
They're changing the conditions of the Turing test to promote an AI model that would get an "F" on any school test.
But you have to select whether it was human or not, right? So if you genuinely can't tell, you'd expect 50%. That's different from "I can tell, and I know this is a human" and then being wrong. Now that we know the bots are this good, I'm not sure how people will decide how to answer these tests. They'll encounter something that seems human-like and essentially guess based on minor clues, so there will be inherent randomness. If something were a really crappy bot, it would never fool anyone and the result would be 0%.
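To illustrate the point with a toy simulation (my own sketch, not from the paper being discussed): if judges truly cannot distinguish the bot from a human, their verdicts are coin flips, so the bot gets labeled "human" about half the time, never 100% of the time.

```python
import random

# Toy model of a Turing-test judging session (hypothetical numbers).
# Each judge must answer "human" or "bot". If the bot is
# indistinguishable, the judge is effectively guessing at random.
random.seed(0)
trials = 100_000
judged_human = sum(random.random() < 0.5 for _ in range(trials))
rate = judged_human / trials
print(f"indistinguishable bot judged human: {rate:.1%}")  # close to 50%
```

So a ~50% "judged human" rate is exactly what perfect indistinguishability predicts under forced-choice judging, while 0% would mean the bot always gives itself away.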
No, Turing's original imitation game has a machine trying to convince an interrogator that it is a woman, while a real woman tries to help the interrogator make the right choice. This is manipulative rubbish. The experiment was designed from the start to manufacture these results.