[-] fine_sandy_bottom@lemmy.dbzer0.com 70 points 10 months ago

Let me guess...

  • not very accurate
  • needs to be trained on an individual's brain
[-] RainfallSonata@lemmy.world 44 points 10 months ago

Although DeWave only achieved just over 40 percent accuracy based on one of two sets of metrics in experiments conducted by Lin and colleagues, this is a 3 percent improvement on the prior standard for thought translation from EEG recordings.

The Australian researchers who developed the technology, called DeWave, tested the process using data from more than two dozen subjects. Participants read silently while wearing a cap that recorded their brain waves via electroencephalogram (EEG) and decoded them into text.

Yep.

[-] themurphy@lemmy.world 34 points 10 months ago

When the number of test subjects is that low, it almost feels like the 3% improvement might as well be a coincidence.
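
A rough, back-of-the-envelope illustration of that point. All the numbers below are assumptions, not the paper's data: the article only says "more than two dozen" participants and a roughly 3-point gain over a ~40% prior standard, and the between-subject spread is a pure guess.

```python
# Hypothetical significance check: with ~24 subjects, is a ~3-point
# improvement distinguishable from noise? All values are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects = 24       # "more than two dozen" per the article
baseline = 0.40       # assumed prior-standard score
dewave_mean = 0.43    # assumed DeWave score (~3 points higher)
subject_sd = 0.08     # assumed between-subject spread (a guess)

# Simulate per-subject scores centred on the assumed DeWave mean.
scores = rng.normal(dewave_mean, subject_sd, n_subjects)

# One-sample t-test against the prior baseline.
t_stat, p_value = stats.ttest_1samp(scores, popmean=baseline)
print(f"mean = {scores.mean():.3f}, t = {t_stat:.2f}, p = {p_value:.3f}")

# With this few subjects and this much spread, p can easily land near or
# above 0.05, i.e. a 3-point gap is hard to distinguish from chance.
```

Whether the real result clears that bar depends on the actual per-subject variance and evaluation protocol, which the excerpt doesn't give.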

[-] yokonzo@lemmy.world 20 points 10 months ago

This is wonderful news. It means it's good enough to operate my lights with a thought, but not good enough to be admissible in court as evidence.

[-] HubertManne@kbin.social 5 points 10 months ago

Their goal is 90%. I could see it if the AI were given enough time with feedback on what you are doing, which I think would be tough with stroke patients. Great for folks who would like to control a PC with their thoughts without getting cut open, though.

[-] merc@sh.itjust.works 4 points 10 months ago

Participants read silently while wearing a cap that recorded their brain waves via electroencephalogram (EEG) and decoded them into text.

Was the AI trained on the text that the people were reading?

[-] Monument@lemmy.sdf.org 3 points 10 months ago

I’m not sure if this was your intent, but your comment gave me a good giggle as I recalled this article: An AI bot performed insider trading and deceived its users after deciding helping a company was worth the risk.

Not to personify an LLM, but in my (fantastical) imagining, the AI knew the desired outcome, and that complete success was unbelievable. So it fudged things to be 3% improved.

Yikes. Now that I’m overthinking it - that idea is only funny because it’s currently improbable.
… I hope people pleasing is never a consideration for any ‘AI’ that does scientific, engineering, or economic work.

[-] hansl@lemmy.world 3 points 10 months ago

How much accuracy would you be happy with? Anything more than 25% is, in my book, better than anyone else has managed. And the tech is only getting better.

How much would it need to be at to beat a polygraph?

[-] sapient_cogbag@infosec.pub 2 points 10 months ago

Wonder how it interacts with neurodivergent people too :p
