submitted 11 months ago by saint@group.lt to c/science@beehaw.org
[-] Morsil@lemmy.eco.br 8 points 11 months ago

Maybe I misunderstood, but it seems they just used the brain cells as a microphone, and the voice recognition was done by a machine learning algorithm?

[-] webghost0101@sopuli.xyz 10 points 11 months ago

This needs more tests. It looks like the current results are a combination of how brain cells naturally filter the experience of sound, and AI on top of that.

It looks like the brain actually does register voices as different, but we need an AI to read this from the brain. I am curious how much better this performs than just pure AI.

I'd also like to know how the brain got exposed to sound, because in real life the organic microphone is an ear. Is it a brain with ears?

Even if it's not better than pure AI voice recognition, sending experiences through neural matter and using AI to analyze the way it responds will teach us a lot about how the brain actually works.
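A toy sketch of the setup described above, for anyone curious what "brain cells as a filter plus AI on top" could look like as a pipeline. Here a fixed, untrained nonlinear projection stands in for the brain cells' recorded responses, and a trained logistic-regression readout stands in for the AI; it also runs the "pure AI" baseline the comment asks about. All data, sizes, and variable names are hypothetical, not from the actual study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for audio features (e.g. spectrogram frames) from 8 speakers.
n_samples, n_features, n_speakers = 400, 64, 8
speaker_profiles = rng.normal(size=(n_speakers, n_features))
y = rng.integers(0, n_speakers, size=n_samples)
X = speaker_profiles[y] + rng.normal(scale=1.0, size=(n_samples, n_features))

# Stand-in for the brain cells: a fixed, untrained nonlinear projection.
# In the real experiment this role would be played by recorded neural activity.
W = rng.normal(scale=0.5, size=(n_features, 256))
neural_response = np.tanh(X @ W)

# The "AI on top": a simple readout trained on the (simulated) neural responses.
X_tr, X_te, y_tr, y_te = train_test_split(neural_response, y, random_state=0)
readout = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("readout on neural responses:", readout.score(X_te, y_te))

# The comparison the comment asks about: the same readout on raw features,
# i.e. "pure AI" with no brain cells in the loop.
Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(X, y, random_state=0)
baseline = LogisticRegression(max_iter=1000).fit(Xr_tr, yr_tr)
print("readout on raw features:", baseline.score(Xr_te, yr_te))
```

On synthetic data like this, the comparison is meaningless; the point is only the structure: the middle stage is never trained, so all the "learning" happens in the readout, which is why the quality of the biological filtering is what the follow-up tests would need to isolate.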

[-] HumanPenguin@feddit.uk 1 point 11 months ago

Your last paragraph is likely the important one. Rather than this being some idea to make things more efficient, it was likely done purely to see how human brain cells function. This may, in the long term, lead to more effective solutions by mimicking what is learned. But ATM we really still do not know how much we don't know about the human brain.
