this post was submitted on 04 Apr 2025
299 points (90.1% liked)

Technology

68305 readers
5470 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] reev@sh.itjust.works 45 points 22 hours ago (3 children)

I think what's wild about it is that it really is surprisingly similar to how we actually think. It's very different from how a computer (calculator) would calculate it.

So it's not a strange method for a human to use, and that's exactly what makes it so fascinating, no?
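To make the contrast concrete, here's a toy sketch (my own illustration, not the article's actual mechanism): a calculator computes the sum directly, while the article describes the model using two parallel pathways, one for the rough magnitude of the answer and one that nails the last digit. The version below is a deterministic caricature of that split; the real model's magnitude path is fuzzy, not exact.

```python
def calculator_add(a: int, b: int) -> int:
    # A calculator-style computation: just the exact sum.
    return a + b

def two_pathway_add(a: int, b: int) -> int:
    """Caricature of the two parallel pathways described in the article."""
    # Pathway 1: coarse magnitude from the tens columns, plus an
    # estimated carry ("36 + 59 is somewhere around 90").
    carry = 1 if (a % 10) + (b % 10) >= 10 else 0
    magnitude = (a // 10 + b // 10 + carry) * 10
    # Pathway 2: the units digit, computed exactly from the units
    # digits alone ("...and it has to end in 5").
    units = ((a % 10) + (b % 10)) % 10
    # Combining both pathways lands on the exact answer.
    return magnitude + units

print(calculator_add(36, 59))   # 95
print(two_pathway_add(36, 59))  # 95, via magnitude + last digit
```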

[–] PlexSheep@infosec.pub 2 points 5 hours ago

I mean, neural networks are modeled after biological neurons/brains, after all. Kind of makes sense...
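The analogy in a nutshell, as a minimal sketch (the weights here are made up for illustration): an artificial neuron is just a weighted sum of inputs pushed through a nonlinearity, loosely mirroring a biological neuron firing once its synaptic inputs cross a threshold.

```python
import math

def neuron(inputs, weights, bias):
    # Accumulate the weighted "synaptic" inputs, then apply a sigmoid
    # activation: output near 0 means quiet, near 1 means "firing".
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Strong net input (total = 1.0) pushes the output toward firing.
print(neuron([1.0, 0.5], [2.0, -1.0], -0.5))  # ~0.73
```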

[–] pulsewidth@lemmy.world 2 points 10 hours ago

Yes, agreed. And calculators are essentially tabulators, operating much like a skilled person using an abacus.

We shouldn't really be surprised because we designed these machines and programs based on our own human experiences and prior solutions to problems. It's still neat though.
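The abacus comparison can be sketched directly (a toy illustration, not how any particular calculator chip is wired): a digital adder works column by column with carries, much like moving beads one rod at a time. Digits are stored least-significant first, like the rods of an abacus.

```python
from itertools import zip_longest

def abacus_add(a_digits, b_digits):
    # Add one column (rod) at a time, propagating the carry,
    # exactly as you would carry beads to the next rod.
    result, carry = [], 0
    for da, db in zip_longest(a_digits, b_digits, fillvalue=0):
        carry, digit = divmod(da + db + carry, 10)
        result.append(digit)
    if carry:
        result.append(carry)
    return result

# 36 + 59 -> digits [6, 3] + [9, 5] -> [5, 9], i.e. 95
print(abacus_add([6, 3], [9, 5]))
```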

[–] MudMan@fedia.io 24 points 22 hours ago

That's what's fascinating about how it does language in general.

The article is interesting in both the ways things are similar and the ways they're different. The rough approximation thing isn't that weird, but obviously any human would be aware of how they did it and wouldn't accidentally lie about the method, especially when both methods yield the same result. It's a weirdly effective, if accidental, example of human-like reasoning versus human-like intelligence.

And, incidentally, of why AGI and/or ASI are probably much further away than the shills keep claiming.