[-] kaffiene@lemmy.world 2 points 7 months ago

It's not. It's reflecting its training material. LLMs and other generative AI approaches lack a model of the world, which is obvious from the mistakes they make.
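The "reflecting its training material" point can be sketched with a toy next-token predictor (a bigram counter — a drastic simplification of a real LLM, used here only to illustrate the idea): it can only ever emit continuations it has seen in its training text, and has no notion of the world beyond those counts.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training text.
training = "the cat sat on the mat the cat ran".split()

follows = defaultdict(Counter)
for a, b in zip(training, training[1:]):
    follows[a][b] += 1

def predict(word):
    # Return the most frequent continuation seen in training,
    # or None for a word never seen — no world model, just statistics.
    nxt = follows.get(word)
    return nxt.most_common(1)[0][0] if nxt else None

print(predict("the"))      # most common follower of "the" in the training text
print(predict("unicorn"))  # never seen in training, so no prediction at all
```

Real LLMs learn far richer statistics than bigram counts, but the failure mode is the same in kind: outputs are shaped entirely by the training distribution.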

[-] Lmaydev@programming.dev 1 points 7 months ago* (last edited 7 months ago)

You could say our brain does the same. It just trains in real time and has much better hardware.

What are we doing but applying things we've already learnt, encoded in our neurons? They aren't called neural networks for nothing.
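The analogy in the comment above can be made concrete with a single artificial neuron — a loose caricature of a biological one, and the basic unit of the networks being discussed. The weights and inputs here are illustrative values, not learned ones:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs passed through a sigmoid activation,
    # loosely mimicking a biological neuron's firing response.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative call with made-up weights; output is between 0 and 1.
activation = neuron([1.0, 0.5], [0.8, -0.4], 0.1)
print(activation)
```

Whether stacking such units in a network is meaningfully "the same as" a brain is, of course, exactly what the thread is arguing about.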

[-] kaffiene@lemmy.world 2 points 7 months ago

You could say that but you'd be wrong.

[-] feedum_sneedson@lemmy.world 0 points 7 months ago* (last edited 7 months ago)

Tabula rasa, piss and cum and saliva soaking into a mattress. It's all training data and fallibility. Put it together and what have you got (bibbidy boppidy boo). You know what I'm saying?

[-] kaffiene@lemmy.world 2 points 7 months ago
[-] feedum_sneedson@lemmy.world 0 points 7 months ago* (last edited 7 months ago)

Okay, now you're definitely ~~protecting~~ ~~projecting~~ poo-flicking, as I said literally nothing in my last comment. It was nonsense. But I bet you don't think I'm an LLM.

this post was submitted on 10 Apr 2024
1292 points (99.0% liked)

Programmer Humor
