[–] CanadaPlus@lemmy.sdf.org 0 points 1 month ago* (last edited 1 month ago) (8 children)

> Coherent originality does not point to the machine’s understanding; the human is the one capable of finding a result coherent and weighting their program to produce more results in that vein.

You got the "originality" part there, right? I'm talking about tasks that never came close to being in the training data. Would you like me to link some of the research?

> Your brain does not function in the same way as an artificial neural network, nor are they even in the same neighborhood of capability. John Carmack estimates the brain to be four orders of magnitude more efficient in its thinking; Andrej Karpathy says six.

Given that both biological and artificial neural nets vary by orders of magnitude in size, that comparison means very little. It's true that one is based on continuous floats and the other on discrete spikes, but the end result is often remarkably similar in function and behavior.
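
As a minimal illustration of that floats-versus-spikes distinction (all parameter values here are invented for the example, not drawn from any real model): an artificial unit outputs a continuous activation, while a leaky integrate-and-fire neuron, a standard toy model of a biological one, accumulates input and emits discrete spikes when it crosses a threshold.

```python
import math

def artificial_unit(inputs, weights, bias):
    """Continuous activation: weighted sum of floats through a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # a float in (0, 1)

def integrate_and_fire(current, steps, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: the potential decays each step,
    accumulates input, and fires a discrete spike on crossing threshold."""
    potential, spikes = 0.0, []
    for t in range(steps):
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(t)   # a discrete event in time, not a float value
            potential = 0.0    # reset after firing
    return spikes

print(artificial_unit([0.5, -0.2], [1.2, 0.8], bias=0.1))  # ~0.632
print(integrate_and_fire(0.3, steps=20))                   # [3, 7, 11, 15, 19]
```

Both are ultimately a nonlinearity applied to accumulated weighted input, which is the sense in which the end results can look functionally similar despite the different signaling.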

[–] Jtotheb@lemmy.world 2 points 1 month ago (3 children)

If you would like to link some abstracts you find in a DuckDuckGo search that’s fine.

[–] CanadaPlus@lemmy.sdf.org 1 points 1 month ago (2 children)

I actually was going to link the same one I always do, which I think I heard about through a blog or talk. If that's not good enough, it's easy to devise your own test and put it to an LLM. The way you phrased that makes it sound like you're more interested in ignoring any empirical evidence, though.

[–] Jtotheb@lemmy.world 1 points 4 weeks ago (1 children)

That’s unreal. No, you cannot come up with your own scientific test to determine a language model’s capacity for understanding. You don’t even have access to the “thinking” side of the LLM.

[–] CanadaPlus@lemmy.sdf.org 1 points 3 weeks ago* (last edited 3 weeks ago)

You can devise a task it couldn't have seen in the training data, I mean. Building a comprehensive argument out of such tasks requires a lot more work and time. (A sketch of one such test appears below.)

> You don’t even have access to the “thinking” side of the LLM.

Obviously, that goes for the natural intelligences too, so it's not really a fair thing to require.
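
For what "devise a task it couldn't have seen in the training data" might look like in practice, here is a minimal sketch: build a puzzle from freshly generated random values, so this exact instance cannot appear verbatim in any corpus, and check the model's answer against the computed ground truth. The model name and prompt wording are illustrative assumptions; the client calls follow the published interface of the official `openai` Python package.

```python
import random
import string

from openai import OpenAI  # official openai package, v1+ interface

def make_novel_task():
    """Generate a fresh puzzle: Caesar-shift a random 12-letter string
    by a random amount. The rule is well known; the instance is new."""
    word = "".join(random.choices(string.ascii_lowercase, k=12))
    shift = random.randint(1, 25)
    encoded = "".join(
        chr((ord(c) - ord("a") + shift) % 26 + ord("a")) for c in word
    )
    prompt = (
        f"The string '{encoded}' was made by shifting each letter of a "
        f"string forward by {shift} places in the alphabet, wrapping "
        f"around. What was the original string? Reply with only the string."
    )
    return prompt, word  # word is the ground truth

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt, expected = make_novel_task()
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
answer = reply.choices[0].message.content.strip().strip("'\"")
print(f"expected {expected!r}, got {answer!r}, match: {answer == expected}")
```

Randomizing both the string and the shift guarantees the instance is fresh, though a single pass like this is one data point, not the "comprehensive argument" mentioned above.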
