[-] froztbyte@awful.systems 38 points 7 months ago

it's almost like this thing has no internal conceptual representation! I know this can't possibly be, millions of promptfans and promptfondlers have told me it can't be so, but it sure does look that way! wild!

[-] kogasa@programming.dev -3 points 7 months ago

It must have some internal models of some things, or else it wouldn't be possible to consistently make coherent and mostly reasonable statements. But the fact that it has a reasonable model of things like grammar and conversation doesn't imply it has a good model of anything else, unlike a human, for whom a basic set of cognitive skills is presumably transferable across domains. Still, the success of LLMs at their actual language-modeling objective is a promising indication that it's feasible for an ML model to learn complex abstractions.
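For concreteness, here is a minimal sketch of the "language-modeling objective" referred to above: next-token prediction trained with cross-entropy. The toy model (just an embedding plus a linear head) and all dimensions are illustrative assumptions, not any particular LLM:

```python
# Minimal sketch of a language-modeling objective: predict each next
# token, score with cross-entropy. Model and sizes are placeholders.
import torch
import torch.nn as nn

vocab_size, d_model = 50_000, 512

# A toy "LM": embed tokens, project back to vocabulary logits.
# (A real LLM puts a deep transformer between these two layers.)
embed = nn.Embedding(vocab_size, d_model)
head = nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (1, 16))   # dummy token sequence
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift by one position

logits = head(embed(inputs))  # (batch, seq-1, vocab)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
# Whatever abstractions the model "knows" are whatever helped
# minimize this one loss at scale.
```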

[-] slopjockey@awful.systems 15 points 7 months ago

It must have some internal models of some things, or else it wouldn’t be possible to consistently make coherent and mostly reasonable statements.

Talk about begging the question
