LLMs still don't understand the word "no", much like their creators
(www.quantamagazine.org)
this isn't necessarily true. patterns in data aren't by nature proof of an underlying system of logic. if you run the line-fitting machine on any kind of data, it's going to output a line. considering just how much data is encoded into these transformers, i don't think we can conclusively say they have an underlying conception of how language works, much less an understanding of the concepts that language represents. they could really just be leaning on the sheer volume of data they've absorbed to output approximately correct statements. there's absolutely structure there, but it doesn't have to be the kind of structured understanding humans have of language in order to produce language, in the same way a less sophisticated machine learning model doesn't have to know what kind of data it's fitting a line to in order to fit a line.
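
a concrete way to see the line-fitting point: ordinary least squares will hand you a "best fit" line even when the data is pure noise with no relationship in it at all. a minimal sketch (generic numpy, not from the article, just illustrating the analogy):

```python
import numpy as np

rng = np.random.default_rng(0)

# pure noise: y has no relationship to x whatsoever
x = rng.uniform(0, 10, size=1_000)
y = rng.normal(size=1_000)

# least squares still dutifully returns a line
slope, intercept = np.polyfit(x, y, deg=1)
print(f"fit: y = {slope:+.4f}x {intercept:+.4f}")
```

polyfit never refuses or says "there's no line here"; it always returns coefficients. structure in the output tells you about the machine, not about any understanding inside it.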