this post was submitted on 15 May 2024
656 points (100.0% liked)
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
founded 1 year ago
What I find delightful about this is that I already wasn't impressed! Because, as the paper goes on to say
And here I was thinking it not getting a perfect score on multiple-choice questions was already damning. But apparently it doesn't even get a particularly good score!
officially Not The Worst™, so clearly AI is going to take over law and governments any day now
also. what the hell is going on in that other reply thread. just a parade of people incorrecting each other going "LLMs don't work like [bad analogy], they work like [even worse analogy]". did we hit too many buzzwords?
But LLMs don’t work like Typewriters, they work like Microwaves!
"Nooo you don't get it, LLMs are supposed to be shit"
I was considering interjecting in there but I don’t want to get it on my clothes, so I’m content just watching from the outside.
Not great, but I’m also not obligated to teach anyone anything, soooooo
That’s like saying a person reading a book before a quiz is doing it open book because they have the memory of reading that book.
It's more like taking a digital copy into the test room with you and Ctrl+F'ing every question/answer.
Except it’s not, because they can’t perfectly recall everything.
It’s more like reading every book in the world, and someone asking you what comes next after “And I…”.
"will alwaaays love you...."
Easy. No other answer.
But the AI isn't "recalling" in the same way you do, it doesn't "remember" what it "read", it "reads" on demand and has instant access to essentially all of the information ~~available online~~ it was trained on (E: though it's becoming more or less the same thing, and is definitely the same when it comes to law books for example), from which it collects the necessary details if and when it needs it.
So yes, it is literally "sat" there with all the books open in front of it, and the ability to pinpoint a bit of information in any one of all the books in milliseconds.
and in conclusion an AI is very like an elephant, particularly the back end
I'm not a big AI guy but it's really not quite like that, models do NOT contain all the data they were trained on.
Edit: I have no idea what's going on down below this comment
we can tell
I'm not even going to engage in this thread cause it's a tar pit, but I do think I have the appropriate analogy.
When taking certain exams in my CS programme you were allowed to have notes, but with two restrictions: they had to fit on a single A4 page, and they had to be handwritten.
The idea was that you needed to actually put a lot of work into making it, since the entire material was obviously the size of a fucking book and not an A4 page, and you couldn't just print/copy it from somewhere. So you really needed to distill the information and make a thought map or an index for yourself.
Compare that to an ML model that is allowed to train on data for however long it wants, as long as the result is a fixed-size set of parameters that helps it answer questions with high reliability.
It's not the same as an open book, but it's definitely not a closed book either. And LLMs have billions of parameters, literal gigabytes of data in their notes. For comparison, the entire text of War and Peace is ~3 MB. An LLM is a library of trained notes.
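The "gigabytes of notes" point can be checked with some back-of-envelope arithmetic. A minimal sketch, assuming a hypothetical 7-billion-parameter model stored at 2 bytes per parameter (fp16); the parameter count, byte width, and function name are illustrative assumptions, not figures from this thread:

```python
# Back-of-envelope: how much "note space" do an LLM's weights occupy,
# compared with the text it might have trained on? All figures below are
# illustrative assumptions, not measurements of any specific model.

def params_size_gb(n_params: int, bytes_per_param: int = 2) -> float:
    """Size of a model's parameters in gigabytes (fp16 = 2 bytes/param)."""
    return n_params * bytes_per_param / 1e9

# A hypothetical 7-billion-parameter model stored in fp16:
model_gb = params_size_gb(7_000_000_000)  # 14.0 GB of weights

# War and Peace is roughly 3 MB of plain text (figure from the comment above):
war_and_peace_mb = 3
equivalent_books = model_gb * 1000 / war_and_peace_mb  # thousands of copies

print(f"{model_gb:.1f} GB of weights ~ {equivalent_books:.0f} copies of War and Peace")
```

The comparison only illustrates scale: the weights are nothing like stored text, which is exactly the point of the cheat-sheet analogy — the "notes" are heavily compressed and lossy, not a searchable copy of the corpus.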