you are viewing a single comment's thread
this post was submitted on 16 Sep 2024
27 points (100.0% liked)
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
This quote flashbanged me a little
From this thread: https://www.reddit.com/r/gamedev/comments/1fkn0aw/chatgpt_is_still_very_far_away_from_making_a/lnx8k9l/
Instead of improving LLMs, they are working backwards to prove that all other things are actually word-prediction tasks. It is so annoying and also quite dumb. No, chemistry isn't like coding/Legos. The law isn't invalid because it doesn't have gold fringes and you use magical words.
None of these fucking goblins have learned that analogies aren’t equivalences!!! They break down!!! Auuuuuuugggggaaaaaaarghhhh!!!!!!
The problem is that there could be any number of possible next words, and the available results suggest that the appropriate context isn't captured by the statistical relationships between prior words for anything but the most trivial of tasks, i.e. automating the writing and parsing of emails that nobody ever wanted to read in the first place.
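To make the point concrete, here's a toy sketch (entirely hypothetical, not how any real LLM is built) of pure prior-word statistics: a bigram model that only counts which word followed which. It shows the core limitation the comment describes — many next words are "possible," and nothing outside the observed word-to-word counts can help pick among them:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus, just for illustration.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the log . "
    "the cat ate the fish ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_distribution(prev):
    """Relative frequencies of words observed to follow `prev`."""
    counts = following[prev]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After "the", the model sees several candidates (cat, dog, mat, log, fish)
# and can only rank them by raw frequency — it has no notion of meaning,
# intent, or any context beyond the single previous word.
print(next_word_distribution("the"))
```

Real LLMs condition on a much longer window of prior tokens than this, but the comment's point stands: the prediction target is still "most likely next token given prior tokens," and whatever isn't encoded in those statistical relationships is simply invisible to the model.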
This is just standard promptfondler false equivalence: "when people (including me) speak, they just select the next most likely token, just like an LLM"