this post was submitted on 16 Jun 2025
55 points (100.0% liked)
SneerClub
1125 readers
110 users here now
Hurling ordure at the TREACLES, especially those closely related to LessWrong.
AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)
This is sneer club, not debate club. Unless it's amusing debate.
[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]
See our twin at Reddit
founded 2 years ago
I've been thinking about this post for a full day now. It's truly bizarre, in a "I'd like to talk to this person and study their brain" kind of way.
Put aside the technical impossibility of LLMs acting as the agents he describes. That's small potatoes. The only thing that stays in my mind is this:
I can't even put into words the full nonsense of this statement. How do you think this would work? This is not how learning works. This is not how research works. This is not how anything works.
I can't understand this. Like yes, of course, sometimes there's this moment where you think "god I remember there was this particular chart I saw" or "how many people lived in Tokyo again?" or "I read exactly the solution to this problem on StackOverflow once". In the days of yore you'd write one Google query and you'd get it. Nowadays maybe you can find it on Wikipedia. Sure. But that doesn't actually take two minutes either, it's like an instant one-second thought of "oh I know I saw exactly this factoid somewhere". You don't read books for that though. Does this person think books are just sequences of facts you're supposed to memorise?
How on earth do you think of "precisely the information you need"? What does that mean? How many problems are there in your life where you know precisely what the solution would look like, and you just need an elaborate query through an encyclopedia to get it? Maybe this is useful if your entire goal is creating a survey of existing research into a topic, but that's a really small fraction of the reasons for reading a fucking book. How often do you precisely know what you don't know? Like genuinely. How can your curiosity be distilled into a precise, well-structured query? Don't you ever read something and go "oh, I never even thought about this", "I didn't know this was a problem", "I wouldn't have thought of this myself"? If not, then what the fuck are you reading??
I am also presuming this is about purely non-fiction technical books, because otherwise this gets even more nonsensical. Like, what do you ask your agents for, "did they indeed take the hobbits to Isengard? Prepare a comprehensive review of conflicting points of view."
This single point presumes that none of your reasons for absorbing knowledge from other people is to use it in a creative way, get inspired by something, or just find out about something you didn't know you didn't know. It's something so alien to me, so detached from what I consider the human experience, that I simply don't comprehend it. Is this a real person? What does this person's day-to-day life look like? What goes on in their head when they read a book? What are we moving towards as a species?
It’s *maxxing. Just like getting your jaw surgically resculpted can be a sigma-male shortcut to outcompeting hyper-eugenic alpha-chads, getting Clippy to make you a thought smoothie from all the world’s knowledge can catapult you into the tier of the intellectual MMA fighters of stoic philosophy or whatever.
I think I have something shaped like a counterexample. Large literature reviews and compilations of data tables and such can work like this, and grepping them will give you a feel for what is possible, plus a single practical example per entry. But even then you're supposed to read them in order to learn not only what is possible, but also what is not (or at least what wasn't tested), and what fails, and how, and why. Actually reading through also gives you the bigger picture and allows for drawing your own conclusions, ofc, like you noticed
even then, feeding them to a chatbot is valleybrain nonsense, because grep will be more than enough and much faster, and you only naturally know what's inside after reading it
even then, just having the right snippet is not enough, because presumably the result would only be apparent after testing irl, or perhaps building a model or simulation or what have you. and even then, getting to the point where you need to do any of that requires a degree of curiosity, and an ability to put information from different sources together, that would exclude promptfondlers. it's like these people try on purpose to think as little as possible
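For what it's worth, the grep point really is that trivial. Here's a minimal Python stand-in for `grep -in` over a local text copy of such a review (the file name and search string are made up for illustration):

```python
def grep(path, needle):
    """Return (line_number, line) pairs whose line contains needle,
    case-insensitively -- roughly what `grep -in` does."""
    hits = []
    with open(path, encoding="utf-8") as f:
        for num, line in enumerate(f, start=1):
            if needle.lower() in line.lower():
                hits.append((num, line.rstrip("\n")))
    return hits

# Hypothetical usage -- instant, deterministic, no chatbot involved:
# grep("review.txt", "yield")
```

Milliseconds on anything book-sized, and it never makes up a line that isn't in the file.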
He has Dune on his list of worlds to live in, though...
edit: I know. he fed his post to AI and asked it to list the fictional universes he'd want to live in, and that's how he got Dune. Precisely the information he needed.
What's more, there's a reason people write books. Information is best conveyed via narrative with context. You can throw all the stats you want at someone; if you don't give them a story to tie them all together, it's just noise. That's why we have teachers instead of just textbooks, and textbooks instead of just encyclopedias.
I think they consider "being well-read" solely as a flex, not as a means of acquiring actual knowledge and wisdom.
@HedyL
"I've consumed all the ideologically appropriate materials"
This part threw me as well. If you can think of it, why read for it? It didn't make sense, and so I stopped looking into this particular abyss until you pointed it out again.
I think the only interpretation of what this person said that approaches some level of rationality on their part is essentially a form of confirmation bias. They aren’t thinking of information that is in the text, they are thinking “I want this text to confirm X for me”, then they prompt and get what they want. LLMs are biased to be people-pleasers and will happily spin whatever hallucinated tokens the user throws at them. That’s my best guess.
That you didn’t think of the above just goes to show the failure of your unfeeble mind’s logic and reason to divine such a truth. Just kidding, sorta, in the sense that you can’t expect to understand an irrational thought process using rationality.
But if it’s not that I’m still thrown.
I think it's either that, or they want an answer they could impress other people with (without necessarily understanding it themselves).
Oh, that's a good angle too. Prompt the LLM with "what insights does this book have about B2B sales" or something.
It may need to be removed for the purpose, but they'll be fine because you can just ask an LLM how to put it back again when you're done.
the end
I think the end is way too generous. I don't think we deserve an end.
Yea, it's not The End, as in humanity dies off, but the end, as in civilization collapses back into a disgusting dark age and has mass die-offs.
It wouldn't even be the first time in human history.