So seeing the reaction on lesswrong to Eliezer's book has been interesting. It turns out that even among people who already mostly agree with him, a lot were hoping he would make their case better than he has (either because they aren't as convinced as he is, or they are but were hoping for something more palatable to the general public).
This review (lesswrong discussion here) calls out a really obvious issue: Eliezer's AI doom story was formed before Deep Learning took off, and in fact was mostly focused on GOFAI rather than neural networks, yet somehow the details of the story haven't changed at all. The reviewer is a rationalist who still believes in AI doom, so I wouldn't give her too much credit, but she does note this is a major discrepancy coming from someone who espouses a philosophy that (nominally) features a lot of updating your beliefs in response to evidence. The reviewer also notes that "it should be illegal to own more than eight of the most powerful GPUs available in 2024 without international monitoring" is kind of unworkable.
This reviewer liked the book more than they expected to, because Eliezer and Nate Soares get some details of the AI doom lore closer to the reviewer's current favored headcanon. The reviewer does complain that maybe weird and condescending parables aren't the best outreach strategy!
This reviewer has written their own AI doom explainer, which they think is better! From their limited description, I kind of agree, because it sounds like they focus on current real-world scenarios and harms (and extrapolate them to doom). But again, I wouldn't give them too much credit; it sounds like they don't understand why existential doom is actually promoted (as a distraction and a source of crit-hype). They also note the 8 GPUs thing is batshit.
Overall, it sounds like lesswrongers view the book as an improvement over the sprawling mess of arguments in the sequences (and scattered across other places like Arbital), but still not as well structured as it could be, nor stylistically quite right for a normie audience (i.e. the condescending parables and diversions into unrelated science-y topics). And some are worried that Nate and Eliezer's focus on an unworkable strategy (shut it all down, 8 GPUs max!) with no intermediate steps, goals, or options might not be the best.
This comment is gold:
I haven't read the damn book and I never will, but I have a hard time imagining there's any modern science that can't be explained to 100IQ smoothbrains, assuming the author is good enough.
Same here. The main things stopping the LWers are that
(a) what they're doing is utterly divorced from modern science
(b) they are godawful writers, to the point where it took years of billionaire funding and an all-consuming economic bubble to break them into the mainstream
Here are a few examples of scientifically-evidenced concepts that provoke Whorfian mind-lock, where people are so attached to existing semantics that they cannot learn new concepts. If not even 60% of folks get it, then that's more than within one standard deviation of average.
@gerikson@awful.systems Please reconsider the use of "100IQ smoothbrain" as a descriptor. 100IQ is average, assuming IQ is not bogus. (Also if IQ is not bogus then please y'all get the fuck off my 160+IQ ~~lawn~~ pollinator's & kitchen garden.)
It's a microcosm of lesswrong's dysfunction: IQ veneration, elitism, and misunderstanding the problem in the first place. And even overlooking those problems, I think intellect only moderately correlates with an appreciation for science and an ability to understand it. Someone can think certain scientific subjects are really cool but only have a layman's grasp of the technical details. Someone can do decently in introductory college-level physics with just a willingness to work hard and decent math skills. And Eliezer could have avoided tangents about nuclear reactors or whatever to focus on stuff relevant to AI.
To be fair, you have to have a very high IQ to understand ~~Rick and Morty~~ If Anyone Builds It, Everyone Dies. The humor is extremely subtle, and without a solid grasp of theoretical physics most of the jokes will go over a typical viewer's head. (I'm doing a variant of this meme)
There's also Eliezer's nihilistic outlook, which is deftly woven into his parables-- his personal philosophy draws heavily from Godel Escher Bach, for instance. The fans understand this stuff; they have the intellectual capacity to truly appreciate the depths of his parables, to realize that they're not just entertaining- they say something deep about the nature of Intelligence. As a consequence people who dislike IABIED truly ARE idiots- of course they wouldn't appreciate, for instance, the motivation in Eliezer's existential catchphrase "Tsuyoku Naritai!," which itself is a cryptic reference to Japanese culture. I'm smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as Nate Soares' genius unfolds itself on their copy of IABIED. What fools... how I pity them. 😂 And yes by the way, I DO have a rationalist tattoo. And no, you cannot see it. It's for the math pet's eyes only- And even they have to demonstrate that they're within 5 IQ points of my own (preferably lower) beforehand.
Took 3d6 SAN damage.
translator's note: IABIED means "plan"
there's ied in this acronym so everyone using it gets on a watch list
yabaied
The Yude iabieds
Fuck, that's good
I grew up near the hometown of one of the Rick and Morty creators, I think the one who got fired for excessive drunkenness and harassment. And when I found that out, my immediate reaction was, "yup, of course a guy from Manteca would make a cartoon about having an alcoholic grandpa."