this post was submitted on 20 Sep 2025

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

So seeing the reaction on lesswrong to Eliezer's book has been interesting. It turns out that even among people who already mostly agree with him, a lot were hoping he would make their case better than he has (either because they aren't as convinced as he is, or because they are, but were hoping for something more palatable to the general public).

This review (lesswrong discussion here) calls out a really obvious issue: Eliezer's AI doom story was formed before deep learning took off, and in fact focused more on GOFAI than on neural networks, yet somehow the details of the story haven't changed at all. The reviewer is a rationalist who still believes in AI doom, so I wouldn't give her too much credit, but she does note that this is a major discrepancy for someone who espouses a philosophy that (nominally) features a lot of updating your beliefs in response to evidence. The reviewer also notes that "it should be illegal to own more than eight of the most powerful GPUs available in 2024 without international monitoring" is kind of unworkable.

This reviewer liked the book more than they expected to, because Eliezer and Nate Soares get some details of the AI doom lore closer to the reviewer's current favored headcanon. The reviewer does complain that maybe weird and condescending parables aren't the best outreach strategy!

This reviewer has written their own AI doom explainer, which they think is better! From their limited description, I kind of agree, because it sounds like they focus on current real-world scenarios and harms (and extrapolate them to doom). But again, I wouldn't give them too much credit: it sounds like they don't understand why existential doom is actually promoted (as a distraction and a source of crit-hype). They also note the 8-GPU thing is batshit.

Overall, it sounds like lesswrongers view the book as an improvement on the sprawling mess of arguments in the sequences (and scattered across other places like Arbital), but still not as well structured as it could be, or stylistically quite right for a normie audience (i.e. the condescending parables and the diversions into unrelated science-y topics). And some are worried that Nate and Eliezer's focus on an unworkable strategy (shut it all down, 8 GPUs max!) with no intermediate steps, goals, or options might not be the best.

[–] BlueMonday1984@awful.systems 12 points 2 days ago (1 children)

I have a hard time imagining there’s any modern science that can’t be explained to 100IQ smoothbrains, assuming the author is good enough.

Same here. The main things stopping the LWers are that:

(a) what they're doing is utterly divorced from modern science

(b) they are godawful writers, to the point that it took years of billionaire funding and an all-consuming economic bubble to break them into the mainstream

[–] corbin@awful.systems 1 points 8 hours ago

Here are a few examples of scientifically evidenced concepts that provoke Whorfian mind-lock, where people are so attached to existing semantics that they cannot learn new concepts. If not even 60% of folks get it, then that's more than within one standard deviation of average.

  • There are four temporal tenses in a relativistic setting, not three. "Whorfian mind-lock" was originally coined during a discussion where a logician begs an astrophysicist to understand relativity. Practically nobody accepts this at first, to the point where there aren't English words for discussing or using the fourth tense.
  • Physical reality is neither objective nor subjective, but contextual (WP, nLab) or participatory. For context, at most only about 6-7% of philosophers believe this, per a 2020 survey. A friend-of-community physicist recently missed this one too, and it's known to be a very subtle point despite its bluntness.
  • Classical logic is not physically realizable (WP, nLab) and thus not the ultimate tool for all deductive work. This one does much better, at around 45% of philosophers at most, per the same 2020 survey.

@gerikson@awful.systems Please reconsider the use of "100IQ smoothbrain" as a descriptor. 100IQ is average, assuming IQ is not bogus. (Also if IQ is not bogus then please y'all get the fuck off my 160+IQ ~~lawn~~ pollinator's & kitchen garden.)