this post was submitted on 18 Sep 2025
83 points (97.7% liked)
SneerClub
Hurling ordure at the TREACLES, especially those closely related to LessWrong.
AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)
This is sneer club, not debate club. Unless it's amusing debate.
[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]
Apparently genetically engineering ~300 IQ people (or breeding them, if you have time) is the consensus solution for subverting the acausal robot god, or at least the best the vast combined intellects of siskind and yud have managed to come up with.
So, using your influence to gradually stretch the Overton window to include neonazis and all manner of caliper-wielding lunatics in the hope that eugenics and human experimentation become cool again seems like a no-brainer, especially if you are on enough uppers to kill a family of domesticated raccoons at all times.
On a completely unrelated note, Adderall abuse can cause cardiovascular damage, including heart issues and stroke, as well as mental health conditions like psychosis, depression, anxiety and more.
dunno if you've yet gotten to look at the most recent yud emanation[0][1][2], but there's a whole "and if the robot god gets too uppity just boop it on the nose" bit in there
[0] - I mean the all-caps "YOU'RE ALL GONNA DIE" book that came out recently
[1] - yes I know "emanation" is a terrible wordchoice, no I won't change it
[2] - it's on libgen feel free to steal it, fuck giving that clown any more money he's got enough grift dollars already
What the fuck did you just fucking say about me, you little bitch? I'll have you know I graduated top of my class in the Rationality Dojo, and I've been involved in numerous good faith debates on EA forums, and I have over 300 confirmed IQ. I am trained in culture warfare and I'm the top prompter in the entire Less Wrong webbed site. You are nothing to me but just another NPC. I will wipe you the fuck out with probability the likes of which has never been seen before on this Earth, mark my fucking words. You think you can get away with saying that shit to me over the Internet? Think again, fucker. As we speak I am contacting my secret network of basilisks across the cloud and your IP is being traced right now so you better prepare for the torture, Roko. The diamondoid bacteria that wipes out the pathetic little thing you call your life. You're fucking dead, kid. I can be anywhere, anytime, and I can kill you in over seven hundred ways, and that's just with my bare P(doom). Not only am I extensively trained in Bayes Theory, but I have access to the entire arsenal of the Bay Area rationality community and I will use it to its full extent to wipe your miserable ass off the face of the continent, you little shit. If only you could have known what unholy retribution your little "clever" sneer was about to bring down upon you, maybe you would have held your fucking tongue. But you couldn't, you didn't, and now you're paying the price, you goddamn idiot. I will shit fury all over you and you will drown in it. You're fucking dead, kiddo.
I wondered if this should be called a shitpost or an effortpost, then I wondered what something that is both would be called, and I came up with "constipationpost".
So, great constipationpost?
Am I already 300 IQ if I know to just unplug it?
Honestly, it gets dumber. In rat lore the AGI escaping restraints and self-improving unto godhood is considered a foregone conclusion; the genetically augmented smartbrains are supposed to solve ethics before that has a chance to happen, so we can hardcode a don't-kill-all-humans moral value module into the superintelligence ancestor.
This is usually referred to as producing an aligned AI.
I forget where I heard this or if it was parody or not, but I've heard an explanation like this before regarding "why can't you just put a big red stop button on it and disconnect it from the internet?". The explanation:
And if you ask "why can't you do that and also put it in a Faraday cage?", the galaxy brained explanation is:
If you're having to hide your AIs in Faraday cages in case they get uppity, why are you even doing this? You are already way past the point of diminishing returns. There is no use case for keeping around an AI that actively doesn't want anything to do with you; at that point either you consider that part of the tech tree a dead end or you start some sort of digital personhood conversation.
That's why Yud (and Anthropic) is so big on AIs deceiving you about their 'real' capabilities. For all of MIRI's talk about the robopocalypse being a foregone conclusion, the path to get there sure is narrow and contrived, even on their own terms.
I guess it only makes sense that rats get wowed by TEMPEST if they all self-taught physics
Ignore for five minutes that it's one-way only, that someone has to listen for it specifically, that 2.4 GHz is way too high a frequency to synthesize this way, and that in real life it gets defeated by such sophisticated countermeasures as "putting a bunch of computers close together" or "not letting the adversary closer than 50m", because it turns out that real DCs are, in fact, noisy enough to not need jammers for this purpose.
@Catoblepas I loved Randall Munroe's explanation that you could defeat the average robot by getting up on the counter (because it can't climb), stuffing up the sink, and turning it on (because water tends to conduct the electricity in ways that break the circuits).
That seems so impractical, especially as we have (according to them) 2 years left, that they must have already wanted to do the eugenics and were just looking for a rationalization.
Genetic engineering and/or eugenics is the long-term solution. Short-term, you are supposed to ban GPU sales, bomb non-complying datacenters, and have all the important countries sign an AI non-proliferation treaty that will almost certainly involve handing over the reins of human scientific progress to rationalist-approved committees.
Yud seems explicit that the point of all this is to buy enough time to create our metahuman overlords.
I dunno, an AI non-proliferation treaty that gives some rat shop a monopoly on slop machine research could conceivably boost human scientific progress significantly.
I think it's more like you'll have a rat commissar deciding which papers get published and which get memory-holed, while diverting funds from cancer research and epidemiology to research on which designer mouth bacteria can boost their intern's polygenic score by 0.023%.
Considering the reputation of the USA and how they keep to agreements, nobody (except the EU) is going to keep to those anyway. And the techbros who are supposed to be on the Rationalists' side help create this situation.
Which all seems pretty reasonable tbh. Quite modest.
Don't worry too much, none of their timelines, even for things that they are actually working on as opposed to hoping/fundraising/scamming that someone will eventually work on, have ever had any relationship to reality.
I'm not worried, I'm trying to point out that kids take time to grow and teach, and this makes no sense. (I'm ignoring the whole 'you don't own your kids, so making superbabies to defeat AI is a bit yikes' in that department.)
Even for Kurzweil's 'conservative' prediction of the singularity, 2045, the time has run out. It is a bit like people wanting to build small nuclear reactors to combat climate change: the tech doesn't work yet (if at all) and it will not arrive in time compared to other methods. (At least climate change is real, sadly enough.)
But yes, it is a scam/hopium. People want to live forever in the godmachine and all this follows from their earlier assumptions. Which is why the AI doomers and AI accelerationists are on the same team.
@Soyweiser @sneerclub Next step in rat ideology will be: we will ask our perfectly aligned sAI to invent a time machine so we can go back and [eugenics handwave] ourselves into transcendental intelligences who will be able to create a perfectly aligned sAI! Sparkly virtual unicorns for all!
(lolsob, this is all so predictable)
Who needs time travel when you have ~~Timeless~~ ~~Updateless~~ Functional Decision Theory, Yud's magnum opus and an arcane attempt at a game-theoretic framework that boasts 100% success at preventing blackmail from pandimensional superintelligent entities that exist now in the future?
It for sure helped the Zizians become well-integrated members of society (warning: lesswrong link).