this post was submitted on 23 Aug 2025
44 points (100.0% liked)

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

this is Habryka talking about how his moderating skills are so powerful it takes LessWrong three fucking years to block a poster who's actively being a drain on the site

here's his reaction to sneerclub (specifically me - thanks Oliver!) calling LessOnline a "wordy racist fest":

A culture of loose status-focused social connection. Fellow sneerers are not trying to build anything together. They are not relying on each other for trade, coordination or anything else. They don't need to develop protocols of communication that produce functional outcomes, they just need to have fun sneering together.

He gets us! He really gets us!

[–] diz@awful.systems 15 points 3 days ago* (last edited 3 days ago) (4 children)

Lol I literally told these folks, something like 15 years ago, that paying to elevate a random nobody like Yudkowsky as the premier "ai risk" researcher, insofar as there is any AI risk, would only increase it.

Boy did I end up more right on that than my most extreme imagination. All the moron has accomplished in life is helping these guys raise cash with all his hype about how powerful the AI would be.

The billionaires who listened are spending hundreds of billions of dollars - soon to be trillions, if not already - on trying to prove Yudkowsky right by having an AI kill everyone. They literally tout “our product might kill everyone, idk” to raise even more cash. The only saving grace is that it is dumb as fuck and will only make the world a slightly worse place.

[–] BlueMonday1984@awful.systems 3 points 1 day ago

The billionaires who listened are spending hundreds of billions of dollars - soon to be trillions, if not already - on trying to prove Yudkowsky right by having an AI kill everyone. They literally tout “our product might kill everyone, idk” to raise even more cash. The only saving grace is that it is dumb as fuck and will only make the world a slightly worse place.

Given they're going out of their way to cause as much damage as possible (throwing billions into the AI money pit, boiling oceans of water and generating tons of CO~2~, looting the commons through Biblical levels of plagiarism, and destroying the commons by flooding the zone with AI-generated shit), they're arguably en route to proving Yud right in the dumbest way possible.

Not by creating a genuine AGI that turns malevolent and kills everyone, but in destroying the foundations of civilization and making the world damn-nigh uninhabitable.

[–] BlueMonday1984@awful.systems 1 point 1 day ago

Consider, however, the importance of building the omnicidal AI God before the Chinese.

[–] froztbyte@awful.systems 8 points 3 days ago* (last edited 3 days ago) (2 children)

some UN-associated ACM talk I was listening to recently had someone cite a number at (iirc) ~~$1.5tn total estimated investment~~ $800b[0]. haven't gotten to fact-check it, but there are a number of parts of that talk I wish to write up and make more known

one of the people in it made some entirely AGI-pilled comments, and it's quite concerning

this talk; looks like video is finally up on youtube too (at the time I yanked it by pcap-ing a zoom playout session - turns out zoom recordings are hella aggressive about not being shared)

the question I asked was:

To Csaba (the current speaker): it seems that a lot of the current work you're engaged in is done presuming that AGI is a certainty. what modelling have you done without that presumption?

response is about here

[0] edited for correctness; forget where I saw the >$1.5t number

[–] diz@awful.systems 6 points 2 days ago* (last edited 2 days ago)

Yeah, a new form of apologism that I started seeing online is "this isn't a bubble! Nobody expects an AGI, it's just Sam Altman, it will all pay off nicely from 20 million software developers worldwide spending a few grand a year each".

Which is next-level idiotic, besides the numbers just not adding up. There's only so much open source to plagiarize. It is a very niche activity! It'll plateau, and then a few months later tiny single-GPU models catch up to this river-boiling shit.

The answer to that has always been the singularity bullshit where the biggest models just keep staying ahead by such a large factor nobody uses the small ones.

[–] dgerard@awful.systems 12 points 3 days ago

hearing him respond like that in real time, carefully avoiding the point, makes clear the attraction of ChatGPT