swlabr@awful.systems 1 point 1 year ago

> Why did the Alignment community not prepare tools and plans for convincing the wider infosphere about AI safety years in advance?

Did you not read HPMOR, the greatest story ever reluctantly told to reach the wider infosphere about rationalism and, by extension, AI alignment????

> Why were there no battle plans in the basement of the Pentagon that were written for this exact moment?

It's almost like AGI isn't a credible threat!

> Heck, 20+ years is enough time to educate, train, hire and surgically insert an entire generation of people into key positions in the policy arena specifically to accomplish this one goal, like sleeper cell agents. Likely much, much easier than training highly qualified alignment researchers.

At MIRI, we don't do things because they are easy. We don't do things because we are grifters.

> Didn't we pretty much always know it was going to come from one or a few giant companies or research labs? Didn't we understand how those systems function in the real world? Capitalist incentives, Moats, Regulatory Capture, Mundane utility, and International Coordination problems are not new.

This is how they look at all other problems in the world, and it's fucking exasperating. Climate change? I would simply implement 'Capitalist Incentives'. Wealth inequality? Have you tried a 'Moat'? Racism? It sounds like a job for 'Regulatory Capture'. Yes, all problems are easily solvable with 200 IQ and buzzwords. All problems except the hardest problem in the world, preventing Skynet from being invented. Ignore all those other problems; someone will 'Mundane Utility' them away. For now, we need your tithe; we're definitely going to use it for 'International Coordination', by which I totally don't mean buying piles of meth and cocaine for our orgies.

> Why was it not obvious back then? Why did we not do this? Was this done and I missed it?

We tried nothing and we're all out of ideas!

this post was submitted on 16 Jul 2023
2 points (100.0% liked)

SneerClub

1003 readers

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 2 years ago