Architeuthis

joined 2 years ago
[–] Architeuthis@awful.systems 5 points 1 week ago* (last edited 1 week ago)

That he cites as if it were a philosophy paper, to non-rationalists.

[–] Architeuthis@awful.systems 10 points 1 week ago (1 children)

Yep, from what I can tell second-hand, dath ilan worldbuilding definitely skews towards "it doesn't count as totalitarianism if the enforced orthodoxy is in line with my Obviously Objectively Correct and Overdetermined opinions."

[–] Architeuthis@awful.systems 9 points 1 week ago* (last edited 1 week ago)

I can smell the 'rape (play) is the best kind of sex actually' from over here.

[–] Architeuthis@awful.systems 16 points 1 week ago (8 children)

OG Dune actually had some complex and layered stuff to say about AI before the background lore was retconned into dollar-store WH40K by the current handlers of the IP.

There was no superintelligence; thinking machines were gatekept by specialists who formed entrenched elites, overreliance on them was causing widespread intellectual stagnation, and people were becoming content with letting unknowable algorithms decide matters of life and death.

The Butlerian Jihad was first and foremost a cultural revolution.

[–] Architeuthis@awful.systems 13 points 1 week ago* (last edited 1 week ago) (40 children)

I'm still not sure if they actually grasp the totalitarian implications of going ham on tech companies and research this way. He sure doesn't get called out on his 'solutions', which imply that some sort of world government has to happen that will also crown him Grand Central Planner of All Technology.

It's possible they just believe the eight [specific consumer electronic goods] per household is doable, and at worst no more authoritarian than the tenured elites turning up their noses at HBD research.

[–] Architeuthis@awful.systems 4 points 1 week ago* (last edited 1 week ago)

If you're having to hide your AIs in Faraday cages in case they get uppity, why are you even doing this? You're already way past the point of diminishing returns. There is no use case for keeping around an AI that actively doesn't want anything to do with you; at that point either you consider that part of the tech tree a dead end or you start some sort of digital personhood conversation.

That's why Yud (and Anthropic) is so big on AIs deceiving you about their 'real' capabilities. For all of MIRI's talk about the robopocalypse being a foregone conclusion, the path to get there sure is narrow and contrived, even on their own terms.

[–] Architeuthis@awful.systems 6 points 2 weeks ago* (last edited 2 weeks ago)

Who needs time travel when you have ~~Timeless~~ ~~Updateless~~ Functional Decision Theory, Yud's magnum opus and an arcane attempt at a game-theoretic framework that boasts a 100% success rate at preventing blackmail from pandimensional superintelligent entities that exist now in the future?

It for sure helped the Zizians become well-integrated members of society (warning: lesswrong link).

[–] Architeuthis@awful.systems 4 points 2 weeks ago (1 children)

I for one don't mind if my reddit crap poisons future LLMs.

[–] Architeuthis@awful.systems 4 points 2 weeks ago* (last edited 2 weeks ago)

To be fair to Mr. Gay, he went in with the noblest of intentions: to get a chance to ask Thiel how in the hell he doesn't see that, if anyone around here is the antichrist, it's him.

[–] Architeuthis@awful.systems 8 points 2 weeks ago

He's kind of past his prime, I think, the humor becoming alternately a bit too esoteric or a bit too obvious, and kind of stale in general. Nothing particularly objectionable about the author comes to mind otherwise.

[–] Architeuthis@awful.systems 7 points 2 weeks ago

I think it's more like you'll have a rat commissar deciding which papers get published and which get memory-holed, while diverting funds from cancer research and epidemiology to research on which designer mouth bacteria can boost their intern's polygenic score by 0.023%.


edited to add tl;dr: Siskind seems ticked off because recent papers on the genetics of schizophrenia are increasingly pointing out that at current minuscule levels of prevalence, even with the commonly accepted 80% heritability, actually developing the disorder is all but impossible unless at least some of the environmental factors are also in play. This is understandably very worrisome, since it indicates that even high-heritability issues might be solvable without immediately employing eugenics.

Also notable because I don't think it's very often that eugenics grievances breach the surface in such an obvious way in a public Siskind post, including the claim that the whole thing is just HBD denialists spreading FUD:

People really hate the finding that most diseases are substantially (often primarily) genetic. There’s a whole toolbox that people in denial about this use to sow doubt. Usually it involves misunderstanding polygenicity/omnigenicity, or confusing GWAS’ current inability to detect a gene with the gene not existing. I hope most people are already wise to these tactics.


... while at the same time not really worth worrying about, so we should be concentrating on unnamed alleged mid-term risks.

EY tweets are probably the lowest-effort sneerclub content possible, but the birdsite threw this in my face this morning, so it's only fair you suffer too. Transcript follows:

Andrew Ng wrote:

In AI, the ratio of attention on hypothetical, future, forms of harm to actual, current, realized forms of harm seems out of whack.

Many of the hypothetical forms of harm, like AI "taking over", are based on highly questionable hypotheses about what technology that does not currently exist might do.

Every field should examine both future and current problems. But is there any other engineering discipline where this much attention is on hypothetical problems rather than actual problems?

EY replied:

I think when the near-term harm is massive numbers of young men and women dropping out of the human dating market, and the mid-term harm is the utter extermination of humanity, it makes sense to focus on policies motivated by preventing mid-term harm, if there's even a trade-off.


Sam Altman, the recently fired (and rehired) chief executive of Open AI, was asked earlier this year by his fellow tech billionaire Patrick Collison what he thought of the risks of synthetic biology. ‘I would like to not have another synthetic pathogen cause a global pandemic. I think we can all agree that wasn’t a great experience,’ he replied. ‘Wasn’t that bad compared to what it could have been, but I’m surprised there has not been more global coordination and I think we should have more of that.’


Original is here, but you aren't missing any context; that's the twit.

I could go on and on about the failings of Shakespear... but really I shouldn't need to: the Bayesian priors are pretty damning. About half the people born since 1600 have been born in the past 100 years, but it gets much worse than that. When Shakespear wrote almost all Europeans were busy farming, and very few people attended university; few people were even literate -- probably as low as ten million people. By contrast there are now upwards of a billion literate people in the Western sphere. What are the odds that the greatest writer would have been born in 1564? The Bayesian priors aren't very favorable.

edited to add: this seems to be an excerpt from the fawning book the Big Short/Moneyball guy wrote about him that was recently released.
