[-] titotal@awful.systems 25 points 8 months ago

Oxford instituted a fundraising freeze. They knew the org could have gotten oodles of funding from any number of strange tech people; they disliked it so much they didn't care.

[-] titotal@awful.systems 17 points 8 months ago

Fun revelations that SBF was going to try to invest in Elon buying Twitter because he thought it would make money (lol), and was seriously proposing "put Twitter on the blockchain" as his pitch. One of the dumbest ideas I've ever heard, right behind every other "X on blockchain" proposal.

[-] titotal@awful.systems 22 points 8 months ago

I feel really bad for the person behind the "notkilleveryonism" account. They've been completely taken in by AI doomerism and are clearly terrified by it. They'll either be terrified for their entire life even as the predicted doom fails to appear, or realise at some point that they wasted an entire portion of their life and their entire system of belief is a lie.

False doomerism is really harming people, and that sucks.

[-] titotal@awful.systems 20 points 8 months ago* (last edited 8 months ago)

The Future of Humanity Institute is the EA longtermist organisation at Oxford run by Swedish philosopher Nick Bostrom, who got in trouble for an old racist email and a subsequent bad apology. It is the one that is rumoured to be shutting down.

The Future of Life Institute is the EA longtermist organisation run by Swedish physicist Max Tegmark, who got in trouble for offering to fund a neo-Nazi newspaper (he didn't actually go through with it and claimed ignorance). It is the one that got the half-billion-dollar windfall.

I can't imagine how you managed to conflate these two highly different institutions.

[-] titotal@awful.systems 16 points 8 months ago

This definitely reads like one of those tedious "april fools" posts where you can tell they're actually 90% serious but want the cover of a joke.

[-] titotal@awful.systems 28 points 9 months ago

The "malaria nets" side of it has done legitimate good, because they didn't try to reinvent the wheel from scratch: they stuck to actual science and existing, well-performing charitable organisations.

Global poverty still gets a good portion of the EA funding, but is slowly falling out of the movement because it's boring to discuss and you can't make any dubiously effective startups out of it.

[-] titotal@awful.systems 19 points 10 months ago

For people who don't want to go to Twitter, here's the thread:

Doomers: "YoU cAnNoT dErIvE wHaT oUgHt fRoM iS" 😵‍💫

Reality: you literally can derive what ought to be (what is probable) from the out-of-equilibrium thermodynamical equations, and it simply depends on the free energy dissipated by the trajectory of the system over time.

While I am purposefully misconstruing the two definitions here, there is an argument to be made by this very principle that the post-selection effect on culture yields a convergence of the two

How do you define what is "ought"? Based on a system of values. How do you determine your values? Based on cultural priors. How do those cultural priors get distilled from experience? Through a memetic adaptive process where there is a selective pressure on the space of cultures.

Ultimately, the value systems that survive will be the ones that are aligned towards growth of its ideological hosts, i.e. according to memetic fitness.

Memetic fitness is a byproduct of thermodynamic dissipative adaptation, similar to genetic evolution.


Brain genius Beff Jezos manages to butcher both philosophy and physics at the same time!

[-] titotal@awful.systems 16 points 10 months ago

Solomonoff induction is a big rationalist buzzword. It's meant to be the platonic ideal of Bayesian reasoning, which, if implemented, would be the best deducer in the world and get everything right.

It would be cool if you could build this, but it's literally impossible. The induction method is provably uncomputable.

The hope is that if you build a shitty approximation to Solomonoff induction that "approaches" it, it will perform close to the perfect Solomonoff machine. Does this work? Not really.
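To make the gap concrete, here's a toy sketch (my own illustration, not from the comment): true Solomonoff induction sums over *all* programs of a universal Turing machine, weighted 2^-length, which is uncomputable. Any real "approximation" has to restrict itself to some tiny computable hypothesis class. The class below ("the data is some pattern repeated forever") and all function names are my own invention for illustration.

```python
# Toy "Solomonoff-style" predictor over a tiny, computable hypothesis class.
# Real Solomonoff induction weights ALL programs by 2^-length -- uncomputable.
# Here, a hypothesis is a binary pattern assumed to repeat forever, so
# everything is finite and runnable. This is exactly the kind of drastic
# restriction a practical "approximation" is forced into.
from itertools import product

def hypotheses(max_len):
    """All binary patterns up to max_len, with prior weight 2^-len(pattern)."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits), 2.0 ** -n

def consistent(pattern, observed):
    """Does 'pattern repeated forever' reproduce the observed prefix?"""
    return all(observed[i] == pattern[i % len(pattern)]
               for i in range(len(observed)))

def predict_next(observed, max_len=6):
    """Posterior probability that the next bit is '1'."""
    total = p_one = 0.0
    for pattern, w in hypotheses(max_len):
        if consistent(pattern, observed):
            total += w
            if pattern[len(observed) % len(pattern)] == "1":
                p_one += w
    return p_one / total

print(predict_next("010101"))  # -> 0.0: every surviving hypothesis says "0" next
```

The toy version works fine on data that happens to fit its restricted class, and is simply blind to everything outside it; nothing about it "approaches" the uncomputable ideal in any useful sense.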

My metaphor is that it's like coming to a river you want to cross, and being like "Well Moses, the perfect river crosser, parted the water with his hands, so if I just splash really hard I'll be able to get across". You aren't Moses. Build a bridge.

[-] titotal@awful.systems 17 points 11 months ago

ahh, I fucking haaaate this line of reasoning. It's basically saying "if we're no worse than average, there's no problem", followed by some discussion of "base rates" of harassment or whatever.

Except that the average rate of harassment and abuse, in pretty much every large group, is unacceptably high unless you take active steps to prevent it. You know what's not a good way to prevent it? Downplaying reports of harassment, calling the people bringing attention to it biased liars, and explicitly trying to avoid kicking out harmful characters.

Nothing like a so-called "effective altruist" crowing about having a C- passing grade on the sexual harassment test.

[-] titotal@awful.systems 32 points 1 year ago

Hidden away in the appendix:

A quick note on how we use quotation marks: we sometimes use them for direct quotes and sometimes use them to paraphrase. If you want to find out if they’re a direct quote, just ctrl-f in the original post and see if it is or not.

This is some real slimy shit. You can compare the "quotes" to Chloe's account, and see how much of a hit job this is.

[-] titotal@awful.systems 16 points 1 year ago* (last edited 1 year ago)

I roll a fair 100-sided die.

Eliezer asks me to state my confidence that I won't roll a 1.

I say I am 99% confident I won't roll a 1, using basic math.

Eliezer says "AHA, you idiot, I checked all of your past predictions, and when you predicted something with confidence 99%, it only happened 90% of the time! So you can't say you're 99% confident that you won't roll a 1."

I am impressed by the ability of my past predictions to affect the roll of a die, and promptly run off to become a wizard.
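The arithmetic can be checked with a two-line simulation (my own sketch, not part of the original comment): whatever a forecaster's historical calibration on *other* predictions, the probability of not rolling a 1 on a fair 100-sided die stays at 99/100.

```python
# Monte Carlo check: the die doesn't know my forecasting track record.
import random

random.seed(0)  # fixed seed so the sketch is reproducible
trials = 100_000
not_one = sum(1 for _ in range(trials) if random.randint(1, 100) != 1)
print(not_one / trials)  # close to 0.99, regardless of past calibration
```

Calibration scores describe the forecaster, not the event; applying a blanket "you're only 90% right when you say 99%" correction to a probability derived by counting die faces conflates the two.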

[-] titotal@awful.systems 16 points 1 year ago

As a physicist, this quote got me so mad I wrote an excessively detailed debunking a while back. It's staggeringly wrong.

