[-] titotal@awful.systems 14 points 7 months ago

I'm sure they could have found someone in the EA ecosystem to throw them money if it weren't for the fundraising freeze. This seems like a case of Oxford killing the institute deliberately. The 2020 freeze predates the Bostrom email, and this guy, who was consulted by Oxford, said the relationship had been dysfunctional for many years.

It's not like Oxford is hurting for money; they probably just decided that FHI was too much of a pain to work with and was hurting the Oxford brand.

[-] titotal@awful.systems 13 points 7 months ago

The committed Rationalists often point out the flaws in science as currently practiced: the p-hacking, the financial incentives, etc. Feeding them more data about where science goes awry will only make them more smug.

The real problem with the Rationalists is that they *think they can do better*, that knowing a few cognitive fallacies and logical tricks will make you better than the doctors at medicine, better than the quantum physicists at quantum physics, etc.

We need to explain that yes, science has its flaws, but it still shits all over pseudo-Bayesianism.

[-] titotal@awful.systems 14 points 7 months ago

To be honest, I'm just kinda annoyed that he ended on the story about his mate Aaron, who went on surfing trips to Indonesia and gave money to his new poor village friends. The author says Aaron is "accountable" to the village, but that's not true, because Aaron is a comparatively rich first-world academic who can go home at any time. Is Aaron "shifting power" to the village? No, because if they don't treat him well, he'll stop coming to the village and stop funding their water supply upgrades. And he personally benefits from his purchases, in the form of praise and friendship.

I'm sure Aaron is a fine guy, and I'm not saying he shouldn't give money to his village mates, but this is not a good model for philanthropy! A software developer who just donates a bunch of money unconditionally to the village (via GiveDirectly or something) is arguably more noble than Aaron here, donating without any personal benefit or feel-good surfer energy.

[-] titotal@awful.systems 13 points 8 months ago

I enjoyed the takedowns (wow, this guy really hates MacAskill), but the overall conclusions of the article seem a bit lost. If malaria nets are like a medicine with side effects, then the solution is not to throw away the medicine. (Giving away free nets to people probably does not have a significant death toll!) At the end they seem to suggest, like, voluntourism as the preferred alternative? I don't think Africa needs to be flooded with dorky software engineers personally going to villages to "help out".

[-] titotal@awful.systems 10 points 8 months ago

Apparently there's a new coding AI that is supposedly pretty good. Zvi does the writeup and logically extrapolates what will happen with future versions, which will obviously self-improve and... solve cold fusion?

James: You can just 'feel' the future. Imagine once this starts being applied to advanced research. If we get a GPT5 or GPT6 with a 130-150 IQ equivalent, combined with an agent. You're literally going to ask it to 'solve cold fusion' and walk away for 6 months.

...

Um. I. Uh. I do not think you have thought about the implications of ‘solve cold fusion’ being a thing that one can do at a computer terminal?

Yep. The recursively self-improving AI will solve cold fucking fusion from a computer terminal.

[-] titotal@awful.systems 11 points 8 months ago

"years later was shown to be correct"

Take a guess at what prompted this statement.

Did one side of the conflict confess? Did major expert organizations change their minds? Did new, conclusive evidence arise that had been unseen for years?

Lol no. The "confirmation" is that a bunch of random people did their own analysis of existing evidence and decided that it was the rebels, based on a vague estimate of rocket trajectories. I have no idea who these people are, although I think the lead author is this guy currently stanning for Russia's war on Ukraine?

[-] titotal@awful.systems 14 points 8 months ago

The sole funder is the founder, Saar Wilf. The whole thing seems like a vanity project for him and the friends he hired to give their opinion on random controversial topics.

[-] titotal@awful.systems 11 points 8 months ago

The video and slides can be found here. I watched a bit of it as it happened, and it was pretty clear that Rootclaim got destroyed.

Anyone actually trying to be "Bayesian" should have updated their opinion by multiple orders of magnitude as soon as it was fully confirmed that the wet market was the first superspreader event. Like, at what point does Occam's razor not kick in here?
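
To spell out what "multiple orders of magnitude" means here, a toy Bayes-factor calculation (the numbers below are made up purely for illustration, not anyone's actual estimates):

```python
# Toy Bayes-factor update: posterior odds = prior odds * likelihood ratio.
# All numbers are hypothetical, for illustration only.

prior_odds = 1.0          # hypothetical: even odds between the two origin hypotheses
likelihood_ratio = 100.0  # hypothetical: evidence judged 100x likelier under one hypothesis

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"posterior odds {posterior_odds:.0f}:1, probability {posterior_prob:.1%}")
# 100:1 odds is roughly 99% -- a two-order-of-magnitude swing in the odds.
```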

[-] titotal@awful.systems 14 points 11 months ago

Thanks! I strive for accuracy, clarity, humility, and good faith. AKA, everything I learned not to do from reading the Sequences.

[-] titotal@awful.systems 12 points 11 months ago* (last edited 11 months ago)

EA as a movement was a combination of a few different groups (this account says Giving What We Can/80,000 Hours, GiveWell, and Yudkowsky's MIRI). However, the main source of the early influx of people was the rationalist movement, as Yud had heavily promoted EA-style ideas in the Sequences.

So if you look at surveys, right now a relatively small percentage (like 15%) of EAs first heard about it through LessWrong or SSC. But back in 2014 and earlier, LessWrong was the number one on-ramp into the movement (like 30%). (I'm sure a bunch of the people giving other answers heard about it from rationalist friends as well.) I think it would have been even more if you went back further.

Nowadays, most of the recruiting is independent of the rationalists, so you have a bunch of people coming in and being like, what's with all the weird shit? However, they still adopt a ton of rationalist ideas and language, and the EA forum is run by the same people as LessWrong. It leads to some tension: someone wrote a post saying that "Yudkowsky is frequently, confidently, egregiously wrong", and it was somewhat upvoted on the EA forum but massively downvoted on LessWrong.

[-] titotal@awful.systems 11 points 1 year ago

If you want more of this, I wrote a full critique of his mangled intro to quantum physics, where he forgets the whole "conservation of energy" thing.

[-] titotal@awful.systems 12 points 1 year ago

My impression is that the toxicity within EA is mainly concentrated in the Bay Area rationalists and in a few of the actual EA organizations. If it's just a local meetup group, it's probably just going to be some regular-ish people with some mistaken beliefs who are genuinely concerned about AI.

Just be polite and present arguments, and you might actually change minds, at least among those who haven't been sucked too far into Rationalism.

