[-] Architeuthis@awful.systems 17 points 3 weeks ago

"Hopefully the established capitalists will protect us from the fascists' worst excesses" hasn't been much of a winning bet historically.

[-] Architeuthis@awful.systems 17 points 2 months ago

It had dumb scientists, a weird love-conquers-all theme, a bathetic climax that was also on the wrong side of believable, and an extremely tacked-on epilogue.

Wouldn't say that I hated it, but it was pretty flawed for what it was. Magnificent black hole CGI notwithstanding.

[-] Architeuthis@awful.systems 16 points 3 months ago

Summarizing emails is a valid purpose.

Or it would have been if LLMs were sufficiently dependable anyway.

[-] Architeuthis@awful.systems 18 points 3 months ago* (last edited 3 months ago)

Stephanie Sterling of the Jimquisition outlines the thinking involved here. Well, she swears at everyone involved for twenty minutes. So, Steph.

She seems to think the AI generates .WAD files.

I guess they fell victim to one of the classic blunders: assuming that it can't be that stupid, and that someone must be explaining it wrong.

[-] Architeuthis@awful.systems 18 points 5 months ago* (last edited 5 months ago)

Ah yes, Alexander's unnumbered hordes, that endless torrent of humanity that is all but certain to have made a lasting impact on the sparsely populated subcontinent's collective DNA.

edit: Also, the absolute brain on someone who would think that before entertaining a random recent western ancestor like a grandfather or whateverthefuckjesus.

[-] Architeuthis@awful.systems 18 points 5 months ago

IKR like good job making @dgerard look like King Mob from the Invisibles in your header image.

If the article was about me I'd be making Colin Robinson feeding noises all the way through.

edit: Obligatory only 1 hour 43 minutes of reading to go then

[-] Architeuthis@awful.systems 17 points 8 months ago* (last edited 8 months ago)

Over time FHI faced increasing administrative headwinds within the Faculty of Philosophy (the Institute’s organizational home). Starting in 2020, the Faculty imposed a freeze on fundraising and hiring. In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed. On 16 April 2024, the Institute was closed down.

Sounds like Oxford increasingly did not want anything to do with them.

edit: Here's a 94 page "final report" that seems more geared towards a rationalist audience.

Wonder what this was about:

Why we failed [...] There also needs to be an understanding of how to communicate across organizational communities. When epistemic and communicative practices diverge too much, misunderstandings proliferate. Several times we made serious missteps in our communications with other parts of the university because we misunderstood how the message would be received. Finding friendly local translators and bridgebuilders is important.

[-] Architeuthis@awful.systems 17 points 11 months ago* (last edited 11 months ago)

you’re seriously missing the point of what he’s trying to say. He’s just talking about [extremely mundane and self evident motte argument]

Nah, we're just not giving him the benefit of the doubt, and we also have a lot of context to work with.

Consider the fact that he explicitly writes that you are allowed to reconsider your assumptions on domestic terrorism if a second trans mass shooter incident "happens in a row," but a few paragraphs later, when Effective Altruists blow up both FTX and OpenAI in the space of a year, the second incident is immediately laundered away as the unfortunate result of them overcorrecting in good faith against unchecked CEO power.

This should stick out, in my opinion, even to someone approaching this from a blank-slate perspective.

[-] Architeuthis@awful.systems 18 points 11 months ago* (last edited 11 months ago)

Hi, my name is Scott Alexander and here's why it's bad rationalism to think that widespread EA wrongdoing should reflect poorly on EA.

The assertion that having semi-frequent sexual harassment incidents go public is actually an indication of health for a movement, since it's evidence that there's no systemic coverup going on and besides, everyone's doing it, is, uh, quite something.

But surely of 1,000 sexual harassment incidents, the movement will fumble at least one of them (and often the fact that you hear about it at all means the movement is fumbling it less than other movements that would keep it quiet). You’re not going to convince me I should update much on one (or two, or maybe even three) harassment incidents, especially when it’s so easy to choose which communities’ dirty laundry to signal boost when every community has a thousand harassers in it.

[-] Architeuthis@awful.systems 17 points 1 year ago* (last edited 1 year ago)

'We are the sole custodians of this godlike technology that we can barely control but that we will let you access for a fee' has been a mainstay of OpenAI marketing as long as Altman has been CEO, it's really no surprise this was 'leaked' as soon as he was back in charge.

It works, too! Anthropic just announced they are giving chat access to a 200k-token-context model (ChatGPT-4 is <10k, I think) where they supposedly cut the rate of hallucinations in half, and it barely made headlines.

[-] Architeuthis@awful.systems 17 points 1 year ago* (last edited 1 year ago)

This certainly looks like both venture and established capital saying that while it was fun pretending to take EA concerns about AI seriously, it's time to move on.

Also, the increasing number of anti-EA effortposts that started cropping up in the OpenAI subreddit over the last few days is delightful.

[-] Architeuthis@awful.systems 17 points 1 year ago

Every ends-justify-the-means worldview has a defense for terrorism readily baked in.

