submitted 6 months ago* (last edited 6 months ago) by dgerard@awful.systems to c/techtakes@awful.systems

Ilya tweet:

After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm. It was an honor and a privilege to have worked together, and I will miss everyone dearly. So long, and thanks for everything. I am excited for what comes next — a project that is very personally meaningful to me about which I will share details in due time.

Jan tweet:

I resigned

this comes precisely 6mo after Sam Altman's job at OpenAI was rescued by the Paperclip Maximiser. NYT: "Dr. Sutskever remained an OpenAI employee, but he never returned to work." lol

orange site discussion: https://news.ycombinator.com/item?id=40361128

lesswrong discussion: https://www.lesswrong.com/posts/JSWF2ZLt6YahyAauE/ilya-sutskever-and-jan-leike-resign-from-openai

[-] froztbyte@awful.systems 17 points 6 months ago

Reasons are unclear (as usual when safety people leave OpenAI).

no, you fucking dipshit. the reason is crystal clear. he was on the team that attempted to oust sammyboi, and was on borrowed time from the moment it failed

jfc how are these people this clueless

[-] froztbyte@awful.systems 14 points 6 months ago* (last edited 6 months ago)

Cade Metz was the NYT journalist who doxxed Scott Alexander

"bro if you just keep saying it it'll become true. trust me bro I've done it hundreds of times bro"

[-] Soyweiser@awful.systems 8 points 6 months ago

Also, all the weirdness re Metz comes from not understanding how media like that works (a thing which, iirc, the Sequences of all things warn about) and N=1. All these advanced complex ideas rushing around in their minds, talk of being aware of your own bias, bayes, bla bla, defeated by an N=1 perceived anti-grey-tribe event.

Also lol how much they fall back into talking like robots when talking about this event:

Enforcing social norms to prevent scapegoating also destroys information that is valuable for accurate credit assignment and causally modelling reality.

[-] froztbyte@awful.systems 7 points 6 months ago

grr outgroup, outgroup bad

this post was submitted on 15 May 2024
27 points (100.0% liked)