[-] Shitgenstein1@awful.systems 24 points 6 months ago

s'alright, tho. It was always a cynical marketing strat to convert hyper-online nerd anxiety into investor hype. may want to check on Big Yud. Idk if anyone has heard from him since his Time Mag article, coming up on a year old now, not that I tried.

Some of the risks the team worked on included "misuse, economic disruption, disinformation, bias and discrimination, addiction, and overreliance."

Conspicuous lack of grey goo or hyper-persuasive brainhacking. Still really good at being confidently wrong about basic shit!

[-] YouKnowWhoTheFuckIAM@awful.systems 10 points 6 months ago

you just made me extremely aware of where i was, what i was doing, and how i was feeling when i found out that the yud had an article in Time, and i am going to sue for the whiplash of realising how short a time ago that was

[-] Shitgenstein1@awful.systems 5 points 6 months ago

and yet still no international agreement to first strike rogue data centers smh

[-] YouKnowWhoTheFuckIAM@awful.systems 5 points 6 months ago

[what 2024 would look like if Big Yud had been allowed to first strike data centers in the mid-2010s meme - wizzy flying cars, big tubular buildings, and so on]

[-] skillissuer@discuss.tchncs.de 3 points 6 months ago

i've always been of the opinion that the best defense against a bitcoin mining facility is a 120mm mortar

[-] Evinceo@awful.systems 9 points 6 months ago

Lol said the scorpion, lmao

[-] Shitgenstein1@awful.systems 5 points 6 months ago

[-] autotldr 1 point 6 months ago

This is the best summary I could come up with:

The team reportedly disbanded days after its leaders, Ilya Sutskever and Jan Leike, announced their resignations earlier this week.

The former executive, Jan Leike, published a series of posts on Friday explaining his departure, which he said came after disagreements about the company's core priorities for "quite some time."

He said building generative AI is "an inherently dangerous endeavor" and OpenAI was more concerned with releasing "shiny products" than safety.

The Superalignment team's objective was to "solve the core technical challenges of superintelligence alignment in four years," a goal that the company admitted was "incredibly ambitious."

Some of the risks the team worked on included "misuse, economic disruption, disinformation, bias and discrimination, addiction, and overreliance."

Axel Springer, Business Insider's parent company, has a global deal to allow OpenAI to train its models on its media brands' reporting.

The original article contains 467 words, the summary contains 135 words. Saved 71%. I'm a bot and I'm open source!

this post was submitted on 18 May 2024
33 points (100.0% liked)

SneerClub

983 readers

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 1 year ago