submitted 1 year ago* (last edited 1 year ago) by froztbyte@awful.systems to c/techtakes@awful.systems

archive

"There's absolutely no probability that you're going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It's going to take years, if not many decades, but I still think the time to focus on safety is now," he said.

just days after poor lil sammyboi and co went out and ran their mouths! the horror!

Sources told Reuters that the warning to OpenAI's board was one factor among a longer list of grievances that led to Altman's firing, as well as concerns over commercializing advances before assessing their risks.

Asked if such a discovery contributed..., but it wasn't fundamentally about a concern like that.

god I want to see the boardroom leaks so bad. STOP TEASING!

“What we really need are safety brakes. Just like you have a safety brake in an elevator, a circuit breaker for electricity, an emergency brake for a bus – there ought to be safety brakes in AI systems that control critical infrastructure, so that they always remain under human control,” Smith added.

this appears to be a vaguely good statement, but I'm gonna (cynically) guess that it's more steered by the fact that MS has now repeatedly burned its fingers on human-interaction AI shit, and is reaaaaal reticent about the impending exposure

wonder if they'll release a business policy update about usage suitability for *GPT and friends

autotldr 1 point 1 year ago

This is the best summary I could come up with:


LONDON, Nov 30 (Reuters) - The president of tech giant Microsoft (MSFT.O) said there is no chance of super-intelligent artificial intelligence being created within the next 12 months, and cautioned that the technology could be decades away.

OpenAI cofounder Sam Altman earlier this month was removed as CEO by the company’s board of directors, but was swiftly reinstated after a weekend of outcry from employees and shareholders.

Reuters last week exclusively reported that the ouster came shortly after researchers had contacted the board, warning of a dangerous discovery they feared could have unintended consequences.

The internal project named Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one source told Reuters.

However, Microsoft President Brad Smith, speaking to reporters in Britain on Thursday, rejected claims of a dangerous breakthrough.

Sources told Reuters that the warning to OpenAI's board was one factor among a longer list of grievances that led to Altman's firing, as well as concerns over commercializing advances before assessing their risks.


The original article contains 327 words, the summary contains 173 words. Saved 47%. I'm a bot and I'm open source!
