submitted 1 year ago by sculd@beehaw.org to c/technology@beehaw.org

Article from The Atlantic, archive link: https://archive.ph/Vqjpr

Some important quotes:

The tensions boiled over at the top. As Altman and OpenAI President Greg Brockman encouraged more commercialization, the company’s chief scientist, Ilya Sutskever, grew more concerned about whether OpenAI was upholding the governing nonprofit’s mission to create beneficial AGI.

The release of GPT-4 also frustrated the alignment team, which was focused on further-upstream AI-safety challenges, such as developing various techniques to get the model to follow user instructions and prevent it from spewing toxic speech or “hallucinating”—confidently presenting misinformation as fact. Many members of the team, including a growing contingent fearful of the existential risk of more-advanced AI models, felt uncomfortable with how quickly GPT-4 had been launched and integrated widely into other products. They believed that the AI safety work they had done was insufficient.

Employees from an already small trust-and-safety staff were reassigned from other abuse areas to focus on this issue. Under the increasing strain, some employees struggled with mental-health issues. Communication was poor. Co-workers would find out that colleagues had been fired only after noticing them disappear on Slack.

Summary: Tech bros want money, tech bros want speed, tech bros want products.

Scientists want safety, researchers want to research...

[-] tal@lemmy.today 16 points 1 year ago

Many members of the team, including a growing contingent fearful of the existential risk of more-advanced AI models, felt uncomfortable with how quickly GPT-4 had been launched and integrated widely into other products.

Neither GPT-4 nor anything similar is going to pose an existential threat to humanity.

Eventually, yeah, there is probably a possibility of existential risk from AI. I don't know where that line ultimately is, and getting an idea of that might be something important for humanity to figure out, but I am pretty confident that whatever OpenAI is presently doing isn't it.

Same reason that Musk's proposed six-month moratorium on AI work doesn't make much sense. We're not six months away from an existential threat to humanity.

I think that funding efforts to have people in the field working on the Friendly AI problem is a good idea. But that's another story.

[-] jcarax@beehaw.org 15 points 1 year ago* (last edited 1 year ago)

I'm much more worried about the social implications. Namely, the displacement of workers and introduction of new efficiencies to workflows, continuing to benefit only those who are rich and in power, and driving more of us towards poverty.

It's not an immediate existential threat, but it's absolutely a serious issue that we aren't paying enough attention to.

[-] Quasari@programming.dev 15 points 1 year ago

The apps using GPT-4 without regard to safety can be, though. Example: replacing a human with a chatbot for suicide prevention.

[-] tal@lemmy.today 7 points 1 year ago

Being an existential threat is a much higher bar -- that's where humanity's continued existence is at stake.

There are plenty of technologies that you could hypothetically put somewhere where a life might be at stake, but very few that could put humanity's existence on the line.

[-] brothershamus@kbin.social 4 points 1 year ago

It's the same situation, just writ large. Dumb human decisions to put AI where it shouldn't be. Heck, you can put it in charge of the nuclear missiles now if you want to. Don't, though. That'd be really, really stupid.

Part of my knee-jerk dislike of the AI hype is that it's glorified text completion. It doesn't know shit. It only knows the probability of the next word, given the words before it. AGI is not happening anytime soon, and all this is techbro theatre for the sake of money.

Anyone who reads a wall of bland generated text and thinks we're about to talk to god is seriously mistaken.

this post was submitted on 20 Nov 2023
147 points (100.0% liked)
