Thoughts? (techcrunch.com)
top 4 comments
[-] 332@lemmy.world 11 points 1 year ago

These guys are literally the last people in the world I would trust to regulate themselves responsibly.

[-] Xanthobilly@lemmy.world 4 points 1 year ago* (last edited 1 year ago)

This is true for corporations in general. They’re organized to maximize profits unless regulated. Even if you’re pro-business, you’ll recognize that statement as true. Without realizing it, we’re currently in an Oppenheimer-like arms race among various state-sponsored interests, including tech corporations.

[-] FantasticFox@lemmy.world 4 points 1 year ago

The biggest risks I see with AI are making misinformation, scams, etc. a lot easier.

I remember as a kid you knew that people could just make stuff up, but a photograph was fairly reliable. Then along came Photoshop and it was trivial to make convincing fake photographs.

AI is able to do this with audio, and soon with full video (perhaps already?), so it becomes much harder to trust anything.

[-] Capricorny90210@lemmy.world 1 points 1 year ago

Four of the preeminent AI players are coming together to form a new industry body designed to ensure “safe and responsible development” of so-called “frontier AI” models.

In response to growing calls for regulatory oversight, ChatGPT developer OpenAI, Microsoft, Google, and Anthropic have announced the Frontier Model Forum, a coalition that draws on the expertise of member companies to develop technical evaluations and benchmarks, and promote best practices and standards.

New frontier

At the core of the forum’s mission is something that OpenAI has previously termed “frontier AI”: advanced AI and machine learning models deemed dangerous enough to pose “severe risks to public safety.” The companies argue that such models pose a unique regulatory challenge, as “dangerous capabilities can arise unexpectedly,” making it difficult to prevent models from being misappropriated.

The self-stated goals for the new forum include:

i) Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.

ii) Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.

iii) Collaborating with policymakers, academics, civil society, and companies to share knowledge about trust and safety risks.

iv) Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.

Although the Frontier Model Forum counts just four members at present, the collective says it’s open to new members. Qualifying organizations should be developing and deploying frontier AI models, and demonstrate a “strong commitment to frontier model safety.”

In the immediate term, the founding members say the first steps will be to establish an advisory board to steer the forum’s strategy, along with a charter and a governance and funding structure.

“We plan to consult with civil society and governments in the coming weeks on the design of the Forum and on meaningful ways to collaborate,” the companies wrote in a joint statement today.

Regulation

While the Frontier Model Forum is designed to demonstrate that the AI industry is taking safety concerns seriously, it also highlights Big Tech’s desire to stave off incoming regulation through voluntary initiatives — and perhaps go some way toward writing its own rules.

Indeed, today’s news comes as Europe pushes ahead with what is set to be the first concrete AI rulebook, designed to enshrine safety, privacy, transparency, and anti-discrimination at the heart of companies’ AI development ethos.

And last week, President Biden met with seven AI companies at the White House — including the four founding members of the Frontier Model Forum — to agree on voluntary safeguards for the burgeoning AI revolution, though critics argue that the commitments were somewhat vague.

However, Biden also indicated that regulatory oversight would be on the cards in the future.

“Realizing the promise of AI by managing the risk is going to require some new laws, regulations, and oversight,” Biden said at the time. “In the weeks ahead, I’m going to continue to take executive action to help America lead the way toward responsible innovation. And we’re going to work with both parties to develop appropriate legislation and regulation.”

this post was submitted on 26 Jul 2023
0 points (50.0% liked)
