[–] wizardbeard@lemmy.dbzer0.com 9 points 4 days ago* (last edited 4 days ago) (1 children)

We are so far away from a paperclip maximizer scenario that I can't take anyone concerned about it seriously.

We have nothing even approaching true reasoning, despite all the misuse of the term that would suggest otherwise.

Alignment? Takeoff? None of the current technologies sold under the AI moniker come anywhere close to justifying those concerns, and most signs point to us rapidly hitting a wall with our current approaches.

Each new version from the top companies in the space shows smaller capability gains than the last, while costs grow at a pace where "exponentially" hardly feels like an adequate descriptor.

There are probably lateral improvements to be made, but beyond taping multiple tools together there's not much evidence of any further large breakthroughs in capability.
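
A minimal sketch of that diminishing-returns claim, assuming a Chinchilla-style power law where loss falls as compute^-alpha (the exponent and compute figures here are invented purely for illustration):

```python
# Minimal sketch (illustrative numbers, not measured data): if loss follows
# a power law in training compute, each 10x increase in spend buys a smaller
# absolute improvement than the one before it.
alpha = 0.05  # assumed scaling exponent, purely for illustration

def loss(compute):
    return compute ** -alpha

prev = None
for compute in [1e21, 1e22, 1e23, 1e24]:
    current = loss(compute)
    gain = "" if prev is None else f" (improvement: {prev - current:.4f})"
    print(f"compute {compute:.0e}: loss {current:.4f}{gain}")
    prev = current
# The gains shrink while each step costs 10x more: the 1e23 -> 1e24 step
# improves loss less than the 1e21 -> 1e22 step did.
```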

[–] bacon_pdp@lemmy.world -4 points 4 days ago (2 children)

I agree current technology is extremely unlikely to achieve general intelligence, but my point was that we should never try to achieve AGI; it is not worth the risk until after we solve the alignment problem.

[–] chobeat@lemmy.ml 3 points 4 days ago (1 children)

"alignment problem" is what CEOs use as a distraction to take responsibility away from their grift and frame the issue as a technical problem. That's another word that make you lose any credibility

[–] bacon_pdp@lemmy.world -1 points 4 days ago

I think we are talking past each other. Alignment with human values is important; otherwise we end up with a paperclip optimizer that wants humans only as a feedstock of atoms, or one that decides to reenact a “With Folded Hands” scenario.

None of the “AI” companies are even remotely interested in or working on this legitimate concern.
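
As a toy illustration of what "alignment with human values" is pointing at (all names and numbers here are invented), the core problem is that an optimizer pursues exactly what its objective encodes and nothing else:

```python
# Toy objective-misspecification sketch (all values invented): an optimizer
# scored only on paperclip output happily consumes things we care about,
# because nothing in its objective says not to.
resources = {"scrap_iron": 100, "factories": 50, "everything_else_we_value": 30}

def paperclips_made(consumed):
    # proxy objective: more resources consumed, more paperclips produced
    return sum(consumed.values())

naive_plan = dict(resources)          # consume everything available
print(paperclips_made(naive_plan))    # 180 -- maximal by the proxy, catastrophic for us

aligned_plan = {"scrap_iron": 100}    # a constraint the objective never encoded
print(paperclips_made(aligned_plan))  # 100 -- strictly "worse" by the proxy's lights
```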

[–] balder1991@lemmy.world 1 points 4 days ago (1 children)

Unfortunately game theory says we’re gonna do it whenever it’s technologically possible.
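
One common way to sketch that claim (payoffs invented for illustration) is a prisoner's-dilemma-style race: whoever builds AGI first gains an edge, so racing dominates holding back no matter what the rival does, even though mutual restraint would leave everyone better off:

```python
# Hypothetical two-lab AGI race with invented payoffs, keyed as (me, rival).
payoffs = {
    ("hold", "hold"): 3,  # everyone waits until alignment is solved
    ("hold", "race"): 0,  # rival gets AGI first; I get nothing
    ("race", "hold"): 5,  # I get AGI first
    ("race", "race"): 1,  # reckless race with a shared downside
}

for rival in ("hold", "race"):
    best = max(("hold", "race"), key=lambda me: payoffs[(me, rival)])
    print(f"if the rival plays {rival!r}, my best reply is {best!r}")
# Racing is the best reply either way (a dominant strategy), even though
# ("hold", "hold") would leave both players better off than ("race", "race").
```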

[–] bacon_pdp@lemmy.world 0 points 4 days ago

Only for zero-sum games.