submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

OpenAI's offices were sent thousands of paper clips in an elaborate prank to warn about an AI apocalypse::The prank was a reference to the "paper clip maximizer" scenario – the idea that AI could destroy humanity if it were told to build as many paper clips as possible.

[-] kbal@fedia.io 5 points 1 year ago

Presumably the same people who thought that the Large Hadron Collider was going to create a black hole that would destroy the world.

[-] MotoAsh@lemmy.world 19 points 1 year ago

Nah, AI doing weird stuff is actually possible. Armageddon isn't likely, but it's more on the table than a black hole ever was.

[-] uriel238@lemmy.blahaj.zone 8 points 1 year ago

We have an imminent apocalypse (imminent in civilization terms: next few centuries) even without AI.

[-] tabular@lemmy.world 1 points 1 year ago

Yes, but is that the "AI" which they are working on?

[-] captainjaneway@lemmy.world 5 points 1 year ago* (last edited 1 year ago)

Drones that target people with image analysis. Facial detection is trivial these days. Drones have proven to be one of Ukraine's best guerrilla warfare techniques. ISIS was less successful, but Ukraine has a lot more capital to make "off the shelf" solutions more meaningful. Just look around. Plenty of private organizations are selling mass-organized drones which use various ML models to target individuals, whether for finding a person in a forest foxhole or for searching a town for a particular individual.

E.g., this random company I found on Google

[-] tabular@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

It's difficult to draw a clear line between a simple neural network and a human brain when it comes to "intelligence". The rogue, paperclip-making "AI" seems to be far closer to an intelligence, while flying autos or text prediction seem closer to mere hand-written code.

[-] MotoAsh@lemmy.world 0 points 1 year ago

I think part of the wisdom in the warning is that any kind of "intelligence" (read: NOT specifically artificial general intelligence) is capable of running away with unforeseen scenarios.

Hell, even normal ol' algorithms can have some pretty nasty edge cases that no one spots until they're running in production... Sure, it's uncommon, but it's not exactly rare either. (Just look up the list of zero-day exploits over the years.)

[-] Ataraxia@sh.itjust.works -2 points 1 year ago

Yeah and vaccines make you magnetic. Science bad.

this post was submitted on 24 Nov 2023
324 points (95.0% liked)