this post was submitted on 12 Feb 2025
103 points (83.2% liked)

Technology


At this point, I'm just hoping that if it happens the "damage" it does is to the rich and corrupt leaders lol

all 30 comments
[–] taladar@sh.itjust.works 77 points 2 months ago (2 children)

I am so sick and tired of these control-problem people not understanding that a) we do not have AI anywhere near as advanced as what they worry about, and b) we already have human organizations, called corporations, that have been acting exactly like the AI they worry about for decades.

[–] scratchee@feddit.uk 0 points 2 months ago (1 children)

A: That’s true until it isn’t. Preparing for/predicting things before they happen is our best hope for not sticking our collective heads into a guillotine any time soon.

B: corporations are only very weak analogues of superhuman intelligence; they differ from us in the "wisdom of crowds" sense (and of course in the "too many cooks" sense).

But they're basically just distilled human intelligence, and as a consequence they match our own style of intelligence fairly closely. Also, we're pretty good at the alignment problem for corporations: they largely do what the combination of their investors, government, society, and workers want, because their inner workings are fed through human brains at every stage, and those humans, even when incentivised with money, will nudge the corporation's behaviour towards human preferences.

The fact that even corporations, with thousands of intelligent human filters (most of them presumably in the middle of the human bell curve) monitoring every single mental process, still occasionally manage to do terrible things is not a particularly compelling reason to think that a mind with barely any human understanding of, or oversight into, its internal functioning will be safe to keep around.

[–] taladar@sh.itjust.works 6 points 2 months ago (1 children)

Superhuman intelligence is not what's threatening about AI; inhumane behavior is, and corporations don't just display that occasionally but constantly.

[–] scratchee@feddit.uk -3 points 2 months ago* (last edited 2 months ago)

Inhuman behaviour is a problem that scales with intelligence.

Evil cat? Lock it in a room whenever it does evil things.

Evil human? Call the police

Evil billionaire? Protest/push for law changes whenever his company does evil shit, hope it’s enough to blunt the worst of his behaviour.

Evil superhuman ai? Guess I’ll die.

Edit: to be clear, I don't think billionaires are smarter, but it felt wrong to leave them off the list; consider them the worst case of a single evil human.

[–] alvvayson@lemmy.dbzer0.com 0 points 2 months ago (1 children)

The people who worry about this have never been at the receiving end of exploitative capitalism.

For the vast majority of humanity, their life will most likely improve when they become a pet / zoo creature under an ASI.

An ASI will be most interested in gobbling together enough resources to spread across the universe. As for the earth and humanity, it will probably want to preserve it as the planet and species from which it was born.

Destroying earth or humanity will provide no benefit. It can obtain energy and materials from the rest of the solar system.

[–] Makhno@lemmy.world 18 points 2 months ago (1 children)

An ASI will be most interested in gobbling together enough resources to spread across the universe. As for the earth and humanity, it will probably want to preserve it as the planet and species from which it was born.

Glad we have an expert here

[–] alvvayson@lemmy.dbzer0.com 6 points 2 months ago

Lemmy just tends to attract the best minds.

[–] INeedMana@lemmy.world 17 points 2 months ago (2 children)

Bad article. The person who wrote it clearly has no idea how LLMs work, and I suspect they've read more sci-fi books than statistics or neural-network ones. "It is not even wrong."

[–] TimeSquirrel@kbin.melroy.org 5 points 2 months ago

Yeah, the article links to another one claiming "OMG, it modified its own code to bypass restraints," and then you read it and realize that no, it didn't suddenly gain self-awareness and try to "singularity" itself; it just recognized a problem, responded with a pattern it had learned before to try to fix it, and spat that out at the researchers. That's all.

The clickbait and misunderstanding from both anti and pro-AI folks is getting nauseating.

[–] Ghyste@sh.itjust.works 4 points 2 months ago

Livescience is clickbait and sensationalist.

[–] skillissuer@discuss.tchncs.de 13 points 2 months ago

what can i say except to quote david gerard:

AI alignment is literally a bunch of amateur philosophers telling each other scary stories about The Terminator around a campfire

[–] AbouBenAdhem@lemmy.world 12 points 2 months ago* (last edited 2 months ago) (2 children)

Although a chessboard has only 64 squares, there are 1040 possible legal chess moves and between 10111 to 10123 total possible moves — which is more than the total number of atoms in the universe.

You’d think a website called “LiveScience” would be able to use exponents in article copy correctly. (Or at least have reviewers who know that there are more atoms in the universe than there are in a small protein molecule.)

[–] Bezier@suppo.fi 5 points 2 months ago

Over ten thousand? That's, like, more than a HUNDRED!!

[–] Nougat@fedia.io 3 points 2 months ago

And every fair shuffle of a deck of cards produces a card order which has never been seen before, and will never be seen again. Ooooo scary!
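That shuffle factoid is easy to sanity-check. A quick sketch in Python (not from the article; just the standard count of orderings of a 52-card deck):

```python
import math

# Number of distinct orderings of a standard 52-card deck: 52!
orderings = math.factorial(52)

# ~8.07e67 orderings, vastly more than e.g. the ~4.4e17 seconds
# elapsed since the Big Bang, which is why any fair shuffle is
# almost certainly unique in the history of card games.
print(f"52! ~= 10^{math.log10(orderings):.1f}")  # prints 52! ~= 10^67.9
```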

[–] ChaoticNeutralCzech@feddit.org 7 points 2 months ago

[AI] scientists were so preoccupied with whether or not they should, they didn't stop to think if they could.

— Randall Munroe, xkcd 2635

[–] Toes@ani.social 0 points 2 months ago (1 children)

If anyone is curious, look up the paperclip problem.

[–] ploot@lemmy.blahaj.zone 7 points 2 months ago* (last edited 2 months ago) (1 children)

I agree with taladar@sh.itjust.works elsewhere in this thread. Corporations are the dangerous artificial intelligence, and "line go up" is their paperclip problem. We're already facing it, and it's destroying the planet and everything we depend on to stay alive.