this post was submitted on 09 Jun 2025
Technology
[–] sleepundertheleaves@infosec.pub 2 points 2 days ago (2 children)

None of this is to say there are absolutely no concerns about LLMs. Obviously there are. But there is no reason to suspect LLMs are going to end humanity unless some moron hooks one up to nuclear weapons.

And what are the odds LLMs will be used to write code for DoD systems? Or AI agents integrated into routine nuclear power plant operations? I think the odds of some moron hooking up a nuke to a brainless language generator are greater than you think.

[–] MagicShel@lemmy.zip 3 points 2 days ago

Sure, but that's really the fault of the moron, not the AI for existing. You could definitely blame the AI sellers, though, who would be happy to claim AI can do it.

It's a useful tool but like fire, if idiots get their hands on it bad things will happen.

[–] AbelianGrape@beehaw.org 1 point 2 days ago

I'd argue that the resulting tragedy is the moron's fault in all the ways that matter. The things the post is "warning" about are still alarmism.