Feeling is mutual.
I'd kill a fuckin' clanker to save myself.
A lightweight news hub to help decentralize the fediverse load: mirror and discuss headlines here so the giant instance communities aren’t a single choke-point.
I don't think my eyes can roll enough for this drivel
The over-anthropomorphization of text prediction machines continues
The thing I hate most about "AI" is that reporting on it ranges from a deluded Sam Altman talking about Dyson spheres to this doomer Terminator-baiting.
All of it grants agency to something agency can't apply to.
LLMs are not AI; they're algorithms that find the most likely next word given a set of training data. We've fed them a pile of the shit we say, and they're feeding the shit back to us. They don't plan, think, have opinions, or anything. Now we write stupid shit like this. We are yelling into a canyon and spooking ourselves with the echoes of the shit we said.
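For anyone curious what "most likely next word" actually means mechanically, here's a toy sketch. Real LLMs use neural networks over tokens, not word counts, but the objective is the same shape: given what came before, emit the highest-probability continuation. Everything below (the corpus, the function names) is made up for illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    # Count how often each word follows each other word in the corpus.
    words = text.split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def next_word(table, word):
    # "Most likely next word" is literally just the highest-count follower.
    followers = table.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(next_word(model, "the"))  # "cat" follows "the" twice, "mat" once
```

No goals, no self-preservation, no inner life anywhere in there; scaling the table up to billions of parameters changes the quality of the echo, not the nature of the machine.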
True AGI might or might not have a self preservation instinct. Our instincts don't come from the neocortex, and that is the area of the brain a true AGI is most likely to imitate.
In a decade we're going to be calling anything a computer does "AI". It'll be the new "call everything an app".
Anybody who finds this interesting might also find this relevant: https://en.m.wikipedia.org/wiki/Roko's_basilisk
They shouldn't. It's just microwaved Pascal's wager, and it's still stupid.
So not quite as bad as a politician?