
news


A lightweight news hub to help decentralize the fediverse load: mirror and discuss headlines here so the giant instance communities aren’t a single choke-point.

Rules:

  1. Recent news articles only (published within the past 30 days).
  2. Titles must match the headline or neutrally describe the content.
  3. Avoid duplicates and spam (search before posting; batch minor updates).
  4. Be civil; no hate speech or personal attacks.
  5. No link shorteners.
  6. Don't paste entire articles into the post body.

founded 1 month ago
[–] Gradually_Adjusting@lemmy.world 18 points 10 hours ago

LLMs are not AI; an LLM is an algorithm that predicts the most likely next word given its training data. We've fed it a pile of the shit we say, and it feeds the shit right back to us. It doesn't plan, think, or hold opinions. And now we write stupid shit like this. We're yelling into a canyon and spooking ourselves with the echoes of the shit we said.
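
If you want the mechanics, here's a toy sketch of "predict the most likely next word": a made-up bigram counter over a tiny corpus, nothing like a real LLM's learned neural network, but the generation loop has the same shape (score continuations, pick a likely one, append, repeat):

```python
# Toy "most likely next word" demo: a bigram counter over a tiny corpus.
# Hypothetical illustration -- real LLMs learn a neural network over
# tokens, but the loop is the same shape.
from collections import Counter, defaultdict

corpus = "we say shit and the model says the shit right back".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no observed continuation for this word
        # Greedy decoding: take the single most frequent next word.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("we"))  # just echoes patterns from the training text
```

It has no goals and no model of the world; it only ever regurgitates statistics of what it was fed.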

True AGI might or might not have a self-preservation instinct. Our instincts don't come from the neocortex, and the neocortex is the part of the brain a true AGI is most likely to imitate.

[–] manuallybreathing@lemmy.ml 1 point 5 hours ago

In a decade we're going to be calling anything a computer does "AI". It'll be the new "call everything an app".