this post was submitted on 08 Jun 2025
810 points (95.6% liked)

Technology

71143 readers
3198 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below are allowed; this includes using AI responses and summaries. To ask whether your bot can be added, please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
 

LOOK MAA I AM ON FRONT PAGE

(page 4) 50 comments
[–] ZILtoid1991@lemmy.world 11 points 1 day ago (1 children)

Thank you, Captain Obvious! Only those who think LLMs are like "little people in the computer" didn't know this already.

[–] TheFriar@lemm.ee 6 points 1 day ago (2 children)

Yeah, well, there are a ton of people literally falling into psychosis, led on by LLMs. So unfortunately, not that many people already knew it.

[–] joel_feila@lemmy.world 3 points 1 day ago

Dude, they made ChatGPT a little more bootlicky and now many people are convinced they're literal messiahs. All it took was a chatbot and a few hours of talk.

[–] reksas@sopuli.xyz 37 points 1 day ago (4 children)

does ANY model reason at all?

[–] 4am@lemm.ee 34 points 1 day ago (3 children)

No, and to make that work using the current structures we use for creating AI models, we'd probably need all the collective computing power on Earth at once.

[–] technocrit@lemmy.dbzer0.com 23 points 1 day ago* (last edited 1 day ago) (6 children)

Why would they "prove" something that's completely obvious?

The burden of proof is on the grifters, who have overwhelmingly been making false claims and distorting language for decades.

[–] yeahiknow3 23 points 1 day ago* (last edited 1 day ago) (1 children)

They’re just using the terminology that’s widespread in the field. In a sense, the paper’s purpose is to prove that this terminology is unsuitable.

[–] Mbourgon@lemmy.world 10 points 1 day ago (1 children)

Not when large swaths of people are being told to use it every day. Upper management has bought in on it.

[–] limelight79@lemmy.world 4 points 1 day ago* (last edited 1 day ago)

Yep. I'm retired now, but before retiring a month or so ago, I was working on a project that had relied on several hundred people back in 2020. "Why can't AI do it?"

The people I worked with are continuing the research and putting it up against the human coders, but... there was definitely an element of "AI can do that, we won't need people next time." I sincerely hope management listens to reason. Our decisions could potentially lead to people being fired, so I think we were able to push back on the "AI can make all of these decisions" idea... for now.

The AI people were all in; they were ready to build an interface that told the human what the AI would recommend for each item. Errrm, no, that's not how an independent test works. We had to reel them back in.

[–] BlaueHeiligenBlume@feddit.org 8 points 1 day ago (1 children)

Of course; that's obvious to anyone with basic knowledge of neural networks, no?

[–] LonstedBrowryBased@lemm.ee 12 points 1 day ago (14 children)

Yeah, of course they do; they're computers.

[–] surph_ninja@lemmy.world 8 points 1 day ago (38 children)

You assume humans do the opposite? We literally institutionalize humans who don't follow set patterns.

[–] sp3ctr4l@lemmy.dbzer0.com 17 points 1 day ago* (last edited 1 day ago) (2 children)

This has been known for years, this is the default assumption of how these models work.

You would have to prove that some kind of actual reasoning capacity has arisen as... some kind of emergent complexity phenomenon... not the other way around.

Corpos have just marketed/gaslit us/themselves so hard that they apparently forgot this.

[–] flandish@lemmy.world 18 points 1 day ago

stochastic parrots. all of them. just upgraded “soundex” models.

this should be no surprise, of course!
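(For context on the comparison above: Soundex is a century-old phonetic indexing algorithm that collapses a name into a crude four-character sound code via fixed letter-to-digit rules, i.e. pure pattern matching with no understanding. A minimal sketch of the standard rules:)

```python
def soundex(name: str) -> str:
    """Encode a name as a 4-character Soundex code (letter + 3 digits)."""
    # Letters grouped by similar sound, each group mapped to one digit.
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = name.upper()
    result = name[0]                  # the first letter is kept verbatim
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        if ch in "HW":
            continue                  # H and W are ignored entirely
        code = codes.get(ch, "")      # vowels map to "" and reset `prev`
        if code and code != prev:
            result += code            # skip adjacent letters with the same code
        prev = code
    return (result + "000")[:4]       # pad with zeros, truncate to 4 chars

# "Robert" and "Rupert" sound alike, so they hash to the same code: R163.
```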
