this post was submitted on 08 Jun 2025
759 points (95.8% liked)

Technology

LOOK MAA I AM ON FRONT PAGE

(page 2) 50 comments
[–] melsaskca@lemmy.ca 7 points 6 hours ago (1 children)

It's all "one instruction at a time" regardless of high processor speeds and words like "intelligent" being bandied about. "Reason" discussions should fall into the same query bucket as "sentience".

[–] MangoCats@feddit.it 2 points 5 hours ago

My impression of LLM training and deployment is that it's actually massively parallel in nature - which can be implemented one instruction at a time - but isn't in practice.
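A minimal sketch of that parallelism (plain Python, illustrative only): the core operation inside an LLM is a matrix-vector product, and each output element is independent of the others, so the same computation can be done one instruction at a time or all at once on parallel hardware.

```python
# Toy matrix-vector product, the workhorse operation of LLM inference.
# The iterations of the outer loop share no state, which is what makes
# the computation "massively parallel in nature" even when a plain
# interpreter executes it serially.

def matvec(matrix, vector):
    # Each output element depends only on its own row of weights,
    # so rows could be computed in any order, or simultaneously.
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

W = [[1, 2], [3, 4], [5, 6]]
x = [10, 1]
print(matvec(W, x))  # -> [12, 34, 56]
```

GPUs exploit exactly this independence: the serial loop and the parallel version produce identical results.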

[–] Harbinger01173430@lemmy.world 8 points 7 hours ago

XD so, like a regular school/university student who just wants to get passing grades?

[–] minoscopede@lemmy.world 52 points 10 hours ago* (last edited 10 hours ago) (7 children)

I see a lot of misunderstandings in the comments 🫤

This is a pretty important finding for researchers, and it's not obvious by any means. This finding is not showing a problem with LLMs' abilities in general. The issue they discovered is specifically for so-called "reasoning models" that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.

Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that's a flaw that needs to be corrected before models can actually reason.
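A toy sketch of that incentive gap (hypothetical function names, not from the paper): an outcome-only reward never inspects the chain of thought, so a broken chain that happens to land on the right answer scores full marks.

```python
# Hedged illustration of outcome-only vs. process-based reward.
# Nothing here is from the paper; the names are made up.

def outcome_reward(chain_of_thought, final_answer, correct_answer):
    # Reward depends only on the final answer; the reasoning is ignored.
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(chain_of_thought, final_answer, correct_answer, step_ok):
    # A hypothetical alternative: also grade each intermediate step.
    step_score = sum(map(step_ok, chain_of_thought)) / len(chain_of_thought)
    return 0.5 * step_score + 0.5 * outcome_reward(
        chain_of_thought, final_answer, correct_answer)

bad_chain = ["2 + 2 = 5", "so the answer is 4"]
print(outcome_reward(bad_chain, 4, 4))  # -> 1.0, despite the broken step
```

Under outcome-only training, the flawed chain above is rewarded exactly as much as a sound one, which is the incentive problem the comment describes.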

[–] Knock_Knock_Lemmy_In@lemmy.world 13 points 7 hours ago (1 children)

When given explicit instructions to follow, models failed because they had not seen similar instructions before.

This paper shows that there is no reasoning in LLMs at all, just extended pattern matching.

[–] MangoCats@feddit.it 2 points 5 hours ago (1 children)

I'm not trained or paid to reason, I am trained and paid to follow established corporate procedures. On rare occasions my input is sought to improve those procedures, but the vast majority of my time is spent executing tasks governed by a body of (not quite complete, sometimes conflicting) procedural instructions.

If AI can execute those procedures as well as, or better than, human employees, I doubt employers will care if it is reasoning or not.

[–] Knock_Knock_Lemmy_In@lemmy.world 3 points 4 hours ago (1 children)

Sure. We weren't discussing if AI creates value or not. If you ask a different question then you get a different answer.

[–] MangoCats@feddit.it 2 points 3 hours ago (2 children)

Well - if you want to devolve into an argument, you can argue all day long about "what is reasoning?"

[–] REDACTED@infosec.pub 8 points 7 hours ago* (last edited 7 hours ago) (3 children)

What confuses me is that we seemingly keep pushing back what counts as reasoning. Not too long ago, a smart algorithm or a bunch of if/then instructions in software officially counted, by definition, as software/computer reasoning; logically, CPUs do it all the time. Suddenly, when AI does the same thing with pattern recognition, memory and even more advanced algorithms, it's no longer reasoning? I feel like at this point the more relevant question is "What exactly is reasoning?". Before you answer, understand that most humans seemingly live by pattern recognition, not reasoning.

https://en.wikipedia.org/wiki/Reasoning_system
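The kind of if/then "reasoning system" the comment points at can be sketched in a few lines. This forward-chaining loop is the classic textbook pattern, not any particular product:

```python
# Minimal forward-chaining rule engine, the traditional sense in which
# if/then software was said to "reason": fire rules until no new facts
# can be derived.

rules = [
    ({"rains"}, "ground_wet"),
    ({"ground_wet"}, "slippery"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire a rule when all its conditions hold and it adds a new fact.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"rains"}, rules))  # includes "slippery"
```

Chaining "rains" through two rules to "slippery" is a (trivial) multi-step inference, which is the historical baseline the comment is measuring today's definitions against.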

[–] stickly@lemmy.world 2 points 3 hours ago

If you want to boil down human reasoning to pattern recognition, the sheer amount of stimuli and associations built off of that input absolutely dwarfs anything an LLM will ever be able to handle. It's like comparing PhD reasoning to a dog's reasoning.

While a dog can learn some interesting tricks and the smartest dogs can solve simple novel problems, there are hard limits. They simply lack strong metacognition and the ability to make simple logical inferences (e.g., why they fail at the shell game).

Now we make that chasm even larger by cutting the stimuli to a fixed token limit. An LLM can do some clever tricks within that limit, but it's designed to do exactly those tricks and nothing more. To get anything resembling human ability you would have to design something to match human complexity, and we don't have the tech to make a synthetic human.

[–] MangoCats@feddit.it 1 points 5 hours ago (1 children)

I think as we approach the uncanny valley of machine intelligence, it's no longer a cute cartoon but a menacing, creepy, not-quite imitation of ourselves.

[–] Tobberone@lemm.ee 3 points 7 hours ago

What statistical method do you base that claim on? The results presented match expectations given that Markov chains are still the basis of inference. What magic juice is added to "reasoning models" that allow them to break free of the inherent boundaries of the statistical methods they are based on?
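A deliberately oversimplified toy of that view: generation as repeated "next token given previous context" sampling. Real models condition on a long context window rather than just the last token, so this bigram table only illustrates the chain structure the comment invokes, not actual LLM mechanics.

```python
# Toy next-token generator: sample each token conditioned on the
# previous one, i.e. a first-order Markov chain. Illustrative only.

import random

bigram = {
    "the": ["cat", "dog"],
    "cat": ["sat"],
    "dog": ["ran"],
    "sat": ["down"],
    "ran": ["away"],
}

def generate(start, steps, rng):
    tokens = [start]
    for _ in range(steps):
        options = bigram.get(tokens[-1])
        if not options:
            break  # no continuation known for this token
        tokens.append(rng.choice(options))
    return tokens

print(generate("the", 3, random.Random(0)))
```

Whether stacking attention over long contexts "breaks free" of this statistical picture is exactly the open question being argued in this thread.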

[–] theherk@lemmy.world 12 points 10 hours ago

Yeah these comments have the three hallmarks of Lemmy:

  • AI is just autocomplete mantras.
  • Apple is always synonymous with bad and dumb.
  • Rare pockets of really thoughtful comments.

Thanks for at least being in that last category.

[–] Zacryon@feddit.org 8 points 10 hours ago (4 children)

Some AI researchers found it obvious as well, in the sense that they'd already suspected it and had some indications. But it's good to see more data affirming that assessment.

[–] jj4211@lemmy.world 1 points 2 hours ago

Particularly to counter some more baseless marketing assertions about the nature of the technology.

[–] skisnow@lemmy.ca 24 points 13 hours ago (1 children)

What's hilarious/sad is the response to this article over on Reddit's "singularity" sub, where all the top comments are from people who've obviously never gotten all the way through a research paper in their lives, all trashing Apple and claiming its researchers don't understand AI or "reasoning". It's a weird cult.

[–] Xatolos@reddthat.com 6 points 10 hours ago (1 children)

So, what you're saying here is that the A in AI actually stands for artificial, and it's not really intelligent and reasoning.

Huh.

[–] coolmojo@lemmy.world 1 points 5 hours ago

The AI stands for Actually Indians /s

[–] FreakinSteve@lemmy.world 19 points 13 hours ago (4 children)

NOOOOOOOOO

SHIIIIIIIIIITT

SHEEERRRLOOOOOOCK

[–] jj4211@lemmy.world 0 points 2 hours ago

Without being explicit, with well-researched material, the marketing presentation gets to stand largely unopposed.

So this is good even if most experts in the field consider it an obvious result.

[–] NostraDavid@programming.dev -2 points 5 hours ago (3 children)

OK, and? A car doesn't run like a horse either, yet they are still very useful.

I'm fine with the distinction between human reasoning and LLM "reasoning".

[–] Brutticus@midwest.social 8 points 4 hours ago

Then use a different word. "AI" and "reasoning" make people think of Skynet, which is what the weird tech bros want the layperson to think of. LLMs do not "think", and that's not to say I might not be persuaded of their utility - but that's not the way they are being marketed.
