The term "reasoning model" is as much of a gaslighting marketing term as "hallucination". When an LLM is "reasoning", it is just running the model over and over on its own output. As this report implies, spending more tokens appears to increase the probability of producing a factually accurate response, but the AI is not "reasoning", and the "steps" of its "thinking" are just bullshit approximations.
this post was submitted on 01 Jul 2025
42 points (100.0% liked)
Fuck AI
AI agent that can do anything you want!
looks inside
state machines and if statements
Okay, I'll bite the bullet: without pressure from shareholders, Apple would not have released Apple Intelligence in that state, and maybe never at all.