[–] DoPeopleLookHere@sh.itjust.works 0 points 1 day ago (1 children)

Okay, here's a non-Apple source, since you wanted one.

https://arxiv.org/abs/2402.12091

5 Conclusion

In this study, we investigate the capacity of LLMs, with parameters varying from 7B to 200B, to comprehend logical rules. The observed performance disparity between smaller and larger models indicates that size alone does not guarantee a profound understanding of logical constructs. While larger models may show traces of semantic learning, their outputs often lack logical validity when faced with swapped logical predicates. Our findings suggest that while LLMs may improve their logical reasoning performance through in-context learning and methodologies such as COT, these enhancements do not equate to a genuine understanding of logical operations and definitions, nor do they necessarily confer the capability for logical reasoning.
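For concreteness, here's a rough sketch (mine, not the paper's code, and the choice of modus ponens is just illustrative) of the property they're probing: a valid propositional rule stays valid when you uniformly swap which symbol plays which role, because validity depends on form, not on what the predicates mean.

```python
# Illustrative sketch only (not from the paper): a valid propositional
# rule stays valid under a uniform swap of its predicate symbols.
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: p -> q."""
    return (not p) or q

def modus_ponens_valid(swap: bool) -> bool:
    """Exhaustively check ((P -> Q) and P) -> Q over all assignments.

    With swap=True the roles of P and Q are exchanged, mimicking the
    "swapped logical predicates" manipulation described above.
    """
    for p, q in product([False, True], repeat=2):
        if swap:
            p, q = q, p
        if implies(p, q) and p and not q:
            return False  # counterexample found
    return True

print(modus_ponens_valid(swap=False))  # True
print(modus_ponens_valid(swap=True))   # still True: validity is form-level
```

The paper's claim is that models which answer the familiar version correctly often break on the swapped version, which is exactly the gap between pattern-matching the surface form and actually applying the rule.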

[–] pinkapple@lemmy.ml 0 points 18 hours ago (1 children)

Another unpublished preprint that hasn't passed peer review? Funny how that suddenly doesn't matter when something seems to support your talking points. Too bad it doesn't mean what you want it to mean.

"Logical operations and definitions" = Booleans and propositional logic formalisms. You don't do that either because humans don't think like that but I'm not surprised you'd avoid mentioning the context and go for the kinda over the top and easy to misunderstand conclusion.

It's really interesting how you get people constantly doubling down on chatbots specifically being useless, citing random things from Google, while somehow Palantir finds great use for its AIs in mass surveillance and policing. What's the talking point there, that they're too dumb to operate and nobody should worry?

[–] DoPeopleLookHere@sh.itjust.works 0 points 9 hours ago (1 children)

As opposed to the nothing you've cited showing that context tokens actually improve reasoning?

I love how you keep moving further and further away from the education topic at hand, and are now bringing in police surveillance, which everyone knows is 100% accurate.

[–] pinkapple@lemmy.ml 0 points 8 hours ago (1 children)

You're less coherent than a broken LLM lol. You made the claim that transformer-based AIs are fundamentally incapable of reasoning, or something vague like that, using gimmicky af "I tricked the chatbot into getting confused, therefore it can't think" unpublished preprints (while demanding peer review from me). Why would I need to prove anything? LLMs can write code, which is an undeniable demonstration that they handle abstract logic fairly well, something that can't be faked with probability alone, and it would be a complete waste of time to explain that to anyone who is either struggling with cognitive dissonance or, less often, intentionally spreading misinformation.

Are the AIs developed by Palantir "fundamentally incapable" of their demonstrated effectiveness or not? It's a pretty valid question when we're already being surveilled by them, yet people like you indirectly suggest this can't be happening. Should people not care about predictive policing?

How about the industrial control AIs that you "critics" never mention; do power grid controllers fake it? You may need to tell Siemens, since they're apparently not aware their deployed systems work. And while we're on that: we shouldn't be concerned about monopolies controlling public infrastructure with closed-source AI models because those are "fundamentally incapable" of operating?

I don't know, maybe this "AI skepticism" thing is lowkey intentional industry misdirection and most of you fell for it?

[–] DoPeopleLookHere@sh.itjust.works

My larger point: AI replacing teachers is at least a decade away.

You've given no evidence that it's any closer than that. You've just said you hate my sources, while not actually making a single argument of your own.

You said, well, it stores context. But who cares? I showed that it doesn't translate to what you think it does, and you said you didn't like that, without providing any evidence that it means anything beyond looking good on a graph.

I've said several times: SHOW ME IT'S CLOSE. I don't care what law enforcement buys, because that has nothing to do with education.