You made huge claims based on a non-peer-reviewed preprint with garbage statistics and abysmal experimental design, where they pooled 21 bikes and 4 race cars together to bury OpenAI's flagship models under the group trend, then went to the press with it. I'm not going to go over every flaw, but all the performance drops happen when they spam the model with the same prompt several times and then suddenly add or remove information, while using greedy decoding, which produces artificial averaging artifacts. It's context poisoning with extra steps, i.e. not logic testing but prompt hacking.
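To be concrete about the greedy decoding point: greedy decoding picks the single highest-probability token at every step, so re-running the same prompt gives the exact same output and any averaging over those runs is an artifact, not a distribution. Here's a toy sketch (all function names are mine, not from the paper):

```python
import math
import random

def softmax(logits):
    # standard numerically-stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_decode(step_logits):
    # greedy decoding: always take the argmax token at each step,
    # so repeated runs on the same prompt are byte-identical —
    # "averaging" those runs averages one sample N times
    return [max(range(len(step)), key=lambda i: step[i])
            for step in step_logits]

def sample_decode(step_logits, rng):
    # sampling decoding: draw each token from the softmax
    # distribution, which is what you'd need for runs to actually
    # vary and for averaging to mean anything
    out = []
    for step in step_logits:
        probs = softmax(step)
        r, acc = rng.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                out.append(i)
                break
    return out
```

With greedy decoding, `greedy_decode(logits)` returns the same token sequence every single time for the same prompt, which is why "we averaged over repeated prompts" doesn't measure what it claims to.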
This is Apple (which is falling behind in AI research) attacking a competitor with FUD, and it doesn't even count as research, which you'd know if you had looked it up and seen, you know, the opinions of peers.
You're just protecting an entrenched belief based on corporate slop, so what would you even do with peer-reviewed anything? You didn't bother to check the one you posted yourself.
Or you posted corporate slop on purpose and are now trying to steer the conversation away from that. Usually the case when someone conveniently bypasses absolutely all of your arguments, lol.
Another unpublished preprint with no peer review? Funny how that somehow doesn't matter when something seemingly supports your talking points. Too bad it doesn't mean what you want it to mean.
"Logical operations and definitions" = Booleans and propositional logic formalisms. You don't do that either because humans don't think like that but I'm not surprised you'd avoid mentioning the context and go for the kinda over the top and easy to misunderstand conclusion.
It's really interesting how people keep doubling down on chatbots specifically being useless, citing random things from Google, while Palantir somehow finds great use for its AIs in mass surveillance and policing. What's the talking point there, that they're too dumb to operate and that nobody should worry?