this post was submitted on 08 Aug 2025

TechTakes


BlueMonday1984@awful.systems 12 points 3 days ago

Sam Altman is touting GPT-5 as a “Ph.D level expert.” You might expect a Ph.D could count.

So let’s try the very first question: how many R’s are there in the word strawberry? GPT-5 can do the specific word “strawberry.” Cool.

But I suspect they hard-coded that question, because it fails hard on other words: [ChatGPT]

I LITERALLY SPECIAL-CASED THIS BASIC FUCKING SHIT TEN FUCKING MONTHS AGO AND I'M FUCKING DOGSHIT AS A PROGRAMMER HOW THE EVER-LOVING FUCK DID THEY COMPLETELY FUCKING FAIL TO SPECIAL-CASE THIS ONE SPECIFIC SITUATION WHAT THE ACTUAL FUCK

(Seriously, this is extremely fucking basic stuff, how the fuck can you be so utterly shallow and creatively sterile to fuck this u- oh, yeah, I forgot OpenAI is full of promptfondlers and Business Idiots like Sam Altman.)
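For contrast, counting letters deterministically is a one-liner in any programming language; it only trips up LLMs because they operate on tokens rather than individual characters. A minimal sketch (the helper name is mine):

```python
def count_letter(word: str, letter: str) -> int:
    """Count occurrences of a letter in a word, case-insensitively.

    Trivial for a program; hard for a token-based model, which never
    sees the word as a sequence of characters in the first place.
    """
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # prints 3
print(count_letter("blueberry", "b"))   # prints 2
```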

HedyL@awful.systems 8 points 3 days ago

A while ago, I uploaded a .json file to a chatbot (MS Copilot, I believe). It was a perfectly fine .json, with just one semicolon removed (by me). The chatbot was unable to identify the problem. Instead, it claimed to have found various other "errors" in the file. Would be interesting to know if other models (such as GPT-5) would perform any better here, as to me (as a layperson) this sounds somewhat similar to the letter counting problem.
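For what it's worth, a deterministic parser pinpoints this kind of defect instantly, with an exact line and column. A minimal sketch (note JSON has no semicolons, so this assumes the removed character was a colon; the sample object is invented):

```python
import json

# Valid JSON with one delimiter deliberately removed:
# the colon after "name" is missing.
broken = '{"name" "Copilot", "version": 5}'

try:
    json.loads(broken)
except json.JSONDecodeError as e:
    # A real parser reports the precise location and cause,
    # rather than inventing unrelated "errors" elsewhere.
    print(f"line {e.lineno}, column {e.colno}: {e.msg}")
```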

dgerard@awful.systems 8 points 3 days ago

I've tested Gemini on this stuff. Sometimes it spots the syntax error and even suggests a more elegant rewrite. Sometimes it just completely shits itself.