Feels like this overlooks the same issue as every other AI use case.
When a human makes a mistake and is called out, they can usually fix it. When genAI outputs nonsense, it's fucking nonsense; you can't fix something that's fundamentally made up, and if you ask it to fix the error, it just responds with more nonsense. "I hallucinated this case? Certainly! Here are three other cases you could cite instead:" followed by three new made-up cases.