this post was submitted on 28 May 2025
36 points (72.5% liked)
Fuck AI
I can agree with a lot here but I also have to admit that I fell at the first hurdle.
Hard disagree here. If you're using so-called AI today, the responsibility to scrutinize everything it throws at you is yours. No matter how neatly packaged or convincingly worded it is. There is a failure rate - the news is full of stories. You're setting off to climb a mountain. You cannot trust the 1s and 0s.
As for the sat nav culpability, Google gives elevation information when it has it. I would not be surprised if it turned out that was the case for these dumdums. It's a bit like reading an old paper map, though. If you didn't know that more saturated colors mean higher elevation, you might have set off 30 years ago to climb this 12k ft mountain in flip-flops as well. I don't think we should blame sat navs for the ignorance here either - unless they hide that info maliciously.
I think you have to at least give feedback to satnav companies for it to maybe get better - whether you call that blame or not, I dunno. Experienced navigators report back to mapping agencies with map corrections too.
What I really don't like about satnavs is that they behave like a navigator, so some people use them as a substitute for one, develop trust, and never learn navigation skills of their own.
I can see the same with AI. Not everyone applies critical thinking like that; some people do trust other people and what they say, and they trust words written in an authoritative, human voice. I wish they wouldn't, but some clearly do. Much of the time the assistive tool will have plenty of data and give a decent answer, and so it builds up trust - especially when it communicates in a convincing, human-like manner.
You can say that's the user's fault for being too trusting, stupid, ignorant, or naive - maybe it is, or maybe it's nature / nurture / laziness. I'd just say it's part of the variety of the species: some people think differently, some are more skeptical, some are more trusting, and so on. Trust is a useful thing for social animals to have in many cases - it'd be a nightmare to live without it - but it's a vulnerability too.
These AI tools, much like marketers, con artists, and scammers, will end up developing and exploiting trust - by accident, by design, by malice, or just by imitation - and I'd rather they didn't. Of course that isn't going to stop them.
I'd just like most of these assistive tools to present their uncertainty better and flag risks better. They seem to just give less info or say less when they're thin on data, which can be a bit dangerous. If a tool is thin on data it should be saying, "I'm out of my comfort zone here, this is a guess - you need to take charge", and prompt people not to get lazy, to do some thinking and observation of their own.
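The "flag it when you're guessing" behaviour I'm describing could be sketched in a few lines - assuming, hypothetically, that the tool exposes a calibrated 0-to-1 confidence score (which real chat assistants generally don't), with the threshold picked arbitrarily for illustration:

```python
def present_answer(answer: str, confidence: float, threshold: float = 0.6) -> str:
    """Prefix a model answer with an explicit warning when confidence is low.

    `confidence` is assumed to be a calibrated score in [0, 1];
    `threshold` is an arbitrary cutoff chosen for this sketch.
    """
    if confidence < threshold:
        return ("I'm out of my comfort zone here, this is a guess - "
                "you need to take charge.\n" + answer)
    return answer

# A confident answer passes through untouched; a shaky one gets flagged.
print(present_answer("Turn left at the summit trail.", 0.9))
print(present_answer("Turn left at the summit trail.", 0.3))
```

The point isn't the mechanics - it's that the uncertainty gets surfaced to the user instead of being hidden behind a shorter, equally confident-sounding answer.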
I dunno. Hopefully more people will become more skeptical and develop more critical thinking skills. But I'm skeptical of that.