this post was submitted on 04 Jul 2025
5 points (100.0% liked)

Artificial Ignorance

190 readers
1 user here now

In this community we share the best (worst?) examples of Artificial "Intelligence" being completely moronic. Did an AI give you the totally wrong answer and then in the same sentence contradict itself? Did it misquote a Wikipedia article with the exact wrong answer? Maybe it completely misinterpreted your image prompt and "created" something ridiculous.

Post your screenshots here, ideally showing the prompt and the epic stupidity.

Let's keep it light and fun, and embarrass the hell out of these Artificial Ignoramuses.

All languages are welcome, but an English explanation would be appreciated so we keep a common language. Maybe use AI to do the translation for you...

founded 6 months ago

cross-posted from: https://lemmy.sdf.org/post/37949537

Archived

  • Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind. These platforms ranked highest for transparency about how they collect and use data, and for how easy it is to opt out of having personal data used to train the underlying models.
  • Platforms developed by the biggest tech companies turned out to be the most privacy-invasive, with Meta AI (Meta) being the worst, followed by Gemini (Google) and Copilot (Microsoft). DeepSeek [...]
  • Gemini, DeepSeek, Pi AI, and Meta AI don’t seem to allow users to opt out of having prompts used to train the models.
  • All investigated models collect users’ data from “publicly accessible sources,” which could include personal information.

[...]

1 comment
CameronDev@programming.dev 2 points 5 days ago

We'll likely never know, but I wonder whether they're all actually collecting the same info and the big tech lot just have better lawyers, so their privacy policies are more accurate?

An opt-out-of-training button is worthless if you can't actually verify it anyway.