this post was submitted on 21 Aug 2025
1106 points (96.9% liked)

Microblog Memes

[–] Redex68@lemmy.world 2 points 1 week ago (2 children)

One thing I don't get about the fear of AI is that the moment something adds AI, it's suddenly a privacy nightmare. Yes, in some cases it does make things worse, but in most cases, what was stopping the company from taking your data anyway? LLMs are just algorithms that take data in and produce output; they don't inherently give firms any additional data. Granted, in some cases that means data which previously wasn't (or shouldn't be) sent to a server is now being sent, but I've often seen people complain about privacy in situations where I don't understand why AI is the tipping point. If you don't trust the company not to store your data when you use its AI, why trust it in the first place?
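
(An illustrative sketch of the one real change mentioned above, the "now being sent to a server" part; the endpoint URL, request format, and response shape below are purely hypothetical, not any real product's API. The model isn't the new privacy risk so much as where the text travels.)

```python
# Hypothetical sketch: the "AI" version of a feature often means the same
# user text now leaves the machine. Endpoint and response shape are made up.
import requests

def spell_check_local(text: str) -> str:
    # Pre-AI style feature: the data never leaves the device.
    return text.replace("teh", "the")

def spell_check_ai(text: str) -> str:
    # "AI-powered" version: the same data is shipped off to someone's server.
    resp = requests.post(
        "https://api.example.com/v1/completions",  # hypothetical endpoint
        json={"prompt": f"Fix the spelling:\n{text}"},
        timeout=10,
    )
    return resp.json()["text"]  # assumed response shape
```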

[–] bus_factor@lemmy.world 6 points 1 week ago (1 children)

It's more about them feeding your data into an LLM, which then decides to incorporate it into an answer for some random person.

[–] Redex68@lemmy.world 1 points 1 week ago (1 children)

Yeah, but LLMs don't train on data automatically; that takes a separate, dedicated process, and it won't happen just from using them. In that sense, companies can still use your data to train a model in the background even if you never directly use an LLM, or they can run an LLM for you without training it on anything. I guess when you are actively using an LLM there's a bigger incentive for them to train on that data than otherwise, but privacy-wise it seems like basically the same situation to me.
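
(To make that distinction concrete, here's a minimal sketch assuming a Hugging Face transformers + PyTorch setup; the model name and example text are placeholders. Inference is a plain forward pass that leaves the weights untouched, while training on user data is a separate optimisation step the operator has to deliberately run.)

```python
# Minimal sketch (assumes the Hugging Face transformers + PyTorch stack;
# "gpt2" and the example text are placeholders, not anyone's real setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

user_input = "Please summarise my private notes: ..."
ids = tokenizer(user_input, return_tensors="pt").input_ids

# 1) Inference: just a forward pass. The weights don't change, so the model
#    itself retains nothing about user_input after this call.
with torch.no_grad():
    output = model.generate(ids, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# 2) Training: a separate, deliberate process. Only if the operator runs
#    something like this on logged user data do the weights absorb it.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
loss = model(ids, labels=ids).loss  # next-token prediction loss on the user text
loss.backward()
optimizer.step()                    # weights now updated with the user data
optimizer.zero_grad()
```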

[–] bus_factor@lemmy.world 1 points 1 week ago

If they're exposing their LLM to the public, there's a higher chance of it leaking its training data to the public. You don't know what they trained it on, but there's a chance it was customer data. Sure, they may not train on any of it, but why assume they don't? An internal LLM is of lesser concern, because it would probably only show employees data they already have access to.

[–] homesweethomeMrL@lemmy.world 2 points 1 week ago

> If you don't trust the company not to store your data when you use its AI, why trust it in the first place?

Policies, procedures, and common sense: three things AI is most assuredly not known for respecting. (Not that the whole topic of data privacy isn't a huge issue outside of AI.)