this post was submitted on 03 Mar 2025
22 points (82.4% liked)

Asklemmy

I already made some people mad by suggesting that I would make my computer run an Ollama model. I suggested that they make a counter-AI bot to find accounts that don't disclose they're bots. What's Lemmy's opinion of AI coming into the fediverse?
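For anyone curious what "making my computer run an Ollama model" looks like in practice, here is a minimal sketch that queries a locally running Ollama server over its default HTTP API. The endpoint and payload fields are Ollama's documented defaults; the model name is just an example and the `ask` helper is illustrative.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return its reply."""
    data = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # With stream=False, the full reply arrives in one JSON object.
        return json.loads(resp.read())["response"]

# Example (requires `ollama serve` and a pulled model, e.g. `ollama pull llama3.2`):
# print(ask("llama3.2", "Say hello in one word."))
```

Everything stays on your own hardware, which is exactly the sovereignty point raised in the top comment below.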

[–] Lettuceeatlettuce@lemmy.ml 21 points 3 weeks ago (2 children)

In general, if it isn't open source in every sense of the term (GPL license, all weights and parts of the model, and all the training data and training methods), it's a non-starter for me.

I'm not even interested in talking about AI integration unless it passes those initial requirements.

Scraping millions of people's data and content without their knowledge or consent is morally dubious already.

Taking that data and using it to train proprietary models with secret methodologies, locking it behind a paywall, then forcing it back onto consumers regardless of what they want, in order to artificially boost a stock price and make a handful of people disgustingly wealthy, is downright demonic.

Especially because it does almost nothing to enrich our lives. In its current form, it is an anti-human technology.

Now, all that being said, if you want to run it totally on your own hardware, to play with and to help you with your own tasks, that's your choice. Using it in a way that you have total sovereignty over is good.

[–] brucethemoose@lemmy.world 1 point 2 weeks ago* (last edited 2 weeks ago)

There are totally open efforts like IBM Granite. Not sure what is SOTA these days.

There are some diffusion models like that too.

Problem is there’s a performance cost, and since LLMs are so finicky and hard to run, they’re not very popular so far.

Apache-licensed open weights are good enough for many cases. Sometimes the training stack is open too, with only the data being the morally dubious closed part.

[–] PixelPilgrim -3 points 3 weeks ago

I wondered whether the comments you post are copyright protected; according to AI, they actually are. But it's funny that nobody reads the TOS, which basically grants the copyright in comments to Meta and Reddit (maybe), so legally the comments can be scraped without the authors' consent. So there are plenty of legal and, technically, ethical sources of content for LLMs, if you're okay with capitalism and corporations.

I look at AI as a tool, and the rich definitely look at it as a tool too, so I'm not going to shy away from it. I found a way to use AI to discriminate whether a post is about a livestream or not, and I use that to boost the post on Mastodon. I've also built half a dozen scripts with Perplexity and ChatGPT, one of which is a government watchdog that checks for ethical or legal violations: https://github.com/solidheron/AI-Watchdog-city-council
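The comment doesn't show how the livestream classifier works, so here is only a hypothetical sketch of one common pattern for that kind of task: build a strict yes/no prompt for whatever local or hosted model you use, then parse its one-word reply. All names here are illustrative, not taken from the linked repo.

```python
def classify_prompt(post_text: str) -> str:
    """Build a strict yes/no prompt asking an LLM whether a post announces a livestream."""
    return (
        "Answer with exactly 'yes' or 'no'. "
        "Does the following post announce or link to a livestream?\n\n"
        + post_text
    )

def parse_answer(reply: str) -> bool:
    """Interpret the model's reply as a boolean; anything not starting with 'yes' is False."""
    return reply.strip().lower().startswith("yes")

# Usage sketch: send classify_prompt(post) to your model of choice
# (e.g. a local Ollama instance), then boost the post if parse_answer(reply) is True.
```

Constraining the model to a single-word answer keeps the parsing trivial, which matters when the result feeds an automated action like boosting a Mastodon post.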

I'm not advocating that you should be pro- or anti-AI, but if you're anti-AI, then you should be taking anti-AI measures.