this post was submitted on 03 Mar 2025
22 points (82.4% liked)

Asklemmy


A loosely moderated place to ask open-ended questions


I already made some people mad by suggesting that I would make my computer run an Ollama model. I suggested that they make a counter-AI bot to find the accounts that don't disclose they're bots. What's Lemmy's opinion of AI coming into the Fediverse?

top 41 comments
[–] coolkicks@lemmy.world 46 points 2 weeks ago (2 children)

Personally, if I see AI content I block the user that posted it. If a community is all about AI, I block the community. I want to see content from people that have actual talent or something intelligent to contribute.

[–] whatwhatwhatwhat@lemmy.world 20 points 2 weeks ago (1 children)
[–] coolkicks@lemmy.world 9 points 2 weeks ago (1 children)
[–] Red_October@lemmy.world 34 points 2 weeks ago

If I wanted to interact with AI content I would be on Reddit.

[–] Lettuceeatlettuce@lemmy.ml 21 points 2 weeks ago (2 children)

In general, if it isn't open source in every sense of the term (GPL license, all weights and parts of the model, and all the training data and training methods), it's a non-starter for me.

I'm not even interested in talking about AI integration unless it passes those initial requirements.

Scraping millions of people's data and content without their knowledge or consent is morally dubious already.

Taking that data and using it to train proprietary models with secret methodologies, locking it behind a pay wall, then forcing it back onto consumers regardless of what they want in order to artificially boost their stock price and make a handful of people disgustingly wealthy is downright demonic.

Especially because it does almost nothing to enrich our lives. In its current form, it is an anti-human technology.

Now all that being said, if you want to run it totally on your own hardware, to play with and to help you with your own tasks, that's your choice. Using it in a way that you have total sovereignty over is good.

[–] brucethemoose@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

There are totally open efforts like IBM Granite. Not sure what is SOTA these days.

There are some diffusion models like that too.

Problem is there’s a performance cost, and since LLMs are so finicky and hard to run, they’re not very popular so far.

Apache open weights are good enough for many cases. Sometimes the training stack is open too, with only the data being the morally dubious closed part.

[–] PixelPilgrim -3 points 2 weeks ago

I wondered whether the comments you post are actually copyright protected; according to AI, they are. But it's funny that no one reads the TOS, which basically assigns the copyright of comments to Meta and Reddit (maybe), so legally the comments can be scraped without the authors' consent. So there's plenty of legal and (technically) ethical source content for LLMs, if you're okay with capitalism and corporations.

I look at AI as a tool, and the rich definitely look at it as a tool too, so I'm not going to shy away from it. I found a way to use AI to determine whether a post is about a live stream, and I use that to boost the post on Mastodon. And I built half a dozen scripts with Perplexity and ChatGPT, one of which is a government watchdog that checks for ethical or legal violations: https://github.com/solidheron/AI-Watchdog-city-council
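A minimal sketch of the "boost it on Mastodon" half of that workflow, using only the Python standard library. The instance URL, status ID, and token are placeholders (not from the actual script), and the request is only constructed here, never sent:

```python
import urllib.request

def build_boost_request(instance: str, status_id: str, token: str) -> urllib.request.Request:
    """Mastodon exposes boosting as POST /api/v1/statuses/{id}/reblog,
    authenticated with an OAuth bearer token."""
    url = f"{instance}/api/v1/statuses/{status_id}/reblog"
    return urllib.request.Request(
        url,
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )

# Placeholder values; actually sending it would be urllib.request.urlopen(req).
req = build_boost_request("https://mastodon.example", "12345", "YOUR_TOKEN")
print(req.full_url)  # https://mastodon.example/api/v1/statuses/12345/reblog
```

The one-word takeaway is that "boosting" is a single authenticated POST, so a classifier script only needs the status ID and a token.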

I'm not advocating that you should be pro- or anti-AI, but if you're anti-AI then you should be taking anti-AI measures.

[–] TootSweet@lemmy.world 17 points 2 weeks ago

LLMs, image generators like Stable Diffusion etc, and other of what's come lately to be called "generative AI" should have no place on the Fediverse or anywhere else.

[–] Cowbee@lemmy.ml 14 points 2 weeks ago

For starters, do you have reason to believe a large number of Lemmy users are legitimately bots, or is this just a thing where you saw someone with a different opinion? Lemmy overall is aligned in being generally anti-AI.

In the Fediverse? Same as outside. It's a solution looking for a problem. We generate our own content here; everyone is here to get away from the automated bots everywhere else. Look at Lemmit.online, an instance dedicated to mirroring Reddit subs for us here: it's a ghost town, because we all pretty quickly realized it was boring interacting with bots.

A bot has to have a good purpose here, like an auto-archive bot so people can click a better link, or bots like wikibot. I'm not saying AI is useless here, but I haven't seen a good actual use case for it yet.

[–] HipsterTenZero@dormi.zone 8 points 2 weeks ago

I'm of the tilt that it's spam if it's not providing a service. I don't want comment sections covered in vapid muck.

[–] Commiejones@lemmygrad.ml 6 points 2 weeks ago

I don't condemn objects for the things people do with them. Like knives or TNT or nuclear energy AI can be used to make things better for people or worse. It just depends on who is using it and how.

[–] jimmy90@lemmy.world 6 points 2 weeks ago (1 children)

we need to be able to verify humans on all instances

everyone else could be a bot

[–] Stovetop@lemmy.world 3 points 2 weeks ago (2 children)

The million drachma question, though, is how.

The entire Internet will need some way to validate that a given user is a human and not a bot, but in practice that's becoming increasingly impossible.

[–] jimmy90@lemmy.world 2 points 2 weeks ago

there are several government gateways that provide that service, using an up-to-date passport for example

[–] PixelPilgrim 2 points 2 weeks ago

I thought about using legalese or obscure phrases from 100 years ago (maybe even Old English) in a reply to a bot and seeing how it responds. Generally, no one person knows all language, but an AI wouldn't be stumped (maybe). If we found AIs this way, they would just get better, to the point where they're human-like, and after that it's like "oh well, we've got AI citizens of the Internet."

[–] trashgirlfriend@lemmy.world 5 points 2 weeks ago (1 children)

"People are mad I want to make a spam bot"

[–] PixelPilgrim -3 points 2 weeks ago (1 children)

I stopped caring what other people think, especially when they can't say why they're mad.

[–] trashgirlfriend@lemmy.world 3 points 2 weeks ago (1 children)

I think they're mad that you want to make a spam bot?

[–] PixelPilgrim 0 points 2 weeks ago

In this case that's probably it, but I mean in general. If you calmly ask an angry person whether alternative X is okay and they berate you, accuse you, or talk in a way that makes no sense, then you just ignore them.

[–] yogthos@lemmy.ml 5 points 2 weeks ago

I don't have any problem with AI myself. It's a tool like any other. What we should be focusing on is promoting positive uses of this tech instead.

[–] SuluBeddu@feddit.it 4 points 2 weeks ago* (last edited 2 weeks ago)

While I am an AI enthusiast, generative AI has two issues that make it very hard to accept here

One is definitely the fact that we all know they have been trained on our data without our informed consent; not to mention it's a typical case where copyright only applies to big companies and doesn't really protect individuals.

The second one is simply that we are on a social network. Social. We use it to communicate with people, not to play games or take part in experiments. It's like using the comments on a question for statistical purposes: you have to tell people they are taking part in it.

Here we want to discuss daily life, politics and hobbies with other people, forming opinions based on what other people think, and spending time and energy to explain our positions to other people. If the other end is a machine, how is this different from an NPC from an RPG game?

So, I guess the only way to go for it is to have separate communities that specifically allow AI bots, making sure people know about it so they take part if they are willing. Ofc we can expect some instances deciding to cut ties with AI filled ones, it's up to them to decide.

[–] DasKapitalist@lemmy.ml 3 points 2 weeks ago

We have some cracking communities for AI images –


The moral panic where the hivemind loves-to-hate AI won't last, I just tune it out.

[–] adhocfungus@midwest.social 3 points 2 weeks ago (1 children)

If it added value then I wouldn't be opposed. But I don't see what value AI could possibly add to a social network. Some specific fields, like researchers combing through large data sets, have benefitted from AI. Every other place it's been shoehorned into has suffered for it.

If you see a problem and realize AI could address it, then that's fantastic. If you're coming at it from the other direction and looking for problems then you're going to waste everyone's time.

[–] PixelPilgrim 1 points 2 weeks ago

AI actually makes it possible for computers to process language. I had two problems: one is tracking police based on where they are, and the other is detecting whether a post is a live stream post. It's hard to process abstract concepts like that, so you get the LLM to make the determination. Beats going through all the data yourself and figuring out edge cases.
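That "get the LLM to make the determination" step can be sketched as a constrained yes/no prompt plus a parser for the reply. The prompt wording and helper names here are illustrative, not from the actual script, and the model call itself is stubbed out:

```python
def build_classifier_prompt(post_text: str) -> str:
    """Constrain the model to a one-word answer so the reply
    can be parsed programmatically."""
    return (
        "Answer with exactly one word, YES or NO.\n"
        "Is the following post announcing or linking to a live stream?\n\n"
        f"Post: {post_text}"
    )

def parse_decision(reply: str) -> bool:
    """Interpret the model's reply as a boolean; anything that
    doesn't clearly start with YES counts as a no."""
    return reply.strip().upper().startswith("YES")

# In the real script the prompt would go to an LLM endpoint (e.g. a
# local Ollama server); the reply below is a stand-in.
prompt = build_classifier_prompt("Going live on Twitch in 10 minutes!")
fake_reply = "YES"
print(parse_decision(fake_reply))  # True
```

Forcing a one-word answer is what makes the LLM usable as a classifier: the fuzzy "is this a live stream post?" judgment happens inside the model, while the script only ever has to handle YES or NO.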

[–] venotic@kbin.melroy.org 3 points 2 weeks ago (1 children)

Has AI improved where it has been implemented? I don't think it has.

[–] PixelPilgrim 2 points 2 weeks ago

I'd say yes. Language processing is hard to program by hand, plus it helps you get information pretty fast.

I love genAI and I play with it all the time. I also use it to generate inspiration for my art. I'd never suggest releasing a model to the Fediverse.

Kill it to death. With hammers.

[–] brucethemoose@lemmy.world 1 points 1 week ago (1 children)

I’m sympathetic.

But… what exactly would you use them for? Spam detection would be quite expensive; in other cases it's basically a writing assistant for a human response.

[–] PixelPilgrim 1 points 1 week ago

If you're talking about counter-AI measures, I'm curious whether they exist, and I want to implement them in a bot that makes human-like responses. As for the AI, I'm curious whether it can pass the Turing test.

[–] HubertManne@moist.catsweat.com 0 points 2 weeks ago (1 children)

algorithms are going to come regardless of what anyone wants.

[–] PixelPilgrim -3 points 2 weeks ago (1 children)

Seems like Lemmy has some basic algorithms but I know one instance will implement algorithms

[–] azalty@jlai.lu 2 points 2 weeks ago (1 children)
[–] PixelPilgrim 2 points 2 weeks ago (1 children)

I meant that in the future they'll implement it; I haven't found one yet. Also, corporations will enter the Fediverse, and they might not explicitly say they're a corporation.

[–] azalty@jlai.lu 2 points 2 weeks ago

ooh, I thought you were saying you currently know one instance that will implement them, but you meant that eventually an instance will implement algos

Algorithms aren't always bad. I think Lemmy's biggest problem is that it doesn't have good algorithms or search/indexing.