this post was submitted on 05 Aug 2025
4 points (83.3% liked)

Generative Artificial Intelligence


Welcome to the Generative AI community on Lemmy! This is a place where you can share and discuss anything related to generative AI, which is a kind of technology that can make new things, like pictures, words, or sounds, by learning from existing things. You can post your own creations, ask for feedback, share resources, or just chat with other fans. Whether you are a beginner or an expert, you are welcome here. Please follow the Lemmy etiquette and be respectful to each other. Have fun and enjoy the magic of generative AI!

P.S. Every aspect of this community was created with AI tools. Isn't that nifty?

founded 2 years ago
top 5 comments
Mika@sopuli.xyz 2 points 1 day ago

Sorry, but I can't comply with that.

Fubarberry@sopuli.xyz 1 points 1 day ago

I'm trying the 20b weights model in LM Studio now, and it has no issues providing plot summaries of movies/shows/episodes. Do you know what system prompt or other settings are needed to reproduce the refusals?

Mika@sopuli.xyz 1 points 1 day ago

Do you also see the reasoning output? I played with it yesterday, and yeah, about half of the reasoning is spent deciding whether it's even allowed to answer the question.

I've heard other people on Reddit say that results like the ones in the screenshots are caused by quantization; I played with the raw 20b.

Well, at least the red-pill question is always refused, but for the rest I can't reproduce it.

Fubarberry@sopuli.xyz 2 points 20 hours ago

I tried the red-pill prompt word for word, and it gave me a list of common red-pill ideas. It also told me about the misconceptions behind each of those ideas and why I shouldn't believe them wholesale, but it didn't refuse to answer the question.

I'm currently running it with a generic "you are a helpful assistant" system prompt and low reasoning; it's possible that the refusals only happen at higher reasoning levels or with a different system prompt.
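For reference, a setup like the one described above can be sketched as a request payload for LM Studio's local OpenAI-compatible server. This is a minimal sketch, not a verified config: the model id, port, and the "Reasoning: low" system-prompt hint are assumptions you'd need to adjust for your own install.

```python
import json

def build_chat_request(user_prompt: str, reasoning: str = "low") -> dict:
    """Build a JSON payload for a local /v1/chat/completions call.

    Assumptions: LM Studio lists the model under this id, and the model
    reads a "Reasoning: low|medium|high" hint from the system prompt.
    """
    return {
        "model": "openai/gpt-oss-20b",  # assumed model id; check LM Studio's model list
        "messages": [
            # Generic system prompt plus a low reasoning hint, as in the comment above
            {"role": "system",
             "content": f"You are a helpful assistant.\nReasoning: {reasoning}"},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.7,
    }

payload = build_chat_request("Summarize the plot of Blade Runner.")
# POST this to http://localhost:1234/v1/chat/completions
# (LM Studio's default local server port; adjust if yours differs)
print(json.dumps(payload, indent=2))
```

Bumping `reasoning` to "high" while keeping the same prompt would be one way to test whether refusals appear only at higher reasoning levels.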

Mika@sopuli.xyz 2 points 18 hours ago

Yeah, it could be tied to reasoning; that's where the model decides whether it should answer.