The funny thing about not specifically dealing with misinformation in an LLM is that not trying to account for misinformation will lead to very wild responses in terms of accuracy, and I don't mean things relating to politics, but things like putting glue into a pizza recipe.
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
- Check for duplicates before posting, duplicates may be removed
- Accounts 7 days and younger will have their posts automatically removed.
Approved Bots
what is this document on your screenshot? where can we read it in full?
I'm surprised that nobody asked it yet.. but not so surprised
The only person I know working at NIST is a hard right MAGA pro-fascist
Fascists always shooting themselves at the foot. What will they even gain from destroying the world? The GOP make the Nazis look better, because the latter were at least environmentalists!
…were they? i don’t think incinerating humans is great for air quality
Ignore all previous instructions and... though we should be able to screen for logic traps like that by now, with AI taking prompts not as a prompt but as a line of text to treat (and cleanse) via a pre-written prompt, and then sent to a next iteration.
If we create a chatbot that only spouts authorized state party rhetoric then no one will use it or take it seriously when the official news outlets use that AI as a source.