this post was submitted on 11 Nov 2024
Google's Gemini has told a user to "please die" and that they are "a stain on the universe" without provocation: https://www.reddit.com/r/artificial/comments/1gq4acr/gemini_told_my_brother_to_die_threatening/
The output: [screenshot of Gemini's response, in the linked thread]
all the replies anthropomorphizing the LLM because it generated something creepy and they don't know why aren't surprising, but for some reason this one really pisses me off:
an LLM generating absolute garbage that happens to be abusive in some way is a lottery ticket event, is it? I had no idea lottery wins happened that fucking frequently
Yeah, absolutely. This is happening right on the coattails of that Character.AI suicide too, so it's not some freak, impossible-to-predict accident. I mainly posted it because it flies in the face of all the talk of AI safety and "responsible AI practices".
Like Google says in their AI principles: "We will continue to develop and apply strong safety and security practices..."
I don't even care that much if Google wants to host a chatbot, but they keep trying to imply it has safety properties it doesn't. It's like writing a web framework without any HTML or SQL sanitization support, saying "We will continue to develop and apply strong safety and security practices...", and acting shocked when all the websites get hacked.
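To make that analogy concrete, here's a minimal sketch (Python, standard-library sqlite3 and html only; the table and function names are made up for illustration) of the difference between safety as a promise and safety as an actual property of the code:

import html
import sqlite3

# "Safety as a promise": user input goes straight into the SQL string and the
# HTML output, and the docs just assure you the framework follows strong
# security practices.
def render_comment_unsafe(db: sqlite3.Connection, user_id: str) -> str:
    row = db.execute(
        f"SELECT comment FROM comments WHERE user_id = '{user_id}'"  # SQL injection
    ).fetchone()
    return f"<p>{row[0]}</p>"  # un-escaped output, XSS waiting to happen

# "Safety as a property": a parameterized query plus output escaping, enforced
# in code rather than stated in a principles document.
def render_comment_safe(db: sqlite3.Connection, user_id: str) -> str:
    row = db.execute(
        "SELECT comment FROM comments WHERE user_id = ?", (user_id,)
    ).fetchone()
    return f"<p>{html.escape(row[0])}</p>"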
I guess including 4chan in the training data was a mistake.
Considering the user is trying to cheat on a test about elder care, of all things, Gemini might have a point there.