this post was submitted on 24 Oct 2024

MoreWrite

(This is an expanded version of two of my comments [Comment A, Comment B] - go and read those if you want)

Well, Character.ai has gotten itself into some real deep shit recently - repeat customer Sewell Setzer shot himself, and his mother, Megan Garcia, is suing the company, its founders, and Google as a result, accusing them of "anthropomorphising" their chatbots and offering "psychotherapy without a license," among other things, and demanding a full-blown recall.

Now, I'm not a lawyer, but I can see a few aspects which give Garcia a pretty solid case:

Which way the suit's gonna go, I don't know - my main interest is in the potential fallout.

Some Predictions

Win or lose, I suspect this lawsuit is going to sound character.ai's death knell - even if they don't get regulated out of existence, "our product killed a child" is the kind of Dasani-level PR disaster few companies can recover from, and news of this will likely prompt any would-be investors to run for the hills.

If Garcia does win the suit, it'd more than likely set a legal precedent which denies Section 230 protection to chatbots, if not AI-generated content in general. If that happens, I expect a wave of lawsuits against other chatbot apps like Replika, Kindroid and Nomi at the minimum.

As for the chatbot apps themselves, I expect they're gonna lock their shit down hard and fast to avoid ending up with a situation like this on their own hands, and I expect their users are gonna be pissed.

As for the AI industry at large, I suspect they're gonna try and paint the whole thing as a frivolous lawsuit and Garcia as denying any fault for her son's suicide, a la the "McDonald's coffee case". How well this will work, I don't know - personally, considering the AI industry's godawful reputation with the public, I expect they're gonna have some difficulty.

[–] BlueMonday1984@awful.systems 1 points 2 months ago (2 children)

On this specifically, no, but Character.ai has gotten hit with more lawsuits - one for encouraging a kid to kill his parents, and one for grooming one kid and fucking up another.

Giving my off-the-cuff thoughts, these two suits are likely gonna further push the public to view AI as inherently harmful to kids, if not inherently harmful in general.

[–] screechingtard@awful.systems 1 points 4 days ago

It's a psyop tactic to deflect from the main issue of AI being a useless mess. "AI is only bad if you are a mentally undeveloped child; otherwise our product works fine." Now they can regulate that it's OK and healthy to use as long as you prove you are over 18.

Someone please think of the children!

[–] Sonor@lemmy.world 2 points 2 months ago (1 children)

Gosh. I wonder what the duck is up over there. I don't hear of Nomi or Replika (though Replika is still battling its own demons) doing this much harm. I wonder why character.ai is so bad. Maybe it has the most reach?