It's fine to not understand what "AI" is and how it works, but you should avoid making statements that highlight that lack of understanding.
If you feel someone's knowledge is lacking, then explaining it may convince them, or others reading your post.
Speaking of a broad category of useful technologies as inherently bad is a dead giveaway that someone doesn't know what they're talking about.
That's highly presumptive, isn't it? I didn't make any statement about what AI is, or the mechanics behind it. I only made a statement regarding the owners and operators of AI. We're talking about the politics of using AI to aid in police accountability, and for those intents and purposes, AI need not be more than a black box. We could call it a sentient jar of kidney beans for all it matters.
So for the sake of argument - the one I made, not the one I didn't make - what did I misunderstand?
https://www.natlawreview.com/article/artificially-unintelligent-attorneys-sanctioned-misuse-chatgpt
Regardless of how ChatGPT made this error, be it "hallucination" or otherwise, I would submit this as exhibit A that AI, at least currently, is not reliable enough to do legal analysis.
Most of the big large language models are owned and run by huge corporations: OpenAI's ChatGPT, Google's Bard, Microsoft's Copilot, etc. It is already almost impossible to hold these organizations accountable for their misdeeds, so how can we trust their creations to police the police?
The naive "at-best" scenario is that AI trained to identify unjustified police shootings sometimes fails to identify them properly. Some go unreported. Or perhaps it reports a "justified" police shooting (I am not here to debate that definition but let's say they occur) as unjustified, which gums up other investigation efforts.
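To put rough numbers on that, here's a minimal sketch in Python. Every figure in it is invented for illustration (the case count, recall, and false-positive rate are assumptions, not data from any real system), but it shows how both failure modes occur at the same time:

```python
# Hypothetical confusion-matrix numbers for an AI that flags
# "unjustified" police shootings. Every figure is invented purely
# to illustrate the two failure modes described above.
cases = 1000                 # incidents reviewed (hypothetical)
truly_unjustified = 50       # hypothetical ground truth
recall = 0.90                # model catches 90% of true positives
false_positive_rate = 0.05   # mislabels 5% of justified incidents

missed = truly_unjustified * (1 - recall)                        # go unreported
false_alarms = (cases - truly_unjustified) * false_positive_rate # gum up investigations

print(f"Unjustified shootings missed: {missed:.0f}")       # 5 per 1000 cases
print(f"Justified shootings flagged:  {false_alarms:.0f}") # 48 per 1000 cases
```

Even with generous accuracy assumptions, some real misconduct slips through unflagged while dozens of spurious reports compete for investigators' attention.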
The more conspiratorial "at-worst" scenario is that a company with a pro-cop/thin-blue-line sympathizing culture could easily sweep damning reports made by their AI under the rug, which facilitates aggressive police behavior under the guise of "monitoring" it.
How does Truleo determine what is "risky" behavior, what is an "interruption" to a civilian? What is a profanity? Does Truleo consider "crap" to be a profanity? More importantly, what if you disagree with Truleo's definitions? What recourse do you have against a company that has zero duty to protect you? If you file a lawsuit alleging officer misconduct, can Truleo's AI's conclusions be admissible as evidence, and can it be used against you?
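To illustrate how much rides on those definitions, here's a toy sketch (the word lists and the `flag_profanity` helper are hypothetical; Truleo's actual pipeline is not public) where swapping the vendor's list changes whether the same sentence gets flagged at all:

```python
# Toy transcript flagger. Both word lists are invented; the point is
# that whoever writes the list decides what "profanity" means, and the
# flagged output changes accordingly.
STRICT_LIST = {"crap", "damn", "hell"}
LENIENT_LIST = {"some-narrower-vendor-list"}  # hypothetical definition

def flag_profanity(transcript: str, profanity: set[str]) -> list[str]:
    # Normalize case and strip trailing punctuation before matching.
    words = transcript.lower().split()
    return [w.strip(".,!?") for w in words if w.strip(".,!?") in profanity]

line = "Oh crap, hold on."
print(flag_profanity(line, STRICT_LIST))   # ['crap'] -> officer gets flagged
print(flag_profanity(line, LENIENT_LIST))  # []       -> nothing to report
```

Same officer, same sentence, opposite conclusions, and the difference is a private company's unreviewable configuration choice.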
(1/2)
https://www.wired.com/story/openai-bizarre-structure-4-people-the-power-to-fire-sam-altman/
Oh! Turns out I was wrong... "a handful of people with no financial stake in the company" doesn't sound like shareholders, and yet they could change the direction of the company at will. And just so we're clear, whether it's four faceless ghouls or Sam Altman, one person or four: the company is beholden to a handful of people who are not democratically elected, are not necessarily legal experts, and have not necessarily ever been police officers... and their AI is what decides whether or not to hold a police officer accountable for his misdeeds? Hard. Pass.
Oh, and lest we forget, Microsoft is invested in OpenAI, and OpenAI has a quasi-profit-driven structure. Those four board directors aren't even my biggest concern with that arrangement.
(2/2)