Google's Culture of Fear (www.piratewires.com)
top 2 comments
[-] fiohnah@lemmy.blahaj.zone 4 points 8 months ago

This is a garbage article that presents rushed, low-quality software fixes as evidence of "the woke mind virus".

The evidence consists of links to the author's own publication, a Twitter screenshot, and "everyone I talked to said ...". The author is clearly drawing on a small, trusted group that backs their pre-existing views.

Trying not to have a blatantly racist AI - another Tay - is a genuinely difficult problem. Google's "throw it at the wall and see what sticks" approach to products, the features-first promotion process, and employees' honest desire to do the right thing combine into a complex, awesome, and also flawed product.

It's simpler and more entertaining to believe that there's some conspiracy than to acknowledge the complexity, and the author uses that to further their victim complex.

[-] lvxferre@mander.xyz 4 points 8 months ago* (last edited 8 months ago)

I'll focus specifically on the image generation bias.

Imagine for a moment that you "trained" a model to output tools, based on the following four pics:

Green wrench, blue wrench, red hammer, red screwdriver.

Red is the most common colour in those pics (two out of four), so the model "assumes" [NB: metaphor] that the typical tool is red. Wrenches are the most common type, so it "assumes" that the typical tool is a wrench.

So once you ask it "I want a picture of a tool", here's what it'll show you:

A red wrench. All the fucking time.

That's a flaw of the technology - it'll exacerbate any bias from its training data set.
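Here's a minimal sketch of that amplification effect in Python (the "model" and its greedy sampling are made up for illustration, not how any real image generator works): count attribute frequencies in the four training pics, then always emit the most frequent one. A 2-out-of-4 plurality in the data becomes 4-out-of-4 in the output.

```python
from collections import Counter

# Toy training set mirroring the four pics above.
training_images = [
    ("green", "wrench"),
    ("blue", "wrench"),
    ("red", "hammer"),
    ("red", "screwdriver"),
]

# "Training": just count how often each colour and tool type appears.
colour_counts = Counter(colour for colour, _ in training_images)
type_counts = Counter(tool for _, tool in training_images)

def generate_tool():
    # Greedy sampling: always pick the single most frequent attribute,
    # so anything short of a 50/50 tie vanishes from the output entirely.
    colour = colour_counts.most_common(1)[0][0]
    tool = type_counts.most_common(1)[0][0]
    return f"{colour} {tool}"

for _ in range(3):
    print(generate_tool())  # "red wrench", all the fucking time
```

Real generators sample probabilistically rather than greedily, but mode-seeking training objectives and low-temperature sampling push in the same direction: the majority pattern crowds out everything else.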

Now, instead of training the model with tools, train it with images of people doing various activities. Users here should quickly get what would happen: "engineer" goes from "mostly men" to "always a man", "primary teacher" goes from "mostly women" to "always a woman", and the same shit happens with skin colour, the shape of your nose, clothing, and everything else.

Could this be solved? Yes: you'd need to pay extra attention to the human pictures that you're "training" the image generator with and constantly check for biases. That's considerably slower, but it would solve the issue of image generators perpetuating stereotypes.
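A sketch of what that "constantly check for biases" step could look like (the function names and the 0.6 threshold are made up for illustration): audit the attribute distribution of the training set, flag anything that dominates, and compute inverse-frequency weights so the model sees a flatter distribution than the raw data has.

```python
from collections import Counter

def audit_attribute(labels, max_share=0.6):
    # Flag any attribute value whose share of the dataset exceeds
    # max_share (an illustrative threshold, not an industry standard).
    counts = Counter(labels)
    total = sum(counts.values())
    return {v: n / total for v, n in counts.items() if n / total > max_share}

def balancing_weights(labels):
    # Inverse-frequency sampling weights: rarer values get picked more
    # often during training, flattening the skew in the raw data.
    counts = Counter(labels)
    return {v: 1.0 / n for v, n in counts.items()}

# Made-up numbers: images tagged "engineer", labelled by apparent gender.
engineer_images = ["man"] * 90 + ["woman"] * 10
print(audit_attribute(engineer_images))    # {'man': 0.9} -> flagged
print(balancing_weights(engineer_images))  # {'man': ~0.011, 'woman': 0.1}
```

None of this is free: someone has to label the data, pick the target distribution, and re-run the audit every time the training set changes, which is exactly why it's slower.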

However, big corporations give no flying fucks about social harm, even if they really want suckers like you and me to believe otherwise.

this post was submitted on 05 Mar 2024
-1 points (40.0% liked)
