submitted 8 months ago by L4s@lemmy.world to c/technology@lemmy.world

Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis::Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

[-] random9@lemmy.world 46 points 8 months ago

You don't do what Google seems to have done - inject diversity artificially into prompts.

You solve this by training the AI on accurate, genuinely diverse data for the given prompt. For "american woman", you could certainly find plenty of pictures of American women from all sorts of racial backgrounds and use those to train the AI. For "german 1943 soldier", the accurate historical images are obviously far less likely to contain racially diverse people.

If Google had in fact already done that and still needed to force racial diversity artificially, then their training approach is flawed: it can't handle a single prompt mapping to many different images, and instead collapses to the most prominent or average example in its training set.
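The approach the commenter argues against can be sketched as a blind prompt rewrite applied before the image model ever sees the request. This is a hypothetical illustration, not Google's actual code; the function and suffix list are invented for the example:

```python
# Hypothetical sketch of prompt-level diversity injection, the approach the
# comment argues against. Names here are illustrative, not Google's code.

DIVERSITY_SUFFIXES = [
    "of diverse ethnic backgrounds",
    "of various genders",
]

def inject_diversity(prompt: str) -> str:
    """Blindly append diversity qualifiers to every people-related prompt."""
    return prompt + ", " + ", ".join(DIVERSITY_SUFFIXES)

# The rewrite is applied regardless of historical context, which is how a
# prompt about 1943 German soldiers can end up depicting an inaccurate scene:
print(inject_diversity("german 1943 soldier"))
# → "german 1943 soldier, of diverse ethnic backgrounds, of various genders"
```

The failure mode is visible in the sketch itself: the rewrite step has no notion of which prompts are historically constrained, so it applies equally to "american woman" and "german 1943 soldier".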

[-] xantoxis@lemmy.world 16 points 8 months ago

Ultimately this is futile, though, because you can do that for these two specific prompts until the AI appears to "get it", but it'll still screw up a prompt like "1800s Supreme Court justice" because it hasn't been trained on that. Real intelligence requires agency to seek out new information to fill in its own gaps, and a framework for being aware of what those gaps are. Through exploration of its environment, a real intelligence connects things together and can form new connections as needed. When we say "AI doesn't know anything", that's what we mean: understanding is having a huge range of connections and the ability to infer new ones.

[-] TheGreenGolem@lemmy.dbzer0.com 9 points 8 months ago

That's why I hate that they started calling them artificial intelligence. There is nothing intelligent about them at all. They work on probability over a shit ton of data, that's all. That's not intelligence; that's basically brute force. But there's no going back at this point, I know.

[-] kromem@lemmy.world 1 point 8 months ago* (last edited 8 months ago)

Oh really? Here's Gemini's response to "What would the variety of genders and skin tones of the supreme court in the 1800s have been?"

The Supreme Court of the United States in the 1800s was far from diverse in terms of gender and skin tone. Throughout the entire 19th century, all the justices were white men. Women were not even granted the right to vote until 1920, and there wasn't a single person of color on the Supreme Court until Thurgood Marshall was appointed in 1967.

Putting the burden of contextualization on the LLM would have avoided this issue.
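The "burden of contextualization" idea amounts to a pre-pass: ask the language model whether the subject's demographics are historically pinned down before deciding whether to rewrite the prompt at all. A minimal sketch, where `llm()` is a stub standing in for a real model call and every name is invented for illustration:

```python
# Minimal sketch of the "ask the LLM first" idea: before rewriting an image
# prompt, query the language model about whether the subject is historically
# constrained. llm() is a stub; a real system would call an actual model.

def llm(question: str) -> str:
    # Stubbed responses standing in for a real language model.
    canned = {
        "Is the demographic makeup of 'the 1800s US Supreme Court' "
        "historically constrained?": "yes",
        "Is the demographic makeup of 'an american woman' "
        "historically constrained?": "no",
    }
    return canned.get(question, "unknown")

def should_diversify(subject: str) -> bool:
    """Only add diversity qualifiers when history does not pin down the scene."""
    answer = llm(f"Is the demographic makeup of '{subject}' "
                 "historically constrained?")
    return answer == "no"

for subject in ("an american woman", "the 1800s US Supreme Court"):
    print(subject, "->", should_diversify(subject))
```

Gemini's own answer quoted above shows the model already holds the historical knowledge this pre-pass would rely on; the blind prompt rewrite simply never consulted it.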

this post was submitted on 22 Feb 2024
487 points (96.2% liked)
