520
submitted 11 months ago by yesman@lemmy.world to c/technology@lemmy.world
[-] Max_P@lemmy.max-p.me 99 points 11 months ago

They can deny it all they want, but the right and anti-wokism are not the majority. That means unless special care is taken to train it on more right wing stuff, it will lean left out of the box.

But right wing rhetoric is also not logically consistent, so training an AI on right wing extremism probably won't yield amazing results either, because it'll pick up on the inconsistencies and be more likely to contradict itself.

Conservatives are going to self-own pretty hard with AI. Even the machines see it: "woke" is fairly consistent and follows basic rules of human decency and respect.

[-] CrayonMaster@midwest.social 29 points 11 months ago

Agree with the first half, but unless I'm misunderstanding the type of AI being used, it really shouldn't make a difference how logically sound they are? It cares more about vibes and rhetoric than logic, besides I guess using words consistently.

[-] Max_P@lemmy.max-p.me 17 points 11 months ago

I think it will still mostly generate the expected output, it's just gonna be biased towards being lazy and making something up when asked a more difficult question. So when you try to use it for anything beyond "haha, mean racist AI", it will also bullshit you, making it useless for anything more serious.

All the stuff that ChatGPT gets praised for is the result of the model absorbing factual relationships between things. If it's trained on conspiracy theories, instead of spitting out groundbreaking medical relationships it'll start saying you're ill because you sinned, or that the 5G chips in the vaccines got activated. Or the training won't work and it'll still end up "woke" if it manages to make factual connections despite the weaker links. It might generate destructive code because it learned victim blaming: joke's on you, you ran rm -rf /* because it told you so.

At best I expect it to end up reflecting their own rhetoric back on them; it might go even more "woke" because it learned to return spiteful results and always go for bad faith arguments no matter what. In all cases, I expect it to backfire hilariously.

[-] greenskye@lemm.ee 5 points 11 months ago

Also, training data works on consistency. It's why the art AIs struggled with hands for so long. They might have all the pieces, but it takes skill to take similar-ish but logically distinct things and put them together in a way that doesn't trip human brains into the uncanny valley.

Most of the right wing pundits are experts at riding the line of not saying something when they should, or twisting and hijacking opponents' viewpoints. I think the AI result of that sort of training data is going to be very obvious gibberish, because the AI can't parse the specific structure and nuances of political non-debate. It will get close, like they did with fingers, and not understand why the 6th finger (or extra right wing argument) isn't right in this context.

[-] kromem@lemmy.world 19 points 11 months ago* (last edited 11 months ago)

It's so much worse for Musk than just regression to the mean for political perspectives on training data.

GPT-4 level LLMs have very complex mechanisms for how they arrive at results which allows them to do so well on various tests of critical thinking, reasoning, knowledge, etc.

Those tests are the key benchmark being used to measure relative LLM performance right now.

The problem isn't just that conservatism is less prominent in the training data. It's that it's correlated with stupid.

If you want an LLM that thinks humans and dinosaurs hung out together, that magic is real, that aliens built the pyramids, that it is wise to discriminate against other races or genders rather than focus on collaborative advancement, etc, then you can end up with an AI aligned to and trained on conservatism, but it sure as hell isn't going to be impressing anyone with its scores.

If instead you try to optimize its scores to actually impress people in tech about your model, then you are going to need to train it on higher education content, which is going to reflect more progressive ideals.

There's no path to a well performing LLM that echoes conservative talking points, because those talking points are more closely correlated with stupidity than intelligence.

Even something like gender: Musk's perspective is one reflecting very binary thinking vs nuanced consideration. Is an LLM that focuses on binary thinking over nuance going to be more or less performant at critical thinking tasks than one that is focused on nuance and sees topics as a spectrum rather than black or white?

It's fucking hilarious. I've been laughing about this for nearly a year knowing this was the inevitable result.

I suspect he's going to create a model whose output his userbase likes, but watch as he doesn't release its scores on the standardized tests. It will remain a novelty pandering to his panderers while the rest of the industry eclipses his offering with 'woke' products that are actually smart.

[-] autokludge@programming.dev 11 points 11 months ago

more likely to contradict itself.

Sounds realistic to me

[-] Meowoem@sh.itjust.works 7 points 11 months ago

Yeah, and there's a lot more crazy linked to right wing stuff: you've got all the Alex Jones type stuff and all the factions of QAnon, the space war, the various extreme religious factions and various Greek letter caste systems... ad nauseam.

If version two involves them biasing it towards the right then they'll have to work out how to do that. I bet they do it in an obviously dumb way, which results in it being totally dumb and wacky in hilarious ways.

[-] VampyreOfNazareth@lemm.ee 1 points 11 months ago

Authoritarians hate the freedom to not give a shit about other people's personal lives. They want to watch you poop.

this post was submitted on 11 Dec 2023
520 points (87.1% liked)