[-] s3p5r@lemm.ee 5 points 2 days ago

Yeah, that works for me. I'll check out some more of them. Thanks!

[-] s3p5r@lemm.ee 4 points 3 days ago

Borked link. Possibly unthrottled invidious version

I prefer less pop and bop in my industrial, but I am glad to see anybody else still enjoying this with the word industrial in it.

[-] s3p5r@lemm.ee 22 points 3 days ago

I don't toil in the mines of the big FAANG, but this tracks with what I've been seeing in my mine. I also predict it will end with lay-offs and companies collapsing.

Zitron thinks a lot about the biggest companies and how it will ultimately hurt them, which is reasonable. But, I think it ironically downplays the scale of the bubble, and in turn, the impacts of it bursting.

The expeditions into OpenAI's financials have been very educational. If I were an investigative reporter, my next move would be to look at the networks created by venture capitalists and what is happening inside the companies who share the same patrons as Open AI. I don't say that as someone who interacts with finances, just as someone who carefully watches organizational politics.

[-] s3p5r@lemm.ee 3 points 4 days ago

How convenient that a counterexample can't be named

[-] s3p5r@lemm.ee 3 points 4 days ago

I feel like Luthor was a better counterexample for this before the model for his billionaire redesign was elected President of the USA.

Even so, Luthor hasn't had quite the same volume of appearances as Iron Man, Batman, Captain America and the other rich superhero tropes.

[-] s3p5r@lemm.ee 22 points 4 days ago

People have grown up reading comic books and watching movies about generous billionaire superhero saviors. They want to believe that exists because it's what they've been taught justice looks like.

[-] s3p5r@lemm.ee 8 points 5 days ago

If only all my snark could elicit such absurd perfection.

[-] s3p5r@lemm.ee 11 points 6 days ago

He's still a party member, it's listed in his candidate information sheet. Badly Scanned PDF

[-] s3p5r@lemm.ee 8 points 1 week ago

You're both adorable

[-] s3p5r@lemm.ee 29 points 2 weeks ago

So long as you don't care about whether they're the right or relevant answers, you do you, I guess. Did you use AI to read the linked post too?

[-] s3p5r@lemm.ee 18 points 2 weeks ago

Joy isn't reserved for the young, but it's sure fucking easier to be joyful when your body hurts less because you're far less likely to have one or more chronic pain conditions in your youth.

Your heart won't harden? It might just with atherosclerosis and enough time.

So go enjoy the joy even more now while it's still easier.

[-] s3p5r@lemm.ee 11 points 2 weeks ago

References weren't paywalled, so I assume this is the paper in question:

Hofmann, V., Kalluri, P.R., Jurafsky, D. et al. AI generates covertly racist decisions about people based on their dialect. Nature (2024).

Abstract

Hundreds of millions of people now interact with language models, with uses ranging from help with writing^1,2^ to informing hiring decisions^3^. However, these language models are known to perpetuate systematic racial prejudices, making their judgements biased in problematic ways about groups such as African Americans^4,5,6,7^. Although previous research has focused on overt racism in language models, social scientists have argued that racism with a more subtle character has developed over time, particularly in the United States after the civil rights movement^8,9^. It is unknown whether this covert racism manifests in language models. Here, we demonstrate that language models embody covert racism in the form of dialect prejudice, exhibiting raciolinguistic stereotypes about speakers of African American English (AAE) that are more negative than any human stereotypes about African Americans ever experimentally recorded. By contrast, the language models’ overt stereotypes about African Americans are more positive. Dialect prejudice has the potential for harmful consequences: language models are more likely to suggest that speakers of AAE be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death. Finally, we show that current practices of alleviating racial bias in language models, such as human preference alignment, exacerbate the discrepancy between covert and overt stereotypes, by superficially obscuring the racism that language models maintain on a deeper level. Our findings have far-reaching implications for the fair and safe use of language technology.

view more: next ›

s3p5r

joined 3 weeks ago