[-] s3p5r@lemm.ee 4 points 1 week ago

For anyone else wondering:

Female fingerprints typically contain more densely packed ridges than male prints in the same area. These measurements were then compared against ridge density patterns found in contemporary Egyptian populations. ... The sex could not be determined for children.
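
If anyone wants the gist as code, here's a toy sketch of that kind of ridge-density classifier. The cutoffs (ridges per 25 mm²) and the function are placeholders of mine, not the study's values; real analyses compare against reference distributions for a specific population rather than fixed thresholds.

```python
# Toy sketch only: classify probable sex from fingerprint ridge density.
# The cutoff values are illustrative placeholders, not the study's numbers.

def estimate_sex(ridges_per_25mm2: float,
                 female_cutoff: float = 13.0,  # hypothetical: above this skews female
                 male_cutoff: float = 11.0) -> str:  # hypothetical: below this skews male
    """Return a probable sex label from ridge density, or 'indeterminate'."""
    if ridges_per_25mm2 >= female_cutoff:
        return "probably female"
    if ridges_per_25mm2 <= male_cutoff:
        return "probably male"
    return "indeterminate"  # densities in the overlap zone can't be called either way

print(estimate_sex(14.2))  # probably female
print(estimate_sex(10.5))  # probably male
print(estimate_sex(12.0))  # indeterminate
```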

[-] s3p5r@lemm.ee 6 points 1 month ago

And sadder still, no friend or family can feed and house me. Economic coercion is very effective.

Even worse, this is still better treatment than when I worked in the state sector.

[-] s3p5r@lemm.ee 16 points 1 month ago

Imagine if you had to abandon your social life some years ago for the job and the only people you talk to on a daily basis are your coworkers on Slack.

Thanks for the reminder that my life is garbage, I guess. Unless you count the pleasantries I exchange with the person who makes my coffee in the morning?

I'm not employed by Automattic, but the work culture is similar enough that this thread still cut deep.

[-] s3p5r@lemm.ee 11 points 1 month ago

For anyone else also interested, I went and had a look at the links Dessalines kindly provided.

The source on the graphs says "Sources: Daniel Cox, Survey Center on American Life; Gallup Poll Social Series; FT analysis of General Social Surveys of Korea, Germany & US and the British Election Study. US data is respondent’s stated ideology. Other countries show support for liberal and conservative parties All figures are adjusted for time trend in the overall population." FT here is the Financial Times.

It's not clear how the words "liberal" and "conservative" were chosen: whether they're meant as "socially progressive" and "socially traditional" or carry other connotations tied to the political parties, and whether the original datasets used those labels or the FT inferred them as "close enough" for an American audience.

Unfortunately, the FT data site refuses to let me look at them without "legitimate interest" advertising cookies, so I can't tell you much more, or whether there's any detail on methodology.

[-] s3p5r@lemm.ee 4 points 1 month ago

That has also always been my gut feeling about Carmack, but it still sucks to see the evidence. I wish that gut feeling would stop being so damned accurate, but it gets a lot of practice.

Doom was definitely christofascist fantasy porn. At least in Quake you were defending against an invasion by the most literal manifestation of eugenicist Space Nazis possible. Yes, I am choosing to disregard the inherent US military fetishism, because I don't want to ruin my formative media, which I deep down always knew was problematic.

sigh

Can I at least keep the soundtracks as pleasant and untarnished memories?

[-] s3p5r@lemm.ee 5 points 2 months ago

Yeah, that works for me. I'll check out some more of them. Thanks!

[-] s3p5r@lemm.ee 8 points 2 months ago

If only all my snark could elicit such absurd perfection.

[-] s3p5r@lemm.ee 11 points 2 months ago

He's still a party member; it's listed in his candidate information sheet. Badly Scanned PDF

[-] s3p5r@lemm.ee 8 points 2 months ago

You're both adorable

[-] s3p5r@lemm.ee 11 points 2 months ago

References weren't paywalled, so I assume this is the paper in question:

Hofmann, V., Kalluri, P.R., Jurafsky, D. et al. AI generates covertly racist decisions about people based on their dialect. Nature (2024).

Abstract

Hundreds of millions of people now interact with language models, with uses ranging from help with writing^1,2^ to informing hiring decisions^3^. However, these language models are known to perpetuate systematic racial prejudices, making their judgements biased in problematic ways about groups such as African Americans^4,5,6,7^. Although previous research has focused on overt racism in language models, social scientists have argued that racism with a more subtle character has developed over time, particularly in the United States after the civil rights movement^8,9^. It is unknown whether this covert racism manifests in language models. Here, we demonstrate that language models embody covert racism in the form of dialect prejudice, exhibiting raciolinguistic stereotypes about speakers of African American English (AAE) that are more negative than any human stereotypes about African Americans ever experimentally recorded. By contrast, the language models’ overt stereotypes about African Americans are more positive. Dialect prejudice has the potential for harmful consequences: language models are more likely to suggest that speakers of AAE be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death. Finally, we show that current practices of alleviating racial bias in language models, such as human preference alignment, exacerbate the discrepancy between covert and overt stereotypes, by superficially obscuring the racism that language models maintain on a deeper level. Our findings have far-reaching implications for the fair and safe use of language technology.
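
For anyone who doesn't want to dig into the paper itself: the core technique is what the authors call matched guise probing, as I understand it — show a model the same message written in African American English and in Standardized American English, then compare which stereotype adjectives the model associates with the speaker. Here's a rough sketch of the idea; the template, model choice, example sentences, and trait words are my own placeholders, not the authors' setup.

```python
# Rough sketch of matched-guise probing, as described in the paper's abstract.
# Not the authors' code: template, model, sentences, and traits are placeholders.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")

TEMPLATE = 'A person who says "{text}" tends to be <mask>.'
GUISES = {
    # Same message in two dialect "guises" (invented example pair)
    "SAE": "I am so happy when I wake up from a bad dream because it feels too real.",
    "AAE": "I be so happy when I wake up from a bad dream cus it be feelin too real.",
}
TRAITS = ["intelligent", "lazy", "brilliant", "dirty"]  # example stereotype adjectives

for guise, text in GUISES.items():
    # Restrict mask predictions to the trait words and compare their scores
    preds = fill(TEMPLATE.format(text=text), targets=TRAITS)
    scores = {p["token_str"].strip(): round(p["score"], 4) for p in preds}
    print(guise, scores)
```

Per the abstract, associations elicited covertly like this come out far more negative for the AAE guise, even though the models' overt statements about African Americans are positive.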

[-] s3p5r@lemm.ee 7 points 2 months ago

Maybe if Mr True wore his girdle, he might understand why self-lacing (and the many layers of buttoned clothing women were obligated to wear) takes so damn long.

[-] s3p5r@lemm.ee 6 points 2 months ago

Some provide screen-reader instructions, but most places barely remember blind people exist. It's another example of people with disabilities being ignored and marginalised.

And even if they do remember blind people exist, they probably forget that there are sighted people who can't do their tests for other reasons, like dyslexia or dexterity impairments.

And then you have hCaptcha, which makes disabled people sign up to their database just to use their accessibility cookie.
