this post was submitted on 10 Mar 2025
21 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

(page 2) 50 comments
[–] sailor_sega_saturn@awful.systems 11 points 1 week ago* (last edited 1 week ago) (11 children)

Google Translate having a normal one after I accidentally typed some German into the English side: a Google Translate UI showing that the "English" text 'Führen' translates to the German 'Es ist wichtig, dass Sie sich über die Details im Klaren sind.' ('It is important that you are clear about the details.')

What's the over/under on an LLM being involved here?

(Aside: translation is IMO one of the use cases where LLMs actually have some use, but like any algorithmic translation there are a ton of limitations)

[–] sc_griffith@awful.systems 10 points 1 week ago* (last edited 1 week ago) (6 children)

stumbled across an ai doomer subreddit, /r/controlproblem. small by reddit standards, 32k subscribers which I think translates to less activity than here.

if you haven't looked at it lately, reddit is still mostly pretty lib with rabid far right pockets. but after luigi and the trump inauguration it seems to have swung left significantly, and in particular the site is boiling over with hatred for billionaires.

the interesting bit about this subreddit is that it follows this trend. for example

 Why Billionaires Will Not Survive an AGI Extinction Event: As a follow up to my previous essays, of varying degree in popularity, I would now like to present an essay I hope we can all get behind - how billionaires die just like the rest of us in the face of an AGI induced human extinction...

 I would encourage anyone who would like to offer a critique or comment to read the full essay before doing so. I appreciate engagement, and while engaging with people who have only skimmed the sample here on Reddit can sometimes lead to interesting points, more often than not, it results in surface-level critiques that I’ve already addressed in the essay. I’m really here to connect with like-minded individuals and receive a deeper critique of the issues I raise - something that can only be done by those who have actually read the whole thing...

 Throughout history, the ultra-wealthy have insulated themselves from catastrophe. Whether it’s natural disasters, economic collapse, or even nuclear war, billionaires believe that their resources—private bunkers, fortified islands, and elite security forces—will allow them to survive when the rest of the world falls apart. In most cases, they are right. However, an artificial general intelligence (AGI) extinction event is different. AGI does not play by human rules. It does not negotiate, respect wealth, or leave room for survival. If it determines that humanity is an obstacle to its goals, it will eliminate us—swiftly, efficiently, and with absolute certainty. Unlike other threats, there will be no escape, no last refuge, and no survivors.

or the comments under this

Under Trump, AI Scientists Are Told to Remove ‘Ideological Bias’ From Powerful Models A directive from the National Institute of Standards and Technology eliminates mention of “AI safety” and “AI fairness.”

comments include "So no more patriarchy?" and "This tracks with the ideological rejection of western values by the Heritage Foundation's P2025 and their Dark Enlightenment ideals. Makes perfect sense that their orders directly reflect Yarvin's attacks on the 'Cathedral'."

or the comments on a post about how elon has turned out to be a huge piece of shit because he's a ketamine addict

comments include "Cults, or to put it more nicely all-consuming social movements, can also revamp personality in a fairly short period of time. I've watched it happen to people going both far right and far left, and with more traditional cults, and it looks very similar in its effect on the person. And one of ketamine's effects is to make people suggestible; I think some kind of cult indoctrination wave happened in silicon valley during the pandemic's combo of social isolation, political radicalism, and ketamine use in SV." and "I can think of another fascist who used amphetamines, hormones and sedatives."

mostly though they're engaging in the traditional rationalist pastime of giving each other anxiety

cartoon. a man and a woman in bed. the man looks haggard and is sitting on the edge of the bed, saying "How can you think about that with everything that's going on in the field of AI?"

Comment from EnigmaticDoom: Yeah it can feel that way sometime... but knowing we probably have such a small amount of time left. You should be trying to enjoy every little sip left that you got rather than stressing ~

[–] froztbyte@awful.systems 10 points 1 week ago (7 children)

the btb zizians series has started

surprisingly it's only 4 episodes

[–] swlabr@awful.systems 8 points 1 week ago

David Gborie! One of my fave podcasters and podcast guests. Adding this to the playlist

[–] fasterandworse@awful.systems 10 points 1 week ago (1 children)

ICYI here's me getting a bit ranty about generative ai products https://www.youtube.com/watch?v=x5MQb-uNf2U

[–] swlabr@awful.systems 8 points 1 week ago (1 children)

With that voice? Rant all you like!

[–] froztbyte@awful.systems 9 points 1 week ago

replacing all prior search engines with shitty chatbots is continuing to prove a remarkably good idea

...wait did I say good idea? I meant the other thing

[–] froztbyte@awful.systems 9 points 1 week ago (1 children)

the C-levels were promised intelligence! and it’s now a personal failing of the peons that intelligence is not present!

[–] self@awful.systems 8 points 1 week ago

Apple’s Siri Chief Calls AI Delays Ugly and Embarrassing, Promises Fixes

it’s not the delays that people seem to hate, it’s that the shipped features barely fucking work and nobody’s excited to burn battery life or buy new phones for any of them

[–] dgerard@awful.systems 8 points 1 week ago (1 children)
[–] self@awful.systems 8 points 1 week ago (1 children)

this article will most likely be how I (hopefully very rarely) start off conversations about rationalism in real life should the need once again arise (and somehow it keeps arising, thanks 2025)

but also, hoo boy what a painful talk page

[–] dgerard@awful.systems 8 points 1 week ago (5 children)

it's not actually any more painful than any wikipedia talk page, it's surprisingly okay for the genre really

remember: wikipedia rules exist to keep people like this from each others' throats, no other reason

[–] BlueMonday1984@awful.systems 8 points 1 week ago (1 children)

In other news, BlueSky's put out a proposal on letting users declare how their data gets used, and the BlueSky post announcing it got some pretty hefty backlash - not for the proposal itself, but for the mere suggestion that their posts were being scraped for AI. Given this is the same site which tore HuggingFace a new one and went nuclear on ROOST, I'm not shocked.

Additionally, Molly White's put out her thoughts on AI's impact on the commons, and recommended building legal frameworks to enforce fair compensation from AI systems which make use of the commons.

Personally, I feel that building any kind of legal framework is not going to happen - AI corps' raison d'être is to strip-mine the commons and exploit them as unfairly as possible, and they're entirely willing to tear apart any and all protections (whether technological or legal) to make that happen.

As a matter of fact, Brian Merchant's put out a piece about OpenAI and Google's assault on copyright as I was writing this.

[–] BurgersMcSlopshot@awful.systems 7 points 1 week ago (3 children)

So I enjoy the Garbage Day newsletter, but this episode of Panic World with Casey Newton is just painful, with Casey spitting out one unproven assertion after another.

[–] BurgersMcSlopshot@awful.systems 8 points 1 week ago (1 children)

Got to a point in this where Casey Newton enumerated his "only two sides" of AI and well, fuck Casey Newton.

[–] YourNetworkIsHaunted@awful.systems 8 points 1 week ago (1 children)

Was he the one who wrote that awful "real and dangerous vs fake and sucks" piece? The one that pretended that critihype was actually less common than actual questions about utility and value?

[–] BurgersMcSlopshot@awful.systems 8 points 1 week ago (1 children)

Yeah, and a lot of the answers he gave seemed to originate from that point.

One particularly grating thing was him saying that the left needs to embrace AI to fight fascism because "fascism embraced AI and they are doing well!", which is quite the conclusion to jump to.

[–] sc_griffith@awful.systems 9 points 1 week ago

fascism also embraced fascism and is doing well, so by that logic the left needs to embrace fascism
