this post was submitted on 07 Sep 2025
18 points (95.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

(page 2) 38 comments
[–] mlen@awful.systems 9 points 1 day ago

Signal is finally close to releasing a cross platform backup system: https://signal.org/blog/introducing-secure-backups/

[–] BlueMonday1984@awful.systems 8 points 1 day ago* (last edited 1 day ago) (1 children)

GoToSocial recently put up a code of conduct that openly barred AI-"assisted" changes and fascist/capitalist involvement, prompting some concern trolling on the red site.

Got a promptfondler trying to paint basic human decency as ridiculous, and a Concerned Individual^tm^ who's pissed at GoToSocial refusing to become a Nazi bar.

[–] gerikson@awful.systems 5 points 1 day ago

Yet another example of how acceptance of GenAI is increasingly coded as right-wing.

[–] CinnasVerses@awful.systems 11 points 2 days ago* (last edited 21 hours ago) (2 children)

When it started in ’06, this blog was near the center of the origin of a “rationalist” movement, wherein idealistic youths tried to adapt rational styles and methods. While these habits did often impress, and bond this community together, they alas came to trust that their leaders had in fact achieved unusual rationality, and on that basis embraced many contrarian but not especially rational conclusions of those leaders. - Robin Hanson, 2025

I hear that even though Yud started blogging on his site, and even though George Mason University-style economics is trendy with EA and LessWrong, Hanson never identified himself with EA or LessWrong as movements. So this is like Gabriele D'Annunzio insisting he is a nationalist rather than a fascist, not Nassim Nicholas Taleb denouncing phrenology.

[–] scruiser@awful.systems 6 points 1 day ago* (last edited 1 day ago) (10 children)

He had me in the first half; I thought he was calling out the rationalists' problems (even if dishonestly disassociating himself from them). But then his recommended solution was prediction markets, a concept which rationalists have in fact been trying to play around with, albeit at a toy-model level with fake money.

[–] blakestacey@awful.systems 9 points 1 day ago* (last edited 1 day ago) (1 children)

Also a concept that Scott Aaronson praised Hanson for.

https://web.archive.org/web/20210425233250/https://twitter.com/arthur_affect/status/994112139420876800

(Crediting the "Great Filter" to Hanson, like Scott Computers there, sounds like some fuckin' bullshit to me. In Cosmos, Carl Sagan wrote, "Why are they not here? There are many possible answers. Although it runs contrary to the heritage of Aristarchus and Copernicus, perhaps we are the first. Some technical civilization must be the first to emerge in the history of the Galaxy. Perhaps we are mistaken in our belief that at least occasional civilizations avoid self-destruction." And in his discussion of abiogenesis: "Life had arisen almost immediately after the origin of the Earth, which suggests that life may be an inevitable chemical process on an Earth-like planet. But life did not evolve beyond blue-green algae for three billion years, which suggests that large lifeforms with specialized organs are hard to evolve, harder even than the origin of life. Perhaps there are many other planets that today have abundant microbes but no big beasts and vegetables." Boom! There it is, in only the most successful pop-science book of the century.)

[–] swlabr@awful.systems 8 points 1 day ago* (last edited 1 day ago)

Most famously, Robin is […] also the inventor of futarchy

A futarchy, you say? Tell me more, Robin Hanson

[–] sailor_sega_saturn@awful.systems 7 points 1 day ago (1 children)

Honestly Hanson is so awful the rationalists almost make him look better by association.

[–] scruiser@awful.systems 9 points 1 day ago (1 children)

He's the one that used the phrase "silent gentle rape"? Yeah, he's at least as bad as the worst evo-psych pseudoscience misogyny posted on lesswrong, with the added twist he has a position in academia to lend him more legitimacy.

[–] swlabr@awful.systems 6 points 1 day ago

I started reading his post with that title to refresh myself. Just to get your feet wet:

DEC 01, 2010

Added Oct ’13:

Man, what happened in the three years it took for a content warning?

Anyway, I skimmed it; the rest of the post is a huge pile of shit that I don't want to read any more of. I'm sure it's been picked apart already. But JFC.

[–] Tar_alcaran@sh.itjust.works 7 points 1 day ago (2 children)

I deeply regret having made posts in the past proclaiming LessWrong as amazing.

They do still have a decent article here and there, but that's like digging for strawberries in a pile of shit. Even if you find one, it won't be great.

[–] CinnasVerses@awful.systems 7 points 1 day ago* (last edited 1 day ago)

We have some threads, like "Vaccinations in Book/Article Form", which try to share good pop science and textbooks without the cult shit and Dunning-Kruger. People who think they know everything and are mysteriously underemployed tend to have the most time to post, though.

[–] nfultz@awful.systems 8 points 1 day ago (2 children)

For those of you in the (West) LA area, there's a panel with Brian Merchant happening tomorrow. Probably no food this school year but still looks good.

https://law.ucla.edu/events/democracy-technology-salon

If anyone does turn up, codeword is banana bread, otherwise I'll assume you're a lawyer (not derogatory).

[–] aio@awful.systems 9 points 1 day ago

codeword is banana bread

Will there be statues to swap as well?

[–] BigMuffN69@awful.systems 8 points 1 day ago* (last edited 1 day ago)

Until proven otherwise, I assume everyone I encounter is a fellow sneerer (derogatory)

[–] gerikson@awful.systems 4 points 1 day ago (2 children)

MAGA hates AI (well, Big Tech):

https://archive.ph/mBo9I

(Originally The Verge: MAGA populists call for holy war against Big Tech)

https://www.theverge.com/politics/773154/maga-tech-right-ai-natcon

[–] fullsquare@awful.systems 3 points 1 day ago

i'd like to say "there is great fitna among republicans" but i can't; feels like it'll blow over, with the thielbux-recipient freaks just becoming more visible, and it's not like trump cares about the common clay of the new west over his deals with billionaires either way

[–] bigfondue@lemmy.world 3 points 1 day ago* (last edited 1 day ago) (1 children)

I get a bit of Schadenfreude from seeing everyone who cozies up to Trump eventually get turned on. The only people who have stuck around from the first term seem to be the Steves (Miller and Cheung).

[–] EponymousBosh@awful.systems 15 points 2 days ago (4 children)
[–] BlueMonday1984@awful.systems 14 points 2 days ago (1 children)

I genuinely thought therapists were gonna avoid the psychosis-inducing suicide machine after seeing it cause psychosis and suicide. Clearly, I was being too optimistic.

[–] fullsquare@awful.systems 6 points 2 days ago

nah they're built different

[–] swlabr@awful.systems 6 points 2 days ago

Yeah, that headline and its writer can kick rocks.

[–] zogwarg@awful.systems 10 points 2 days ago
The future is now, and it is awful. 
Would any still wonder why, I grow so ever mournful.
[–] froztbyte@awful.systems 7 points 2 days ago

irl winced at this

[–] wizardbeard@lemmy.dbzer0.com 17 points 2 days ago (4 children)

Some poor souls who arguably have their hearts in the right place (and definitely don't have their heads screwed on right) are staging hunger strikes outside Google's AI offices and Anthropic's offices.

https://programming.dev/post/37056928 contains links to a few posts on X by the folks doing it.

Imagine being so worried about AGI that you thought it was worth starving yourself over.

Now imagine feeling that strongly about it and not stopping to ask why none of the ideologues who originally sounded the alarm bells about it have tried anything even remotely as drastic.

On top of all that, imagine being this worried about what Anthropic and Google are doing in AI research, hopefully being aware of Google's military contracts, and somehow thinking they give a singular shit if you kill yourself over this.

And... where are the people outside fucking OpenAI? Bets on this being some corporate shadowplay shit?

[–] YourNetworkIsHaunted@awful.systems 10 points 2 days ago (3 children)

I mean, I try not to go full conspiratorial everything-is-a-false-flag, but the fact that the biggest AI company, the one that has been explicitly trying to create AGI, isn't getting the business here is incredibly suspect. On the other hand, it feels like anything that publicly leans into the fears of the evil computer God would be a self-own when they're in the middle of trying to completely ditch the "for the good of humanity, not just immediate profits" part of their organization.

[–] JFranek@awful.systems 6 points 1 day ago

It's two guys in London and one guy in San Francisco. In London there's presumably no OpenAI office; in SF you can't be in two places at once, and Anthropic has more true believers and does more critihype.

Unrelated: a few minutes before writing this, a bona fide cultist with the handle "BussyGyatt @feddit.org" replied to the programming.dev post. Truly the dumbest timeline.

[–] bigfondue@lemmy.world 7 points 2 days ago* (last edited 2 days ago) (1 children)

Didn't OpenAI just file court documents claiming that their opposition is funded by competitors? Accusing someone else of what they themselves are doing seems to be a pretty popular strategy these days.

[–] holdenweb@freeradical.zone 3 points 2 days ago

@bigfondue @YourNetworkIsHaunted every accusation is a confession!

[–] Soyweiser@awful.systems 6 points 2 days ago* (last edited 2 days ago)

I don't know anything about the locations of any offices, but could it be that OpenAI just didn't have any local places? Asking them why not would be a good journalist question.

But otoh it is just ~~two~~ three of them, and the second one's photo gives off a weird vibe. Why is he smiling like it is a joke?

[–] BlueMonday1984@awful.systems 8 points 2 days ago (1 children)

Starting this Stubsack off, I found a Substack post titled "Generative AI could have had a place in the arts", which attempts to play devil's advocate for the plagiarism-fueled slop machines.

Pointing to one particular lowlight, the author attempts to conflate AI with actually useful tech to try and make an argument:

While the idea of generative AI “democratizing” art is more or less a meme these days, there are in fact AI tools that do make certain artforms more accessible to low-budget productions. The first thing to come to mind is how computer vision-based motion capture gives 3D animators access to clearer motion capture data from a live-action actor, using as little as a smartphone camera and without requiring expensive mo-cap suits.

[–] froztbyte@awful.systems 4 points 2 days ago (1 children)

gigabyte selling shovels (and not even just random shovels, specialty shovels that need a fixed type of mobo to use)

not gonna spend much effort on it now but if someone runs into an actual worthwhile review showing training performance numbers I'd be keen to see (my expectations are that it still does not do very much, and that runtime quality still underperforms relative to VC-subsidised platforms)

[–] nightsky@awful.systems 5 points 2 days ago (1 children)

Fascinating how that product page is full of marketing fluff, but nowhere does it say what this actually is. What does it do? It's some kind of… memory expansion? But what's beneath the big heatsink then? All they say is that it's somehow amazing:

In the age of local AI, GIGABYTE AI TOP is the all-round solution to win advantages ahead of traditional AI training methods. It features a variety of groundbreaking technologies that can be easily adapted by beginners or experts, for most common open-source LLMs, in anyplace even on your desk.

A variety of groundbreaking technologies, uh huh, okay then. In so many ways this is the perfect companion product for AI.

[–] istewart@awful.systems 5 points 1 day ago

Oh, it's a CXL board, Compute Express Link. Basically a way to attach DRAM to PCI Express. I know some people working on this stuff for one of the big vendors, but in that context it was a rack-scale box capable of handling multiple terabytes' worth of DIMMs. Having this as a desktop expansion card seems like a bit of a marginal application, but Gigabyte's done weird shit before. For instance, I have an AMD-compatible Thunderbolt 3 card that was only made in limited quantities by them and ASRock.
