this post was submitted on 25 May 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] BlueMonday1984@awful.systems 9 points 1 month ago

Got two major pieces to show which caught my attention:

[–] bitofhope@awful.systems 9 points 1 month ago

So far away we wait for the AGI
For the billions all wasted and gone
We feel the pain of compute time lost in few thousand days
Through the sneering and the flames we carry on

[–] swlabr@awful.systems 9 points 1 month ago (2 children)

Opening up the sack with your new favourite uwu news influencer giving a quick shout-out to our old pals, the NRx. Hoped that we wouldn’t get here, but here we are, regardless.

[–] mountainriver@awful.systems 8 points 1 month ago (2 children)

I didn't know that uwu news influencer was a thing. Kind of a clash between style and topic there, but hey whatever gets the word out.

[–] swlabr@awful.systems 8 points 1 month ago

I didn’t know that uwu news influencer was a thing.

It's probably a thing where if you start thinking about it, it's always been around, but we've just never had the right vocabulary to describe it.

[–] nightsky@awful.systems 7 points 1 month ago

I didn’t know that uwu news influencer was a thing.

Same, and also I'm still trying to process that "uwu" breached out of furry spaces and became a widely understood term. (Although I'm not entirely sure what way it took, it's also possible that it breached out of anime-related communities. Maybe some day cyber-archeologists can figure this out.)

[–] Soyweiser@awful.systems 6 points 1 month ago

I had so hoped that the rise of Trump (and his fall due to Biden) on the back of the more numerous and popular-seeming Alt-Right had been the end of all this, showing that NRx was a sort of weaker evolutionary dead end, so to speak. But sadly no.

[–] mii@awful.systems 8 points 1 month ago

Pretty good summary of why Alex Karp is as much a horrible fucking shithead as Thiel.

https://www.thenation.com/article/culture/alex-karp-palantir-tech-republic/tnamp/

[–] fasterandworse@awful.systems 7 points 1 month ago
[–] Architeuthis@awful.systems 7 points 1 month ago (1 children)

In a completely unprecedented turn of events, the word-prediction machine has a hard time predicting numbers.

https://www.wired.com/story/google-ai-overviews-says-its-still-2024/


Further evidence emerging that the effort to replace government employees with the Great Confabulatron is well under way, and that the presumed first-order goal of getting a yes-man to sign off on whatever bullshit is going well.

Now we wait for the actual policy implications and the predictable second-order effects. Which is to say dead kids.

[–] BlueMonday1984@awful.systems 7 points 1 month ago

New Bluesky post from Baldur Bjarnason:

What’s missing from the now ubiquitous “LLMs are good for code” is that code is a liability. The purpose of software is to accomplish goals with the minimal amount of code that’s realistically possible

LLMs may be good for code, but they seem to be a genuine hazard for collaborative software dev

[–] BlueMonday1984@awful.systems 6 points 1 month ago (4 children)

New article from Brian Merchant: An 'always on' OpenAI device is a massive backlash waiting to happen

Giving my personal thoughts on the upcoming OpenAI Device^tm^, I think Merchant's correct to expect mass-scale backlash against the Device^tm^ and public shaming/ostracisation of anyone who decides to use it - especially considering it's an explicit repeat of the widely clowned-on Humane AI Pin.

Headlines of Device^tm^ wearers getting their asses beaten in the street will follow soon afterwards. As Brian's noted, a lot of people would see wearing an OpenAI Device^tm^ as an open show of contempt for others, and between AI's public image becoming utterly fouled by the bubble and Silicon Valley's reputation going into the toilet, I can see someone seeing a Device^tm^ wearer as an opportunity to take their well-justified anger at tech corps out on someone who openly and willingly bootlicks for them.

[–] gerikson@awful.systems 6 points 1 month ago (1 children)

I hate that I'm so terminally online that I found out about the rumor that Musk and Stephen Miller's wife are bumping uglies through a horror-fic parody account.

https://mastodon.social/@bitterkarella@sfba.social/114593332907413196

[–] scruiser@awful.systems 6 points 1 month ago

Loose Mission Impossible Spoilers

The latest Mission Impossible movie features a rogue AI as one of the main antagonists. On the other hand, the AI's main powers are lies, fake news, and manipulation, and it only gets as far as it does because people let fear make them manipulable and because it relies on human agents to do a lot of its work. So in terms of promoting the doomerism narrative, I think the movie could actually be read as opposing the conventional doomer narrative, in favor of a calm, moderate, internationally coordinated response (the entire plot could have been derailed by governments agreeing on mutual nuclear disarmament before the AI subverted them) against AIs that ultimately have only moderate power.

Adding to the post-LLM hype predictions: I think that after the LLM bubble pops, "Terminator"-style rogue AI movie plots won't go away, but will take on a different spin. Rogue AIs' strengths are going to be narrower, their weaknesses are going to get more comical and absurd, and idiotic human actions are going to be more of a factor. For weaknesses it will be less "failed to comprehend love" or "cleverly constructed logic bomb breaks its reasoning" and more "forgets what it was doing after getting drawn into too long a conversation". For human actions it will be less "its makers failed to anticipate a completely unprecedented sequence of bootstrapping and self-improvement" and more "its makers disabled every safety and granted it every resource it asked for in the process of trying to make an extra dollar a little bit faster".
