this post was submitted on 11 Aug 2025
21 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Previous week

top 50 comments
[–] blakestacey@awful.systems 10 points 10 hours ago (1 children)

Idea: a programming language that controls how many times a for loop cycles by the number of times a letter appears in a given word, e.g., "for each b in blueberry".
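If anyone wanted to humor the idea, a toy interpreter for that loop syntax might look like this (a minimal sketch in Python; the `"for each <letter> in <word>"` spec format is entirely hypothetical):

```python
def letter_loop(spec, body):
    """Run body once per occurrence of the letter in the word.

    Interprets a toy spec like "for each b in blueberry": the loop
    count is the number of times the letter appears in the word.
    """
    _for, _each, letter, _in, word = spec.split()
    for i in range(word.count(letter)):
        body(i)

# "blueberry" contains two b's, so the body runs twice:
letter_loop("for each b in blueberry", print)
```

Very long Dutch or German compound words would, as noted below, unlock the really big iteration counts.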

[–] Soyweiser@awful.systems 3 points 2 hours ago

Only Dutch/German people can create the very long loops.

[–] o7___o7@awful.systems 5 points 10 hours ago

Palantir's public relations team explains how it helped America win the Global War on Terror

https://news.ycombinator.com/item?id=44894910

[–] shapeofquanta@lemmy.vg 8 points 18 hours ago (1 children)

Not a sneer but a question: Do we have any good idea of what the actual cost of running AI video generators is? They're among the worst internet polluters out there, in my opinion, and I'd love it if they're too expensive to use post-bubble, but I'm worried they're cheaper than you'd think.

[–] scruiser@awful.systems 5 points 10 hours ago (1 children)

I know about half the facts I would need to estimate it... if you know the GPU VRAM required for the video generation, and how long it takes, then assuming no latency you could get a ballpark number by looking at Nvidia GPU specs on power usage. For instance, if a short clip of video generation needs 90 GB of VRAM, then maybe they are using an RTX 6000 Pro: https://www.nvidia.com/en-us/products/workstations/professional-desktop-gpus/ . Take the amount of time generation takes in off hours, which shouldn't have a queue time, and you can guesstimate a number of watt-hours. Like, if it takes 20 minutes to generate, then at 300-600 watts of power usage that would be 100-200 watt-hours. I can find an estimate of $0.33 per kWh (https://www.energysage.com/local-data/electricity-cost/ca/san-francisco-county/san-francisco/ ), so it would only be costing $0.03 to $0.06.

IDK how much GPU-time you actually need, though; I'm just wildly guessing. If they use many server-grade GPUs in parallel, that would multiply the cost up even if it only takes them minutes per video generation.
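The back-of-the-envelope arithmetic above can be sketched in a few lines. All the inputs are the comment's own guesses (one workstation GPU, 20 minutes, 300-600 W, $0.33/kWh), not measured numbers:

```python
def generation_cost_usd(minutes, watts, usd_per_kwh):
    """Electricity cost of one generation run: energy (kWh) times rate."""
    kwh = watts * (minutes / 60) / 1000
    return kwh * usd_per_kwh

# 20 minutes at 300-600 W, San Francisco residential rate of $0.33/kWh:
low = generation_cost_usd(20, 300, 0.33)   # ~$0.033 per clip
high = generation_cost_usd(20, 600, 0.33)  # ~$0.066 per clip
```

This covers only the GPU's electricity at a consumer rate; hardware amortization, cooling, datacenter overhead, and parallel server-grade GPUs would all push the real figure up.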

[–] Soyweiser@awful.systems 3 points 2 hours ago

This does leave out the constant cost (per video generated) of training the model itself, right? Pro-genAI people would say you only have to do that once, but we know everything online gets scraped repeatedly now, so there will be constant retraining. (I am mixing video with text here, so there are a lot of big unknowns.)

[–] mirrorwitch@awful.systems 14 points 23 hours ago* (last edited 22 hours ago) (2 children)

I've often called slop "signal-shaped noise". I think the damage already done by slop pissed all over the reservoirs of knowledge, art and culture is irreversible and long-lasting. This is the only thing generative "AI" is good at: making spam that's hard to detect.

It occurs to me that one way to frame this technology is as a precise inversion of Bayesian spam filters for email; no more and no less. I remember how it was a small revolution, in the arms race against spammers, when statistical methods came up; everywhere we took the load off a straining SpamAssassin with rspamd (in the years before Gmail devoured us all). I would argue "A Plan for Spam" launched Paul Graham's notoriety much more than the Lisp web stores he was so proud of. Filtering emails by keywords was no longer enough, and now you could train your computer to gradually recognise emails that looked off, for whatever definition of "off" worked for your specific inbox.
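For anyone who missed that era, the core trick was tiny: count words in known spam and known ham, then score new mail by the log-odds each word contributes. A minimal sketch (naive Bayes with add-one smoothing; the toy training data is made up for illustration, not from any real filter):

```python
import math
from collections import Counter

class NaiveBayesFilter:
    """Toy Bayesian spam filter in the "A Plan for Spam" spirit."""

    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, label, text):
        words = text.lower().split()
        self.counts[label].update(words)
        self.totals[label] += len(words)

    def spam_score(self, text):
        # Sum of per-word log-odds, with add-one smoothing so unseen
        # words don't zero out the product. Positive leans spam.
        score = 0.0
        for w in text.lower().split():
            p_spam = (self.counts["spam"][w] + 1) / (self.totals["spam"] + 2)
            p_ham = (self.counts["ham"][w] + 1) / (self.totals["ham"] + 2)
            score += math.log(p_spam / p_ham)
        return score
```

Run the same statistics in reverse, generating tokens until the output scores like ham, and you get exactly the inversion described here: noise tuned until it passes every filter built this way.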

Now we have the richest people building the most expensive, energy-intensive superclusters to use the same statistical methods the other way around, to generate spam that looks like not-spam, and is therefore immune to all filtering strategies we had developed. That same blob-like malleability of spam filters makes the new spam generators able to fit their output to whatever niche they want to pollute; the noise can be shaped like any signal.

I wonder what PG is saying about gen-"AI" these days? let's check:

“AI is the exact opposite of a solution in search of a problem,” he wrote on X. “It’s the solution to far more problems than its developers even knew existed … AI is turning out to be the missing piece in a large number of important, almost-completed puzzles.”
He shared no examples, but […]

Who would have thought that A Plan for Spam was, all along, a plan for spam.

[–] Soyweiser@awful.systems 9 points 22 hours ago

It occurs to me that one way to frame this technology is as a precise inversion of Bayesian spam filters for email.

This is a really good observation, and while I had lowkey noticed it (one of those feeling things), I had never verbalized it in any way. Good point imho. Also in how it bypasses and wrecks the old anti-spam protections. It represents a fundamental flipping of sides by the tech industry: where before it was anti-spam, it is now pro-spam. A big betrayal of consumers/users/humanity.

[–] swlabr@awful.systems 7 points 22 hours ago

Signal-shaped noise reminds me of a Wiener filter.

Aside: when I took my signals processing course, the professor kept drawing diagrams that were eerily phallic. Those were the most memorable parts of the course.

[–] bitofhope@awful.systems 6 points 22 hours ago (1 children)

The beautiful process of dialectics has taken place on the butterfly site, and we have reached a breakthrough in moral philosophy. Only a few more questions remain before we can finally declare ethics a solved problem. The most important among them is: when an omnipotent and omnibenevolent basilisk simulates Roko Mijic getting kicked in the nuts eternally by a girl with blue hair and piercings, would the girl be barefoot or wearing heavy, steel-toed boots? Which kind of footwear, or lack thereof, would optimize the utility generated?

[–] antifuchs@awful.systems 5 points 21 hours ago (1 children)

The last conundrum of our time: of course steel-capped work boots would hurt more, but barefoot would allow faster (and therefore more) kicks.

[–] Soyweiser@awful.systems 5 points 21 hours ago (2 children)

You have not taken the lessons of the philosopher Piccolo to mind. You should wear even heavier boots in your day to day. Why do you think goths wear those huge heavy boots? For looks?

[–] antifuchs@awful.systems 6 points 20 hours ago

And thus I was enlightened

[–] BlueMonday1984@awful.systems 9 points 1 day ago (18 children)

Ed Zitron's given his thoughts on GPT-5's dumpster fire launch:

Personally, I can see his point: the Duke Nukem Forever levels of hype around GPT-5 set the promptfondlers up for Duke Nukem Forever levels of disappointment, and the "deaths" of their AI waifus/therapists have killed whatever dopamine delivery mechanisms they'd set up for themselves.

[–] nfultz@awful.systems 7 points 15 hours ago (1 children)

In a similar train of thought:

A.I. as normal technology (derogatory) | Max Read

But speaking descriptively, as a matter of long precedent, what could be more normal, in Silicon Valley, than people weeping on a message board because a UX change has transformed the valence of their addiction?

I like the DNF / vaporware analogy, but did we ever have a GPT Doom or Duke3d killer app in the first place? Did I miss it?

[–] BlueMonday1984@awful.systems 9 points 14 hours ago (1 children)

I like the DNF / vaporware analogy, but did we ever have a GPT Doom or Duke3d killer app in the first place? Did I miss it?

In a literal sense, Google did attempt to make GPT Doom, and failed (i.e. a large language model can't run Doom).

In a metaphorical sense, the AI equivalent to Doom was probably AI Dungeon, a roleplay-focused chatbot viewed as quite impressive when it was released in 2020.

[–] nfultz@awful.systems 9 points 13 hours ago

In April 2021, AI Dungeon implemented a new algorithm for content moderation to prevent instances of text-based simulated child pornography created by users. The moderation process involved a human moderator reading through private stories.[49][41][50][51] The filter frequently flagged false positives due to wording (terms like "eight-year-old laptop" misinterpreted as the age of a child), affecting both pornographic and non-pornographic stories. Controversy and review bombing of AI Dungeon occurred as a result of the moderation system, citing false positives and a lack of communication between Latitude and its user base following the change.[40]

Haha. Good find.

[–] hmwilker@social.tchncs.de 5 points 20 hours ago (1 children)

@BlueMonday1984 Oh, great, thank you for this expression. I hope I’ll remember “promptfondlers” for relevant usage opportunities.
