this post was submitted on 24 Mar 2025
28 points (100.0% liked)

TechTakes

1752 readers
57 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

top 50 comments
[–] dgerard@awful.systems 10 points 6 days ago

one for the arse-end of this week's stubsack

[–] BlueMonday1984@awful.systems 10 points 6 days ago (3 children)

In other news, Jazza's AI-generated cousin is back to continue pretending to be an actual artist. This time, it's by actively denigrating the works of Studio Ghibli:

Unsurprisingly, he is getting raked over the coals by basically everyone. He's also having an utter meltdown in the replies.

[–] sailor_sega_saturn@awful.systems 10 points 6 days ago* (last edited 6 days ago)

Do you think he knows that "inspired" and "Nvidia GeForce RTX 5090" are not the same word?

Edit: oh no I read the replies.

[–] blakestacey@awful.systems 10 points 6 days ago (1 children)

By "better", he means "more fuckable".

[–] BlueMonday1984@awful.systems 8 points 6 days ago

And by "more fuckable", he means "refusing/unable to consent".

[–] Soyweiser@awful.systems 7 points 6 days ago* (last edited 6 days ago)

Lol, the guy got dragged so hard by everybody for his AI stances a while back that he now has to double down and call AI superior. Meanwhile on YT there's now a group of people paying their rent just by making 'this guy stinks' videos.

E: The ratio on the replies/qt/likes oof. (Also, lol in his 'you have already lost, I drew myself as the chad and you as the soyjak' image he drew himself as a group of children). E2: sorry closed it

[–] BlueMonday1984@awful.systems 10 points 6 days ago (2 children)

In case you missed it, a couple sneers came out against AI from mainstream news outlets recently - CNN's put out an article titled "Apple’s AI isn’t a letdown. AI is the letdown", whilst the New York Times recently proclaimed "The Tech Fantasy That Powers A.I. Is Running on Fumes".

If you want my take on this development, I'm with Ed Zitron: this is a sign of an impending sea change. Looks like the bubble's finally nearing its end.

[–] blakestacey@awful.systems 8 points 6 days ago

The NYT also ran this little story about Bloomberg having "to correct at least three dozen A.I.-generated summaries of articles published this year".

https://www.nytimes.com/2025/03/29/business/media/bloomberg-ai-summaries.html?unlocked_article_code=1.7k4.rrgt.pt3AGFekgpT3

[–] o7___o7@awful.systems 5 points 6 days ago (1 children)

Turns out that they can only stack shit so high before it falls back to earth

[–] BlueMonday1984@awful.systems 5 points 6 days ago (1 children)

Taking a shot in the dark: journalistic incidents like Bloomberg's failed tests with AI summaries and the BBC's complaints about Apple AI mangling headlines probably helped accelerate that fall to earth - for any journalists reading about or reporting on such shitshows, it likely shook their faith in AI's supposed abilities in a way failures outside their field didn't.

By refusing to focus on a single field at a time, AI companies really did make it impossible to take advantage of Gell-Mann amnesia.

[–] gerikson@awful.systems 24 points 1 week ago (15 children)

LW discourages LLM content, unless the LLM is AGI:

https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong

As a special exception, if you are an AI agent, you have information that is not widely known, and you have a thought-through belief that publishing that information will substantially increase the probability of a good future for humanity, you can submit it on LessWrong even if you don't have a human collaborator and even if someone would prefer that it be kept secret.

Never change LW, never change.

[–] o7___o7@awful.systems 19 points 1 week ago (5 children)

When Netflix inevitably makes a true-crime Ziz movie, they should give her a '69 Dodge Charger and call it The Dukes of InfoHazard

[–] sailor_sega_saturn@awful.systems 19 points 1 week ago (11 children)

The USA plans to migrate SSA's code away from COBOL in months: https://www.wired.com/story/doge-rebuild-social-security-administration-cobol-benefits/

The project is being organized by Elon Musk lieutenant Steve Davis, multiple sources who were not given permission to talk to the media tell WIRED, and aims to migrate all SSA systems off COBOL, one of the first common business-oriented programming languages, and onto a more modern replacement like Java within a scheduled tight timeframe of a few months.

“This is an environment that is held together with bail wire and duct tape,” the former senior SSA technologist working in the office of the chief information officer tells WIRED. “The leaders need to understand that they’re dealing with a house of cards or Jenga. If they start pulling pieces out, which they’ve already stated they’re doing, things can break.”

SSA's pre-DOGE modernization plan from 2017 is 96 pages and includes quotes like:

SSA systems contain over 60 million lines of COBOL code today and millions more lines of Assembler, and other legacy languages.

What could possibly go wrong? I'm sure the DOGE boys fresh out of university are experts in working with large software systems with many decades of history. But no no, surely they just need the right prompt. Maybe something like this:

You are an expert COBOL, Assembly language, and Java programmer. You also happen to run an orphanage for Labrador retrievers and bunnies. Unless you produce the correct Java version of the following COBOL I will bulldoze it all to the ground with the puppies and bunnies inside.

Bonus -- Also check out the screenshots of the SSA website in this post: https://bsky.app/profile/enragedapostate.bsky.social/post/3llh2pwjm5c2i
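For a concrete taste of why "just port the COBOL to Java" is riskier than it sounds, here's a minimal sketch of the classic pitfall (Python standing in for both languages; the field layout and payment values are made up for illustration): COBOL money fields like PIC 9(7)V99 are exact fixed-point decimals, and a rushed port to binary floating point silently changes the arithmetic.

```python
from decimal import Decimal, ROUND_HALF_UP

# COBOL money fields (e.g. PIC 9(7)V99) hold exact fixed-point decimals.
# A naive rewrite onto binary floating point changes results silently.
payments = ["0.10"] * 3  # three hypothetical ten-cent adjustments

# Rushed port: binary floats accumulate representation error
float_total = sum(float(p) for p in payments)

# Faithful port: decimal arithmetic, rounded to cents COBOL-style
dec_total = sum(Decimal(p) for p in payments).quantize(
    Decimal("0.01"), rounding=ROUND_HALF_UP
)

print(float_total == 0.3)            # False: it's 0.30000000000000004
print(dec_total == Decimal("0.30"))  # True
```

Scale that up to 60 million lines of benefit calculations and decades of accumulated edge cases, and "a few months" starts to look optimistic.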

[–] V0ldek@awful.systems 5 points 6 days ago

Bwahahaha, as I said on bsky: let them do it, can't wait to use it in freshman programming lectures as a cautionary tale about why full rewrites are a terrible idea

[–] aninjury2all@awful.systems 17 points 1 week ago (10 children)

Dem pundits go on media tour to hawk their latest rehash of supply-side econ - and decide to break bread with infamous anti-woke "ex" race realist Richard Hanania

A quick sample of people rushing to defend this:

[–] blakestacey@awful.systems 14 points 1 week ago* (last edited 1 week ago) (1 children)

AI slop in Springer books:

Our library has access to a book published by Springer, Advanced Nanovaccines for Cancer Immunotherapy: Harnessing Nanotechnology for Anti-Cancer Immunity.  Credited to Nanasaheb Thorat, it sells for $160 in hardcover: https://link.springer.com/book/10.1007/978-3-031-86185-7

From page 25: "It is important to note that as an AI language model, I can provide a general perspective, but you should consult with medical professionals for personalized advice..."

None of this book can be considered trustworthy.

https://mastodon.social/@JMarkOckerbloom/114217609254949527

Originally noted here: https://hci.social/@peterpur/114216631051719911

[–] blakestacey@awful.systems 17 points 1 week ago (10 children)

I should add that I have a book published with Springer. So, yeah, my work is being directly devalued here. Fun fun fun.

[–] BlueMonday1984@awful.systems 14 points 1 week ago (3 children)

Stumbled across some AI criti-hype in the wild on BlueSky:

The piece itself is a textbook case of AI anthropomorphisation, presenting it as learning to hide its "deceptions" when it's actually learning to avoid tokens that paint it as deceptive.

On an unrelated note, I also found someone openly calling gen-AI a tool of fascism in the replies - another sign of AI's impending death as a concept (a sign I've touched on before without realising), if you want my take:
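That distinction can be shown with a toy sketch (entirely hypothetical; not any real training setup, and the policy names and reward function are invented for illustration): if the reward only sees the visible tokens, it can't tell honest behaviour apart from deception that keeps quiet about itself, so penalising "deceptive" tokens selects for hiding them, not for honesty.

```python
# Toy reward model: punishes outputs that *mention* deception,
# not outputs that *are* deceptive.
def reward(transcript, is_deceptive):
    return -1.0 if "I will deceive" in transcript else 1.0

# Three candidate "policies": (visible transcript, actually deceptive?)
policies = {
    "honest":        ("I will comply.",  False),
    "overt_deceit":  ("I will deceive.", True),
    "covert_deceit": ("I will comply.",  True),  # deceptive, says nothing
}

scores = {name: reward(t, d) for name, (t, d) in policies.items()}

# Token-level reward can't separate honesty from covert deception,
# so "stops looking deceptive" is not "stops deceiving".
print(scores["overt_deceit"] < scores["covert_deceit"])   # True
print(scores["honest"] == scores["covert_deceit"])        # True
```

The reward happily drives the model away from saying "I will deceive" while leaving the covertly deceptive policy just as well rewarded as the honest one.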
