
TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] Architeuthis@awful.systems 13 points 1 day ago* (last edited 1 day ago) (4 children)

This hits differently over the recent news that ChatGPT encouraged and aided a teen suicide.

Transcript: Kelsey Piper xhitted: Never thought I'd become a 'take your relationship problems to ChatGPT' person but when the 8yo and I have an argument it actually works really well to mutually agree on an account of events for Claude and then ask for its opinion

I think she considers the AIs far more knowledgeable than me about reasonable human behavior, so if I say something that's no reason to think it's true, but if Claude says it then it at least merits serious consideration

[–] blakestacey@awful.systems 9 points 18 hours ago (1 children)

"When the 8-year-old and I have an argument, it actually works really well to mutually agree on an account of events... and then take cocaine together."

Acknowledging/validating each other's feelings and finding a mutually-agreeable understanding of the conflict is already the hard part that most parents and kids aren't willing to do. Talking to a chatbot after that just seems like you don't understand the fucking point and are still trying to "be right" or whatever.

[–] gerikson@awful.systems 11 points 23 hours ago

In 12 years we'll get a book "My Mom Outsourced Raising Me to AI and it Broke Me"

[–] nightsky@awful.systems 15 points 1 day ago (2 children)

When an 8-year-old thinks an AI is "far more knowledgeable than me about reasonable human behavior", that could lead a person to self-reflection. Could.

[–] madengineering@mastodon.cloud 5 points 21 hours ago

@nightsky @Architeuthis Computer programs are generally written to be basically polite for customer service purposes.

A generative text program that opened chats with "Ugh, what do you want THIS time, loser?" would likely find itself ....corrected.... by the parent company in short order.

[–] kgMadee2@mathstodon.xyz 5 points 1 day ago

@nightsky @Architeuthis the kid's not wrong, though 😭

[–] BlueMonday1984@awful.systems 10 points 1 day ago (6 children)
[–] blakestacey@awful.systems 5 points 11 hours ago

From elsewhere in the replies:

My husband and I took a disagreement to Claude today, by happenstance. It was very helpful!

---TracingWoodgrains, who apparently feels that he is qualified to judge any other human being, ever, in spite of evidence to the contrary

[–] scruiser@awful.systems 8 points 13 hours ago* (last edited 13 hours ago) (1 children)

I have context that makes this even more cringe! "Lawfulness concerns" refers to like, Dungeons and Dragons lawfulness. Specifically the concept of lawfulness developed in the Pathfinder fanfiction we've previously discussed (the one with deliberately bad BDSM and eugenics). Like a proper Lawful Good Paladin of Iomedae wouldn't put you in a position where you had to trust they hadn't rigged the background prompt if you went to them for spiritual counseling. (Although a Lawful Evil cleric of Asmodeus totally would rig the prompt... Lawfulness as a measuring stick of ethics/morality is a terrible idea even accepting the premise of using Pathfinder fanfic to develop your sense of ethics.)

[–] blakestacey@awful.systems 7 points 12 hours ago

Not everything that Yud writes feels like it should be read in the voice of Augustus St. Cloud, but a lot of it sure does.

[–] Soyweiser@awful.systems 10 points 21 hours ago

That is his concern and not the billionaires behind it messing with the systems so much you can't prompt-override it? Please tell me this guy doesn't work in AI alignment.

[–] Amoeba_Girl@awful.systems 11 points 1 day ago (2 children)

Yes dude, that's the main thing you should be concerned about of course. AI tools couldn't possibly be bad in and of themselves, it has to be human tampering. You've always been very clear about that part.

[–] froztbyte@awful.systems 8 points 18 hours ago* (last edited 16 hours ago)

it has to be human tampering.

and of course we all have root on the prompt, where - at will - we can just instantly impose all manner of will on the corporate vendor chatbot. y'know, the chatbot operating in a service structured as much as possible to try to do what the corporate vendor wants to desperately maintain

(it continues to astound me that anyone takes yud seriously, at all, ever)

[–] Amoeba_Girl@awful.systems 16 points 1 day ago (1 children)

tfw your gifted child syndrome resentment of adults is powerful enough to make you forget about your life's work

[–] scruiser@awful.systems 5 points 13 hours ago

I was trying to figure out why he hadn't turned this into an opportunity to lecture (or write a mini-fanfic) about giving more attack surface to the AGI to manipulate you... I was stumped until I saw your comment. I think that is it: expressing his childhood distrust of authority trumps lecturing us on the AI-God's manipulations.

[–] swlabr@awful.systems 6 points 23 hours ago

KP writing a paper for the journal “New Frontiers In Gaslighting Children”

[–] Architeuthis@awful.systems 10 points 1 day ago (1 children)

I feel dumber for having read that, and not in the intellectually humbled way.

[–] YourNetworkIsHaunted@awful.systems 4 points 20 hours ago (2 children)

I mean, tampering with the system prompt is definitely a kind of concern, given what we've seen happen with Grok's tenure as mechahitler or Replika users finding their girlfriend no longer wanted them. But "messing with system memory" is the kind of sci-fi nonsense that should stay in a cyberpunk novel.

[–] scruiser@awful.systems 6 points 13 hours ago (1 children)

system memory

System memory is just the marketing label for "having an LLM summarize a bunch of old conversations and shoving it into a hidden prompt". I agree that using that term is sneer-worthy.

Thanks for the clarification. I had definitely assumed that he meant some kind of God-AI-level attack that revolved around live editing the data or state in RAM or something.
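(For anyone unfamiliar with the mechanism scruiser is describing, here is a minimal sketch of the "summarize old chats into a hidden prompt" idea. It assumes the OpenAI Python client; the model name, prompts, and the summarize_history helper are illustrative placeholders, not OpenAI's actual memory implementation.)

```python
# Illustrative sketch only: "memory" as a summary of old conversations
# injected into a hidden system prompt on the next chat.
from openai import OpenAI

client = OpenAI()

def summarize_history(old_conversations: list[str]) -> str:
    """Compress past conversations into a short 'memory' blob."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Summarize these past conversations as brief facts about the user."},
            {"role": "user", "content": "\n\n".join(old_conversations)},
        ],
    )
    return resp.choices[0].message.content

def chat_with_memory(user_message: str, old_conversations: list[str]) -> str:
    """Answer a new message with the summary smuggled into a hidden system prompt."""
    memory = summarize_history(old_conversations)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            # The user never sees this part of the conversation.
            {"role": "system",
             "content": f"Things you remember about this user:\n{memory}"},
            {"role": "user", "content": user_message},
        ],
    )
    return resp.choices[0].message.content
```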

[–] Architeuthis@awful.systems 7 points 16 hours ago (1 children)

He has capital L Lawfulness concerns. About the parent and the child being asymmetrically skilled in context engineering. Which apparently is the main reason kids shouldn't trust LLM output.

Him showing his ass with the memory comment is just a bonus.

[–] blakestacey@awful.systems 5 points 13 hours ago

"I have Lawfulness concerns" is just another way of saying "I deserve to be shoved into a locker".