An opposition between altruism and selfishness seems important to Yud. 23-year-old Yud said "I was pretty much entirely altruistic in terms of raw motivations" and his Pathfinder fic has a whole theology of selfishness. His protagonists have a deep longing to be world-historical figures and be admired by the world. Dreams of controlling and manipulating people to get what you want are woven into his community like mould spores in a condemned building.

Has anyone unpicked this? Is talking about selfishness and altruism as common on LessWrong as pretending to use Bayesian statistics?

swlabr@awful.systems | 7 points | 3 days ago (last edited 3 days ago)

This paragraph caught my interest. It used some terms I wasn’t familiar with, so I dove in.

Ego gratification as a de facto supergoal (if I may be permitted to describe the flaw in CFAImorphic terms)

TL note: “CFAI” is this book-length document titled “Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures”, in case you forgot. It’s a little difficult to quickly distill what a supergoal is, despite it being defined in the appendix. It’s one of two things:

  1. A big picture type of goal that might require making “smaller” goals to achieve. In the literature this is also known as a “parent goal” (vs. a “child goal”)

  2. An “intrinsically desirable” world (end) state, which probably requires reaching other “world states” to bring about. (The other “world states” are known as “subgoals”, which are in turn “child goals”)

Yes, these two things look pretty much the same. I’d say the second definition differs only in that it implies some kind of high-minded “desirability”. It’s hard to quickly figure out whether Yud ever actually uses the second definition instead of the first, because that would require me to read more of the paper.
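
For my own sanity, here’s a toy sketch of how I read the first definition: goals form a tree, a “supergoal” is just a parent node, and “subgoals”/“child goals” hang underneath it. This is entirely my own illustration (the names are made up for the example), not anything lifted from CFAI:

```python
# Toy reading of the "supergoal" / "child goal" jargon as a plain goal tree.
# My own illustrative sketch, not code or definitions from the actual document.
from dataclasses import dataclass, field


@dataclass
class Goal:
    name: str
    children: list["Goal"] = field(default_factory=list)

    def add_subgoal(self, child: "Goal") -> "Goal":
        # A "child goal" / "subgoal" is just a goal pursued in service of this one.
        self.children.append(child)
        return child


# Definition 1: a "supergoal" is simply a parent goal with smaller goals under it.
ego_gratification = Goal("ego gratification")                # the de facto supergoal in question
altruism = ego_gratification.add_subgoal(Goal("altruism"))   # uh oh
altruism.add_subgoal(Goal("align the AI"))
```

That’s the entire concept the jargon is dressing up: a parent node in a tree.
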

is a normal emotion, leaves a normal subjective trace, and is fairly easy to learn to identify throughout the mind if you can manage to deliberately "catch" yourself doing it even once.

So Yud isn’t using “supergoal” on the scale of a world state here. Why bother with the cruft of this redundant terminology? Perhaps the rest of the paragraph will tell us.

Anyway, this first sentence is basically the whole email: “My brain was able to delete ego gratification as a supergoal.”

Once you have the basic ability to notice the emotion,

Ah, are we weaponising CBT? (cognitive behavioral therapy, not cock-and-ball torture)

you confront the emotion directly whenever you notice it in action, and you go through your behavior routines to check if there are any cases where altruism is behaving as a de facto child goal of ego gratification; i.e., avoidance of altruistic behavior where it would conflict with ego gratification, or a bias towards a particular form of altruistic behavior that results in ego gratification.

Yup we are weaponising CBT.
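
If you take the email’s self-audit procedure literally, the “check” amounts to walking your goal tree and asking whether altruism ever shows up underneath ego gratification. A throwaway sketch reusing the toy Goal class from above (again, my framing, not anything Yud actually wrote):

```python
def is_de_facto_child(parent: Goal, suspect_name: str) -> bool:
    # Walk the goal tree under `parent` and report whether any descendant matches
    # `suspect_name` -- i.e. whether it is "behaving as a de facto child goal"
    # of the parent, in the email's terms.
    stack = list(parent.children)
    while stack:
        goal = stack.pop()
        if goal.name == suspect_name:
            return True
        stack.extend(goal.children)
    return False


# The email's self-audit, mechanised: is altruism serving ego gratification?
print(is_de_facto_child(ego_gratification, "altruism"))  # True, per the toy tree above
```
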

All that being said, here’s what I think. We know that Yud believes that “aligning AI” is the most altruistic thing in the world. Earlier I said that “ego gratification” isn’t something on the “world state” scale, but for Yud it is: his brain is big enough to change the world, so an impure motive like ego gratification counts as a “supergoal” in his head. At the same time, his certainty in AI-doomsaying is rooted in belief in his own super-intelligence. I’d say the ethos of ego gratification has far transcended anything that could be considered normal.