swlabr

joined 2 years ago
[–] swlabr@awful.systems 5 points 1 week ago

maybe 2, for good measure

[–] swlabr@awful.systems 8 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

This paragraph caught my interest. It used some terms I wasn’t familiar with, so I dove in.

Ego gratification as a de facto supergoal (if I may be permitted to describe the flaw in CFAImorphic terms)

TL note: “CFAI” is this “book-length document” titled “Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures”, in case you forgot. It’s a little difficult to quickly distill what a supergoal is, despite it being defined in the appendix. It’s one of two things:

  1. A big-picture type of goal that might require achieving “smaller” goals along the way. In the literature this is also known as a “parent goal” (vs. a “child goal”)

  2. An “intrinsically desirable” world (end) state, which probably requires reaching other “world states” to bring about. (The other “world states” are known as “subgoals”, which are in turn “child goals”)

Yes, these two things look pretty much the same. I’d say the second definition is different because it implies some kind of high-minded “desirability”. It’s hard to quickly figure out whether Yud ever actually uses the second definition instead of the first, because that would require me to read more of the paper.

is a normal emotion, leaves a normal subjective trace, and is fairly easy to learn to identify throughout the mind if you can manage to deliberately "catch" yourself doing it even once.

So Yud isn’t using “supergoal” on the scale of a world state here. Why bother with the cruft of this redundant terminology? Perhaps the rest of the paragraph will tell us.

Anyway, this first sentence is basically the whole email: “My brain was able to delete ego gratification as a supergoal”.

Once you have the basic ability to notice the emotion,

Ah, are we weaponising CBT? (cognitive behavioral therapy, not cock-and-ball torture)

you confront the emotion directly whenever you notice it in action, and you go through your behavior routines to check if there are any cases where altruism is behaving as a de facto child goal of ego gratification; i.e., avoidance of altruistic behavior where it would conflict with ego gratification, or a bias towards a particular form of altruistic behavior that results in ego gratification.

Yup, we are weaponising CBT.

All that being said, here’s what I think. We know that Yud believes that “aligning AI” is the most altruistic thing in the world. Earlier I said that “ego gratification” isn’t something on the “world state” scale, but for Yud, it is. See, his brain is big enough to change the world, so an impure motive like ego gratification is a “supergoal” in his brain. But at the same time, his certainty in AI-doomsaying is rooted in belief in his own super-intelligence. I’d say that the ethos of ego-gratification has far transcended what can be considered normal.

[–] swlabr@awful.systems 8 points 2 weeks ago

I hear GPT-8 will broadcast a Dyson sphere circumnavigation race

[–] swlabr@awful.systems 15 points 2 weeks ago (11 children)

Opening the sack with this shit that spawned in front of me:

OpenAI CEO Sam Altman says GPT-8 will be true AGI if it solves quantum gravity — the father of quantum computing agrees

Guess it won’t be true AGI!

[–] swlabr@awful.systems 9 points 2 weeks ago

Kind of a fluff story (archive) where salty’s douchiness is on full display.

I referenced it because fake book titles are throwaway jokes: you can reference something hyperspecific and not have to worry about whether someone will get it, because they might not even notice it at all.

[–] swlabr@awful.systems 7 points 2 weeks ago

The spectre of Marx nods in approval

[–] swlabr@awful.systems 10 points 2 weeks ago

Ah, gotcha. fwiw I wasn’t saying that to say “joyless people are bad”; burnout also tends to look like joylessness.

[–] swlabr@awful.systems 13 points 2 weeks ago (6 children)

Man, knowing nothing else about your coworker, they sound like a completely joyless person. Coming up with fake titles for things is like, such a high fun-to-effort ratio. “Creativity and the Essence of Human Experience” by ChatGPT. Boom, there’s one. “Cooking With Olive Oil” by Sam Altman. “IQ184” by Harukiezer Murakowsky. This is so fun and easy that it’s basically hack outside of situations where it is solicited.

[–] swlabr@awful.systems 4 points 2 weeks ago

Putting cream in my carbonara to see how my 8K 120Hz Nonna reacts

[–] swlabr@awful.systems 8 points 2 weeks ago (1 children)

Well, unfortunately it’s not diluted enough to be homeopathic, so it’s just off^3^ Broadway

[–] swlabr@awful.systems 7 points 2 weeks ago (5 children)

My current iteration of the etymology is that "dath ilani" anagrams to "HD Italian". As in, the dath ilani are an idealised version of Italians, making dath ilan a utopian version of Italy.
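If anyone wants to sanity-check the anagram, a throwaway Python snippet does it (the helper name is mine, purely illustrative):

```python
from collections import Counter

def is_anagram(a: str, b: str) -> bool:
    """Compare letter multisets, ignoring spaces and case."""
    normalize = lambda s: Counter(s.replace(" ", "").lower())
    return normalize(a) == normalize(b)

print(is_anagram("dath ilani", "HD Italian"))  # True
```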

[–] swlabr@awful.systems 3 points 2 weeks ago

there's No Such Feasible Way for that imo

 

Don't expect a good or deep analysis of FTX or any TREACLESy stuff here.
