Architeuthis

joined 2 years ago
[–] Architeuthis@awful.systems 18 points 1 month ago* (last edited 1 month ago) (13 children)

Saltman has a new blogpost out, called 'Three Observations', that I feel too tired to sneer properly, but I'm sure it will be featured in pivot-to-ai pretty soon.

Of note is that he seems to admit chatbot abilities have plateaued under the current technological paradigm, by way of offering the "observation" that model intelligence depends logarithmically on the resources used to train and run it (i = log(r)), so it's officially diminishing returns from now on.
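To put numbers on that claim (a quick sketch of the relationship as stated, mine not his, with made-up resource figures):

```python
import math

# Altman's claimed scaling relationship: intelligence i = log(r),
# where r is the resources used to train and run the model.
def intelligence(resources: float) -> float:
    return math.log(resources)

# Every 10x increase in spend buys the same fixed bump in "intelligence":
for r in (1e6, 1e7, 1e8, 1e9):
    print(f"resources = {r:.0e} -> intelligence = {intelligence(r):.2f}")
# each 10x step adds only ~2.3 (= ln 10) to the previous value
```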

Second observation is that when a thing gets cheaper it's used more, i.e. they'll be pushing even harder to shove it into everything.

Third observation is that

The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.

which is hilarious.
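For the record, taking 'super-exponential' at its mildest (doubly exponential) and composing it with observation one, my own back-of-the-envelope algebra gives

$$v(r) \ge e^{e^{i}} = e^{e^{\log r}} = e^{r},$$

i.e. value at least exponential in raw spend, so by construction no amount of investment can ever be too much. Convenient, that.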

The rest of the blogpost appears to mostly be fanfiction about the efficiency of their agents that I didn't read too closely.

[–] Architeuthis@awful.systems 10 points 1 month ago* (last edited 1 month ago) (4 children)

a lot of kids

They had 3 kids last time they came up, which despite their posturing is not really a notable amount, and they're both nearing their 40s, so it's unlikely they'll hit quiverfull numbers.

[–] Architeuthis@awful.systems 10 points 1 month ago

"Genetic Enhancement: Prediction Markets for Future People" by Jonathan Anomaly

What a completely cursed presentation title. According to the first youtube transcription service that pops up on google, he means that we should use prediction markets to find out which diseases will be curable/treatable in the next however many years, so we can prioritize accordingly when doing polygenic-embryo-screening-based family planning.

Eugenics enjoyer quotient: Mr Anomaly is an iq enthusiast who goes on to talk about how genetic screening starts with choosing a suitable partner. Also, we should establish something like a polygenic health index that represents an individual's genetic health, to better systematize selection. This will be based on the individual's known genetics as well as family history, I'm assuming because getting tricked into marrying someone with a schizophrenic great uncle or an obese cousin is a serious concern for him.

This presentation came up on the subject of how Cremieux/TP0/Lasker got invited to give a talk at Stanford when he's only known for his race science bullshit and is otherwise unaffiliated, and the answer is that the school of business faculty member who organized the talks was into prediction markets and almost certainly met him at this event.

So we have the broader rationalist cultic milieu to once again thank for bringing terrible people together, I guess.

[–] Architeuthis@awful.systems 17 points 2 months ago (1 children)

Penny Arcade weighs in on deepseek distilling chatgpt (or whatever the deal actually is):

[–] Architeuthis@awful.systems 6 points 2 months ago

You misunderstand: they escalate to the max to keep themselves (including selves in parallel dimensions or far-future simulations) from being blackmailed by future superintelligent beings, not to survive shootouts with border patrol agents.

I am fairly certain Yud has said something very close to that in reference to preventing blackmail from the basilisk, even though he tries to no-true-Scotsman the zizians wrt his functional decision 'theory' these days.

[–] Architeuthis@awful.systems 6 points 2 months ago* (last edited 2 months ago)

Distilling is supposed to be a shortcut to creating a quality training dataset by using the output of an established model as labels, i.e. desired answers.

The expected end result, a new model that inherits the biases of the reference model, should still hold, but using the very model you're distilling from as your base model would seem to be completely pointless.
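For anyone wondering what distillation looks like mechanically, here's a toy sketch of the usual distillation loss (generic PyTorch, names and temperature my own illustrative choices, nothing DeepSeek-specific):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions, then pull the student's predictions
    # towards the teacher's with a KL-divergence term.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2

# Usage sketch: the teacher's outputs stand in for ground-truth labels.
# with torch.no_grad():
#     teacher_logits = teacher(batch)
# loss = distillation_loss(student(batch), teacher_logits)
```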

[–] Architeuthis@awful.systems 3 points 2 months ago (2 children)

The 671B model, although 'open sourced', is a 400+GB download and is definitely not runnable on household hardware.
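Napkin math on why (my arithmetic, not anything from the release notes): weight storage alone is parameter count times bytes per parameter, before the KV cache or activations even enter the picture.

```python
PARAMS = 671e9  # headline parameter count

for fmt, bytes_per_param in [("fp16", 2), ("fp8", 1), ("int4", 0.5)]:
    gb = PARAMS * bytes_per_param / 1e9
    print(f"{fmt}: ~{gb:,.0f} GB of weights")
# fp16: ~1,342 GB; fp8: ~671 GB; int4: ~336 GB -- even aggressively
# quantized, nowhere near household amounts of RAM/VRAM.
```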

[–] Architeuthis@awful.systems 14 points 2 months ago* (last edited 2 months ago)

Taylor said the group believes in timeless decision theory, a Rationalist belief suggesting that human decisions and their effects are mathematically quantifiable.

Seems like they gave up early if they don't bring up how it was developed specifically for deals with the (acausal, robotic) devil, and also awfully nice of them to keep Yud's name out of it.

edit: Also, in lieu of explanation they link to the wikipedia page on rationalism as a philosophical movement, which of course has fuck all to do with the bay area bayes cargo cult, despite the cult getting a small mention there, with most of the Talk: page being about how it really shouldn't.

[–] Architeuthis@awful.systems 8 points 2 months ago (1 children)

NYT and WaPo are his specific examples. He also wants a connection to "a policy/defense/intelligence/foreign affairs journal/magazine" if possible.

[–] Architeuthis@awful.systems 11 points 2 months ago* (last edited 2 months ago) (4 children)

Today on highlighting random rat posts from ACX:

poster thinks the future of llm training is contingent on focusing early on philosophical and theological text because they match the causality of human experience

(Current first post on today's SSC open thread)

In slightly more relevant news, the main post is scoot asking if anyone can put him in contact with someone from a major news publication so he can pitch an op-ed by a notable ex-OpenAI researcher, to be ghost-written by him (meaning siskind), on the subject of how they (the ex-researcher) opened a forecast market that predicts ASI by the end of Trump's term. Be on the lookout for that when it materializes, I guess.

[–] Architeuthis@awful.systems 2 points 2 months ago* (last edited 2 months ago)

wrong thread :(

[–] Architeuthis@awful.systems 8 points 2 months ago (3 children)

The zizian angle makes this so weird. Like, on top of probably being stopped for driving while trans, they might have instigated the shootout to prove to the basilisk that their parallel universe selves/simulated iterations/eternal souls can't be acausally blackmailed.
