Architeuthis

joined 2 years ago
[–] Architeuthis@awful.systems 9 points 3 months ago* (last edited 3 months ago)

The vibe I get is that by 'enjoyers' he means people who thought fighting the nazis in WW2 was morally justified.

[–] Architeuthis@awful.systems 9 points 3 months ago* (last edited 3 months ago) (16 children)

Here's a screenshot of a skeet of a screenshot of a tweet featuring an unusually shit take on WW2 by Moldbug:

link

transcript

skeet by Joe Stieb: Another tweet that should have ended with the first sentence.

Also, I guess I'm a "World War Two enjoyer"

tweet by Curtis Yarvin: There is very very extensive evidence of the Holocaust.

Unfortunately for WW2 enjoyers, the US and England did not go to war to stop the Holocaust. They went to war to stop the Axis plan for world conquest.

There is no evidence of the Axis plan for world conquest.

edit: hadn't seen yarvin's twitter feed before, that's one high octane shit show.

[–] Architeuthis@awful.systems 9 points 3 months ago

karma

Works the same on LessWrong.

[–] Architeuthis@awful.systems 17 points 3 months ago

sarcophagi would be the opposite of vegetarians

Unrelated slightly amusing fact: sarcophagos is still the word for carnivorous in Greek, the amusing part being that the word for vegetarian is chortophagos, which is weirdly close to being a slur since it literally means 'grass eater'.

I am easily amused.

[–] Architeuthis@awful.systems 10 points 3 months ago

Mesa-optimization

Why use the perfectly fine 'inner optimizer' mentioned in the references when you can just ask Google Translate to give you the clunkiest, most pedestrian, and also wrong-part-of-speech Greek term to use in place of 'in' instead?

Also, natural selection is totally like gradient descent, brah, even though evolutionary algorithms actually modeled after natural selection used to be their own subcategory of AI before that term just came to mean lying chatbot.
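For reference, the kind of thing that subcategory actually meant: a toy evolutionary algorithm that optimizes by mutation and selection rather than by following a gradient. This is a minimal illustrative sketch (the function names and parameters are made up for this example, and the fitness function is the classic "OneMax" toy problem), not anyone's production code.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=100):
    """Toy evolutionary algorithm: selection plus mutation on bit-string genomes."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Reproduction with mutation: flip each bit with small probability.
        children = [[bit ^ (random.random() < 0.05) for bit in parent]
                    for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

# "OneMax" toy problem: fitness is simply the number of 1-bits.
best = evolve(fitness=sum)
```

No gradient anywhere: there is only a population, random variation, and survival of the fitter, which is the sense in which these methods were "modeled after natural selection."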

[–] Architeuthis@awful.systems 9 points 3 months ago* (last edited 3 months ago) (4 children)

The Kokotajlo/Scoot thing apparently made it to the New York Times.

So this is what that was about:

stubsack post from two months ago

On slightly more relevant news the main post is scoot asking if anyone can put him in contact with someone from a major news publication so he can pitch an op-ed by a notable ex-OpenAI researcher that will be ghost-written by him (meaning siskind) on the subject of how they (the ex researcher) opened a forecast market that predicts ASI by the end of Trump’s term, so be on the lookout for that when it materializes I guess.

edit: also @gerikson is apparently a superforecaster

[–] Architeuthis@awful.systems 13 points 3 months ago (1 children)

Reminds me of an SMBC comic that had a setup along the same lines: if male birth order correlates with homosexuality, then with family size trends being what they are, the past must have been considerably gayer on average.

[–] Architeuthis@awful.systems 6 points 3 months ago

No idea where they would land on what to mock and what to take seriously from this whole mess.

Don't know what they're up to these days, but last time I checked I had them pegged as enlightened centrists whose style of satire is more "having strong beliefs about stuff is cringe" than ever having to say anything of even accidental substance about said things.

[–] Architeuthis@awful.systems 6 points 3 months ago

The first prompt programming libraries start to develop, along with the first bureaucracies.

I went three layers deep in his references and his references' references to find out what the hell prompt programming is supposed to be, ended up in a gwern footnote:

It's the ideologized version of You're Prompting It Wrong. Which I suspected but doubted, because why would they pretend that LLMs being finicky and undependable unless you luck into very particular ways of asking for very specific things is a sign that they're doing well?

gwern wrote:

I like “prompt programming” as a description of writing GPT-3 prompts because ‘prompt’ (like ‘dynamic programming’) has almost purely positive connotations; it indicates that iteration is fast as the meta-learning avoids the need for training so you get feedback in seconds; it reminds us that GPT-3 is a “weird machine” which we have to have “mechanical sympathy” to understand effective use of (eg. how BPEs distort its understanding of text and how it is always trying to roleplay as random Internet people); implies that prompts are programs which need to be developed, tested, version-controlled, and which can be buggy & slow like any other programs, capable of great improvement (and of being hacked); that it’s an art you have to learn how to do and can do well or poorly; and cautions us against thoughtless essentializing of GPT-3 (any output is the joint outcome of the prompt, sampling processes, models, and human interpretation of said outputs).
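
The "prompts are programs which need to be developed, tested, version-controlled" framing, taken at face value, amounts to something like the following sketch. Everything here (the `build_prompt` name, the version string, the few-shot format) is invented for illustration, not gwern's or anyone else's actual tooling:

```python
# Toy illustration of treating a prompt template as a versioned, testable artifact.
PROMPT_VERSION = "1.2.0"  # bumped whenever the template changes

def build_prompt(task: str, examples: list[tuple[str, str]]) -> str:
    """Assemble a few-shot prompt; the template itself is the 'program'."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {task}\nA:"

# The template can then be unit-tested like any other string-producing function.
prompt = build_prompt("2+2", examples=[("1+1", "2")])
```

Which is to say, the grandiose framing bottoms out in string formatting with a test suite.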

[–] Architeuthis@awful.systems 4 points 3 months ago

They look like the evil twins of the Penny Arcade writers.

[–] Architeuthis@awful.systems 10 points 3 months ago (11 children)

It is with great regret that I must inform you that all this comes with a three-hour podcast featuring Scoot in the flesh: 2027 Intelligence Explosion: Month-by-Month Model — Scott Alexander & Daniel Kokotajlo

[–] Architeuthis@awful.systems 7 points 3 months ago (1 children)

That was a good one. Also, was he the first to break the CoreWeave situation? Not a bad journalistic get if that's the case.
