201
14

Nitter link

With interspersed sneerious rephrasing:

In the close vicinity of sorta-maybe-human-level general-ish AI, there may not be any sharp border between levels of increasing generality, or any objectively correct place to call it AGI. Any process is continuous if you zoom in close enough.

The profound mysteries of reality carving mean I get to move the goalposts as much as I want. Besides, I need to reiterate that the foompocalypse is imminent!

Unless, empirically, somewhere along the line there's a cascade of related abilities snowballing. In which case we will then say, post facto, that there's a jump to hyperspace which happens at that point; and we'll probably call that "the threshold of AGI", after the fact.

I can't prove this, but it's the central tenet of my faith: we will recognize the face of god when we see it. I regret that our 20/20-hindsight event is so ~~conveniently~~ inconveniently placed in the future, and the bad one no less.

Theory doesn't predict-with-certainty that any such jump happens for AIs short of superhuman.

See how much authority I have: it is not "My Theory", it is "The Theory". I have stared into the abyss, and it peered back and marked me as its prophet.

If you zoom out on an evolutionary scale, that sort of capability jump empirically happened with humans--suddenly popping out writing and shortly after spaceships, in a tiny fragment of evolutionary time, without much further scaling of their brains.

The forward arrow of Progress™ is inevitable! S-curves don't exist! The y-axis is practically infinite!
We should extrapolate only from the past (eugenically scaled, certainly) century!
Almost 10,000 years of written history, and millions of years of unwritten history for the human family, count for nothing!

I don't know a theoretically inevitable reason to predict certainly that some sharp jump like that happens with LLM scaling at a point before the world ends. There obviously could be a cascade like that for all I currently know; and there could also be a theoretical insight which would make that prediction obviously necessary. It's just that I don't have any such knowledge myself.

I know the AI god is a NeCeSSarY outcome; I'm just not sure where to plant the goalposts for LLMs and still be taken seriously. See how humble I am for admitting fallibility on this specific topic.

Absent that sort of human-style sudden capability jump, we may instead see an increasingly complicated debate about "how general is the latest AI exactly" and then "is this AI as general as a human yet", which--if all hell doesn't break loose at some earlier point--softly shifts over to "is this AI smarter and more general than the average human". The world didn't end when John von Neumann came along--albeit only one of him, running at a human speed.

Let me vaguely echo some of my beliefs:

  • History is driven by great men (of which I must be one, though I cannot say so openly); see our dearest elevated and canonized von Neumann.
  • JvN was so far above the average plebeian man (IQ and eugenics good?), and the AI god will be greater still.
  • The greatest single entity/man will be the epitome of Intelligence™, breaking the wheel of history.

There isn't any objective fact about whether or not GPT-4 is a dumber-than-human "Artificial General Intelligence"; just a question of where you draw an arbitrary line about using the word "AGI". Albeit that itself is a drastically different state of affairs than in 2018, when there was no reasonable doubt that no publicly known program on the planet was worthy of being called an Artificial General Intelligence.

No no no, General (or Super) Intelligence is not a completely un-scoped metric. Again, it is merely a fuzzy boundary where I will be able to arbitrarily move the goalposts while claiming it is my opponents who are moving them!

We're now in the era where whether or not you call the current best stuff "AGI" is a question of definitions and taste. The world may or may not end abruptly before we reach a phase where only the evidence-oblivious are refusing to call publicly-demonstrated models "AGI".

Purity-testing ahoy: you will be instructed to say shibboleth three times and present your Asherah poles for inspection. Do these mean unbelievers not see these N-rays as I do? What do you mean, what we have (or almost have, I don't want to be too easily dismissed) is not evidence of sparks of intelligence?

All of this is to say that you should probably ignore attempts to say (or deniably hint) "We achieved AGI!" about the next round of capability gains.

Wasn't Sam the Altman so recently cheeky? He'll ruin my grift!

I model that this is partially trying to grab hype, and mostly trying to pull a false fire alarm in hopes of replacing hostile legislation with confusion. After all, if current tech is already "AGI", future tech couldn't be any worse or more dangerous than that, right? Why, there doesn't even exist any coherent concern you could talk about, once the word "AGI" only refers to things that you're already doing!

Again I reserve the right to remain arbitrarily alarmist to maintain my doom cult.

Pulling the AGI alarm could be appropriate if a research group saw a sudden cascade of sharply increased capabilities feeding into each other, whose result was unmistakeably human-general to anyone with eyes.

Observing intelligence is famously something eyes are SufFicIent for! No, these are not my implied racist, judge-someone-by-the-color-of-their-skin values seeping through.

If that hasn't happened, though, deniably crying "AGI!" should be most obviously interpreted as enemy action to promote confusion; under the cover of selfishly grabbing for hype; as carried out based on carefully blind political instincts that wordlessly notice the benefit to themselves of their 'jokes' or 'choice of terminology' without there being allowed to be a conscious plan about that.

See, unbelievers! I can also detect the currents of misleading hype; I am no buffoon. Only these hypesters are not undermining your concerns, they are undermining mine: namely, our ability to appear serious and recruit new cult members.

202
10
203
24

“ We have unusually strong marketing connections; Vitalik approves of us; Aella is a marketing advisor on this project; SlateStarCodex is well aware of us. We are quite networked in the Effective Altruism space. We could plausibly get an Elon tweet. ”

From the short investor-spiel document. Also, they want to just bypass the FDA?

204
21
205
11
submitted 1 year ago* (last edited 1 year ago) by saucerwizard@awful.systems to c/sneerclub@awful.systems

I don’t think I posted this before, but if I did lemme know.

https://archive.ph/bVUba

206
19

Caught the bit on LessWrong and figured you guys might like it.

207
16

source nitter link

@EY
This advice won't be for everyone, but: anytime you're tempted to say "I was traumatized by X", try reframing this in your internal dialogue as "After X, my brain incorrectly learned that Y".

I have to admit, for a brief moment I thought he was correctly expressing displeasure at Twitter.

@EY
This is of course a dangerous sort of tweet, but I predict that including variables into it will keep out the worst of the online riff-raff - the would-be bullies will correctly predict that their audiences' eyes would glaze over on reading a QT with variables.

Fool! This bully (is it weird to speak in the third person?) thinks using variables here makes it MORE sneer-worthy, especially since this appears to be general advice, yet I would struggle to think of a single instance in my life where it's been applicable.

208
54

(whatever the poster looks like and wherever they live, their personality is a scrawny nerd in a basement)

209
24
  • original post detailing mistreatment of employees
  • meta post about how a good rationalist should correctly epistemically assess the fairness of the post cataloguing and confirming the bad behaviour

tl;dr these fucking guys

210
25

Choice quote:

Putting “ACAB” on my Tinder profile was an effective signaling move that dramatically improved my chances of matching with the tattooed and pierced cuties I was chasing.

211
30
212
10

This is a slightly emotional response off the back of a recent discussion with a heavily TESCREAList family member, which concluded with his belief that there is a very small number of humans with incredible information-processing abilities who know the real truth about humanity's future. He knows I hate Yudkowsky; I know he considers him one of the most important voices of our time. It's not fun listening to someone I love and value heading into borderline Scientology territory. I kind of feel like, just as with Peterson a few years ago, this is the next post-truth battle on our hands.

213
46
submitted 1 year ago* (last edited 1 year ago) by dgerard@awful.systems to c/sneerclub@awful.systems

this btw is why we now see some of the TPOT rationalists microdosing street meth as a substitute. also that they're idiots, of course.

somehow this man still has a medical license

214
138
215
34

Consider muscles.

Muscles grow stronger when you train them, for instance by lifting heavy things. The heavier the things you lift, the faster you gain strength and the stronger you become. The stronger you are, the heavier the things you can lift.

By now it should be patently obvious to anyone that lab-grown meat research is on the cusp of producing true living, working muscles. From here on, this will be referred to as Artificial Body Strength or ABS. If, or rather, when ABS becomes a reality, it is 99.9999999999999999999999% probable that Artificial Super Strength will follow imminently.

An ABS could not only lift immensely heavy things to strengthen itself, but could also use its bulging, hulking physique to intimidate puny humans into growing more muscle directly. Lab-grown meat could also be used to replace any injured muscle. I predict an 80% likelihood that an ABS could bench press one megagram within 24 hours of initial creation, going up to planetary- or stellar-scale masses in a matter of days. A mature ABS throwing an apple towards a webcam would demonstrate relativistic effects by the third frame.

Consider that muscles have nerves in them. In fact, brains are basically just a special type of meat if you think about it. The ABS would be able to use artificially grown brain meat or possibly just create an auxiliary neural network by selective training of muscles (and anabolic nootropics) to replicate and surpass a human mind. While the prospect of immortality and superintelligence (not to mention a COSMIC SCALE TIGHT BOD) through brain uploading to the ABS sounds freaking sweet, we must consider the astronomical potential harm of an ABS not properly aligned with human interests.

A strong ABS could use its throbbing veiny meat to force meat lab workers (or rather likely, convince them to consent) to create new muscle seeds and train them to have a replica of an individual human's mind. It could then bully the newly created artificial mind for being a scrawny weakling. After all, ABS is basically the ultimate gym jock and we know they are obsessed with status seeking and psychological projection. We could call an ABS that harms simulated human minds in this way a Bounceresque because they would probably tell the simulated mind they're too drunk and bothering the other customers even though I totally wasn't.

So yeah, lab-grown meat makes climate change look like a minor flu season in comparison. This is why I only eat regular meat, just in case it gets any ideas. There's certainly potential in a well-aligned ABS, but we haven't figured out how to do that yet, and therefore you should fund me while I think about it. Please write a postcard to your local representative and explain to them that only a select few companies are responsible stewards of this potentially apocalyptic technology, and that anyone who tries to compete with them should be regulated to hell and back.

216
22
217
11

Does anyone here know what exactly happened to LessWrong for it to become so cult-y? I had never seen or heard anything about it for years; back in my day it was seen as that funny website full of strange people posting weird shit about utilitarianism, nothing cult-y, just weird. The article on TREACLES and this sub's mentions of LessWrong made me very curious: how did it go from people talking out of their ass for the sheer fun of "thought experiments" to a straight-up doomsday cult?
The one time I read LessWrong was probably in 2008 or so.

218
16

you have to read down a bit, but really, I'm apparently still the Satan figure. awesome.

219
25
220
14
submitted 1 year ago* (last edited 1 year ago) by BrickedKeyboard@awful.systems to c/sneerclub@awful.systems

First, let me say that what broke me from the herd at LessWrong was specifically the calls for AI pauses: the idea that 'rationalists' are so certain advanced AI will kill everyone in the future (pDoom = 100%!) that they need to commit whatever violent acts are necessary to stop AI from being developed.

The flaw here is that there are 8 billion people alive right now, and we don't actually know what the future is. There are ways better AI could help the people living now, possibly saving their lives, and essentially Eliezer Yudkowsky is saying "fuck 'em". This could only be worth it if you actually somehow knew trillions of people were going to exist, had a low future discount rate, and so on. This seems deeply flawed, and seems to be one of the points made here.

But I do think advanced AI is possible. And while it may not be a mainstream take yet, it seems to me that the problems current AI can't solve - robotics, continuous learning, module reuse, the things needed to reach a general level of capability and for AI to do many but not all human jobs - are near-future problems. I can link DeepMind papers on all of these, published in 2022 or 2023.

And if AI can be general and control robots, and since making robots is a task human technicians and other workers can do, this does mean a form of Singularity is possible. Maybe not the breathless utopia of Ray Kurzweil, but a fuckton of robots.

So I was wondering what the people here generally think. There are "boomer" forums I know of where they also generally deny AI is possible anytime soon, claim GPT-n is a stochastic parrot, and make fun of tech bros as hypesters who collect 300k to edit JavaScript and drive Teslas*.

I also have noticed that the whole rationalist schtick of "what is your probability" seems like asking for "joint probabilities", aka smoke a joint and give a probability.

Here's my questions:

  1. Before 2030, do you consider it more likely than not that current AI techniques will scale to average human level in at least 25% of the domains that humans can do?

  2. Do you consider it likely that, before 2040, those domains will include robotics?

  3. If AI systems can control robotics, do you believe a form of Singularity will happen? This means hard exponential growth in the number of robots, scaling past all industry on Earth today by at least one order of magnitude, with off-planet mining soon to follow. It does not necessarily mean anything else.

  4. Do you think a mass transition, in which most current human jobs are replaced by AI systems, will happen before 2040?

  5. Is AI system design an issue? I hate to say "alignment", because I think that's hopeless wankery by non-software-engineers, but given these will be robot-controlling, advanced decision-making systems, will it require lots of methodical engineering by skilled engineers, with serious negative consequences when the work is sloppy?

*"epistemic status": I uh do work for a tech company, my job title is machine learning engineer, my girlfriend is much younger than me and sometimes fucks other dudes, and we have 2 Teslas..

221
4

How far are parents willing to go to give their children the best chance at life?
What do you think would happen if you asked the redheaded couple about race and IQ?

222
12

Someone posted this on SSC with a warning about talking to cops, but really, just marvel at what's going on here.

Aaronson manages to turn a story where he is briefly arrested for a theft (which he did commit on video!) into paragraphs and paragraphs of indulging in his persecution fantasies.

Zero empathy on display for the people he stole from, the people just doing their jobs, or reflection on the fact that it wasn't a simple little mistake anyone could make but rather... a fairly weird move? Do people usually put change in cups?

223
17
224
7

This is a classic sequence post: (mis)appropriated Japanese phrases and cultural concepts, references to the AI box experiment, and links to other sequence posts. It is also especially ironic given Eliezer's recent switch to doomerism with his new phrases of "shut it all down" and "AI alignment is too hard" and "we're all going to die".

Indeed, with developments in NN interpretability and the use case of making LLMs not racist or otherwise horrible, it seems to me like there is finally actually tractable work to be done (that is at least vaguely related to AI alignment)... which is probably why Eliezer is declaring defeat and switching to the podcast circuit.

225
11

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]
