151
24
submitted 1 year ago* (last edited 1 year ago) by dgerard@awful.systems to c/sneerclub@awful.systems

an entirely vibes-based literary treatment of an amateur philosophy scary campfire story, continuing in the comments

152
15

... while at the same time not really worth worrying about, so we should be concentrating on unnamed alleged mid-term risks.

EY tweets are probably the lowest-effort sneerclub content possible, but the birdsite threw this in my face this morning, so it's only fair you suffer too. Transcript follows:

Andrew Ng wrote:

In AI, the ratio of attention on hypothetical, future, forms of harm to actual, current, realized forms of harm seems out of whack.

Many of the hypothetical forms of harm, like AI "taking over", are based on highly questionable hypotheses about what technology that does not currently exist might do.

Every field should examine both future and current problems. But is there any other engineering discipline where this much attention is on hypothetical problems rather than actual problems?

EY replied:

I think when the near-term harm is massive numbers of young men and women dropping out of the human dating market, and the mid-term harm is the utter extermination of humanity, it makes sense to focus on policies motivated by preventing mid-term harm, if there's even a trade-off.

153
54

I somehow missed this one until now. Apparently it was once mentioned in the comments on the old sneerclub but I don't think it got a proper post, and I think it deserves one.

154
13
155
20

From Sam Altman's blog, pre-OpenAI

156
42

Epistemic status: Speculation. An unholy union of evo psych, introspection, random stuff I happen to observe & hear about, and thinking. Done on a highly charged topic. Caveat emptor!

oh boy

archive: https://archive.is/uOP4y

157
6
LW malware question (awful.systems)

When I click a link to LessWrong from this board, I receive a malware alert from my home gateway (Netgear Armor). Apparently it's their AI text-to-speech bot.

Question - any concerns about this? Google isn't helping me much.

URL is https: // embed.type3.audio/

Searching their site tells me that this is literally a feature and not a bug.

https://www.lesswrong.com/posts/b9oockXDs2xMdYp66/announcement-ai-narrations-available-for-all-new-lesswrong

TYPE III AUDIO is running an experiment with the LessWrong team to provide automatic AI narrations on all new posts. All new LessWrong posts will be available as AI narrations (for the next few weeks).

You might have noticed the same feature recently on the EA Forum, where it is now an ongoing feature. Users there have provided excellent feedback and suggestions so far, and your feedback on this pilot will allow further improvements.

158
18

WOOOOOOO MORE AXE GRINDING LETS GO!

Okay enough of that, so I was doing a little bit of a foray into the GPI cesspit to look at the latest decision-theoretic drivel they've been putting out recently. And boy oh boy did I come across something juicy.

Basically this 36-page paper is one big 'nuh uh' to all the critics of longtermism. Think Crary and the like; it explicitly states that critics dismiss longtermism out of hand by denying broadly utilitarian principles. This is all fair enough, but then the philosopher tries to defend longtermism by saying that denying it on broadly normative grounds incurs 'significant theoretical costs'. I've checked what these 'costs' would be, and to my admittedly quite dumb eyes they look like they'd only be 'costs' if you are a utilitarian in the first place! The entire discussion is predicated on utilitarian principles: the weighing of theoretical costs and benefits, the consistently bullshit new principles, and what I've always thought were completely ad hoc new rules that they make up so anything fits the criteria and longtermism comes out the ass end, all while making the discussion impervious to criticism cos insert brand new shiny principle here. it's fucken dumb.

Not to overstate my case, I'm kinda dumb, which means I could be very wrong here, but even with that in mind I woulda expected better from a PhD.

Anyways, to end off: are there any resources that actually go through their math and fact-check that shit? I actually wanna see if the math they use checks out or if it's kinda cobbled together.

159
29
submitted 1 year ago* (last edited 1 year ago) by dgerard@awful.systems to c/sneerclub@awful.systems

warning: seriously nasty narcissism at length

archive: https://archive.is/eoXQj

this is a response to the post discussed in: https://awful.systems/post/220620

160
48
submitted 1 year ago* (last edited 1 year ago) by dgerard@awful.systems to c/sneerclub@awful.systems
161
36
162
41

(via Timnit Gebru)

Although the board members didn’t use the language of abuse to describe Altman’s behavior, these complaints echoed some of their interactions with Altman over the years, and they had already been debating the board’s ability to hold the CEO accountable. Several board members thought Altman had lied to them, for example, as part of a campaign to remove board member Helen Toner after she published a paper criticizing OpenAI, the people said.

The complaints about Altman’s alleged behavior, which have not previously been reported, were a major factor in the board’s abrupt decision to fire Altman on Nov. 17, according to the people. Initially cast as a clash over the safe development of artificial intelligence, Altman’s firing was at least partially motivated by the sense that his behavior would make it impossible for the board to oversee the CEO.

For longtime employees, there was added incentive to sign: Altman’s departure jeopardized an investment deal that would allow them to sell their stock back to OpenAI, cashing out equity without waiting for the company to go public. The deal — led by Joshua Kushner’s Thrive Capital — values the company at almost $90 billion, according to a report in the Wall Street Journal, more than triple its $28 billion valuation in April, and it could have been threatened by tanking value triggered by the CEO’s departure.

huh, I think this shady AI startup whose product is based on theft that cloaks all its actions in fake concern for humanity might have a systemic ethics problem

163
20
164
28
submitted 1 year ago* (last edited 1 year ago) by Evinceo@awful.systems to c/sneerclub@awful.systems

Utilitarian brainworms or one of the many very real instances of a homicidal parent going after their disabled child? I can't decide, but it's a depressing read.

May end up on SRD, but you read it here first.

165
24

They've been pumping this bio-hacking startup on the Orange Site (TM) for the past few months. Now they've got Siskind shilling for them.

166
14
167
17

Image taken from this tweet: https://twitter.com/softminus/status/1732597516594462840

post title was this response: https://twitter.com/QuintusActual/status/1732615870613258694

Sadly the article is behind a paywall, and I am loath to give Scott my money.

168
15

I was wondering if someone here has a better idea of how EA developed in its early days than I do.

Judging by the link I posted, it seems like Yudkowsky used the term "effective altruist" years before Will MacAskill or Peter Singer adopted it. The link doesn't mention this explicitly, but Will MacAskill was also a lesswrong user, so it seems at least plausible that Yudkowsky is the true father of the movement.

I want to sort this out because I've noticed that recently a lot of EAs have been downplaying the AI and longtermist elements within the movement and talking more about Peter Singer as the movement's founder. By contrast, the impression I get about EA's founding based on what I know is that EA started with Yudkowsky and then MacAskill, with Peter Singer only getting involved later. Is my impression mistaken?

169
30

At various points, on Twitter, Jezos has defined effective accelerationism as “a memetic optimism virus,” “a meta-religion,” “a hypercognitive biohack,” “a form of spirituality,” and “not a cult.” ...

When he’s not tweeting about e/acc, Verdon runs Extropic, which he started in 2022. Some of his startup capital came from a side NFT business, which he started while still working at Google’s moonshot lab X. The project began as an April Fools joke, but when it started making real money, he kept going: “It's like it was meta-ironic and then became post-ironic.” ...

On Twitter, Jezos described the company as an “AI Manhattan Project” and once quipped, “If you knew what I was building, you’d try to ban it.”

170
21
submitted 1 year ago* (last edited 1 year ago) by GorillasAreForEating@awful.systems to c/sneerclub@awful.systems

Most of the article is well-trodden ground if you've been following OpenAI at all, but I thought this part was noteworthy:

Some members of the OpenAI board had found Altman an unnervingly slippery operator. For example, earlier this fall he’d confronted one member, Helen Toner, a director at the Center for Security and Emerging Technology, at Georgetown University, for co-writing a paper that seemingly criticized OpenAI for “stoking the flames of AI hype.” Toner had defended herself (though she later apologized to the board for not anticipating how the paper might be perceived). Altman began approaching other board members, individually, about replacing her. When these members compared notes about the conversations, some felt that Altman had misrepresented them as supporting Toner’s removal. “He’d play them off against each other by lying about what other people thought,” the person familiar with the board’s discussions told me. “Things like that had been happening for years."

171
22

non-paywall archived version here: https://archive.is/ztech

172
31
173
24
174
22
submitted 1 year ago* (last edited 1 year ago) by sue_me_please@awful.systems to c/sneerclub@awful.systems

Let's build a tower of nonsense on top of numbers we vibe with and pulled out of our ass

175
18
submitted 1 year ago* (last edited 1 year ago) by dgerard@awful.systems to c/sneerclub@awful.systems

For your utter delight and mine.

The original genesis appears to have been these guys rambling on Twitter shortly before one of them posted the first essay.

cheers to @earthquake@lemm.ee and @blakestacey for finding these

this is a worse understanding of the Second Law of Thermodynamics than the creationists'.

they don't name Nick Land as their origin, but they do generally credit him as the creator of accelerationism, and their ideas on the inevitability of techno-capitalism are straight out of Land's "The Dark Enlightenment".

so firstly, I blame El Sandifer for speaking this fucking neutron star of stupidity into existence, and secondly myself for understanding most of this orchestra of neoreactionary dog whistles.

thirdly, these fuckers all talk like Sephiroth.

I feel like I should write it up to explain the AI grifter e/acc thing, but also it's hard to explain this nonsense without a #include of Neoreaction A Basilisk and I'm not sure the centrist finance types of my readership have that much patience, nor that I do.

EDIT: urgh. Vitalik Buterin is philosophising again. d/acc. https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html "Special thanks to" several rationalists


SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]
