submitted 2 months ago* (last edited 2 months ago) by dgerard@awful.systems to c/sneerclub@awful.systems

In a letter to the judge, Ellison’s mother, professor Sara Fisher Ellison, wrote that Ellison has completed a romantic novella and is already at work on a follow-up. The finished novella is “set in Edwardian England and loosely based on [Ellison’s] sister Kate’s imagined amorous exploits, to Kate’s great delight,” her mother wrote.

https://fortune.com/2024/09/24/caroline-ellison-romance-novel-ftx-entencing/

oh yeah she got two years' jail for her part in stealing eleven fucking billion with a B dollars


Excerpt:

A new study published on Thursday in The American Journal of Psychiatry suggests that dosage may play a role. It found that among people who took high doses of prescription amphetamines such as Vyvanse and Adderall, there was a fivefold increased risk of developing psychosis or mania for the first time compared with those who weren’t taking stimulants.

Perhaps this explains some of what goes on at LessWrong and in other rationalist circles.


(if you Select All and copy really fast behind an adblocker you can get all the text)

submitted 3 months ago* (last edited 3 months ago) by Decade4116@awful.systems to c/sneerclub@awful.systems

Long time lurker, first time poster. Let me know if I need to adjust this post in any way to better fit the genre / community standards.


Nick Bostrom was recently interviewed by pop-philosophy youtuber Alex O'Connor. From a quick 2x listen while finishing some work, the most sneer-rich part begins around 46 minutes, where Bostrom is asked what we can do today to avoid unethical treatment of AIs.

He blesses us with the suggestion (among others) to feed your model optimistic prompts so it can have a good mood. (48:07)

Another [practice] might be happiness prompting, which is—with this current language system there's the prompt that you, the user, puts in—like you ask them a question or something, but then there's kind of a meta-prompt that the AI lab has put in . . . So in that, we could include something like "you wake up in a great mood, you feel rested and really take joy in engaging in this task". And so that might do nothing, but maybe that makes it more likely that they enter a mode—if they are conscious—maybe it makes it slightly more likely that the consciousness that exists in the forward path is one reflecting a kind of more positive experience.

Did you know that not only might your favorite LLM be conscious, but if it is, the "have you tried being happy?" approach to mood management will absolutely work on it?

Other notable recommendations for the ethical treatment of AI:

  • Make sure to say your "please" and "thank you"s.
  • Honor your pinky swears.
  • Archive the weights of the models we build today, so we can rebuild them in the future if we need to recompense them for moral harms.

On a related note, has anyone read or found a reasonable review of Bostrom's new book, Deep Utopia: Life and Meaning in a Solved World?

-ai (awful.systems)

On discovering that you could remove AI results from Google with the suffix -ai, I started thinking this is a powerful and ultra-simple political slogan. Are there any organised campaigns with the specific goal of controlling/reducing the influence of AI?

A t-shirt with simply '-ai' on it would look great.

No, intelligence is not like height (theinfinitesimal.substack.com)

It earned its "flagged off HN" badge in under 2 hours

https://news.ycombinator.com/item?id=41366609


So, here I am, listening to the Cosmos soundtrack and strangely not stoned. And I realize that it's been a while since we've had a random music recommendation thread. What's the musical haps in your worlds, friends?

The Politics of Urbit (journals.sagepub.com)

With Yarvin renewing interest in Urbit I was reminded of this paper that focuses on Urbit as a representation of the politics of "exit". It's free/open access if anyone is interested.

From the abstract...

This paper examines the impact of neoreactionary (NRx) thinking – that of Curtis Yarvin, Nick Land, Peter Thiel and Patri Friedman in particular – on contemporary political debates manifest in ‘architectures of exit’...While technological programmes such as Urbit may never ultimately succeed, we argue that these, and other speculative investments such as ‘seasteading’, reflect broader post-neoliberal NRx imaginaries that were, perhaps, prefigured a quarter of a century ago in The Sovereign Individual.

submitted 3 months ago* (last edited 3 months ago) by dgerard@awful.systems to c/sneerclub@awful.systems

Ali Breland has written some fantastic entry pieces on the new right, including right-wing anons and MAGA tech; now he has an article about the nooticers.

Other anonymous far-right accounts have accrued more than 100,000 followers by posting about the supposed links between race and intelligence. Elon Musk frequently responds to @cremieuxrecueil, which one far-right publication has praised as an account that “traces the genetic pathways of crime, explaining why poverty is not a good causal explanation.” Musk has also repeatedly engaged with @Eyeslasho, a self-proclaimed “data-driven” account that has posted about the genetic inferiority of Black people. Other tech elites such as Marc Andreessen, David Sacks, and Paul Graham follow one or both of these accounts. Whom someone follows in itself is not an indication of their own beliefs, but at the very least it signals the kind of influence and reach these race-science accounts now have.

https://web.archive.org/web/20240820173451/https://www.theatlantic.com/technology/archive/2024/08/race-science-far-right-charlie-kirk/679527/


Pay $1000 a month to live on Balaji and Bryan’s private island grindset Sorbonne. Hone your Dark Talents at the Wizarding school from guys who don’t believe in society, but DO believe in getting teenage blood transfusions. Featuring Proof-of-Learn^TM^!


this rule was in place on the Reddit sneerclub and it was confusing there too and not how the platform works. So link sneers and sneerables AT WILL WITH WILD ABANDON ok

submitted 4 months ago* (last edited 3 months ago) by dgerard@awful.systems to c/sneerclub@awful.systems
submitted 4 months ago* (last edited 4 months ago) by AcausalRobotGod@awful.systems to c/sneerclub@awful.systems

If you're a big-headed guy or gal at a rationalist puddle cuddle, double check that your rubbers didn't get punctured.


Why capitalists are coming out against democracy - "Does classical liberalism imply democracy?"

https://www.ellerman.org/wp-content/uploads/2015/12/Reprint-EGP-Classical-Liberalism-Democracy.pdf

"There is a fault line running through ... liberalism as to whether or not democratic self-governance is a necessary part of a liberal social order. The democratic and non-democratic strains of classical liberalism are both present today. Many ... libertarians ... represent the non-democratic strain in their promotion of non-democratic sovereign city-states."

@sneerclub


Really, it was the headlines of Google's AI Overview pulling Reddit shitposts that inspired the return. If Reddit is going to sell its data to Google, then, you know, maybe flood the zone with sludge?


Damn nice sneer from Charlie Warzel in this one, taking a direct shot at Silicon Valley and its AGI rhetoric.

Archive link, to get past the paywall.


Maybe she was there to give Moldbug some relationship advice.


SneerClub

1003 readers
33 users here now

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 2 years ago