blakestacey

[–] blakestacey@awful.systems 5 points 11 hours ago

Scandinavian fathers and sons are famously not close.

[–] blakestacey@awful.systems 8 points 1 day ago* (last edited 1 day ago) (1 children)

Highlights from the comments: @wjpmitchell3 writes,

Actual psychology researcher: the problem with IQ is A) we don't really know what it's measuring, B) we don't really know how it's useful, C) we don't really know how context-specific it is, and D) when people make arguments about IQ, it's often couched around prejudiced ulterior motives. No one actually cares about IQ; they care about what it's a proxy measure of, and we don't have good evidence yet to say "This is a reliable and broadly-encompassing representation of intelligence," or whatever else. So if you are trying to use IQ differences to say that there are race differences in intelligence, you have no grounds. The best you can say is there are race differences in this proxy measure that we're still trying to understand. It's dangerous to use an unreliable and possibly inaccurate representation of a phenomenon to make policy changes or inform decisions around race. The evidence threshold has to be extremely high because we're entering sensitive ethical spaces, which is something that rationalists don't do well in because their utilitarian calculus has difficulty capturing the intangibles.

@arnoldkotlyarevsky383 says,

Nothing wrong with being self-educated, but she comes across as not being as far along in her self-education as you would want someone to be before being given a platform.

@User123456767 observes,

You can kind of tell she grew up as a Calvinist because she still seems to think she's part of the elect; she's just replaced an actual big-G God with some sort of AI God.

@jaredsarnie3712 begins,

I feel like so much of what she says boils down to finding bizarre hypothetical situations where child sexual abuse is morally acceptable.

And from @Fruuuuuuuuuck:

Doomscroll gooner arc

[–] blakestacey@awful.systems 5 points 1 day ago* (last edited 1 day ago) (1 children)

"DS" in the Retraction Watch comments makes a good observation:

What scientific book only has 46 references?

A question for future work: This book is part of a "Transactions on Computer Systems and Networks" series. How many of the others in that series are also slop?

[–] blakestacey@awful.systems 11 points 2 days ago (4 children)

Oh, and looking back at the comments on titotal’s post… his detailed elaboration of some pretty egregious errors in AI 2027 didn’t really change anyone’s mind, at most moving their predicted date back a year, to 2028.

Huh, what's this I have open in another browser tab:

The Great Disappointment in the Millerite movement was the reaction that followed Baptist preacher William Miller's proclamation that Jesus Christ would return to the Earth by 1844, which he called the Second Advent. His study of the Daniel 8 prophecy during the Second Great Awakening led him to conclude that Daniel's "cleansing of the sanctuary" was cleansing the world from sin when Christ would come, and he and many others prepared. When Jesus did not appear by October 22, 1844, Miller and his followers were disappointed.

[–] blakestacey@awful.systems 12 points 2 days ago

For what it's worth I know one of the founders of e/acc and they told me they were radicalized by a date they had with you where they felt you bullied them about this subject.

A-and yep, that's my dose of cursed for the day

[–] blakestacey@awful.systems 11 points 2 days ago (2 children)

It's a bird! It's a plane! It's... Evangelion Unit 1 with a Superman logo and a Diabolik mask.

[–] blakestacey@awful.systems 12 points 3 days ago (1 children)

"A case for courage, when speaking of made-up sci-fi bullshit"

[–] blakestacey@awful.systems 9 points 3 days ago

Thomas Claburn writes in The Register:

IT consultancy Gartner predicts that more than 40 percent of agentic AI projects will be cancelled by the end of 2027 due to rising costs, unclear business value, or insufficient risk controls.

That implies something like 60 percent of agentic AI projects would be retained, which is actually remarkable given that the rate of successful task completion for AI agents, as measured by researchers at Carnegie Mellon University (CMU) and at Salesforce, is only about 30 to 35 percent for multi-step tasks.

[–] blakestacey@awful.systems 7 points 3 days ago (1 children)

It's like when Scott Aaronson got me to sympathize with a cop. A sneersmas miracle.

[–] blakestacey@awful.systems 9 points 6 days ago

I poked around the search results being pointed to, saw a Ray Kurzweil book and realized that none of these people are worth taking seriously. My condolences to anyone who tries to explain the problems with the "improved" sources on offer.

[–] blakestacey@awful.systems 2 points 1 week ago

Adding https://en.wikipedia.org/wiki/Inner_alignment to the compendium for completeness' sake.

[–] blakestacey@awful.systems 9 points 1 week ago (2 children)

Rather than trying to participate in the "article for deletion" dispute with the most pedantic nerds on Earth (complimentary) and the most pedantic nerds on Earth (derogatory), I will content myself with pointing and laughing at the citation to Scientific Reports, aka "we have Nature at home".

 

"TheFutureIsDesigned" bluechecks thusly:

You: takes 2 hours to read 1 book

Me: take 2 minutes to think of precisely the information I need, write a well-structured query, tell my agent AI to distribute it to the 17 models I've selected to help me with research, who then traverse approximately 1 million books, extract 17 different versions of the information I'm looking for, which my overseer agent then reviews, eliminates duplicate points, highlights purely conflicting ones for my review, and creates a 3-level summary.

And then I drink coffee for 58 minutes.

We are not the same.

For bonus points:

I want to live in the world of Hyperion, Ringworld, Foundation, and Dune.

You know, Dune.

(Via)

 

Everybody loves Wikipedia, the surprisingly serious encyclopedia and the last gasp of Old Internet idealism!

(90 seconds later)

We regret to inform you that people write credulous shit about "AI" on Wikipedia as if that is morally OK.

Both of these are somewhat less bad than they were when I first noticed them, but they're still pretty bad. I am puzzled at how the latter even exists. I had thought that there were rules against just making a whole page about a neologism, but either I'm wrong about that or the "rules" aren't enforced very strongly.

 

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

 

In the week since a Chinese AI model called DeepSeek became a household name, a dizzying number of narratives have gained steam, with varying degrees of accuracy [...] perhaps most notably, that DeepSeek’s new, more efficient approach means AI might not need to guzzle the massive amounts of energy that it currently does.

The latter notion is misleading, and new numbers shared with MIT Technology Review help show why. These early figures—based on the performance of one of DeepSeek’s smaller models on a small number of prompts—suggest it could be more energy intensive when generating responses than the equivalent-size model from Meta. The issue might be that the energy it saves in training is offset by its more intensive techniques for answering questions, and by the long answers they produce.

Add the fact that other tech firms, inspired by DeepSeek’s approach, may now start building their own similar low-cost reasoning models, and the outlook for energy consumption is already looking a lot less rosy.

 

In the spirit of our earlier "happy computer memories" thread, I'll open one for happy book memories. What's a book you read that occupies a warm-and-fuzzy spot in your memory? What book calls you back to the first time you read it, the way the smell of a bakery brings back a conversation with a friend?

As a child, I was into mystery stories and Ancient Egypt both (not to mention dinosaurs and deep-sea animals and...). So, for a gift one year I got an omnibus set of the first three Amelia Peabody novels. Then I read the rest of the series, and then new ones kept coming out. I was off at science camp one summer when He Shall Thunder in the Sky hit the bookstores. I don't think I knew of it in advance, but I snapped it up and read it in one long summer afternoon with a bottle of soda and a bag of cookies.

 

I'm seeing empty square outlines next to "awful.systems" and my username in the top bar, and next to many (but not all) usernames in comment bylines.

 

Kate Knibbs reports in Wired magazine:

Against the company’s wishes, a court unredacted information alleging that Meta used Library Genesis (LibGen), a notorious so-called shadow library of pirated books that originated in Russia, to help train its generative AI language models. [...] In his order, Chhabria referenced an internal quote from a Meta employee, included in the documents, in which they speculated, “If there is media coverage suggesting we have used a dataset we know to be pirated, such as LibGen, this may undermine our negotiating position with regulators on these issues.” [...] These newly unredacted documents reveal exchanges between Meta employees unearthed in the discovery process, like a Meta engineer telling a colleague that they hesitated to access LibGen data because “torrenting from a [Meta-owned] corporate laptop doesn’t feel right 😃”. They also allege that internal discussions about using LibGen data were escalated to Meta CEO Mark Zuckerberg (referred to as "MZ" in the memo handed over during discovery) and that Meta's AI team was "approved to use" the pirated material.

 

Retraction Watch reports:

All but one member of the editorial board of the Journal of Human Evolution (JHE), an Elsevier title, have resigned, saying the “sustained actions of Elsevier are fundamentally incompatible with the ethos of the journal and preclude maintaining the quality and integrity fundamental to JHE’s success.”

The resignation statement reads in part,

In fall of 2023, for example, without consulting or informing the editors, Elsevier initiated the use of AI during production, creating article proofs devoid of capitalization of all proper nouns (e.g., formally recognized epochs, site names, countries, cities, genera, etc.) as well as italics for genera and species. These AI changes reversed the accepted versions of papers that had already been properly formatted by the handling editors.

(Via Pharyngula.)
