blakestacey

[–] blakestacey@awful.systems 14 points 1 week ago (1 children)

You know, just this once, I am willing to see the "Dead Dove: Do Not Eat" label and be content to leave the bag closed.

[–] blakestacey@awful.systems 8 points 1 week ago

"And the name of that novelist? Albert Einstein!"

[–] blakestacey@awful.systems 9 points 1 week ago (2 children)

Yes; the feeling that I've been pondering over and trying to articulate is that in every iteration of the "curtains are blue" story I can recall, the statement by the teacher being shot down didn't even sound like any of the bad literature teachers I've had. It's some kind of strawman crossed with "and then everyone clapped".

[–] blakestacey@awful.systems 8 points 1 week ago (4 children)

I've known my share of obnoxious literature teachers, and their problem overall was not that they insisted that details had to be symbolic, but rather that their peculiar interpretation of very-obviously-symbolic details had to be the correct one.

[–] blakestacey@awful.systems 13 points 1 week ago

Or was it a consequence of the fact that capital-R Rationalists just don't shut up?

[–] blakestacey@awful.systems 8 points 1 week ago

I suppose you could explain that on the talk page, if only you expressed it in acronyms for the benefit of the most pedantic nerds on the planet.

[–] blakestacey@awful.systems 6 points 1 week ago

feels like they are wrong on the object level

Who actually wants to sound like this?

[–] blakestacey@awful.systems 5 points 1 week ago

There might be enough point-and-laugh material to merit a post (also this came in at the tail end of the week's Stubsack).

[–] blakestacey@awful.systems 7 points 1 week ago

The opening line of the "Beliefs" section of the Wikipedia article:

Rationalists are concerned with improving human reasoning, rationality, and decision-making.

No, they aren't.

Anyone who still believes this in the year Two Thousand Twenty Five is a cultist.

I am too tired to invent a snappier and funnier way of saying this.

[–] blakestacey@awful.systems 9 points 1 week ago

I'm the torture copy and so is my wife

[–] blakestacey@awful.systems 16 points 1 week ago

In other news, I got an "Is your website AI ready" e-mail from my website host. I think I'm in the market for a new website host.

[–] blakestacey@awful.systems 12 points 1 week ago (1 children)

That Carl Shulman post from 2007 is hilarious.

After years spent studying existential risks, I concluded that the risk of an artificial intelligence with inadequately specified goals dominates. Attempts to create artificial intelligence can be expected to continue, and to become more likely to succeed in light of increased computing power, neuroscience, and intelligence-enhancements. Unless the programmers solve extremely difficult problems in both philosophy and computer science, such an intelligence might eliminate all utility within our future light-cone in the process of pursuing a poorly defined objective.

Accordingly, I invest my efforts into learning more about the relevant technologies and considerations, increasing my earnings capability (so as to deliver most of a large income to relevant expenditures), and developing logistical strategies to more effectively gather and expend resources on the problem of creating AI that promotes (astronomically) and preserves global welfare rather than extinguishing it.

Because the potential stakes are many orders of magnitude greater than relatively good conventional expenditures (vaccine and Green Revolution research), and the probability of disaster much more likely than for, e.g. asteroid impacts, utilitarians with even a very low initial estimate of the practicality of AI in coming decades should still invest significant energy in learning more about the risks and opportunities associated with it. (Having done so, I offer my assurance that this is worthwhile.) Note that for materialists the possibility of AI follows from the existence proof of the human brain, and that an AI able to redesign itself for greater intelligence and copy itself would have the power to determine the future of Earth-derived life.

I suggest beginning with the two articles below on existential risk, the first on relevant cognitive biases, and the second discussing the relation of AI to existential risk. Processing these arguments should provide sufficient reason for further study.

The "two articles below" are by Yudkowsky.

User "gaverick" replies,

Carl, I'm inclined to agree with you, but can you recommend a rigorous discussion of the existential risks posed by Unfriendly AI? I had read Yudkowsky's chapter on AI risks for Bostrom's bk (and some of his other SIAI essays & SL4 posts) but when I forward them to others, their informality fails to impress.

Shulman's response begins,

Have you read through Bostrom's work on the subject? Kurzweil has relevant info for computing power and brain imaging.

Ray mothersodding Kurzweil!

 

The UCLA news office boasts, "Comparative lit class will be first in Humanities Division to use UCLA-developed AI system".

The logic the professor gives completely baffles me:

"Normally, I would spend lectures contextualizing the material and using visuals to demonstrate the content. But now all of that is in the textbook we generated, and I can actually work with students to read the primary sources and walk them through what it means to analyze and think critically."

I'm trying to parse that. Really and truly I am. But it just sounds like this: "Normally, I would [do work]. But now, I can actually [do the same work]."

I mean, was this person somehow teaching comparative literature in a way that didn't involve reading the primary sources and, I'unno, comparing them?

The sales talk in the news release is really going all in selling that undercoat.

Now that her teaching materials are organized into a coherent text, another instructor could lead the course during the quarters when Stahuljak isn’t teaching — and offer students a very similar experience. And with AI-generated lesson plans and writing exercises for TAs, students in each discussion section can be assured they’re receiving comparable instruction to those in other sections.

Back in my day, we called that "having a book" and "writing a lesson plan".

Yeah, going from lecture notes and slides to something shaped like a book is hard. I know because I've fuckin' done it. And because I put in the work, I got the benefit of improving my own understanding by refining my presentation. As the old saying goes, "Want to learn a subject? Teach it." Moreover, doing the work means that I can take a little pride in the result. Serving slop is the cafeteria's job.

(Hat tip.)

 

So, after the Routledge thing, I got to wondering. I've had experience with a few noble projects that fizzled for lacking a clear goal, or at least a clear breathing point where we could say, "Having done this, we're in a good place. Stage One complete." And a project driven by volunteer idealism — the usual mix of spite and whimsy — can splutter out if it requires more than one person to be making it a high/top priority. If half a dozen people all like the idea but each of them ranks it 5th or 6th among things to do, academic life will ensure that it never gets done.

With all that in mind, here is where my thinking went. I provisionally tagged the idea "Harmonice Mundi Books", because Kepler writing about the harmony of the world at the outbreak of the Thirty Years' War is particularly resonant to me. It would be a micro-publisher with the tagline "By scholars, for scholars; by humans, for humans."

The Stage One goal would be six books. At least one would be by a "big name" (e.g., someone with a Wikipedia article that they didn't write themselves). At least one would be suitable for undergraduates: a supplemental text for a standard course, or even a drop-in replacement for one of those books that's so famous it's known by the author's last name. The idea is to be both reputable and useful in a readily apparent way.

Why six books? I want the authors to get paid, and I looked at the standard flat fee that a major publisher paid me for a monograph. Multiplying a figure in that range by 6 is a budget that I can imagine cobbling together. Not to make any binding promises here, but I think that authors should also get a chunk of the proceeds (printing will likely be on demand), which would be a deal that I didn't get for my monograph.

Possible entries in the Harmonice Mundi series:

  • anything you were going to send to a publisher that has since made a deal with the LLM devil

  • doctoral theses

  • lecture notes (I find these often fall short of being full-fledged textbooks, chiefly by lacking exercises, but perhaps a stipend is motivation to go the extra km)

  • collections of existing long-form online writing, like the science blogs of yore

  • text versions of video essays — zany, perhaps, but the intense essayists already have manual subtitles, so maybe one would be willing to take the next, highly experimental step

Skills necessary for this project to take off:

  • subject-matter editor(s) — making the call about which books to accept, in case we end up with the problem we'd like to have, i.e., too many books; and supervising the revision of drafts

  • production editing — everything from the final spellcheck to a print-ready PDF

  • website person — the site could practically be static, but some kind of storefront integration would be necessary (and, e.g., rigging the server to provide LLM scrapers with garbled material would be pleasingly Puckish)

  • visuals — logo, website design, book covers, etc. We could have all the cover art be pictures of flowers that I have taken around town, but we probably shouldn't.

  • publicity — getting authors to hear about us, and getting our books into libraries and in front of reviewers

Anyway, I have just barely started looking into all the various pieces here. An unknown but probably large amount of volunteer enthusiasm will be needed to get the ball rolling. And cultures will have to be juggled. I know that there are some tasks I am willing to do pro bono because they are part of advancing the scientific community: I am already getting a salary, and nobody else is profiting. I suspect that other academics have made similar mental calculations (e.g., about which journals to peer review for). But I am not going to go around asking creative folks to work "for exposure".

 

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this)

 


Time for some warm-and-fuzzies! What happy memories do you have from your early days of getting into computers/programming, whenever those early days happened to be?

When I was in middle school, I read an article in Discover Magazine about "artificial life" — computer simulations of biological systems. This sent me off on the path of trying to make a simulation of bugs that ran around and ate each other. My tool of choice was PowerBASIC, which was like QBasic except that it could compile to .EXE files. I decided there would be animals that could move, and plants that could also move. To implement a rule like "when the animal is near the plant, it will chase the plant," I needed to compute distances between points given their x- and y-coordinates. I knew the Pythagorean theorem, and I realized that the line between the plant and the animal is the hypotenuse of a right triangle. Tada: I had invented the distance formula!
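The rediscovered distance formula above, sketched in Python rather than PowerBASIC (the function and the example coordinates are illustrative, not from the original program):

```python
import math

def distance(x1, y1, x2, y2):
    """Straight-line distance between two points: the hypotenuse of the
    right triangle whose legs are the x- and y-differences."""
    return math.hypot(x2 - x1, y2 - y1)

# An animal at (0, 0) chasing a plant at (3, 4) is 5 units away.
print(distance(0, 0, 3, 4))
```

`math.hypot` computes `sqrt(dx**2 + dy**2)` directly, which is exactly the Pythagorean-theorem insight in the anecdote.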

 

So, here I am, listening to the Cosmos soundtrack and strangely not stoned. And I realize that it's been a while since we've had a random music recommendation thread. What's the musical haps in your worlds, friends?

 


Bumping this up from the comments.

 

Was anyone else getting a 503 error for a little while today?

 
