scruiser

joined 2 years ago
[–] scruiser@awful.systems 6 points 2 weeks ago (6 children)

It's a good post. A few minor quibbles:

The “nonprofit” company OpenAI was launched under the cynical message of building a “safe” artificial intelligence that would “benefit” humanity.

I think at least some of the people at launch were true believers, but strong financial incentives and some cynics present at the start meant the true believers never really had a chance, culminating in the board trying and failing to fire Sam Altman, and in him successfully leveraging the threat of taking everyone with him to Microsoft. It figures that one of the rare times rationalists recognized and tried to mitigate the harmful incentives of capitalism, they fell vastly short. OTOH... if failing to convert to a for-profit company turns out to be a decisive moment in popping the GenAI bubble, then at least it was good for something?

These tools definitely have positive uses. I personally use them frequently for web searches, coding, and oblique strategies. I find them helpful.

I wish people didn't feel the need to add all these disclaimers, or at least put a disclaimer on their disclaimer. It is a slightly better autocomplete for coding that also introduces massive security and maintainability problems if people rely on it entirely. It is a better web search only relative to the ad-money-motivated compromises Google has made. It also breaks the implicit social contract of web searches (web sites allow themselves to be crawled so that human traffic will ultimately come to them), which could have pretty far-reaching impacts.

One of the things I liked and didn't know about before

Ask Claude any basic question about biology and it will abort.

That is hilarious! Kind of overkill to be honest, I think they've really overrated how much it can help with a bioweapons attack compared to radicalizing and recruiting a few good PhD students and cracking open the textbooks. But I like the author's overall point that this shut-it-down approach could be used for a variety of topics.

One of the comments gets it:

Safety team/product team have conflicting goals

LLMs aren't actually smart enough to make delicate judgements, even with all the fine-tuning and RLHF they've thrown at them, so you're left with over-censoring everything or having the safeties overridden with just a bit of prompt-hacking (and sometimes both problems with one model).

[–] scruiser@awful.systems 2 points 2 weeks ago (2 children)

Having seen the problem a few more times, I can elaborate more clearly now. All the icons temporarily become massively oversized while the text stays the same size. I've only had it happen once in my desktop browser, but it recurs regularly in mobile Safari, and a refresh fixes it each time.

[–] scruiser@awful.systems 11 points 2 weeks ago

Had me in the first few paragraphs…not gonna lie.

Yeah, the first few paragraphs actually felt like they would serve as a defense of Hamas: Israel engineered a situation where any form of resistance against them would need to be violent and brutal, so Hamas is justified even if it killed 5 people to save 1.

The more I think about his metaphor, the more frustrated I get. Israel holds disproportionate power in this entire situation; if anyone is contriving no-win situations to win temporary PR victories, it is Israel (Netanyahu's trial is literally getting stalled out by the conflict).

[–] scruiser@awful.systems 6 points 2 weeks ago

Lots of woo and mysticism already has a veneer of stolen quantum terminology. It's too far from respectable to get the quasi-expert endorsement or easy VC money that LLM hype has gotten, but quantum hucksters fusing quantum computing nonsense with quantum mysticism can probably still con lots of people out of their money.

[–] scruiser@awful.systems 5 points 2 weeks ago

I like how Zitron does a good job of distinguishing firm overall predictions from specific scenarios (his chaos bets) which are plausible but far from certain. AI 2027 specifically conflated and confused those things in a way that gave its proponents more rhetorical room to hide and dodge.

[–] scruiser@awful.systems 5 points 2 weeks ago

I like how he doesn't even bother debunking it point by point, he just slams the very premise of it and moves on.

[–] scruiser@awful.systems 7 points 2 weeks ago (1 children)

system memory

System memory is just the marketing label for "having an LLM summarize a bunch of old conversations and shove the summary into a hidden prompt". I agree that using that term is sneer-worthy.
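To be concrete about how unglamorous this is, here's a minimal sketch of the pattern as described above. `call_llm` is a hypothetical stand-in for whatever chat-completion API the vendor actually uses; the names and prompt wording are my own illustration, not any vendor's real implementation.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion call.
    # Here it just returns a canned summary so the sketch runs.
    return "user likes dolphins"


def build_prompt_with_memory(old_conversations: list[str], user_message: str) -> str:
    # Step 1: have the LLM compress past conversations into a summary.
    summary = call_llm(
        "Summarize these past conversations:\n" + "\n---\n".join(old_conversations)
    )
    # Step 2: shove that summary into a hidden system prompt the user never sees.
    return (
        f"[system] Known facts about this user: {summary}\n"
        f"[user] {user_message}"
    )
```

That's the whole trick: a summarization call plus string concatenation, dressed up as "memory".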

[–] scruiser@awful.systems 5 points 2 weeks ago

I have three more examples of sapient marine mammals!

  • whales warning the team of an impending solar flare in Stargate Atlantis via echolocation induced hallucinations
  • the dolphins in Hitchhiker's Guide to the Galaxy
  • whales showing up to help in one book of Animorphs while they are morphed into dolphins
[–] scruiser@awful.systems 4 points 2 weeks ago

I was thinking this also; it's like the perfect parody of several lesswrong and EA memes: overly concerned with animal suffering/sapience, overly concerned with IQ stats, openly admitting to no expertise or even relevant domain knowledge but driven to pontificate anyway, and inspired by existing science fiction... I think the last one explains it and it isn't a parody. As cinnasverses points out, cetacean intelligence shows up occasionally in sci-fi. To add to the examples: sapient whales warning the team of an impending solar flare in Stargate Atlantis via echolocation-induced hallucinations, the dolphins in Hitchhiker's Guide to the Galaxy, and the whales showing up to help in one book of Animorphs.

[–] scruiser@awful.systems 7 points 2 weeks ago

I was trying to figure out why he hadn't turned this into an opportunity to lecture (or write a mini-fanfic) about giving more attack surface to the AGI to manipulate you... I was stumped until I saw your comment. I think that is it, expressing his childhood distrust of authority trumps lecturing us on the AI-God's manipulations.

[–] scruiser@awful.systems 9 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I have context that makes this even more cringe! "Lawfulness concerns" refers to like, Dungeons and Dragons lawfulness. Specifically the concept of lawfulness developed in the Pathfinder fanfiction we've previously discussed (the one with deliberately bad BDSM and eugenics). Like a proper Lawful Good Paladin of Iomedae wouldn't put you in a position where you had to trust they hadn't rigged the background prompt if you went to them for spiritual counseling. (Although a Lawful Evil cleric of Asmodeus totally would rig the prompt... Lawfulness as a measuring stick of ethics/morality is a terrible idea even accepting the premise of using Pathfinder fanfic to develop your sense of ethics.)

[–] scruiser@awful.systems 4 points 2 weeks ago (3 children)

On both mobile browsers (Safari and Chrome) and my desktop browser (Firefox), I occasionally get a thing where all the posts are massively oversized. It goes away when I refresh.
