[-] Architeuthis@awful.systems 16 points 2 weeks ago* (last edited 2 weeks ago)

It's complicated.

It's basically a forum created to venerate the works and ideas of that guy who, in the first wave of LLM hype, had an editorial published in TIME calling for a worldwide moratorium on AI research and GPU sales to be enforced with unilateral airstrikes, and whose core audience got there by being groomed by one of the most obnoxious Harry Potter fanfictions ever written, by said guy.

Their function these days tends to be providing an ideological backbone of bad scifi justifications for deregulation and the billionaire takeover of the state, which among other things has made them hugely influential in the AI space.

They are also communicating vessels with Effective Altruism.

If this piques your interest, check the links in the sidebar.

[-] Architeuthis@awful.systems 16 points 2 weeks ago

The company was named after the U+1F917 🤗 HUGGING FACE emoji.

HF is more of a platform for publishing this sort of thing, as well as the neural networks themselves, plus a specialized cloud service to train and deploy them, I think. They are not primarily a tool vendor, and they were around well before the LLM hype cycle.
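
To illustrate, a minimal sketch of what "publishing the neural networks themselves" amounts to in practice: pulling a model someone uploaded to the Hub and running it locally. Assumes the transformers library is installed, and "gpt2" is just an arbitrary example repo id, not an endorsement.

```python
# Sketch only: fetch a community-published model from the Hugging Face Hub
# and run it locally. "gpt2" is just an example repo id.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
print(generator("Hugging Face is", max_new_tokens=20)[0]["generated_text"])
```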

[-] Architeuthis@awful.systems 16 points 4 weeks ago

If we came across very mentally disabled people or extremely early babies (perhaps in a world where we could extract fetuses from the womb after just a few weeks) that could feel pain but only had cognition as complex as shrimp, it would be bad if they were burned with a hot iron, so that they cried out. It’s not just because they’d be smart later, as their hurting would still be bad if the babies were terminally ill so that they wouldn’t be smart later, or, in the case of the cognitively enfeebled who’d be permanently mentally stunted.

wat

[-] Architeuthis@awful.systems 16 points 4 weeks ago

This almost reads like an attempt at a reductio ad absurdum of worrying about animal welfare, as if you're supposed to be a ridiculous hypocrite if you think factory farming is fucked yet are indifferent to the cumulative suffering caused to termites every time an exterminator sprays your house so it doesn't crumble.

Relying on the mean estimate, giving a dollar to the shrimp welfare project prevents, on average, as much pain as preventing 285 humans from painfully dying by freezing to death and suffocating. This would make three human deaths painless per penny, when otherwise the people would have slowly frozen and suffocated to death.

Dog, you've lost the plot.

FWIW, a charity providing the means to stun shrimp before they're frozen to death, which is what's happening here, isn't indefensible, but framing it as some sort of ethical slam dunk even compared to, say, donating to refugee care just makes it too obvious you'd be giving money to people who are weird in a bad way.

[-] Architeuthis@awful.systems 16 points 3 months ago

Summarizing emails is a valid purpose.

Or it would have been if LLMs were sufficiently dependable anyway.

[-] Architeuthis@awful.systems 16 points 3 months ago* (last edited 3 months ago)

OpenAI manages to do an entire introduction of a new model without using the word "hallucination" even once.

Apparently it implements chain-of-thought, which either means they changed the RLHF dataset to force it to explain its 'reasoning' when answering or to run self-questioning loops, or that it reprompts itself multiple times behind the scenes according to some heuristic until it synthesizes a best result; it's not really clear.
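
For illustration only, a minimal sketch of what such a behind-the-scenes reprompting loop might look like; `generate` and `score` are hypothetical stand-ins for a model call and whatever ranking heuristic they use. Pure speculation, not OpenAI's actual implementation.

```python
# Speculative best-of-n reprompting loop; `generate` and `score` are
# hypothetical stand-ins, not anything OpenAI has documented.
def best_of_n(prompt, generate, score, n=8):
    candidates = []
    for _ in range(n):
        draft = generate("Think step by step.\n" + prompt)  # forced 'reasoning' pass
        candidates.append((score(draft), draft))
    # keep whichever answer the heuristic likes best
    return max(candidates, key=lambda c: c[0])[1]
```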

Can't wait to waste five pools of drinkable water to be told to use C# features that don't exist, but at least it got like 25.2452323760909304593095% better at solving math olympiad problems, as long as you allow it a few dozen tries for each question.

[-] Architeuthis@awful.systems 16 points 5 months ago* (last edited 5 months ago)

If he becomes president he's selling off everything that isn't bolted down, isn't he? The US's own Boris Yeltsin.

[-] Architeuthis@awful.systems 16 points 5 months ago* (last edited 5 months ago)

The interminable length has got to have started out as a gullibility filter before ending up as an unspoken imperative for being taken seriously in those circles; isn't HPMOR like a million billion chapters as well?

Siskind for sure keeps his wildest quiet-part-out-loud takes until the last possible minute of his posts, when he does decide to surface them.

[-] Architeuthis@awful.systems 16 points 7 months ago

Did the Aella moratorium from r/sneerclub carry over here?

Because if not

for the record, im currently at ~70% that we're all dead in 10-15 years from AI. i've stopped saving for retirement, and have increased my spending and the amount of long-term health risks im taking

[-] Architeuthis@awful.systems 16 points 10 months ago

From the comments:

I am someone who takes great interest in scientific findings outside his own area of expertise.

I find it rather disheartening to discover that most of it is rather bunk, and

[image]

ChatGPT, write me up an example of a terminal case of engineer's disease and post it to acx to see if they'll catch on to it.

[-] Architeuthis@awful.systems 16 points 10 months ago* (last edited 10 months ago)

I really like how he specifies he only does it when he's with white people, just to dispel any doubt that this happens in the context of discussing Lovecraft's cat.

[-] Architeuthis@awful.systems 16 points 11 months ago

tvtropes

The reason Keltham wants to have two dozen wives and 144 children, is that he knows Civilization doesn't think someone with his psychological profile is worth much to them, and he wants to prove otherwise. What makes having that many children a particularly forceful argument is that he knows Civilization won't subsidize him to have children, as they would if they thought his neurotype was worth replicating. By succeeding far beyond anyone's wildest expectations in spite of that, he'd be proving they were not just mistaken about how valuable selfishness is, but so mistaken that they need to drastically reevaluate what they thought they knew about the world, because obviously several things were wrong if it led them to such a terrible prediction.

huh

