this post was submitted on 25 May 2025

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] blakestacey@awful.systems 22 points 3 days ago (4 children)

Curtis Yarvin:

Girls think the "eu" in "eugenics" means EW. Don't get the ick, girls! It literally means good.

So if you're not into eugenics, that means you must be into dysgenics. Dissing your own genes! OMG girl what

dr. caitlin m. green:

... how is this man still able to post from inside the locker he should be stuffed in 24/7

[–] swlabr@awful.systems 16 points 3 days ago

Seeing Yarvin mansplain eugenics really does make one wonder how he doesn't just get suckerpunched whenever he says anything at someone in public.

[–] Soyweiser@awful.systems 12 points 3 days ago

Not beating the sexism allegations.

[–] Amoeba_Girl@awful.systems 9 points 3 days ago (1 children)

The eigenrobot thread he's responding to is characteristically bizarre and gross. You'd think eigenrobot being anti-eugenics would be a good thing, but he still finds a way to make it suspect. (He believes being unable to make babies is worse than death?)

[–] mountainriver@awful.systems 8 points 3 days ago (2 children)

I think he means "mass sterilisation of a population" vs. "mass murder of the same population", which is genocide either way, and then he would opt for the faster method.

Or something. Feels extra creepy discussing which genocide is better with the ongoing genocide in Gaza.

[–] Soyweiser@awful.systems 8 points 2 days ago

Re: extra creepy: and also with their people in power.

[–] YourNetworkIsHaunted@awful.systems 5 points 3 days ago* (last edited 3 days ago)

I mean, I guess you can argue that straight-up murder has a certain honesty to it? At the same time, that is mainly good because it makes it harder to justify what's happening compared to anti-miscegenation laws or restricting people to an open-air prison for a few generations. And we can see how that's working out in the current political climate.

[–] istewart@awful.systems 9 points 3 days ago (1 children)

sounds like he's posting from inside a dilapidated white panel van parked strategically just outside a legally-mandated exclusion radius surrounding a middle school

[–] BlueMonday1984@awful.systems 6 points 3 days ago

So, he's essentially Drake if he got into AI doom

[–] scruiser@awful.systems 18 points 3 days ago (1 children)

A new LLM Plays Pokémon run has started, with o3 this time. It plays moderately faster, and the Twitch display UI is a little bit cleaner, so it is less tedious to watch. But in terms of actual ability, so far o3 has made many of the exact same errors as Claude and Gemini, including: completely making things up / seeing things that aren't on the screen (items in Viridian Forest), confused attempts at navigation (it went back and forth on whether the exit to Viridian Forest was in the NE or NW corner), repeating its own mistakes (both the items and the navigation issues I mentioned), confusing details from other generations of Pokémon (Nidoran learns Double Kick at level 12 in FireRed/LeafGreen, but not in the original Blue/Yellow), and showing signs of being prone to going off on completely batshit tangents (it briefly got derailed about sneaking through the trees in Viridian Forest, i.e. moving through completely impassable tiles).

I don't know how anyone can watch any of the attempts at LLMs playing Pokémon and think (viable) LLM agents are just around the corner... well, actually I do know: hopium, cope, cognitive bias, and deliberate deception. The whole LLMs-playing-Pokémon thing is turning into less of a test of LLMs and more of an entertainment and advertising vehicle for the models, and the scaffolds are extensive enough, and different enough from each other, that they really aren't showing the models' raw capabilities (which are even worse than I complained about) or comparing them meaningfully.

[–] aio@awful.systems 10 points 3 days ago (1 children)

I like how all of the currently running attempts have been equipped with automatic navigation assistance, i.e. a pathfinding algorithm from the 60s. And that's the only part of the whole thing that actually works.

[–] scruiser@awful.systems 6 points 3 days ago (1 children)

I wouldn't say even that part works so well, given what a major challenge Mt. Moon still is even with all the features like that.

[–] aio@awful.systems 9 points 3 days ago

The actual pathfinding algorithm (which is surely just A* search or similar) works just fine; the problem is the LLM which uses it.
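
For anyone curious, here's a minimal sketch of what that kind of pathfinding looks like: plain A* over a tile grid with uniform step costs. Illustrative only, assuming a toy grid format; the actual scaffolds used in these runs aren't shown in this thread.

```python
# Minimal A* sketch over a 2D tile map: '.' is walkable, '#' is impassable.
# Illustrative only -- not the scaffold actually used in the streams.
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) tiles from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(a, b):
        # Manhattan distance: admissible heuristic on a 4-connected grid
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_heap = [(h(start, goal), 0, start)]   # entries are (f, g, tile)
    came_from = {}
    best_g = {start: 0}

    while open_heap:
        _, g, tile = heapq.heappop(open_heap)
        if tile == goal:
            # Walk the parent links back to the start to recover the path.
            path = [tile]
            while tile in came_from:
                tile = came_from[tile]
                path.append(tile)
            return path[::-1]
        if g > best_g.get(tile, float("inf")):
            continue  # stale heap entry; a cheaper route was already found
        r, c = tile
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    came_from[nxt] = tile
                    heapq.heappush(open_heap, (ng + h(nxt, goal), ng, nxt))
    return None

# Toy example: route around a small wall.
grid = ["....#",
        ".##.#",
        "....."]
print(astar(grid, (0, 0), (2, 4)))
```

Point being, this part is decades-old, solved tech; deciding where to go and why is the bit the LLM keeps flubbing.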

[–] froztbyte@awful.systems 7 points 3 days ago (3 children)

I regret to inform you that, once again, aella (via this)

it's fucked that this is at least moderately honest

[–] Amoeba_Girl@awful.systems 13 points 3 days ago (1 children)

Cringe but my god the horde of buttmad nazis force me to be on weirdo & girl girl's side

Personally I have more questions about a Rationalist having an anarcho-communist flag tattoo.

[–] Soyweiser@awful.systems 10 points 2 days ago (3 children)

anarcho-communist flag, username RatOrthodox, hangs out with eugenicist iq-anon types, somewhere on the longtermist scale. Has unblocked replies from a person who fell into all the alt-right signifiers and who is calling this 'Jewish' (anti-semitism and a holocaust joke, classy. If I sneer any harder I will start to look like Scott on that recent video).

[–] Amoeba_Girl@awful.systems 7 points 2 days ago* (last edited 2 days ago) (1 children)

I see now he's also got a supply and demand tattoo*? I don't fucking know. Maybe he thinks red and black is the antifa flag. Maybe he got it for sexy humiliation purposes. Maybe it's faded blue ink.

*"If you want to celebrate [capitalism] with a symbol, you’ll have to get a law of nature tattooed on you."

[–] Amoeba_Girl@awful.systems 10 points 2 days ago (1 children)

Yeah okay. Boring!!! You suck!!!

[–] Soyweiser@awful.systems 9 points 2 days ago* (last edited 2 days ago) (5 children)

I have so many questions, stuff like: do you know what laws of nature are? (Also, see the weak link between supply and demand and capitalism; various non-capitalist systems also have supply and demand curves. "im not economically illiterate, just ideologically".) And 13? Wtf is wrong with your parents that they allowed that? At least my earlier 'wtf why is he hanging out with neo-nazis' has been answered.

E: also note the dead plant in the tattoo-in-progress pics (an orchid, I think).

[–] mirrorwitch@awful.systems 9 points 2 days ago (7 children)

Ok I have to say, I despise aella but as a promiscuous woman I completely fail to see what's supposed to be the problem with this particular form of play. That people like having casual sex? That they have slut pride? What.

I probably passed the 100 mark myself several years ago, I've hooked up with girls with much more obvious slut tattoos too, and we're all antifascist anarchists. Is this community ok with sneering at public sexuality now?

The only thing I found vaguely mid in that X is using a tattoo gun rather than scarification, branding, or at least stick-and-poke. But I don't kink-shame people for being casuals.

[–] froztbyte@awful.systems 9 points 2 days ago (1 children)

oh, no, to be clear: this isn't about the sex/sluttiness - I have no issue with that at all. in a more general sense (over the wider set of aella things), it's her positioning herself as a "sex researcher" but often using that to bash specific groups or put out a specific narrative that is far more problematic

but regarding this specific thing, tho: the "forced a tattoo on me" bit specifically, and all the rationalists doing their weird quasi-intellectual arguing with everyone who asks questions. the "forced" wording feels ... intentionally bait&switch? presumably (..hopefully?) there's mutual consent here, so it's a fucking choice to go frame the event that way

[–] mirrorwitch@awful.systems 4 points 2 days ago* (last edited 2 days ago) (4 children)

That's very much the language my play partners use online, though? I totally post banter like "that submissive was so blanked out that I took advantage of them by doing X", to which they'll reply "implying I didn't evilly manipulate you into doing that in the first place", and so on. This is so commonplace in my communities that I failed to even understand what the problem could be before you pointed it out. I mean, "I forced my famous domme to mark me as a trophy as her #100 simp"? How exactly would you force (non-kink sense) someone to tattoo you anyway, and if you did and were unwise enough to brag about it, presumably the microcelebrity in question wouldn't like and retweet it? I took it to mean "I was so into the idea of being marked, I'm glad she agreed to my pestering", and I would bet money that if any of my people talked in that exact wording, that's what everyone would take it as. I mean, otherwise I would probably have been arrested for how often I say "bye everyone, gonna tie someone up and do unspeakably cruel things to them" and whatnot

[–] Soyweiser@awful.systems 8 points 2 days ago (1 children)

This is more on the Aella cult than on her, tbh.

And didn't Aella and also Grimes have a come to jesus moment when they realized they hung out with a lot of bad people? Guess nothing came of that. (That part of them is always worse than the just-a-bit-off antics they pull.)

[–] froztbyte@awful.systems 11 points 2 days ago (1 children)

And didn’t Aella and also Grimes have a come to jesus moment when they realized they hung out with a lot of bad people

I would give this at least -9600 credibility points. neither of them are whoopsie-daisy'ing into these scenes and crowds, they both knew what and who

[–] Soyweiser@awful.systems 9 points 2 days ago

Sorry, I was being a bit too vague because I didn't have any proof ready. But here it is. It is them covering their asses in case there is fallout, the equivalent of a non-apology: there have been no changes or dropping of names/extremist behavior since, it is all just vague vibes.

[–] Soyweiser@awful.systems 13 points 3 days ago

I'm sure this is fine: https://infosec.exchange/@paco/114509218709929701

"Paco Hope #resist @paco@infosec.exchange

OMG. #Microsoft #Copilot bypasses #Sharepoint #security so you don’t have to!

“CoPilot gets privileged access to SharePoint so it can index documents, but unlike the regular search feature, it doesn’t know about or respect any of the access controls you might have set up. You can get CoPilot to just dump out the contents of sensitive documents that it can see, with the bonus feature* that your access won’t show up in audit logs.”

The S in CoPilot stands for Security! https://pivotnine.com/the-crux/archive/remembering-f00fs-of-old/"

[–] raktheundead@fedia.io 10 points 3 days ago (1 children)

Another critihype article from the BBC, with far too much credulity toward the idea of AI consciousness at the cost of covering the harms of AI as things stand, e.g. the privacy, environmental, and dataset-bias problems:

https://www.bbc.com/news/articles/c0k3700zljjo

[–] BlueMonday1984@awful.systems 10 points 3 days ago (2 children)

Tried to read it, ended up glazing over after the first or second paragraph, so I'll fire off a hot take and call it a day:

Artificial intelligence is a pseudoscience, and it should be treated as such.

[–] scruiser@awful.systems 13 points 3 days ago

Every AI winter, the label AI becomes unwanted and people go with other terms (expert systems, machine learning, etc.)... and I've come around to thinking this is a good thing, as it forces people to specify what it is they actually mean, instead of using a nebulous label with many science fiction connotations that lumps together decent approaches and paradigms with complete garbage and everything in between.

[–] corbin@awful.systems -2 points 3 days ago (4 children)

I'm gonna be polite, but your position is deeply sneerworthy; I don't really respect folks who don't read. The article has quite a few quotes from neuroscientist Anil Seth (not to be confused with AI booster Anil Dash) who says that consciousness can be explained via neuroscience as a sort of post-hoc rationalizing hallucination akin to the multiple-drafts model; his POV helps deflate the AI hype. Quote:

There is a growing view among some thinkers that as AI becomes even more intelligent, the lights will suddenly turn on inside the machines and they will become conscious. Others, such as Prof Anil Seth who leads the Sussex University team, disagree, describing the view as "blindly optimistic and driven by human exceptionalism." … "We associate consciousness with intelligence and language because they go together in humans. But just because they go together in us, it doesn't mean they go together in general, for example in animals."

At the end of the article, another quote explains that Seth is broadly aligned with us about the dangers:

In just a few years, we may well be living in a world populated by humanoid robots and deepfakes that seem conscious, according to Prof Seth. He worries that we won't be able to resist believing that the AI has feelings and empathy, which could lead to new dangers. "It will mean that we trust these things more, share more data with them and be more open to persuasion." But the greater risk from the illusion of consciousness is a "moral corrosion", he says. "It will distort our moral priorities by making us devote more of our resources to caring for these systems at the expense of the real things in our lives" – meaning that we might have compassion for robots, but care less for other humans.

A pseudoscience has an illusory object of study. For example, parapsychology studies non-existent energy fields outside the Standard Model, and criminology asserts that not only do minds exist but some minds are criminal and some are not. Robotics/cybernetics/artificial intelligence studies control loops and systems with feedback, which do actually exist; further, the study of robots directly leads to improved safety in workplaces where robots can crush employees, so it's a useful science even if it turns out to be ill-founded. I think that your complaint would be better directed at specific AGI position papers published by techbros, but that would require reading. Still, I'll try to salvage your position:

Any field of study which presupposes that a mind is a discrete isolated event in spacetime is a pseudoscience. That is, fields oriented around neurology are scientific, but fields oriented around psychology are pseudoscientific. This position has no open evidence against it (because it's definitional!) and aligns with the expectations of Seth and others. It is compatible with definitions of mind given by Dennett and Hofstadter. It immediately forecloses the possibility that a computer can think or feel like humans; at best, maybe a computer could slowly poorly emulate a connectome.

[–] blakestacey@awful.systems 12 points 3 days ago* (last edited 3 days ago) (1 children)

I am not sure that having "an illusory object of study" is a standard that helps define pseudoscience in this context. Consider UFOlogy, for example. It arguably "studies" things that do exist — weather balloons, the planet Venus, etc. Pseudoarchaeology "studies" actual inscriptions and actual big piles of rocks. Wheat gluten and seed oils do have physical reality. It's the explanations put forth which are unscientific, while attempting to appeal to the status of science. The "research" now sold under the Artificial Intelligence banner has become like Intelligent Design "research": Computers exist, just like bacterial flagella exist, but the claims about them are untethered.

[–] blakestacey@awful.systems 11 points 3 days ago

Scientists and philosophers have spilled a tanker truck of ink about the question of how to demarcate science from non-science or define pseudoscience rigorously. But we can bypass all that, because the basic issue is in fact very simple. One of the most fundamental parts of living a scientific life is admitting that you don't know what you don't know. Without that, it's well-nigh impossible to do the work. Meanwhile, the generative AI industry is built on doing exactly the opposite. By its very nature, it generates slop that sounds confident. It is, intrinsically and fundamentally, anti-science.

Now, on top of that, while being anti-science the AI industry also mimics the form of science. Look at all the shiny PDFs! They've got numbers in them and everything. Tables and plots and benchmarks! I think that any anti-science activity that steals the outward habits of science for its own purposes will qualify as pseudoscience, by any sensible definition of pseudoscience. In other words, wherever we draw the line or paint the gray area, modern "AI" will be on the bad side of it.

[–] scruiser@awful.systems 12 points 3 days ago (1 children)

No, I think BlueMonday is being reasonable. The article has some quotes from scientists with actually relevant expertise, but it uncritically mixes them with LLM hype and speculation in a typical both-sides framing that gives lay readers the (false) impression that the two sides are equal. This sort of journalism may appear balanced, but it has ultimately contributed to all kinds of controversies (from global warming to Intelligent Design to medical pseudoscience) where the viewpoints of cranks, uninformed busybodies, autodidacts of questionable ability, and deliberate fraudsters get presented as equal to actually educated and researched viewpoints.

[–] blakestacey@awful.systems 15 points 3 days ago

Having now read the thing myself, I agree that the BBC is serving up criti-hype and false balance.

[–] o7___o7@awful.systems 9 points 3 days ago (1 children)

...fields oriented around neurology are scientific, but fields oriented around psychology are pseudoscientific.

When a good man gazes into the palantir and sees L Ron Hubbard looking back

[–] Amoeba_Girl@awful.systems 5 points 3 days ago (1 children)

To be fair I also believe psychology is by and large pseudoscience, but the answer to it is sociology, not the MRI gang.

[–] scruiser@awful.systems 8 points 3 days ago

There are parts of the field that have major problems (the sorts of studies that get done on 20 student volunteers and then turned into a pop-psychology factoid that gets tossed around and over-generalized while the original study fails to replicate), but there are parts that are actually good science.

[–] Amoeba_Girl@awful.systems 5 points 3 days ago

Touting neuroscience as especially informed and scientific about minds is very brave.

[–] dovel@awful.systems 11 points 3 days ago (3 children)

Some quality sneers in Extropic's latest presentation about their thermodynamics hardware. My favorite part was the Founder's mission slide "e/acc maximizes the watts per civilization while Extropic maximizes intelligence per watt".

[–] Amoeba_Girl@awful.systems 9 points 3 days ago

I'm not going to watch more than a few seconds but I enjoyed how awkward Beff Jezos is coming across.

[–] V0ldek@awful.systems 12 points 4 days ago* (last edited 4 days ago)

[–] swlabr@awful.systems 8 points 4 days ago (2 children)

Opening up the sack with your new favourite uwu news influencer giving a quick shout-out to our old pals, the NRx. Hoped that we wouldn’t get here, but here we are, regardless.

[–] mountainriver@awful.systems 7 points 3 days ago (2 children)

I didn't know that uwu news influencer was a thing. Kind of a clash between style and topic there, but hey whatever gets the word out.

[–] swlabr@awful.systems 7 points 3 days ago

I didn’t know that uwu news influencer was a thing.

It's probably one of those things that, once you start thinking about it, has always been around; we've just never had the right vocabulary to describe it.

[–] Soyweiser@awful.systems 5 points 3 days ago

I had so hoped that the rise of Trump (and his fall due to Biden) on the back of the more numerous and popular-seeming Alt-Right had been the end of all this, showing that NRx was a sort of weaker evolutionary dead end, so to speak. But sadly no.
