submitted 3 months ago* (last edited 3 months ago) by blakestacey@awful.systems to c/techtakes@awful.systems

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this)

[-] dgerard@awful.systems 24 points 3 months ago

Timnit Gebru on Twitter:

We received feedback from a grant application that included "While your impact metrics & thoughtful approach to addressing systemic issues in AI are impressive, some reviewers noted the inherent risks of navigating this space without alignment with larger corporate players,"

https://xcancel.com/timnitGebru/status/1836492467287507243

[-] swlabr@awful.systems 14 points 3 months ago

navigating this space without alignment with larger corporate players

stares into middle distance, hollow laugh

[-] sailor_sega_saturn@awful.systems 22 points 2 months ago* (last edited 2 months ago)

Today in "you can't make this stuff up": SpaceX invades Cards Against Humanity's crowdfunded southern border plot of land.

Article (Ars Technica) Lawsuit with pictures (PDF)

Reddit Comment with CAH's email to backers

The above Ars Technica article also led me to this broader article (Reuters) about SpaceX's operations in Texas. I found these two sentences particularly unpleasant:

County commissioners have sought to rechristen Boca Chica, the coastal village where Johnson remains a rare holdout, with the Musk-endorsed name of Starbase.

At some point, former SpaceX employees and locals told Reuters, Starbase workers took down a Boca Chica sign identifying their village. They said workers also removed a statue of the Virgin of Guadalupe, an icon revered by the predominantly Mexican-American residents who long lived in the area.

Reading all of this also somehow makes Elon Musk's anti-immigrant tweets feel even worse to me than they already did.

[-] self@awful.systems 22 points 3 months ago

so mozilla decided to take the piss while begging for $10 donations:

We know $10 USD may not seem like enough to reclaim the internet and take on irresponsible tech companies. But the truth is that as you read this email, hundreds of Mozilla supporters worldwide are making donations. And when each one of us contributes what we can, all those donations add up fast.

With the rise of AI and continued threats to online privacy, the stakes of our movement have never been higher. And supporters like you are the reason why Mozilla is in a strong position to take on these challenges and transform the future of the internet.

the rise of AI you say! wow that sounds awful, it’s so good Mozilla isn’t very recently notorious for pushing that exact thing on their users without their consent alongside other privacy-violating changes. what a responsible tech company!

[-] hrrrngh@awful.systems 22 points 2 months ago

This quote flashbanged me a little

When you describe your symptoms to a doctor, and that doctor needs to form a diagnosis on what disease or ailment that is, that's a next word prediction task. When choosing appropriate treatment options for said ailment, that's also a next word prediction task.

From this thread: https://www.reddit.com/r/gamedev/comments/1fkn0aw/chatgpt_is_still_very_far_away_from_making_a/lnx8k9l/

[-] Soyweiser@awful.systems 15 points 2 months ago

Instead of improving LLMs, they are working backwards to prove that all other things are actually word prediction tasks. It is so annoying and also quite dumb. No, chemistry isn't like coding/Legos. The law isn't invalid because it doesn't have gold fringes and you use magical words.

[-] swlabr@awful.systems 14 points 2 months ago

None of these fucking goblins have learned that analogies aren’t equivalences!!! They break down!!! Auuuuuuugggggaaaaaaarghhhh!!!!!!

[-] YourNetworkIsHaunted@awful.systems 13 points 2 months ago

The problem is that there could be any number of possible next words, and the available results suggest that the appropriate context isn't covered in the statistical relationships between prior words for anything but the most trivial of tasks i.e. automating the writing and parsing of emails that nobody ever wanted to read in the first place.

[-] gerikson@awful.systems 11 points 2 months ago

This is just standard promptfondler false equivalence: "when people (including me) speak, they just select the next most likely token, just like an LLM"

[-] dgerard@awful.systems 19 points 3 months ago

Paul Krugman and Francis Fukuyama and Daniel Dennett and Steve Pinker were in a "human biodiversity discussion group" with Steve Sailer and Ron Unz in 1999, because of course they were

[-] Soyweiser@awful.systems 12 points 3 months ago

I look forward to the 'but we often disagreed' non-apologies. With an absolute lack of self-reflection on how this helped push Sailer/Unz into the positions they hold now. If we even get that.

[-] swlabr@awful.systems 15 points 3 months ago

Pinker: looking through my photo album where I’m with people like Krauss and Epstein, shaking my head the whole time so the people on the bus know I disagree with them

[-] khalid_salad@awful.systems 17 points 2 months ago

Every few years there is some new CS fad that people try to trick me into doing research in: "algorithms" (my actual area), then quantum, then blockchain, then AI.

Wish this bubble would just fucking pop already.

[-] ibt3321@lemmy.blahaj.zone 12 points 2 months ago

This stuff feels like a DJ is cross-fading between the different hype cycles.

[-] mii@awful.systems 17 points 2 months ago

Follow-up to this post from the other day.

Our DSO has now greenlit the stupid Copilot integration because "Microsoft said it's okay" (of course they did). He was also at some stupid AI convention yesterday, and whatever fucking happened there, he's become a complete AI bro and is now preaching the Gospel of Altman: everyone who's not using AI will be obsolete in a few years and we need to ADAPT OR DIE. It's the exact same shit the CEO is spewing.

He wants an AI that handles data security breaches by itself. He also now writes emails with ChatGPT even though just a week ago he was hating on people who did that. I sat with my fucking mouth open in that meeting and people asked me whether I'm okay (I'm not).

I need to get another job ASAP or I will go clinically insane.

[-] o7___o7@awful.systems 16 points 3 months ago

Behind the Bastards is starting a series about Yarvin today. Always appreciate it when they wander into our bailiwick!

[-] flizzo@awful.systems 14 points 3 months ago

Orange site on pager bombs in Lebanon:

If we try to do what we are best at here at HN, let’s focus the discussion on the technical aspects of it.

It immediately reminded me of Stuxnet, which also from a technical perspective was quite interesting.

[-] ibt3321@lemmy.blahaj.zone 14 points 2 months ago

A lemmy-specific coiner today: https://awful.systems/post/2417754

The dilema of charging the users and a solution by integrating blockchain to fediverse

First, there will be a blockchain. There will be these cryptocurrencies:

This guy is speaking like he is in Genesis 1

I guess it would be better that only the instances can own instance-specific coins.

You guess alright? You mean that you have no idea what you're saying.

if a user on lemmy.ee want to post on lemmy.world, then lemmy.ee have to pay 10 lemmy.world coin to lemmy.world

What will this solve? If 2 people respond to each other's comments, the instance with the most valuable coin will win. What does that have to do with who caused the interaction?

[-] sailor_sega_saturn@awful.systems 19 points 2 months ago

Yes crypto instances, please all implement this and "disallow" everyone else from interacting with you! I promise we'll be sad and not secretly happy and that you'll make lots of money from people wanting to interact with you.

[-] dgerard@awful.systems 13 points 3 months ago

fuckin. when did Mozilla's twitter feed turn into wall to fucking wall AI spam https://x.com/mozilla

[-] gerikson@awful.systems 13 points 3 months ago

Despite Soatok explicitly warning users that posting his latest rant[1] to the more popular tech aggregators would lead to loss of karma and/or public ridicule, someone did just that on lobsters and provoked this mask-slippage[2]. (The comment is in three paras, which I will subcomment on below.)

Obligatory note that, speaking as a rationalist-tribe member, to a first approximation nobody in the community is actually interested in the Basilisk and hasn’t been for at least a decade. As far as I can tell, it’s a meme that is exclusively kept alive by our detractors.

This is the Rationalist version of the village worthy complaining that everyone keeps bringing up that one time he fucked a goat.

Also, “this sure looks like a religion to me” can be - and is - argued about any human social activity. I’m quite happy to see rationality in the company of, say, feminism and climate change.

Sure, "religion" is on a sliding scale, but Big Yud-flavored Rationality ticks more of the boxes on the "Religion or not" checklist than feminism or climate change. In fact, treating the latter as a religion is often a way to denigrate them, and never used in good faith.

Finally, of course, it is very much not just rationalists who believe that AI represents an existential risk. We just got there twenty years early.

Citation very much needed, bub.


[1] https://soatok.blog/2024/09/18/the-continued-trajectory-of-idiocy-in-the-tech-industry/

[2] link and username withheld to protect the guilty. Suffice it to say that They Are On My List.

[-] Soyweiser@awful.systems 11 points 3 months ago* (last edited 3 months ago)

nobody in the community is actually interested in the Basilisk

But you should; y'all created an idea which some people do take seriously, and it is causing them mental harm. In fact, Yud took it so seriously, in a way that shows he either believes in potential acausal blackmail himself, or that enough people in the community believe it that the idea would cause harm.

A community he created to help people think better now has a mental minefield somewhere in it, but because they want to look sane to outsiders, people don't talk about it. (And they also pretend the mentally exploded people don't exist.) This is bad.

I get that we put them in a no-win situation. Either they take their own ideas seriously enough to talk about acausal blackmail, and then either help people by disproving the idea or help people by going 'this part of our totally Rational way of thinking is actually toxic and radioactive and you should keep away from it (a bit like Hegel, am I right(*))', which makes them look a bit silly for taking it seriously (to which you could say: who cares?) or a bit openly culty if they go with the secret-knowledge route. Or they could pretend it never happened, never was a big deal, and isn't a big deal, in an attempt not to look silly. Of course, we know what happened, and that it is still causing harm to a small group of (proto-)Rationalists. This option makes them look insecure, potentially dangerous, and weak to social pressure.

That they went with the last option, while having also written a lot about acausal trading, just shows they don't take their own ideas that seriously. Or, if it is an open secret not to talk openly about acausal trade because of acausal blackmail, that is just more cult signs. You have to reach level 10 before they teach you about the Lord Xenu-type stuff.

Anyway, I assume this is a bit of a problem for all communal worldbuilding projects: eventually somebody introduces a few ideas which have far-reaching consequences for the roleplay but which people would rather not have included. It gets worse when the non-larping outside world notices you and the first reaction is to pretend larping isn't that important to your group because the incident was a bit embarrassing. Own the lightning-bolt tennis ball, it is fine. (**)

*: I actually don't know enough about philosophy to know if this joke is correct, so apologies if Hegel is not hated.

**: I admit, this joke was all a bit forced.

[-] BlueMonday1984@awful.systems 13 points 3 months ago

Pulling out a pretty solid Tweet @ai_shame showed me:

countersneer

To pull out a point I've been hammering since Baldur Bjarnason talked about AI's public image, I fully anticipate tech's reputation cratering once the AI bubble bursts. Precisely how the public will view the tech industry at large in the aftermath I don't know, but I'd put good money on them being broadly hostile to it.

[-] fasterandworse@awful.systems 13 points 3 months ago* (last edited 3 months ago)

Just discovered Patrick Boyle's channel. Deadpan sneer perfection https://www.youtube.com/watch?v=3jhTnk3TCtc

edit: tried to post invidious link but didn't seem to work

[-] o7___o7@awful.systems 13 points 3 months ago* (last edited 3 months ago)

What are the chances that--somewhere deep in the bowels of Clearwater, FL--some poor soul has been ordered to develop an AI replicant of L. Ron Hubbard?

There is a substantial corpus.

[-] self@awful.systems 12 points 3 months ago

the only worthwhile use of LLMs: endlessly prompting the L Ron Hubbard chatbot with Battlefield Earth reviews as a form of acausal torture

[-] sailor_sega_saturn@awful.systems 13 points 2 months ago

Meanwhile, over at the orange site they discuss a browser hack: https://news.ycombinator.com/item?id=41597250 - as in, a hack that gave the attacker control over any user of this particular browser, even if they only ever visited innocent websites, needing only their user ID.

This is what's known in the biz as a company-destroying fuck-up. I'm not sure whether this is particularly sneerable or not, but I'm just agog at how a company that calls itself "The Browser Company" can get the basic browser security model so incredibly wrong.

[-] self@awful.systems 12 points 2 months ago* (last edited 2 months ago)

from their Wikipedia page I’m starting to get why I’ve never previously heard of The Browser Company’s browser; it’s about a year old, it’s only for macOS, iOS, and Windows, and it’s just a Chromium fork with a Swift UI on top and extremely boring features you can get with plugins on Firefox without risking getting your entire life compromised (til Mozilla decides that’s profitable, I suppose)

Arc is designed to be an "operating system for the web", and integrates standard browsing with Arc's own applications through the use of a sidebar. The browser is designed to be customisable and allows users to cosmetically change how they see specific websites.

oh fuck off. so what makes something an operating system is:

  • the whole UI got condensed down into an awkward-looking sidebar that takes up more space instead of a top bar
  • you can re-style websites (which is the feature that enabled this hack, and which must be one of the most common browser plugins)
  • you can change the browser’s UI color
  • it can run “its own applications”? which sounds like a real security treat if they’re running in the UI context of the browser. though to be honest I don’t see why these wouldn’t just be ordinary web apps, in which case it’s just a PWA feature
[-] gerikson@awful.systems 13 points 2 months ago
[-] dgerard@awful.systems 11 points 2 months ago* (last edited 2 months ago)

so according to @liveuamap, the backstory here is that this is to get his name out of news about the WildBerries shooting in Moscow - where a battle for corporate control came down to gunshots - because he was backing one of the sides

[-] antifuchs@awful.systems 12 points 2 months ago

Let’s bring the haunted nuclear reactor back online so copilot can hallucinate a little more https://www.washingtonpost.com/business/2024/09/20/microsoft-three-mile-island-nuclear-constellation/

[-] mii@awful.systems 16 points 2 months ago

[…] the tech giant would buy 100 percent of its power for 20 years.

I want them to fucking choke on this deal when the bubble bursts.

[-] antifuchs@awful.systems 11 points 2 months ago

I live like 15mi from there, I would prefer the containment bubble to stay intact. But the tech bubble is welcome to go blow up any moment

[-] sailor_sega_saturn@awful.systems 15 points 2 months ago* (last edited 2 months ago)

How the heck have people become so... blasé about climate change?? It is wild to me. If we're restarting nuclear reactors, with everything that entails, it should be with the goal of shutting down gas or coal power. Not to do more unsustainable garbage on top of all the existing unsustainable garbage.

Feels like the world's just given up sometimes, even though it's not quite too late.

[-] gerikson@awful.systems 12 points 2 months ago

"to give you more AI slop we have to restart TMI" is going to do wonders for the public's opinion of Big Tech

[-] mountainriver@awful.systems 12 points 2 months ago

Jason Kint writes a thread on how Google spun a case it recently lost - and how publications printed their spin: https://xcancel.com/jason_kint/status/1836781623137681746

If you already are very cynical about tech journalism (or the state of journalism in general), it might be nothing new except confirmation from the internal documents of Google. But always nice to see how the sausages are made.

[-] sailor_sega_saturn@awful.systems 12 points 2 months ago* (last edited 2 months ago)

The robots clearly want us dead -- "Delivery Robot Knocked Over Pedestrian, Company Offered ‘Promo Codes’ to Apologize" (404 media) (archive)

And here rationalists warned that AI misalignment would be hidden from us until the "diamondoid bacteria".

[-] BigMuffin69@awful.systems 12 points 2 months ago

I literally just saw a xitter post about how the exploding pagers in Lebanon is actually a microcosm of how a 'smarter' entity (the yahood) can attack a 'dumber' entity, much like how AGI will unleash the diamond bacterium to simultaneously kill all of humanity.

Which, again: both entities are humans; they have the same intelligence, you twats. It's the same argument people make all the time w.r.t. the Spanish vs. the Aztecs, where gunpowder somehow made Cortés and company gigabrains compared to the lowly indigenous people (totally ignoring the contributions of the real superintelligent entity: the smallpox virus).

[-] sailor_sega_saturn@awful.systems 13 points 2 months ago

OK, new rule: you're only allowed to call someone dumb for not finding the explosives in their pagers if, prior to hearing the news, you had regularly checked all the electronics you buy, with no specialized tools, for bombs hidden inside the battery compartment.

[-] sinedpick@awful.systems 12 points 2 months ago* (last edited 2 months ago)

I signed up for the Urbit newsletter many moons ago when I was a little internet child. Now it's a pretty decent source of sneers. This month's issue contains "The First Wartime Address with Curtis Yarvin". In classic Moldbug fashion, it's Two Hours and Forty Fucking Five minutes long. I'm not going to watch the whole thing, but I'll try to mine the transcript for sneers.

26:23 --

Simplicity in them you know it runs on a virtual machine who specification Nock [which] fits on a T-shirt and uh you know the goal of the system is to basically take this kind of fundamental mathematical simplicity of Nock and maintain that simplicity all the way to user space so we create something that's simple and easy to use that's not a small amount of of work

Holy fucking shit, does this guy really think building your entire software stack on brainfuck makes even a little bit of sense at all?

30:17 -- a diatribe about how social media can only get worse and how Facebook was better than myspace because its original users were at the top of the social hierarchy. Obviously, this bodes well for urbit because all of you spending 3 hours of your valuable time listening to this wartime address? You're the cream of the crop.

~2:00:00 -- here he addresses concerns about his political leanings, caricaturing the concern as "oh Yarvin wants to make this a monarchy" and responding by saying "nuh uh, urbit is decentralized." Absent from all this is any meaningful analysis of how decentralized systems (such as the internet itself) eventually tend toward centralized systems under certain incentive structures. Completely devoid of substance.

[-] sailor_sega_saturn@awful.systems 12 points 2 months ago

I've been slightly unhappy at my job lately as it's been getting less cool and more bureaucratic and stressful over time; so I've been idly browsing job postings. But so many of them are about AI it's kinda discouraging.

Take Microsoft for example, a big company that surely does lots of interesting stuff. They currently have 17 job postings for experienced programmers in California. 12 of them mention AI in the description. That's 70%. And the only cool position asks for a bazillion years of kernel experience (almost tempted to go for that anyway though).

Ugh, guess it's maybe not the best time to switch jobs. ~~Really I should just go self-employed, what could possibly go wrong?~~

[-] gerikson@awful.systems 11 points 3 months ago
[-] FredFig@awful.systems 18 points 3 months ago* (last edited 3 months ago)

I admit, in my haste, I read that link as Marc Andreessen openly announcing they're investing in the Chinese Communist Party, which is slightly funnier than the reality of yet another crypto game.
