BlueMonday1984

joined 2 years ago
[–] BlueMonday1984@awful.systems 6 points 8 hours ago

(I don't know why, but part of me's saying the quantum bubble isn't gonna last long. It's probably the fact that the AI bubble is still going - when that bursts, the sheer economic devastation it'll cause will likely burst the quantum bubble as well.)

In this paper, Gutmann is telling cryptographers not to worry too much about quantum computing. Though cryptographers have been on the case for a couple of decades now, just in case there's a breakthrough.

Cryptographers do tend to be paranoid about threats to encryption. Given how every single government's hellbent on breaking it or bypassing it, I can't blame them on that front.

The AI bubble launched with a super-impressive demo called ChatGPT, and quantum computing doesn’t have anything like that. There are no products. But the physics experiments are very pretty.

Moreover, quantum can't really break into the consumer market like AI has. AI had slopgens of all stripes and supposedly sky-high adoption (through [forcing it on everyone](https://www.bloodinthemachine.com/p/how-big-tech-is-force-feeding-us), [via awful.systems](https://awful.systems/post/5348844)); quantum's gonna have none of that.

(I don't see the general public falling for the quantum hype, either, given how badly they got burned by the AI hype.)

[–] BlueMonday1984@awful.systems 6 points 11 hours ago* (last edited 10 hours ago)

New edition of AI Killed My Job, giving a deep dive into how genAI has hurt artists. I'd like to bring particular attention to Melissa's story, which is roughly halfway through, specifically the ending:

> There's a part of me that will never forgive the tech industry for what they've taken from me and what they've chosen to do with it. In the early days as the dawning horror set in, I cried about this almost every day. I wondered if I should quit making art. I contemplated suicide. I did nothing to these people, but every day I have to see them gleefully cheer online about the anticipated death of my chosen profession. I had no idea we artists were so hated—I still don't know why. What did my silly little cat drawings do to earn so much contempt? That part is probably one of the hardest consequences of AI to come to terms with. It didn't just try to take my job (or succeed in making my job worse) it exposed a whole lot of people who hate me and everything I am for reasons I can't fathom. They want to exploit me and see me eradicated at the same time.

[–] BlueMonday1984@awful.systems 3 points 12 hours ago

Given how gen-AI has utterly consumed the tech industry over these past two years, I see very little reason to give the benefit of the doubt here.

Focusing on NVidia, they've made billions selling shovels in the AI gold rush (inflating their stock prices in the process), and have put billions more into money-burning AI startups to keep the bubble going. They have a vested interest in forcing AI onto everyone and everything they can.

[–] BlueMonday1984@awful.systems 3 points 14 hours ago (2 children)

Nvidia and California College of the Arts Enter Into a Partnership

Oh, I'm sure the artists enrolling at the CCA are gonna be so happy to hear they've been betrayed.

> The collaboration with CCA is described in today’s announcement as aiming to “prepare a new generation of creatives to thrive at the intersection of art, design and emerging technologies.”

Hot take: There is no "intersection" between these three, because the "emerging technologies" in question are a techno-fascist ideology designed to destroy art for profit

And Copilot hallucinated all the way through the study.

HORRIFYING: The Automatic Lying Machine Lied All The Way Through

> The evaluation did not find evidence that time savings have led to improved productivity, and control group participants had not observed productivity improvements from colleagues taking part in the M365 Copilot pilot.

SHOCKING: The Mythical Infinite Productivity Machine Is A Fucking Myth

At least 72% of the test subjects enjoyed themselves.

Gambling and racism are two of the UK's specialties, and AI is very good at both of those. On this statistic, I am not shocked.

> Is there already a word for “an industry which has removed itself from reality and will collapse when the public’s suspension of disbelief fades away”?

If there is, I haven't heard of it. To try and preemptively coin one, "artificial industry" ("AI" for short) would be pretty fitting - far as I can tell, no industry has unmoored itself from reality like this until the tech industry pulled it off via the AI bubble.

> Calling this just “a bubble” doesn’t cut it anymore, they’re just peddling sci-fi ideas now. (Metaverse was a bubble, and it was stupid as hell, but at least those headsets and the legless avatars existed.)

I genuinely forgot the metaverse existed until I read this.

New post from tante: The “Data” Narrative eats itself, using the latest Pivot to AI as a jumping off point to talk about synthetic data.

> Naturally, the best and most obvious fix — don’t hoard all that shit in the first place — wasn’t suggested.

At this point, I'm gonna chalk the refusal to stop hoarding up to ideology more than anything else. The tech industry clearly sees data not as information to be taken sparingly, used carefully, and deleted when necessary, but as Objective Reality Units^tm^ which are theirs to steal and theirs alone.

[–] BlueMonday1984@awful.systems 13 points 2 days ago (7 children)

Starting things off with a newsletter by Jared White that caught my attention: Why “Normies” Hate Programmers and the End of the Playful Hacker Trope, which directly discusses how the public perception of programmers has changed for the worse, and how best to rehabilitate it.

Adding my own two cents, the rise of gen-AI has definitely played a role here - I'm gonna quote Baldur Bjarnason directly, since he said it better than I could:

 

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

If AI slop is an insult to life itself, then this shit is an insult to knowledge. Any paper that actually uses "synthetic data" should be immediately retracted (and ideally destroyed altogether), but it'll probably take years before the poison is purged from the scientific record.

Artificial intelligence is the destruction of knowledge for profit. It has no place in any scientific endeavor. (How you managed to maintain a calm, detached tone when talking about this shit, I will never know.)

[–] BlueMonday1984@awful.systems 5 points 3 days ago (1 children)

Saw an AI-extruded "art" "timelapse" in the wild recently - the "timelapse" in question isn't gonna fool anyone who actually cares about art, but it's Good Enough^tm^ to pass muster on someone mindlessly scrolling, and its creation serves only to attack artists' ability to prove their work was human-made.

This isn't the first time AI bros have pulled this shit (Exhibit A, Exhibit B), by the way.

> Burke and Goodnough are working to rectify the report.

That sounds like removing the fake stuff but not the conclusions based on it. Those were determined well ahead of time.

In a better world, those conclusions would've been immediately thrown out as lies and Burke and Goodnough would've been immediately fired. We do not live in a better timeline, but a man can dream.

 


 

New blog entry from Baldur, comparing the Icelandic banking bubble and its fallout to the current AI bubble and its ongoing effects.

 

(This is an expanded version of a comment I made, which I've linked above.)

Well, seems the tech industry’s prepared to pivot to quantum if and when AI finally dies and goes away forever. When the hucksters get around to inflating the quantum bubble, I expect they’re gonna find themselves facing some degree of public resistance - probably not to the extent of what AI received, but still enough to give them some trouble.

The Encryption Issue

One of quantum’s big selling points is its purported ability to break the encryption algorithms in use today - for a couple of examples, Shor’s algorithm can reportedly double-tap public-key cryptography schemes such as RSA by factoring their keys efficiently, and Grover’s algorithm promises a quadratic speedup for brute-force attacks on symmetric-key cryptography.
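To put a number on what Grover's speedup actually means, here's a back-of-the-envelope sketch (my own illustration, not from Gutmann's paper): a quadratic speedup over brute force effectively halves a symmetric key's length in bits.

```python
# Grover's algorithm reduces a brute-force search over N keys to
# roughly sqrt(N) quantum queries - i.e. searching 2**k keys takes
# ~2**(k/2) queries, halving the effective key length in bits.

def grover_effective_bits(key_bits: int) -> int:
    """Effective security (in bits) of a symmetric key against an ideal Grover attack."""
    return key_bits // 2

for bits in (128, 256):
    print(f"AES-{bits}: ~{grover_effective_bits(bits)} bits of security vs. Grover")
```

This is also why the standard post-quantum advice for symmetric crypto is simply "double your key sizes" - AES-256 retains roughly 128 bits of security even against an idealised Grover attack, which is why the real panic centres on Shor and public-key schemes instead.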

Given this, I fully expect quantum's supposed encryption-breaking abilities to stoke outcry and resistance from privacy rights groups. Even as a hypothetical, the possibility of such power falling into government hands all but guarantees Nineteen Eighty-Four levels of mass surveillance and invasion of privacy.

Additionally, I expect post-quantum encryption will earn a lot of attention during the bubble as well, to pre-emptively undermine such attempts at mass surveillance.

Environmental Concerns

Much like with AI, info on how much power quantum computing requires is pretty scarce (though that’s because practical quantum computers more-or-less don’t exist, not because anyone is actively hiding/juicing the numbers the way the AI corps do).

The only concrete number I could find came from IEEE Spectrum, which puts the power consumption of the D-Wave 2X (from 2015) at “slightly less than 25 kilowatts”, with practically all the power going to the refrigeration unit keeping it within a hair’s breadth of absolute zero, and the processor itself using “a tiny fraction of a microwatt”.
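For a sense of scale on those figures, a quick bit of arithmetic (mine, not IEEE Spectrum's; the "tiny fraction of a microwatt" is generously rounded up to a full microwatt, so treat the result as a rough order-of-magnitude floor):

```python
# Ratio of refrigeration power to processor power for the D-Wave 2X,
# using the IEEE Spectrum figures quoted above. The processor draw is
# an assumed upper bound of 1 microwatt ("a tiny fraction of a microwatt").

fridge_watts = 25_000      # "slightly less than 25 kilowatts"
processor_watts = 1e-6     # generous upper bound on processor draw

overhead = fridge_watts / processor_watts
print(f"Cooling overhead: roughly {overhead:.1e}x the processor's draw")
# ~2.5e10 - tens of billions of times more power spent keeping the
# chip near absolute zero than on the computation itself.
```

In other words, essentially none of that 25 kW is doing computation; the machine is, energetically speaking, a very expensive fridge.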

Given the minimal amount of info, and the AI bubble still being fresh in the public’s mind, I expect quantum systems will face resistance from environmental groups. Between the obscene power/water consumption of AI datacentres, the shitload of pollution said datacentres cause in places like Memphis, and the industry’s attempts to increase said consumption whenever possible, any notion that tech cares about the environment is dead in the (polluted) water, and attempts to sell the tech as energy efficient/environmentally friendly will likely fall on deaf ears.

 


 

It’s been a couple of weeks since my last set of predictions on the AI winter. I’ve found myself making a couple more.

Mental Health Crises

With four known suicides (Adam Raine, Sewell Setzer, Sophie Rottenberg and an unnamed Belgian man), a recent murder-suicide, and involuntary commitments caused by AI psychosis, there’s solid evidence to show that using AI is a fast track to psychological ruin.

On top of that, AI usage is deeply addictive, combining a psychic’s con with a gambling addiction to produce what amounts to digital cocaine, leaving its users hopelessly addicted to it, if not utterly dependent on it to function (such cases often being referred to as “sloppers”).

If and when the chatbots they rely on are shut down, I expect a major outbreak of mental health crises among sloppers and true believers, as they find themselves unable to handle day-to-day life without a personal sycophant/”assistant”/”””therapist””” on hand at all times. For psychiatrists/therapists, I expect they will find a steady supply of new clients during the winter, as the death of the chatbot sends addicted promptfondlers spiralling.

Skills Gaps Galore

One of the more common claims from promptfondlers and boosters when confronted is “you won’t be replaced by AI, but by a human using AI”.

With how AI prevents juniors from developing their skills, makes seniors worse at their jobs, damages productivity whilst creating a mirage of it, and damages its users’ critical thinking and mental acuity, all signs point to the exact opposite being the case. Those who embrace and use AI will be left behind, their skills rotting away as their AI-rejecting peers remain as skilled as before the bubble, if not more so thanks to spending time and energy on actually useful skills rather than shit like “prompt engineering” or “vibe coding”.

Once the winter sets in and the chatbots disappear, the gulf between these two groups is going to become much wider, as promptfondlers’ crutches are forcibly taken away from them and their “skills” in using the de-skilling machine are rendered useless. As a consequence, I expect promptfondlers will be fired en masse and struggle to find work during the winter, as their inability to work without a money-burning chatbot turns them into a drag on a company’s bottom line.

 

Recently, I read a short article from Iris Meredith about rethinking how we teach programming. It's a pretty solid piece of work all around, and it's got me thinking about how to build further on her ideas.

This post contains a quick overview of her newsletter to get you up to speed, but I recommend reading it for yourself.

The Problem

As is rather obvious to most of us, the software industry is in a dire spot - Meredith summed it up better than I can:

> Software engineers tend to be detached, demotivated and unwilling to care much about the work they're doing beyond their paycheck. Code quality is poor on the whole, made worse by the current spate of vibe coding and whatever other febrile ideas come out of Sam Altman's brain. Much of the software that we write is either useless or actively hurts people. And the talented, creative people that we most need in the industry are pushed to the margins of it.

As for the cause, Iris points to the "teach the mystic incantations" style used in many programming courses, which neglects teaching students how to see through an engineer’s eyes (so to speak), and neglects the ethics of care necessary to write good code (roughly 90% of what goes into software engineering). As Iris notes:

> This tends to lead, as you might expect, to a lot of new engineers being confused, demotivated and struggling to write good code or work effectively in a software environment. [...] It also means, in the end, that a lot of people who'd be brilliant software engineers just bounce off the field completely, and that a lot of people who find no joy in anything and just want a big salary wind up in the field, never realising that they have no liking or aptitude for it.

Meredith’s Idea

Meredith’s solution, in brief, is threefold.

First, she recommends starting people off with HTML as their first language, giving students the tools they need to make something they want and care about (a personal website, in this case), and providing a solid bedrock for learning fundamental programming skills.

Second, she recommends using “static site generators with templating engines” as an intermediate step between HTML/CSS and full-blown programming, to provide students an intuitive method of understanding basic concepts such as loops, conditionals, data structures and variables.

(As another awful member points out, they provide an easy introduction to performance considerations/profiling by being blazing fast compared to all-too common JS monoliths online, and provide a good starting point for introducing modularity as well.)

Third, and finally, she recommends having students publish their work online right from the start, to give them reason to care about their work as early as possible and give them the earliest possible opportunity to learn about the software development life cycle.
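To illustrate the bridge Meredith's second step provides, here's the core job a templating engine does, sketched in plain Python - all names and data here are illustrative, not from any real static site generator:

```python
# A hand-rolled sketch of what an SSG's templating engine does under
# the hood: loop over a data structure, apply a conditional, and fill
# variables into markup - exactly the concepts Meredith wants students
# to absorb intuitively before "full-blown" programming.

posts = [
    {"title": "My first post", "draft": False},
    {"title": "Half-finished rant", "draft": True},
    {"title": "Why I like cats", "draft": False},
]

def render_index(posts: list[dict]) -> str:
    """Render a list of posts into an HTML index, skipping drafts."""
    items = [
        f"  <li>{post['title']}</li>"
        for post in posts
        if not post["draft"]    # conditional: drafts stay unpublished
    ]
    return "<ul>\n" + "\n".join(items) + "\n</ul>"

print(render_index(posts))
```

A student who has written a template loop like this has already internalised iteration, conditionals, variables and data structures - the syntax of a "real" language then becomes a notation change rather than a conceptual leap.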

A Complementary Idea


Meredith’s suggested approach to software education is pretty solid on all fronts - it gets students invested in their programming work, and gives them the tools needed to make and maintain high-quality code.

If I were to expand on this a bit, I think the obvious addition would be an arts education to complement Iris’ proposed webdev-based approach.

As an explicit means of self-expression, the arts would wonderfully complement the expressive elements of software Meredith wishes to highlight. Focusing on webdev, developing students’ art skills would expand their ability to customise their websites to their liking, letting them make something truly unique to themselves.

The skills that students learn through the arts would complement what they directly learn in programming, too: the critical eye that art critique grants them will come in handy for code review, the creative muscles they build through art will enhance their problem-solving abilities, and so on.

Beyond that, I expect the complementary arts will do a good job attracting creatives to the field, whilst pushing away “people who find no joy in anything and just want a big salary”, which Meredith notes are common in the field. Historically, “learn to code” types have viewed the arts as a “useless” degree, so they’ll near-certainly turn their noses up at having to learn it alongside something more “useful”, leaving the door open for more creatives to join up.

A More Outlandish Idea

For a more outlandish idea, the long-defunct, yet well-beloved multimedia platform Adobe Flash could prove surprisingly useful for a programming education, especially with the complementary arts education I suggested before.

Being effectively an IDE and an animation program combined into one, Flash offers a means of developing and testing a student’s skills in art and programming simultaneously, and provides an easy showcase of how the two can complement each other.

Deploying Flash to a personal website wouldn’t be hard for students either, as the Ruffle emulator allows Flash content to play without having to install Flash Player. (Rather helpful, given most platforms don’t accept Flash content these days :P)

 

Another excellent piece from Iris Meredith - strongly recommend reading if you want an idea of how to un-fuck software as a field.

 


 

Well, it seems the AI bubble’s nearing its end - the Financial Times has reported a recent dive in tech stocks, the mass media has fully soured on AI, and there’s murmurs that the hucksters are pivoting to quantum.

By my guess, this quantum bubble is going to fail to get off the ground - as I see it, the AI bubble has heavily crippled the tech industry’s ability to create or sustain new bubbles, for two main reasons.

No Social License

For the 2000s and much of the 2010s, tech enjoyed a robust social license to operate - even the companies that weren’t loved per se the way Apple was were still pretty widely accepted throughout society, and resistance to them was pretty much nonexistent.

Whilst it was starting to fall apart with the “techlash” of the 2020s, the AI bubble has taken what social license tech had left and put it through the shredder.

Environmental catastrophe, art theft and plagiarism, destruction of livelihoods and corporate abuse, misinformation and enabling fascism - all of this (and so much more) has eviscerated acceptance of the tech industry as it currently stands, inspiring widespread resistance and revulsion against AI and the tech industry at large.

For the quantum bubble, I expect it will face similar resistance/mockery right out of the gate, with the wider public refusing to entertain whatever spurious claims the hucksters make, and fighting any attempts by the hucksters to force quantum into their lives.

(For a more specific prediction, quantum’s alleged encryption-breaking abilities will likely inspire backlash, being taken as evidence the hucksters are fighting against Internet privacy.)

No Hypergrowth Markets

As Baldur Bjarnason has noted about tech industry valuations:

> Over the past few decades, tech companies have been priced based on their unprecedented massive year-on-year growth that has kept relatively steady through crises and bubble pops. As the thinking goes, if you have two companies—one tech, one not—with the same earnings, the tech company should have a higher value because its earnings are likely to grow faster than the not-tech company. In a regular year, the growth has been much faster.

For a while, this has held - even as the hypergrowth markets dried up and tech rapidly enshittified near the end of the ‘10s, the gravy train has managed to keep rolling for tech.

That gravy train is set to slam right into a brick wall, however - between the obscenely high costs, both upfront and ongoing, of building and running LLMs, and the virtually nonexistent revenues those LLMs have provided (except for NVidia, who has made a killing in the shovel-selling business), the AI bubble has burned billions upon billions of dollars on a product which is practically incapable of making a profit, and heavily embrittled the entire economy in the process.

Once the bubble finally bursts, it’ll gut the wider economy and much of the tech industry, savaging valuations across the board and killing off tech’s hypergrowth story in the process.

For the quantum bubble, this will significantly complicate attempts to raise investor/venture capital, as the finance industry comes to view tech not as an easy and endless source of growth, but as either a mature, stable industry which won’t provide the runaway returns they’re looking for, or as an absolute money pit of an industry, one trapped deep in a malaise era and capable only of wiping out whatever money you put into it.

(As a quick addendum, it's my 25th birthday tomorrow - I finished this over the course of four hours and planned to release it tomorrow, but decided to post it tonight.)
