BlueMonday1984
Another day, another case of "personal responsibility" used to shift blame for systemic issues, and scapegoat the masses for problems bad actors actively imposed on them.
It's not like we've heard that exact same song and dance a million times before; I'm sure the public hasn't gotten sick and tired of it by this point.
Probable hot take: this shit's probably also hampering people's efforts to overcome self-serving bias - taking responsibility for your own faults is hard enough in a vacuum, and it's likely even harder when bad actors act with impunity by shifting the blame to you.
Given the trajectory of the world, yeah, let's go with that
I like the DNF / vaporware analogy, but did we ever have a GPT Doom or Duke3d killer app in the first place? Did I miss it?
In a literal sense, Google did attempt to make GPT Doom, and failed (i.e. a large language model can't run Doom).
In a metaphorical sense, the AI equivalent to Doom was probably AI Dungeon, a roleplay-focused chatbot viewed as quite impressive when it released in 2020.
Ed Zitron's given his thoughts on GPT-5's dumpster fire launch:
Personally, I can see his point - the Duke Nukem Forever levels of hype around GPT-5 set the promptfondlers up for Duke Nukem Forever levels of disappointment, and the "deaths" of their AI waifus/therapists have killed whatever dopamine delivery mechanisms they'd set up for themselves.
Anyways, personal sidenote/prediction: I suspect the Internet Archive’s gonna have a much harder time archiving blogs/websites going forward.
Me, two months ago
Looks like I was on the money - Reddit's begun limiting what the Internet Archive can access, claiming AI corps have been scraping archived posts to get around Reddit's pre-existing blocks on scrapers. Part of me suspects more sites are gonna follow suit pretty soon - Reddit's given them a pretty solid excuse to use.
You're dead right on that.
Part of me suspects STEM in general (primarily tech; the other disciplines look well-protected from the fallout) will have to deal with cleaning off the stench of Eau de Fash after the dust settles, with tech in particular viewed as unequipped to resist fascism at best and out-and-proud fascist at worst.
Iris van Rooij found AI slop in the wild (identifying it as such by how it mangled a word's definition) and went on to find multiple other cases. She's written a blog post about this, titled "AI slop and the destruction of knowledge".
I wrote yesterday about red-team cybersecurity and how the attack testing teams don’t see a lot of use for AI in their jobs. But maybe the security guys should be getting into AI. Because all these agents are a hilariously vulnerable attack surface that will reap rich rewards for a long while to come.
Hey, look on the bright side, David - the user is no longer the weakest part of a cybersecurity system, so they won't face as many social engineering attempts.
Seriously, though, I fully expect someone's gonna pull off a major breach through a chatbot sooner or later. We're probably overdue for an ILOVEYOU-level disaster.
It'll probably earn a lot of users if and when GitHub goes down the shitter. They've publicly stood with marginalised users before, so they're already in my good books.
New piece from Brian Merchant, about the growing power the AI bubble's granted Microsoft, Google, and Amazon: The AI boom is fueling a land grab for Big Cloud