The AI lawsuit's going to discovery - I expect things are about to heat up massively for the AI industry:
Crypto NG+ AI% Speedrun (no skips)
Thinking about it, the public and spectacular failure of NFTs probably helped AI with speedrunning its rise and fall (mainly its fall), for two reasons.
First, it crippled technological determinism (which Unserious Academic interrogated in depth BTW) as a concept. Before that, it was generally assumed that whatever new crap the tech industry came up with would inevitably become a part of daily life, for better or for worse.
The NFT craze, by publicly and spectacularly failing despite a heavy push from Silicon Valley, showed the public that it was possible to beat Silicon Valley and prevent the future it wants from coming to pass, that resistance against them is anything but futile.
Second, the NFT craze's failure publicly humiliated the tech industry, as NFTs became a pop-culture punchline and supporting NFTs became a public mark of shame for anyone involved. If crippling technological determinism made it cool to resist Silicon Valley, then the public humiliation of NFTs helped make it uncool to support SV, a trend which I feel has helped amplify enmity against AI.
Raytheon can at least claim they're helping kill terrorists or some shit like that, Artisan's just going out and saying "We ruin good people's lives for money, and we can help you do that too"
Annoyed Redditors tanking Google Search results illustrates perils of AI scrapers
A trend on Reddit that sees Londoners giving false restaurant recommendations in order to keep their favorites clear of tourists and social media influencers highlights the inherent flaws in Google Search's reliance on Reddit, and in Google's AI Overviews.
Anyways, personal sidenote:
Beyond dealing another blow to AI's reliability, this will probably also make the public more wary of user-generated material - it's hard to trust something if you know the masses could be actively manipulating you.
Quick update: The open letter on AI training (https://aitrainingstatement.org/) has reached 15k signatures:
Okay, quick prediction time:
-
Even if character.ai manages to win the lawsuit, this is probably gonna be the company's death knell. Even if the fallout of this incident doesn't lead to heavy regulation coming down on them, the death of one of their users is gonna cause horrific damage to their (already pretty poor AFAIK) reputation.
-
On a larger scale, chatbot apps like Replika and character.ai (if not chatbots in general) are probably gonna go into a serious decline thanks to this - the idea that "our chatbot can potentially kill you" is now firmly planted in the public's mind, and I suspect their userbases are gonna blow their lids over how heavily the major apps are gonna lock their shit down.
evidence of wider continued rising of the tide against saltman’s bullshit grows
Precisely when that rising tide will drown Altman I'm not sure, but I feel safe in saying it'll probably drown the rest of the AI industry (and potentially "AI" as a concept) as well - Altman is pretty much the face of this AI bubble, after all.
The rising tide was likely also helped along by OpenAI going fully for-profit, which shattered the humanitarian guise it spent the last decade or so building, and, to quote myself, "given the true believers reason to believe [Altman would] commit omnicide-via-spicy-autocomplete for a quick buck".
New piece from The Atlantic: A New Tool to Warp Reality (archive)
Turns out the bullshit firehoses undeservedly called chatbots have some capacity to generate a reality distortion field of sorts around them.
You know those polls that say fewer than 20% of Americans trust AI scientists?
No, but I'd say it's a good sign we're getting close to an AI winter
This article is excellent, and raises a point that's been lingering in the back of my head - what happens if the promises don't materialize? What happens when the market gets tired of stories about AI chatbots telling landlords to break the law, or suburban moms complaining about their face being plastered onto a topless model, or any of the other myriad stories of AI making glaring mistakes that would get any human immediately fired?
If we're lucky, we might end up with a glut of cheap GPUs/server space once the bubble pops.
Quick update - Brian Merchant's list of "luddite horror" films ended up getting picked up by Fast Company:
To repeat a previous point of mine, it seems pretty safe to assume "luddite horror" is gonna become a bit of a trend. To make a specific (if unrelated) prediction, I imagine we're gonna see AI systems and/or their supporters become pretty popular villains in the future - the AI bubble's produced plenty of resentment towards AI specifically and tech more generally, and the public's gonna find plenty of catharsis in watching them go down.