Quick update: The open letter on AI training (https://aitrainingstatement.org/) has reached 15k signatures:
It was a pretty good comment, one that pointed out one of the possible risks this AI bubble could unleash.
I've already touched on this topic, but it seems possible (if not likely) that copyright law will be tightened in response to the large-scale theft performed by OpenAI et al. to feed their LLMs, with both of us suspecting fair use will take a pounding. As you pointed out, the exploitation of fair use's research exception makes it especially vulnerable to repeal.
On a different note, I suspect FOSS licenses (Creative Commons, GPL, etc.) will suffer a major decline in popularity thanks to the large-scale code theft this AI bubble brought. After two-ish years of the AI industry (if not tech in general) treating anything publicly available as theirs to steal (whether implicitly or explicitly), I'd expect people are gonna be a lot stingier about providing source code or contributing to FOSS.
Conservatives in particular have, for culture war reasons, recently recommended Telegram—an “encrypted messaging” app that has many parts that are not encrypted and which does not have a clear governance structure—over Signal, an app that is open source and by all accounts uses one of the strongest encryption protocols ever created, on every chat that happens on the platform.
Refusing to keep your shit secret to own the libs
Ah, hell yeah, the much-anticipated finale.
Gonna give particular praise to the opening, because this really caught my eye:
Tech culture often denigrates humans through its assumptions that human skills, knowledge and functions can be improved through their replacement by technological replacements, and through transhumanist narratives that rely on a framing of human consciousness as fundamentally computational.
I've touched on the framing of human consciousness part myself - seems we may be on the same wavelength.
As for the whole "replacement by technological replacements" part...well, we've all seen the AI art slop-nami, it's crystal fucking clear what you're referring to.
Etsy: an artistic one-stop chop shop where slop pops up like catch crops - and that quick shot's no hatchet-job, so keep it from pops 'fore it leaves him in a strop:
(Full disclosure: the opportunity for some quickfire rhymes may have played a role in birthing this sneer.)
https://nitter.poast.org/edzitron/status/1819591404873568715
Zitron's sample size may be limited to his Twitter following, but it's a bad sign for AI if bashing it gets you praise from both sides of the political aisle:
Bonus Tweet from Chris Alvino:
I was just thinking about this today. I had a thread go viral where I bashed AI, resulted in thousands of new followers, and I've had to block a bunch bc they turned out to be on the far right. Honestly amazing how bipartisan AI hate is. Never seen anything like it before 😂
At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers. (New York Times, 2005, at the end of the last AI winter.)
I expect history to repeat itself quite soon - where previously using the term "artificial intelligence" got you looked at as a wild-eyed dreamer, now it's likely to get you looked at as an asshole techbro, and your research dismissed as a willing attempt to hurt others.
ChatGPT's new search feature hasn't even launched and already it's shitting the bed
The sneers are writing themselves
Not a sneer, but an observation on the tech industry from Baldur Bjarnason, plus some of my own thoughts:
I don’t think I’ve ever experienced before this big of a sentiment gap between tech – web tech especially – and the public sentiment I hear from the people I know and the media I experience.
Most of the time I hear “AI” mentioned on Icelandic mainstream media or from people I know outside of tech, it’s being used to describe something as a specific kind of bad. “It’s very AI-like” (“mjög gervigreindarlegt” in Icelandic) has become the talk radio shorthand for uninventive, clichéd, and formulaic.
Baldur has pointed that part out before, and noted how it's kneecapping the consumer side of the entire bubble, but I suspect the phrase "AI" will retain that meaning well past the bubble's bursting. "AI slop", or just "slop", will likely also stick around, for those who wish to differentiate gen-AI garbage from more genuine uses of machine learning.
To many, “AI” seems to have become a tech asshole signifier: the “tech asshole” is a person who works in tech, only cares about bullshit tech trends, and doesn’t care about the larger consequences of their work or their industry. Or, even worse, aspires to become a person who gets rich from working in a harmful industry.
For example, my sister helps manage a book store as a day job. They hire a lot of teenagers as summer employees and at least those teens use “he’s a big fan of AI” as a red flag. (Obviously a book store is a biased sample. The ones that seek out a book store summer job are generally going to be good kids.)
I don’t think I’ve experienced a sentiment disconnect this massive in tech before, even during the dot-com bubble.
Part of me suspects the AI bubble's spread that "tech asshole" stench to the rest of the industry, with some help from the widely-mocked NFT craze and Elon Musk becoming a punching bag par excellence through his very public dismantling of Twitter.
(Fuck, now I'm tempted to try and cook up something for MoreWrite discussing how I expect the bubble to play out...)
Just say he's yapping, because that's all this dipshit's doing
"Not Like Us" was less scathing than this. Fucking hell.
Annoyed Redditors tanking Google Search results illustrates perils of AI scrapers
Anyways, personal sidenote:
Beyond dealing another blow to AI's reliability, this will probably also make the public warier of user-generated material - it's hard to trust something if you know the masses could be actively manipulating you.