BlueMonday1984

joined 1 year ago
[–] BlueMonday1984@awful.systems 5 points 3 weeks ago (1 children)

This isn’t one of the works of art I expected to so explicitly dunk on these unscrupulous scams, but I welcome it all the more for that.

Crypto is nigh-universally hated outside of the techbrosphere (doubly so for NFTs) - they are synonymous with scams and cringe in the public eye. I'd be more shocked if you found a work which presents crypto without immediately dunking on it.

[–] BlueMonday1984@awful.systems 7 points 3 weeks ago

Not to mention he also didn't write a third-rate rapey-as-shit "dark fantasy" novel, throw nonstop tantrums about people criticising/making fun of him, or jump on the anti-woke content mill grift train.

Just to make this perfectly clear, yes, I am saying that >shadman has more dignity than Shadiversity.

[–] BlueMonday1984@awful.systems 9 points 3 weeks ago (1 children)

Ran across an animation mocking AI art on Newgrounds recently - found it a pretty good watch.

[–] BlueMonday1984@awful.systems 16 points 3 weeks ago

The government is backtracking on this cut. But when they said “AI,” they meant magical chatbots with costs in the fabulous future that would make them look cool. They didn’t mean medical systems that work, but cost money right now. This was always about the press releases.

In the grander scheme of things, I expect this shitshow will further reinforce notions of "AI" being utterly useless as a tech - auto-contouring was a real-life example of AI being useful, and it got thrown in the bin because it wasn't a magical chatbot that made radiologists obsolete.

[–] BlueMonday1984@awful.systems 7 points 3 weeks ago* (last edited 3 weeks ago)

Addendum: If you wanna support the artist, she has a Tumblr and a personal portfolio.

[–] BlueMonday1984@awful.systems 9 points 3 weeks ago (4 children)

Update on The Shadiversity Drama^tm^: he's still malding about being an utterly soulless waste of oxygen:

Now, some of you may be wondering "Monday, how is that AI-generated? That piece actually has a soul!" Well, as it turns out, it wasn't AI - Shad quite literally stole someone's artwork and passed it off as AI.

[–] BlueMonday1984@awful.systems 10 points 3 weeks ago* (last edited 3 weeks ago)

In other news, the Guardian landed an exclusive scoop on cuts to "AI cancer tech funding in England". Baldur Bjarnason's given his commentary:

Turns out rebranding even the genuinely useful Machine Learning as “AI” doesn’t help them get funding. The only beneficiaries of the bubble seem to be volatile media synthesis engines

You want my opinion, future machine learning research is probably gonna struggle to get funding once the bubble bursts, both due to the "AI" stench rubbing off on the field, and due to gen-AI sucking up all of the funding that would've gone towards actually useful shit. (Arguably, its already struggling even before the bubble's burst.)

[–] BlueMonday1984@awful.systems 5 points 3 weeks ago (1 children)

Taking a shot in the dark, journalistic incidents like Bloomberg's failed tests with AI summaries and the BBC's complaints about Apple AI mangling headlines probably helped with accelerating that fall to earth - for any journalists reading about or reporting on such shitshows, it likely shook their faith in AI's supposed abilities in a way failures outside their field didn't.

[–] BlueMonday1984@awful.systems 8 points 3 weeks ago

And by "more fuckable", he means "refusing/unable to consent".

[–] BlueMonday1984@awful.systems 10 points 3 weeks ago (4 children)

In other news, Jazza's AI-generated cousin is back to continue pretending to be an actual artist. This time, its by actively denigrating the works of Studio Ghibli:

Unsurprisingly, he is getting raked over the coals by basically everyone. He's also having an utter meltdown in the replies.

[–] BlueMonday1984@awful.systems 10 points 3 weeks ago (4 children)

In case you missed it, a couple sneers came out against AI from mainstream news outlets recently - CNN's put out an article titled "Apple’s AI isn’t a letdown. AI is the letdown", whilst the New York Times recently proclaimed "The Tech Fantasy That Powers A.I. Is Running on Fumes".

You want my take on this development, I'm with Ed Zitron on this - this is a sign of an impending sea change. Looks like the bubble's finally nearing its end.

[–] BlueMonday1984@awful.systems 8 points 3 weeks ago* (last edited 3 weeks ago)

In other news, Elon Musk's personal chatbot has proudly proclaimed its available on Telegram, and its proclamation got picked up by The Verge:

Right now, the integration is limited to "Grok's available as an optional chatbot", but going by what I've seen on BlueSky, people are already taking this as their cue to jump ship to Signal.

 

(Gonna expand on a comment I whipped out yesterday - feel free to read it for more context)


At this point, its already well known AI bros are crawling up everyone's ass and scraping whatever shit they can find - robots.txt, honesty and basic decency be damned.

The good news is that services have started popping up to actively cockblock AI bros' digital smash-and-grabs - Cloudflare made waves when they began offering blocking services for their customers, but Spawning AI's recently put out a beta for an auto-blocking service of their own called Kudurru.

(Sidenote: Pretty clever of them to call it Kudurru.)

I do feel like active anti-scraping measures could go somewhat further, though - the obvious route in my eyes would be to try to actively feed complete garbage to scrapers instead - whether by sticking a bunch of garbage on webpages to mislead scrapers or by trying to prompt inject the shit out of the AIs themselves.

The main advantage I can see is subtlety - it'll be obvious to AI corps if their scrapers are given a 403 Forbidden and told to fuck off, but the chance of them noticing that their scrapers are getting fed complete bullshit isn't that high - especially considering AI bros aren't the brightest bulbs in the shed.

Arguably, AI art generators are already getting sabotaged this way to a strong extent - Glaze and Nightshade aside, ChatGPT et al's slop-nami has provided a lot of opportunities for AI-generated garbage (text, music, art, etcetera) to get scraped and poison AI datasets in the process.

How effective this will be against the "summarise this shit for me" chatbots which inspired this high-length shitpost I'm not 100% sure, but between one proven case of prompt injection and AI's dogshit security record, I expect effectiveness will be pretty high.

 

After reading through Baldur's latest piece on how tech and the public view gen-AI, I've had some loose thoughts about how this AI bubble's gonna play out.

I don't have any particular structure to this, this is just a bunch of things I'm getting off my chest:

  1. AI's Dogshit Reputation

Past AI springs had the good fortune to have had no obvious negative externalities to sour the public's reputation (mainly because they weren't public facing, going by David Gerard).

This bubble, by comparison, has been pretty much entirely public facing, giving us, among other things:

All of these have done a lot of damage to AI's public image, to the point where its absence is an explicit selling point - damage which I expect to last for at least a decade.

When the next AI winter comes in, I'm expecting it to be particularly long and harsh - I fully believe a lot of would-be AI researchers have decided to go off and do something else, rather than risk causing or aggravating shit like this. (Missed this incomplete sentence on first draft)

  1. The Copyright Shitshow

Speaking of copyright, basically every AI company has worked under the assumption that copyright basically doesn't exist and they can yoink whatever they want without issue.

With Gen-AI being Gen-AI, getting evidence of their theft isn't particularly hard - as they're straight-up incapable of creativity, they'll puke out replicas of its training data with the right prompt.

Said training data has included, on the audio side, songs held under copyright by major music studios, and, on the visual side, movies and cartoons currently owned by the fucking Mouse..

Unsurprisingly, they're getting sued to kingdom come. If I were in their shoes, I'd probably try to convince the big firms my company's worth more alive than dead and strike some deals with them, a la OpenAI with Newscorp.

Given they seemingly believe they did nothing wrong (or at least Suno and Udio do), I expect they'll try to fight the suits, get pummeled in court, and almost certainly go bankrupt.

There's also the AI-focused COPIED act which would explicitly ban these kinds of copyright-related shenanigans - between getting bipartisan support and support from a lot of major media companies, chances are good it'll pass.

  1. Tech's Tainted Image

I feel the tech industry as a whole is gonna see its image get further tainted by this, as well - the industry's image has already been falling apart for a while, but it feels like AI's sent that decline into high gear.

When the cultural zeitgeist is doing a 180 on the fucking Luddites and is openly clamoring for AI-free shit, whilst Apple produces the tech industry's equivalent to the "face ad", its not hard to see why I feel that way.

I don't really know how things are gonna play out because of this. Taking a shot in the dark, I suspect the "tech asshole" stench Baldur mentioned is gonna be spread to the rest of the industry thanks to the AI bubble, and its gonna turn a fair number of people away from working in the industry as a result.

 

I don’t think I’ve ever experienced before this big of a sentiment gap between tech – web tech especially – and the public sentiment I hear from the people I know and the media I experience.

Most of the time I hear “AI” mentioned on Icelandic mainstream media or from people I know outside of tech, it’s being used as to describe something as a specific kind of bad. “It’s very AI-like” (“mjög gervigreindarlegt” in Icelandic) has become the talk radio short hand for uninventive, clichéd, and formulaic.

babe wake up the butlerian jihad is coming

39
submitted 10 months ago* (last edited 10 months ago) by BlueMonday1984@awful.systems to c/techtakes@awful.systems
 

I stopped writing seriously about “AI” a few months ago because I felt that it was more important to promote the critical voices of those doing substantive research in the field.

But also because anybody who hadn’t become a sceptic about LLMs and diffusion models by the end of 2023 was just flat out wilfully ignoring the facts.

The public has for a while now switched to using “AI” as a negative – using the term “artificial” much as you do with “artificial flavouring” or “that smile’s artificial”.

But it seems that the sentiment might be shifting, even among those predisposed to believe in “AI”, at least in part.

Between this, and the rise of "AI-free" as a marketing strategy, the bursting of the AI bubble seems quite close.

Another solid piece from Bjarnason.

view more: ‹ prev next ›