this post was submitted on 16 Aug 2025
639 points (99.5% liked)

AI Memes

A community for memes and webcomics involving Artificial Intelligence.

top 22 comments
[–] Blackmist@feddit.uk 7 points 1 day ago

And much like all people who do nothing useful yet try to inject themselves into the solution anyway, it doesn't know how to say "I don't know".

It's just enthusiastic bullshit. It's perfect for replacing middle managers, salesmen, politicians, etc.

[–] SlartyBartFast@sh.itjust.works 7 points 1 day ago (1 children)

Performative positivity makes me positively want to perform a poo

Genuinely the most toxic bullshit.

[–] sp3ctr4l@lemmy.dbzer0.com 18 points 1 day ago

Basically, we made AI Patrick Bateman.

... you know, a charismatic, optimistic mask, stretched over a howling void of rage, fury and insecurity.

Yep, we taught the LLMs corpospeak, and the corpos and tech bros love them.

Has nobody else played Shadowrun?

You just learn different etiquettes, different dialects, and this functions as a charisma/speech boost to the relevant demographic.

Corposociopaths of course love themselves more than anything else, so naturally they craft a mad machine god in their own image, which is, of course, 'perfect'.

[–] jatone@lemmy.dbzer0.com 23 points 1 day ago (2 children)

I love Lemmy; our highly upvoted posts with zero comments make me a happy soul

[–] dxc@sh.itjust.works 11 points 1 day ago

Makes me believe there are just lurkers like me and no bots

[–] naught101@lemmy.world 5 points 1 day ago

Funny memes are not always good conversation starters, I guess

[–] Perspectivist@feddit.uk 1 point 1 day ago (2 children)

Who exactly are these "they" who think LLMs are conscious?

[–] justlemmyin@lemmy.world 14 points 1 day ago (1 children)
[–] Perspectivist@feddit.uk -5 points 1 day ago (2 children)

Show me one "AI bro" claiming LLMs are conscious.

[–] drosophila@lemmy.blahaj.zone 3 points 1 day ago* (last edited 1 day ago)
[–] donuts@lemmy.world 14 points 1 day ago (1 children)

Go to /r/MyBoyfriendIsAI

Don't worry, we've got bleach for your eyes for when you get back

[–] Perspectivist@feddit.uk -3 points 1 day ago (1 children)

"They taught AI to talk like a middle manager.." isn't refering to the people at /r/MyBoyfirendIsAI. Those are users, not the creators of it.

[–] donuts@lemmy.world 7 points 1 day ago (1 children)
[–] Perspectivist@feddit.uk 0 points 1 day ago (1 children)

In this context, "AI bro" clearly refers to the creators, not the end users - which is what your link is about. Users aren’t the ones who “taught it to speak like a corporate middle manager.” That was the AI company leaders and engineers. When I asked who “they” are, I was asking for names. Someone tried to dodge by saying “AI bros and their fans,” but that phrase itself distinguishes between two groups. I wasn’t asking about the fans.

Let me rephrase: name a person responsible for training an AI to sound like a corporate middle manager who also believes their LLM is conscious.

[–] donuts@lemmy.world 5 points 1 day ago* (last edited 1 day ago) (1 children)

Alright, I see your angle here. Creators generally try to avoid answering that question because they get more money if they muddy the waters. Thanks for elaborating!

Some more interesting links:

https://the-decoder.com/openai-leaves-the-question-of-ai-consciousness-consciously-unanswered/

https://www.forbes.com/sites/lanceeliot/2024/07/18/why-americans-believe-that-generative-ai-such-as-chatgpt-has-consciousness/

[–] Perspectivist@feddit.uk 0 points 1 day ago (1 children)

try to avoid answering that question because they get more money if they muddy the waters

I don't personally think this is quite fair either. Here's a quote from the first link:

According to Jang, OpenAI distinguishes between two concepts: Ontological consciousness, which asks whether a model is fundamentally conscious, and perceived awareness, which measures how human the system seems to users. The company considers the ontological question scientifically unanswerable, at least for now.

To me, as someone who has spent a lot of time thinking about consciousness (the fact of subjective experience), this seems like a perfectly reasonable take. Consciousness itself is entirely a subjective experience. There's zero evidence of it outside of our own minds, and it can't be measured in any way. We can't even prove that other people are conscious. It's a relatively safe assumption to make, but there's no conclusive way to prove it. We simply assume they are because they seem like it.

In philosophy there's a concept called the "philosophical zombie": a creature that is outwardly indistinguishable from a human but completely lacks any internal experience. This is basically what the robots in the TV series Westworld were - or at least so they thought.

This is all to say that there is a point at which an AI system mimics a conscious being so convincingly that it's not entirely ridiculous to worry whether there actually is something it is like to be that system - and whether we're actually keeping a conscious being as a slave. If we had a way to prove that it isn't conscious there'd be no issue, but we can't. People used to justify the mistreatment of animals by claiming they're not conscious either, but very few people think that anymore.

I'm not saying an LLM might be conscious - I'm relatively certain they're not - but they're also the most conscious-seeming thing we've ever created, and they'll only keep getting better. I'd say there's a point past which these systems act conscious so convincingly that one would basically need to be a psychopath to mistreat them.

[–] donuts@lemmy.world 6 points 1 day ago (1 children)

I don't really agree. Acting like a conscious being (because it's a language model) still doesn't make it conscious, perceived or not.

Have you read Blindsight by Peter Watts? It's an interesting book that touches on self-awareness and how we perceive it.

[–] Perspectivist@feddit.uk -1 points 1 day ago

No, it doesn’t make it conscious - but you also can’t prove that it isn’t, and that’s the problem. The only evidence of consciousness outside our own minds is when something appears conscious. Once an LLM reaches the point where you genuinely can’t tell whether you’re talking to a real person or not, then insisting “it’s not actually conscious” becomes just a story you’re telling yourself, despite all the evidence pointing the other way.

I’d argue that at that point, you should treat it like another person - even if only as a precaution. And I’d even go further: not treating something that seems that "alive" with even basic decency reflects quite poorly on you and raises questions about your worldview.

[–] Klear@quokk.au 4 points 1 day ago* (last edited 1 day ago)

More to the point, who thought middle managers were conscious?

[–] justlemmyin@lemmy.world 1 point 1 day ago (1 children)