this post was submitted on 05 Oct 2025
227 points (97.5% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Reminds me of this:

I think atproto is a good protocol, but god bluesky-the-company is dogshit.

[–] iAmTheTot@sh.itjust.works 29 points 2 days ago* (last edited 2 days ago) (4 children)

Hoo boy. The person being reposted goes on, in their original post, to argue that we cannot be certain genAI does not have feelings.

[–] Pieplup@lemmy.ml 1 points 14 hours ago

They are literally predictive algorithms. If you have even a basic understanding of how LLMs work (not something a lot of pro-AI people have), you'd know this is completely untrue. They do not have genuine thoughts; they just output whatever the model predicts the response would be, based on previous sources.
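A toy sketch of what "predictive" means here, for anyone curious. This is a bigram model, vastly simpler than an LLM (which predicts over tokens with a neural network rather than raw counts), and the corpus is invented for illustration, but the principle is the same: the next word is chosen purely from the statistics of previously seen text, with no thought involved.

```python
from collections import Counter, defaultdict

# Tiny invented training text; a real LLM trains on billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it followed "the" most often
```

An LLM replaces the count table with learned probabilities and a much longer context window, but it is still doing this: scoring candidate continuations and emitting a likely one.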

[–] sp3ctr4l@lemmy.dbzer0.com 17 points 2 days ago

Just complete the delusional circuit and tell them you can't be sure they aren't an AI, ask them how they would prove they aren't.

[–] ayyy@sh.itjust.works 8 points 1 day ago (1 children)

How do we have people wasting their time arguing about software having feelings, when we haven’t even managed to convince the majority of people that fish and crabs and the like can feel pain, just because they don’t make a frowny face when you hurt them?

[–] Architeuthis@awful.systems 6 points 1 day ago

That's easy, it's because LLM output is a reasonable simulation of sounding like a person. Fooling people's consciousness detector is just about their whole thing at this point.

Crabs should look into learning to recite the pledge of allegiance in the style of Lady Gaga.