[–] CannonFodder@lemmy.world 1 points 1 day ago (1 children)

Of course they think. What else would you call the cyclic process of a reasoning model? Just because we understand the mechanisms of the fundamental building blocks of these algorithms doesn't mean they aren't thinking. And it certainly doesn't mean that they couldn't be conscious - especially when we don't actually know exactly what consciousness is. The brain's mechanics are fairly well understood too - the intercommunication and networking of synapses is vaguely similar to the matrix calculations of current AI tools. The brain is just a machine - a highly complicated, chemical one, but a machine nonetheless.
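To put the analogy in concrete terms, here's a minimal sketch of the kind of matrix calculation a single neural-net layer performs - the loose counterpart to signals passing across synapses. It assumes numpy, and the layer sizes and values are made-up placeholders, not any real model:

```python
# Minimal sketch, not any real model: the core "matrix calculation" a single
# neural-net layer performs. Layer sizes, random values, and the tanh choice
# are arbitrary placeholders for illustration.
import numpy as np

rng = np.random.default_rng(0)

inputs = rng.normal(size=(1, 8))    # incoming "signals" to 8 input units
weights = rng.normal(size=(8, 4))   # connection strengths, loosely "synapses"
biases = np.zeros(4)                # per-unit offsets

# Weighted sum of the inputs followed by a nonlinearity - repeated across
# millions of units and many layers, this is what "matrix calculations of
# current AI tools" refers to.
activations = np.tanh(inputs @ weights + biases)
print(activations.shape)  # (1, 4): the layer's output signals
```

A reasoning model runs a vastly bigger version of this in a loop, feeding its own output back in as new input - that's the cyclic process I mean.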
I don't assume consciousness is the end goal of AI, although I suspect some scientists are working towards it in a more pragmatic, non-metaphysical way. We don't know what consciousness actually is, or what gives rise to it. We can't really do proper science on the subject because it can't actually be observed. So we don't know whether it will arise accidentally once neural nets become sufficiently complex. Of course consciousness is a big deal, but it's very difficult to understand. I think you should look into metaphysics more to try to understand the issues.

[–] masquenox@lemmy.dbzer0.com 1 points 17 hours ago (1 children)

Of course they think.

Your proof?

especially when we don’t actually know exactly what consciousness is.

So you don't know what something is when your brain does it, but you are confident enough to know what that something is when something that is totally unlike your brain in every possible way does it?

The brain is just a machine

Says who? Evolutionary processes do not create machines - human engineering does.

I don’t assume consciousness is the end goal of AI.

Could have fooled me - all I'm hearing from the pro-"consciousness-must-be-software" crowd is assumptions and little else.

[–] CannonFodder@lemmy.world 1 points 15 hours ago

They think. It's a matter of semantics, of course, but the processing of ideas is thinking, and AI does that.
Why do you keep strawmanning? I never said I'm confident AIs will have consciousness. I'm saying we can't know, but there's no reason to think they won't. How are you so confident they never will, when we don't actually know what consciousness is?
What else is a brain but a biological machine? Do you think it has magic or something else that differentiates it from any other machine? Getting stuck on semantics is pointless - the source of a machine is irrelevant to this discussion.
This entire discussion is based on assumptions, as is your opinion that a brain is somehow a special construct that can't ever be emulated.
The fact that we don't know what consciousness actually is, and can't prove whether an entity has it or not (except for our own minds), means it's kind of a stupid thing to argue about. You can believe whatever you want. I don't really care if you're wrong.