this post was submitted on 11 Oct 2025
The Shitpost Office
Welcome to The Shitpost Office
Shitposts processed from 9 to 5, with occasional overtime on weekends.
Rule 1: Be Civil, Not Sinister
Treat others like fellow employees, not enemies in the breakroom.
- No harassment, dogpiling, or brigading
- No bigotry (transphobia, racism, sexism, etc.)
- Respect people’s time and space. We’re here to laugh, not to loathe
Rule 2: No Prohibited Postage
Some packages are simply undeliverable. That means:
- No spam or scams
- No porn or sexually explicit content
- No illegal content
- NSFW content must be properly tagged
If you see anything that violates these rules, please report it so we can return it to sender. Otherwise? Have fun, be silly, and enjoy the chaos. The office runs best when everyone’s laughing... or retching over the stench, at least.
You will never prove a rock is conscious, just like you will never prove another human is conscious. You can only know your own consciousness. You can logically imply other humans (and other animals) are also conscious, but you cannot know it.
And? None of this information is useful to me (or anyone else, either).
True. But it makes any discussion like the one you were arguing rather pointless.
I have no idea why everyone here absolutely insists on having this thoroughly pointless argument with me. I merely stated what should be obvious - consciousness is not software - and lots of people were apparently offended by that, because a bunch of tech bros pretend it is.
But consciousness could easily be a manifestation of 'software'. You can't know. You can't know what it is; you can't even prove it exists outside of your own experience. So when you make definitive statements like that, you will often get people pointing out that you are wrong. It's not a matter of being offended any more than being offended at any untruth being spread as if fact.
Why? Because we invented software? Viewing human consciousness as software says a lot more about the early 21st century viewer than it does human consciousness - pretty much in the same way that viewing human physiology as purely mechanical says a lot more about the early 20th century viewer than it does human physiology.
Let's be clear... there is no indication - never mind evidence - that human consciousness works like software. In spite of that, it seems to be a holy cow belief for plenty of people here. And I'd argue that the reason why that is is far, far more relevant than the "consciousness-vs.-software" debate itself.
Well, I think in this context "software" can mean a set of flexible procedural instructions being followed by a more rigid hardware framework. Parts of the human brain are like software (re-wireable links and learned timings), and parts are like hardware (grown from birth, mostly independent of stimulus). A computer is likewise hardware (the CPU) plus software. An AI neural network is just a big matrix of interrelations between nodes which software can run as a network, much like the human brain is a big set of neurons that runs as a network. Obviously the human brain is more complicated than the current structural basis for AI, as the brain has other feedback mechanisms, but people are working on modelling those kinds of things and applying them to AI. And AI nets could theoretically get as big or much bigger, representing neural nets larger than our brains. So there's no particular reason AI could not match or surpass human thought power.

Both the brain and computer systems are a combination of hardware and software in this context. Computer scientists see the software as a layer on top of the hardware, and inferred, or secondary, intelligence comes as a layer on top of the software. It doesn't really matter whether something is software or hardware anyway: both are just implementations of algorithms. Similarly, in the brain there are biological hardware processes and the equivalent of software (dynamically configurable connections), but it can still be seen as an implementation of an algorithm. If consciousness can come out of that, there's no reason it can't come out of software running on a computer. There is no "consciousness" mechanism as far as we know - it is a result of having a sufficient complexity of the right kind of algorithmic processing. Or at least, that is a perfectly reasonable explanation.
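To make the "network is just a matrix of interrelations between nodes" point concrete, here is a minimal sketch of one neural-network layer in plain Python. The weights, sizes, and activation function are all arbitrary illustrative choices, not taken from any real model:

```python
import math

def sigmoid(x):
    # squashing nonlinearity, loosely analogous to a neuron's firing response
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # One layer of the network: each output node is a weighted sum of every
    # input node (the "matrix of interrelations"), plus a bias, passed
    # through the nonlinearity.
    return [
        sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

# Toy network: 3 inputs -> 2 hidden nodes -> 1 output (weights are made up)
hidden = layer([0.5, -1.0, 2.0],
               [[0.1, 0.4, -0.2], [-0.3, 0.8, 0.5]],
               [0.0, 0.1])
output = layer(hidden, [[1.2, -0.7]], [0.05])
print(output)  # a single activation strictly between 0 and 1
```

Everything here is "just algorithms" in the sense the comment describes: the same computation could run on a CPU, a GPU, or in principle any other substrate that implements it.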
It's seemingly unprovable whether consciousness exists in anything other than one's own personal experience, so we simply can't know whether another system is actually conscious. But if it acts conscious, that seems about as good a test as we will ever manage. There's no point in gatekeeping the assumption of consciousness for an AI any more than in denying the consciousness of another person just because you can't prove it. Unless we identify some biological basis for consciousness that for some reason cannot be copied in a computer-based system, there's no good reason to think AIs can't be conscious. One can bring spirituality or religion into it, but that's similarly unprovable, and there's no particular reason those things couldn't apply to AI systems if they apply to human brains.
You're not comparing apples to oranges here... you are comparing apples to electric toasters.
You mean... apart from the fact that there's not even the most circumstantial whiff of evidence that these runaway algorithms will ever actually think?
These runaway algorithms already vastly surpass humans in the ability to crunch numbers and multi-task (the latter is actually quite impossible for humans, in spite of the whining of the managerial classes) - no surprise, since that is what information technology was invented to do - and yet that hasn't resulted in any of them doing anything that could be considered "having a thought."
At this point, I think you might want to ask why it is that you assume that consciousness is (somehow) the end-goal of the development of these glorified paper-clip machine simulators (since that's seemingly the most accurate way of describing them) - after all, the last thing the capitalist class needs is a paper-clip machine that thinks.
Consciousness is not that big a deal, you know - there are eight billion examples of it walking around on the earth currently, and the spoilt and privileged people creating these runaway algorithms absolutely do not care a fig for any of them.
Of course they think. What else would you call the cyclic process of a reasoning model? Just because we understand the mechanisms of the fundamental building blocks of these algorithms doesn't mean they aren't thinking. And it certainly doesn't mean they couldn't be conscious - especially when we don't actually know what consciousness is. The brain's mechanics are fairly well understood too: the intercommunication and networking of synapses is vaguely similar to the matrix calculations of current AI tools. The brain is just a machine - a highly complicated one with chemical elements, but a machine nonetheless.
I don't assume consciousness is the end goal of AI, although I suspect some scientists are working towards it in a more pragmatic, non-metaphysical way. We don't know what consciousness actually is, or what produces it. We can't really do proper science on the subject because it can't actually be observed. So we don't know whether it will arise accidentally as neural nets are built to sufficient complexity. Of course consciousness is a big deal, but it's very difficult to understand. I think you should look into metaphysics more to try to understand the issues.