this post was submitted on 11 Jan 2024
259 points (99.6% liked)
Technology
but that’s just a matter of complexity, not a fundamental difference. the way our brains work and the way artificial neural networks work aren’t that different; it’s just that our brains are many orders of magnitude bigger
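to make “same basic mechanism, just smaller” concrete: a single artificial neuron is nothing more than a weighted sum plus a nonlinearity. this is a toy sketch for illustration (the function name and numbers are made up, not any particular library’s API):

```python
import math

def neuron(inputs, weights, bias):
    """a single artificial neuron: weighted sum of the inputs plus a bias,
    squashed through a sigmoid -- a crude analogue of a biological
    neuron's firing rate."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# two inputs with hand-picked example weights; output lands between 0 and 1
print(neuron([1.0, 0.5], [0.4, -0.2], 0.1))
```

an LLM is billions of these wired together and tuned by training; the claim above is that the brain is the same kind of thing at a vastly larger scale, not a different kind of thing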
there’s no particular reason why we can’t feed artificial neural networks an enormous amount of… let’s say tangentially related experiential information… as well, but in order to be efficient and to make them specialise in the things we want, we only feed them information that’s directly related to the specialty we want them to have
there’s some “pre-training” or “pre-existing state” in humans too that comes from genetics, but i’d argue that’s about as relevant to the actual tasks of learning, comprehension, and creation as a BIOS is to running an operating system (that is, a necessary precondition for the correct functioning of our body and brain, but not what you’d call the main function)
i’m also not claiming that an LLM is intelligent (or rather, i’d prefer the term self-aware, because intelligent is pretty nebulous); just that the structure it has isn’t that different from our brains, only at a scale that’s so much smaller and so much more generic that you can’t expect it to perform as well as a human - you wouldn’t expect to cut out 99% of a human’s brain and have them continue to function at the same level either
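to put a very rough number on that scale gap: the figures below are ballpark assumptions for illustration (≈10¹⁴ synapses is a commonly cited estimate for a human brain; ≈1.75×10¹¹ is a GPT-3-sized parameter count), not measurements:

```python
import math

# ballpark figures -- assumptions for illustration, not measurements
human_synapses = 1e14      # ~100 trillion synaptic connections (common estimate)
llm_parameters = 1.75e11   # ~175 billion weights (a GPT-3-sized model)

ratio = human_synapses / llm_parameters
print(f"roughly {ratio:.0f}x more connections,")
print(f"about {math.log10(ratio):.1f} orders of magnitude")
```

even this crude count puts the gap in the hundreds-fold range, and treating one trained weight as equivalent to one synapse is itself pretty generous to the LLM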
i guess the core of what i’m getting at is that the self-awareness that humans have is definitely not present in an LLM; however, i don’t think that self-awareness is necessarily a prerequisite for most things that we call creativity. i think it’s entirely possible for an artificial neural net - fundamentally the same technology we use today - to ingest the same data that a human would from birth, and to have very similar outcomes.

given that belief (and i’m very aware that it certainly is just a belief - we aren’t close to understanding our brains, but i don’t fundamentally think there’s anything other than neurons firing that results in the human condition), just because you simplify and specialise the input data doesn’t mean that the process is different. you could argue that it’s lesser, for sure, but ruling out that it can create a legitimately new work is definitely premature