Artificial intelligence might be really good, perhaps even superhuman, at one thing, for example driving a car, but that same competence doesn't carry over across a variety of fields. Your self-driving car can't help with your homework. An artificial general intelligence, however, could. Humans possess general intelligence: we can do math, speak different languages, navigate social situations, throw a ball, interpret sights and sounds, etc.
With a real AGI you don't need to develop different versions of it for different purposes. It's generally intelligent, so it can do it all. This also includes writing its own code. This is where the worry about an intelligence explosion originates: once it's even slightly better than humans at writing its own code, it can produce a more competent version of itself, which will then create an even more competent version, and so on. It's a chain reaction which we might not be able to stop. After all, it's by definition smarter than us and, being a computer, also a million times faster.
Edit: Another feature that an AGI would most likely, though not necessarily, possess is consciousness. There's a possibility that it feels like something to be generally intelligent.
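The "chain reaction" above can be made concrete with a toy model. This is only my own illustration, not anything from the comment: assume each generation improves its successor in proportion to its own current capability, and the curve goes from slow creep to runaway growth.

```python
# Toy model of recursive self-improvement (hypothetical numbers throughout).
def recursive_self_improvement(initial_capability: float,
                               improvement_rate: float,
                               generations: int) -> list[float]:
    """Capability of each successive self-written version.

    capability[n+1] = capability[n] * (1 + improvement_rate * capability[n])
    Every value here is a made-up stand-in; only the shape of the curve matters.
    """
    capabilities = [initial_capability]
    for _ in range(generations):
        current = capabilities[-1]
        capabilities.append(current * (1 + improvement_rate * current))
    return capabilities

# Starting barely above "human level" (1.0), growth is slow at first,
# then explodes once each version meaningfully outclasses its predecessor.
print(recursive_self_improvement(1.05, 0.1, 10))
```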
I think that the algorithms used to learn to drive cars can learn other things too, if they’re presented with training data. Do you disagree?
Just so we’re clear, I’m not trying to say that a single, given, trained LLM is, itself, a general intelligence (capable of eventually solving any problem). But I don’t think a person at a given moment is either.
Your Uber driver might not be able to help you with your homework either, because he doesn't know how. Now, if he gathers information about algebra, and then sleeps and practices and gains those skills, maybe he can help you with your homework.
That sleep, which the human gets to count on when he claims "I can solve any problem because I'm a GI!", is the equivalent of retraining a model into a new model, one that differs from the previous day's model in that it's now also trained on that day's input/output conversations.
So I am NOT claiming that “This LLM here, which can take a prompt and produce an output” is an AGI.
I’m claiming that “LLMs are capable of general intelligence” in the same way that “Human brains are capable of general intelligence”.
The brain alternates between modes: interacting, and retraining, in my opinion. Sleep is "the consolidation of the day's knowledge into structures more rapidly accessible and correlated with other knowledge". Sound familiar? That's when ChatGPT's new version comes out, trained on all the conversations the previous version had with people who opted into that.
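A minimal sketch of that "sleep = retraining" analogy, assuming nothing about any real training pipeline (the tiny linear model and file names are stand-ins I made up): load yesterday's weights, continue training on the day's logged input/output pairs, and save the result as today's model.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 16)  # stand-in for a full LLM
# model.load_state_dict(torch.load("model_yesterday.pt"))  # yesterday's "memories"

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for the day's conversations: (prompt, response) pairs as tensors.
days_conversations = [(torch.randn(16), torch.randn(16)) for _ in range(32)]

# "Sleep": consolidate the day's interactions into the weights themselves.
for epoch in range(3):
    for prompt, response in days_conversations:
        optimizer.zero_grad()
        loss = loss_fn(model(prompt), response)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "model_today.pt")  # tomorrow starts from here
```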
I've heard experts say that GPT-4 displays signs of general intelligence, so while I still wouldn't call it an AGI, I'm in no way claiming an LLM couldn't ever become generally intelligent. In fact, if I were to bet money on it, I think there's a good chance that this is where our first true AGI systems will originate from. We're just not there yet.
It isn't. It doesn't understand things the way we think of with intelligence. It generates output that fits a recognized input. If it doesn't recognize the input in some form, it generates garbage. It doesn't understand context, and it doesn't try to generalize knowledge to apply it to different things.
For example, I could teach you about gravity, trees, and apples and ask you to draw a picture of an apple falling from a tree, and you'd be able to create a convincing picture of what that would look like even without ever having seen it before. An LLM couldn't. It could create a picture of an apple falling from a tree based on other pictures of apples falling from trees, but not from just the knowledge of an apple, a tree, and gravity.