195 points · submitted 1 year ago by Five@beehaw.org to c/technology@beehaw.org
[-] webghost0101@sopuli.xyz 17 points 1 year ago

The Chinese room argument makes no sense to me. I can't see how it's different from how young children understand and learn language.

My 2-year-old sometimes unmistakably starts counting when playing (a countdown for lift-off). Most of the numbers are gibberish, but he often says a real number in the midst of it. He is clearly just copying and does not understand what counting is. At some point, though, he will not only count correctly but also be able to answer math questions. At what point does he "understand"? At what point would you consider that ChatGPT "understands"?

There was an old TV programme where some then-AI experts discussed the Chinese room, but they used a Chinese restaurant for a more realistic setting. It ended with: "So if I walk into a Chinese restaurant, pick something from the Chinese menu, and can answer anything the waiter may ask, in Chinese, do I know or understand Chinese?" I remember the parties agreeing to disagree at that point.

[-] conciselyverbose@kbin.social 6 points 1 year ago

ChatGPT will never understand. LLMs have no capacity to do so.

To understand, you need underlying models of real-world truth to build your word salad on top of. LLMs have none of that.

[-] Mr_Will@feddit.uk 5 points 1 year ago

What are your underlying models of the world built out of? Because I'm human, and mine are primarily built out of words.

How do you draw a line between knowing and understanding? Does a dog understand the commands it's been trained to obey?

[-] steakmeout@aussie.zone 1 point 1 year ago

Your brain understands concepts and can self-conceptualise; LLMs can do neither. They can sound convincing, as if they understand concepts, but that's because we fill in the gaps due to how we understand language. The examples of broken or distorted sentences still being understandable apply here: you and I can communicate in broken sentences because we both understand the concepts beneath the conversation. LLMs play on that understanding, but they do not understand the concepts themselves.
