My gripe with the Chinese room is that Searle argues that his inability to understand Chinese means the program doesn't understand Chinese, but I could say the same thing about the human body.
The neurons that operate your vocal cords have no idea what they're saying, nor do the ones in your hands have any idea what they're writing, yet they produce speech and writing exactly because your brain tells them what to do. Your brain is exactly like that book as far as your mouth and hand neurons are concerned.
They don't need to understand language at all for your brain to be able to understand it and give instructions based on that understanding.
My only question is: at what point does an algorithm become sufficiently advanced that it's indistinguishable from a conscious being?
Because at the end of the day, most of what a brain does is information processing based on what it has previously learnt, and that's exactly what the algorithm is doing based on training data. A sufficiently advanced algorithm should surely be able to replicate understanding.
Sure, that isn't ChatGPT as we know it; you can tell from its sometimes very zany responses that while it knows which words make valid responses, it doesn't understand what the words themselves mean. But we should reach that point eventually, no?
Keep in mind that ChatGPT is a language model. It's designed specifically to simulate sounding like a human, and it does that... okay. It doesn't understand the information or concepts it's using; it just sounds like it does. It can't reliably do basic maths, and it doesn't try or need to. It just needs to talk about maths in a believably conversational way.
The brain does far more than process information. And ChatGPT doesn't even really do that.
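To make the "sounds like it understands" point concrete: here's a toy bigram model, a drastically simplified sketch of the idea (real models like ChatGPT use large neural networks, not lookup tables). It only records which words tend to follow which in its training text, with no notion of what any word means, yet it can emit sequences that look locally plausible:

```python
import random

# Tiny "training corpus" (illustrative, made up for this sketch).
corpus = (
    "the brain processes information and the model processes text "
    "and the model predicts words and the brain understands words"
).split()

# Count which word follows which in the training data.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, length=8, seed=0):
    """Emit a plausible-looking word sequence by sampling successors.

    The model never consults meaning, only observed adjacency,
    which is exactly the Chinese-room-style complaint.
    """
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        word = rng.choice(candidates)
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Every adjacent word pair in the output was seen in training, so it reads as grammatical-ish English, but nothing in the table "knows" what a brain or a model is.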