Yippee! I get to reflect on one more thing this week, thanks to Charles Witschorik’s feedback. He got me thinking about the basis in induction. That he did.
Learning a new language
Every quarter, predictably, at least one student asks me which language they should learn. Should it be C++, Java, Python or something else? My response to them has been the same. I talk to them about human languages. We all have different words for different things. But beneath the covers, as it were, the direct sensory experiences that these languages convey are the same. When I say I’m hungry, or angry, or in love, I mean the same as when a Spanish speaker says estoy hambriento, or enojado, or enamorado.
The superficial representations are different, but the emotions and feelings they signify are the same. I then use that as an excuse to emphasize the importance of fundamentals. I tell them that regardless of the language they write their programs in, by the time their program reaches the CPU it has been transformed into a completely different language: machine language, bytes of instructions for the particular chipset the computer is built on. The CPU can’t tell the difference between a program written in one language and another because it never sees the source. What it sees are just CPU instructions. Far better, therefore, I tell them, to learn how the CPU reacts to instructions, because you can always learn a new language in terms of what the CPU will need to do for you.
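To make that concrete, here’s a minimal sketch – a hypothetical toy machine of my own invention, not any real chipset. Two imaginary source languages compile the same expression down to the same instruction list, and the “CPU” only ever sees the instructions:

```python
def run(instructions):
    """Execute (opcode, operand) pairs on a toy one-register machine."""
    acc = 0
    for op, arg in instructions:
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
    return acc

# "2 + 3 * 4" as compiled from two imaginary source languages --
# the surface syntax differed, but the instructions are identical.
program_from_lang_a = [("LOAD", 3), ("MUL", 4), ("ADD", 2)]
program_from_lang_b = [("LOAD", 3), ("MUL", 4), ("ADD", 2)]

# The machine cannot tell which language produced the program.
print(run(program_from_lang_a))  # 14
```

The point of the sketch: nothing in `run` knows or cares what the original language was.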
Is a dictionary sufficient?
An (imaginary) friend once told me he didn’t need a tutor or immersion to learn a new language. He claimed he could do it from a dictionary alone.
“That’s impossible,” I said. “You only get words defined in terms of other words. It’s circular.”
“That’s true,” he said. “And I have a photographic memory. Like Dr. Strange. Or Holmes.”
I looked perplexed.
“I mean the detective,” he added quickly, as though I were dense. He said he could instantly recall which word or words any other word was defined in terms of.
The truth is, he had a point. If learning a language were all about remembering the rewrite rules – how one set of squiggles (symbols) maps onto other sets of squiggles – then, in a curious, theoretical, outside-perspective way, one could be said to have learned a language when one manages to successfully masquerade as a squiggle manipulator. Or a “symbol transformer,” to sound more erudite.
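Here’s what pure squiggle-shuffling looks like in miniature (the three-word dictionary is made up for illustration): every word is defined only in terms of other words, so following definitions just walks in circles without ever leaving the dictionary.

```python
# A toy "dictionary" where every word is defined by another word.
dictionary = {
    "big":   "large",
    "large": "huge",
    "huge":  "big",   # ...and we're back where we started
}

def trace(word, dictionary):
    """Follow definitions until we revisit a word (i.e., hit a cycle)."""
    seen = []
    while word not in seen:
        seen.append(word)
        word = dictionary[word]
    return seen

print(trace("big", dictionary))  # ['big', 'large', 'huge']
```

No matter how large the dictionary grows, every chain of definitions either cycles or stops at another symbol. Nothing in the system ever points outside of it.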
I mean, you can go to translate.google.com today and type in a sentence in a bunch of different languages. It will faithfully map it to another target language of your choice. Most of the time Google gets it right. I know that teens routinely use it for their language homework.
Does Google Translate understand these languages? You’re probably thinking that all this harks back to Searle’s Chinese Room thought experiment. And I continue to hold the view that naive symbol shuffling, by itself, does not understanding make. Machines in the very near future (in fewer than five years) could well have real understanding of a language, but at the moment they do not. What’s missing is the important link between certain root words in the language and our direct experience.
One day, soon enough, we’ll have sophisticated machines that hold concepts – internal representations of objects and events in the external world. They’ll be able to forge live links from words to these concepts. And stitch together rich tapestries of concepts made of links to other concepts. And exhibit goal-directed behavior where the goals have been installed in them by us, their creators.
This will mark the emergence of consciousness as we know it in the machines. At this point it will make sense to say that the machines have experiences, like our own. Or transcending ours even. But until then, our machines can at best be said to exist in only highly rudimentary states of consciousness – not unlike primitive organisms that simply respond to stimuli.
All knowledge is ultimately grounded in direct experience. This is a fundamental postulate of epistemology, and it’s why learning a language exclusively from a dictionary is futile. Some words – root words – must leap out of the dictionary and point directly into the real world from within our heads: to objects, to events, or to ideas combining them with other ideas, but ultimately tying them to one or more direct sensory experiences. That is the final link. The experiential mile. The feeling mile. The last mile. When this happens, a word finally comes alive. Without these hooks into our direct experience, what we have is simply a network of primitive symbols pointing to other primitive symbols. The network could be elaborate and sophisticated, but if it isn’t pinned to experience, it lacks reality. Fact is based on reality, and the only absolutely true facts are the facts of our own direct experiences. So it is the crucial link of direct experience that lends reality to any idea, concept, or thing that may exist objectively out there in relation to other ideas, concepts, and things.
To be sure, that network of ideas, concepts and things is indeed elaborate, sophisticated, and huge. But by itself it’s useless until it’s nailed down to direct facts. These direct facts are the bases for our knowledge.
Charles prompted me to elaborate further on my notion of basis in mathematical induction (my previous post), and I was going to. But what accelerated my output was his alternative observation of the basis as a “sterile predicate function.” While he’s right in one sense, from another – the one I talk about here – the basis is anything but sterile. It is what gives meaning to the whole exercise of induction. Without a basis, the inductive proof is sterile, impotent, and as useful as a castle in the sky. Just as without direct first-hand experience and feeling, this whole world out there is hollow, meaningless, and worthless. Pure fiction awaiting realization.
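A standard textbook illustration (my example here, not Charles’s) of how an inductive proof without a basis is sterile: take the absurd “claim” that every natural number equals its successor. The inductive step goes through perfectly, yet the claim proves nothing, because no basis exists.

```latex
\begin{align*}
\text{Claim (false):}\quad & \forall n \in \mathbb{N}:\ n = n + 1 \\[4pt]
\text{Inductive step:}\quad & n = n + 1 \;\Rightarrow\; n + 1 = n + 2
  \quad \text{(add 1 to both sides: valid!)} \\[4pt]
\text{Basis:}\quad & 0 = 1 \quad \text{(false: there is no basis)}
\end{align*}
```

The inductive machinery is in perfect working order; what’s missing is the one direct fact that would anchor it to reality.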
That’s why we say “There’s simply no basis to your argument” when we disagree with someone else’s point of view vehemently, at a fundamental level.
Thanks for the prompt, Charles.