A Portuguese version of this paper was published in Leopoldianum, Revista de Estudos de Comunicações of the University of Santos (Ano 25, No. 72, Fev. 2000, pp 87-102)

Artificial Intelligence, Children's Education
and the Human Brain

 

By Sergio Navega
Intelliwise Research and Training
E-mail: snavega@attglobal.net

February 2000


Abstract

In this paper, I use the history of AI as a starting point for a short discussion of what intelligence is and how children's education can benefit from our findings. I mention the importance of solid sensory experience and how it can influence the future development of intelligent processes in children. The paper ends with a simple list of characteristics that an entity (be it a child or a machine) must possess in order to be seen as intelligent. The basic idea is that the attempt to build intelligent artifacts can yield important insights into the workings of our own brain, and also that understanding the way our brain handles the world gives us significant details about the necessary preconditions for the mechanization of intelligence.

Introduction

Is it possible to write a single article involving Artificial Intelligence (AI), children's education and the human brain? In what follows, I will present one attempt. In this article I will set out some thoughts about AI and how that field of research has contributed to enhancing our understanding of the human brain. The topics may seem, at first, not directly connected. I appeal to the reader's patience in discovering the intended points of connection.

Short History of AI

The term "Artificial Intelligence" was coined by John McCarthy, during the famous Dartmouth College Workshop, held during two months in the summer of 1956. That was the first officially organized meeting of scientists to discuss aspects of intelligence and their implementation on machines. Back to those days we had a lot of excitement and some relatively successful experiments, given the primal nature of the available computers and programming languages. One of the significant developments of the following years was Allen Newell and Herbert Simon's GPS (General Problem Solver), designed to simulate human problem-solving methods (see Russell 1995).

GPS was, indeed, successful on the first problems it tackled, but it very soon proved insufficient to model the breadth and intricacy of the problem solving and understanding that even an ordinary, unspecialized human possesses. Another important name of that era was Marvin Minsky, who influenced the field decisively on several occasions (perhaps his most significant works were those of 1974 and 1986) and who remains an important reference even today.

With the discovery that early expectations about AI were unfounded, and with the realization that the mechanisms of human cognition were "deeper" than previously imagined, the focus of research was directed toward more practical matters, which gave birth to the "Expert Systems Era" (starting in the 1980s). Despite many problems, expert systems can be said to have been the first "commercial success" of AI, even though they did not follow the explosive growth of other software of the same period. The main problem with expert systems was their lack of "understanding" of basic facts that even a child is able to infer. I will return to this point in a moment.

It was only in the mid-1980s that neural networks returned from a long period of neglect. Research started to soar and produced some practical applications of relative commercial success (pattern recognition, stock trading prediction, data mining). But again, it was not enough for them to be recognized as artificial intelligence.

The excitement about AI still persists today, although on a much smaller scale, but the original goal of producing human-like intelligence has been replaced over time by more realistic goals. Strong criticisms of the original aspirations of AI have kept appearing (see, for example, Dreyfus 1992).

It was in 1984, however, that another bold attempt began, pursuing one of the most basic desires of the AI community: embedding common sense reasoning in machines.

Common Sense is not Intelligence

The greatest difficulty of Artificial Intelligence to date is that of making computers reason with ordinary common sense. Everybody knows that to enter a room one must first open the door. However, this is not "obvious" to computers. It is this distinction of "obviousness" that puzzles us about the limited capacities of machines. It may seem futile, however, to put this kind of knowledge about doors and rooms into a computer responsible, for instance, for the data processing of a large corporation.

But this kind of knowledge is closely related to other kinds of knowledge that also demand common sense and that are very useful for corporations. For instance, during the 1980s an expert system authorized a car loan to a person who wrote, on his application form, that he had worked at the same job for twenty years. Nothing wrong with this, except that the applicant was eighteen years old. Why does a computer have difficulty in perceiving how odd this detail is?
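
As an illustration of what was missing, here is a minimal sketch (my own hypothetical example in Python, not the actual system) of the kind of common sense sanity check that would have caught the inconsistency:

    # Hypothetical sanity check the loan system lacked. It encodes one piece
    # of common sense: nobody can have worked longer than their age minus a
    # minimum working age (the value below is assumed for illustration).
    MIN_WORKING_AGE = 14

    def plausible_employment(age_years, years_at_job):
        """Return True if the reported job tenure is physically possible."""
        return years_at_job <= max(0, age_years - MIN_WORKING_AGE)

    print(plausible_employment(18, 20))  # False -> flag the application for review

The rule itself is trivial; the hard part, as the CYC project discussed below illustrates, is that a useful system would need millions of such interlocking pieces of knowledge.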

To solve this kind of problem, Douglas B. Lenat started, at the MCC Corporation, a project named CYC (CYC 1998, Lenat 1990, Whitten 1995, Mahesh et al. 1996, Yuret 1996). It is, without any doubt, the most representative and ambitious project on the mechanization of common sense developed so far. With over half a million statements, 30,000 rules, a gigantic ontology and a definite concern for common sense reasoning, CYC is a model of concentrated effort with a clear-cut goal. Its main philosophy is that there is no shortcut to intelligence: it demands the introduction of lots of interrelated facts, something that (it is supposed) will allow the computer to reason with common sense and become intelligent. CYC, then, began to be "spoon-fed" by humans with thousands of carefully entered facts about the world.
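
To give a feel for this "spoon-feeding" approach, a toy sketch of my own (far simpler than CYC's actual representation language) might store hand-entered facts and rules like this:

    # Toy sketch of hand-entered common sense, in the spirit of CYC
    # (illustrative only; the predicates and constants are invented here).
    facts = {
        ("isa", "Room101", "Room"),
        ("isa", "Door7", "Door"),
        ("entrance_of", "Door7", "Room101"),
        ("state", "Door7", "closed"),
    }

    def can_enter(room):
        """Common sense rule: a room can be entered only through an open door."""
        for (relation, door, target) in facts:
            if relation == "entrance_of" and target == room:
                if ("state", door, "open") in facts:
                    return True
        return False

    print(can_enter("Room101"))  # False: the door is closed, so open it first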

Scheduled to finish in 1995, the project today still seems to be in frantic activity, with lots of work left to do. Will it succeed in providing the kind of common sense that even a child possesses? Will the envisioned common sense capability be all that is required to be intelligent?

In my opinion, this problem is being tackled backwards. It is not common sense reasoning that "provokes" intelligence; on the contrary, it is the intelligent mechanism that is able to accumulate the knowledge responsible for the common sense reasoning we possess. A child has common sense knowledge because she was able to, one way or another, absorb and cross-connect this knowledge of the world.

Unfortunately, however, this seems to put us back at the start of the story: what is intelligence after all?

Is Intelligence the Main Activity of Our Brain?

We usually forget that the first and hardest problems our brains have to solve are not related to abstract reasoning, but are perceptual and motor ones. This assertion has been gaining momentum in light of the contributions made by neuroscience in recent decades.

Coordinating limbs adaptively and with precision, walking, running, identifying objects through vision, perceiving 3D objects, processing audition, identifying thousands of smells: all of these tasks are very difficult problems, consume a lot of the neurons of our brain, and occupy a significant amount of our learning efforts during childhood. Yet these are not the capacities that distinguish us from other animals. Other primates, for example, may have perceptual and motor mechanisms comparable to (or even better than) those of most of us, because they live in an environment in which these abilities are fundamental for survival (jumping between trees, using the tail as a fifth limb, sensing audible cues indicating danger, etc.).

Thus, it seems unreasonable to ascribe our distinctive intelligence merely to better perceptual capacity, when in fact we may be the losers when compared to other animals. But the fact is that we are more intelligent than any other known animal. There is, certainly, something more. What is it? Is it language? Could language explain our intelligence?

There has been a lot of thought on this matter. The main problem with this line of thought is that it amounts to saying that the most important (or even the only) form of thinking is language-related. This, too, has been the object of intense philosophical discussion. However, it is increasingly clear that we do a lot of thinking that does not come close to being language-related. We use a lot of visual analogies, we utter phrases using onomatopoeia, we often visualize essentially spatial situations. Our qualitative thinking about physical systems is rarely linguistic. Think, for instance, of a jar being filled with water, or of a group of pulleys driven by a common string. Sometimes we make abstractions and comparisons that are hard to express in language.

But my main argument against language as the center of intelligence stems from a simple observation. If language were at the center of our thinking, it would obviously be all we needed to transfer any kind of knowledge from one person to another, with precision. This, indeed, seems obvious to a lot of people and dominated the views of scientists during the initial phases of AI. But it is not what happens in practice. Language's most problematic characteristic is its difficulty (and often its impossibility) in conveying sensory experiences, because language assumes that the "receiver" already knows what each symbol means (or what each symbol feels like).

The Limits of Language

Have you ever tried to teach somebody to ride a bicycle using just words? No, it is not possible. No matter how long you talk with the person, he or she will never learn it by words alone. Without experimentation, without feeling the difficulty of balancing and the problem of coordinating hands, feet, and so on, one will not learn how to ride a bicycle. The same happens when one is learning how to drive a car: you can't do it "by the book"; you have to get inside one and exercise the controls. You must be prepared to make errors and adaptively learn to correct them. Ditto for flying a plane. This knowledge is not acquirable just by reading.

Think of a surgeon. How many years of practical experience must he face before being considered knowledgeable? This kind of knowledge is not transferable through language alone. I have reasons to believe (Navega 1998) that the learning of most intellectual and reasoning abilities suffers from similar requirements (although the practice sessions vary).

This is why it is necessary to let babies interact directly with the world, as much as possible. This is why it is necessary to have laboratory experiments in college. This is why we have several kinds of hands-on practice sessions in most technical subjects (mathematics being one noble exception, although its "experiments" are done with paper and pencil and inside one's mind). My point here is that all this practical world experience is the fulcrum that supports all our abstract reasoning. It is the platform on which we build our linguistic and thinking abilities.

Symbol Grounding Problem

Stevan Harnad's Symbol Grounding Problem (Harnad 1990) is, in one view, an extension of John Searle's famous Chinese Room thought experiment (Searle 1980). Unfortunately, we don't have space here to discuss Searle and Harnad in detail. Suffice it to say that Harnad starts by proposing the "Chinese dictionary" as the first and only reference offered to an intelligent entity. Imagine that you are locked in a room with no visual or auditory contact with the outside world. Imagine that you have only this Chinese dictionary in your hands. Through a small door you receive a paper with phrases written in Chinese. You look up each symbol of the phrase in the Chinese dictionary. There you find just another bunch of Chinese symbols. The obvious question is: will you ever understand anything about the outside world, or even about Chinese?

From this initial discussion, Harnad develops valid considerations about discrimination and identification, finally proposing the necessity of "grounding" symbols in iconic representations, and these in the distal objects projected onto our sensory surfaces. This is enough, for Harnad, to propose a hybrid architecture in which symbolic elements sit on top of connectionist modules, the latter being responsible for supporting the concepts on sensory ground. It seems that we humans work this way too.
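
A minimal sketch of such a hybrid arrangement (my own illustration of the idea, not Harnad's code; the feature values are invented) could ground each symbol in a crude "sensory" prototype and let the symbolic layer see only the grounded name:

    # Sketch of grounding: a connectionist-style categorizer maps raw
    # "sensory" feature vectors to symbols; the symbolic layer sees only names.
    # Invented features: (height_in_meters, has_mane, number_of_legs).
    prototypes = {
        "horse": (1.6, 1.0, 4.0),
        "dog":   (0.5, 0.0, 4.0),
    }

    def categorize(features):
        """Return the symbol whose prototype is closest to the sensed features."""
        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(prototypes, key=lambda name: distance(prototypes[name], features))

    sensed = (1.5, 1.0, 4.0)   # sensory projection of some distal object
    print(categorize(sensed))  # "horse": a grounded token, usable by symbolic rules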

Harnad uses as an example the word "horse", a symbol created in our mind through an effort of identification and categorization of the world (in this case, all the sensory impressions that we have of horses). It is interesting to me that an ordinary dog may not come up with an equivalent "symbol" in its brain, but it is obviously able to categorize things: domestic animals are very proficient at identifying known and unknown people. They learn very fast what is animate and what is inanimate. This seems to be on the right track, if we assume that dogs also possess some sort of intelligence.

The conclusion seems clear: no system constructed only with symbolic information will be able to "understand" the world in which we live. For me, it is an additional indication that children must have a solid sensory grounding before learning symbolic and formal constructs.

Inductive Reasoning and the Birth of Intelligence

As we learn more and more about the world around us, we start to notice things that are not directly visible in our raw perceptions of it. I want to reinforce here two basic principles that I believe are the foundations of intelligence: the perception of similarity and the perception of anomaly. Let's think like a child for a while.

As a child, I look at a tree and I'm told that it grows toward the sky. That's interesting, because if I drop a rock it tends to fall to the ground. Trees somehow defy this tendency. My parents are bigger than I am, and they say they were once small like me. They too grew up toward the sky. Both my parents and trees are said to be alive. So (among several other conclusions the child infers, some of them wrong), there is something that associates life with going against the tendencies of other natural things, such as the tendency to fall (gravity) and the usual immobility of rocks.

How does this conclusion survive while the wrong ones don't? Maybe it is because we get reinforcements, as when we see our dog: it moves, barks, does all sorts of things that our plastic Superman toy doesn't. There is something very different between my dog and my plastic toys. Something makes my dog walk and run and never stay still in one position. My plastic toy always stays in the position I left it. Dogs are like me, and like my parents, and like the trees. My plastic toys are like rocks.

This perception, hard to put into words, may remain buried inside a child's mind for quite some time. But the effect of its presence will certainly influence all the reasoning and future perceptions of this child. Life, that elusive word even to adults like us, happens to be an intuitive and natural concept in the mind of a child. Yet, well before this child is able to learn all the words involved in this description, she will have the root of that knowledge firmly planted in her brain. It is knowledge that will add significantly to her common sense arsenal, to the point of differentiating what is alive from what is not by simple visual inspection. That is something utterly difficult for our computers to do (so far).

I claim here that most of what we know is initially derived using similar reasoning techniques, and that formal education can only be successful if the student has solid ground on which to sustain the received knowledge.

Has AI Failed?

Back to Artificial Intelligence, let us repeat the question most asked of practitioners of this field: has AI failed to deliver on its promises? Well, if the goal was to build a machine with human-like thinking abilities, then it most certainly has failed. But if the goal was to raise the important points about intelligence and make us all think about them, then we ought to say that its contribution was significant. We are now in a position to better understand how complex and exquisite the problem is.

One of the important questions addressed here concerns the so-called "strong AI" hypothesis: can a computer receiving text-only input through a keyboard develop human-like intelligence? As we have seen, without some sort of sensory input from the world the answer is definitely no. This conclusion took decades to be understood by the initial pioneers of the field, and it is possible that even today some of them are not fully persuaded. It was this discovery, along with the failures of past implementations, that drove the flood of criticism that fell upon AI. Does this mean that we will not have intelligent machines unless they are robots with vision and audition?

Any Hope of Making Machines Intelligent?

It is clear by now that my position is that the human brain is essentially different from the CPU of our computers, and that would seem to put me in the same basket as those skeptical of intelligence in computers. As strange as it may sound, I am not that skeptical. The fundamental question here seems, again, to be the definition of what intelligence is and its distinction from human-like intelligence, in a fashion broad enough to encompass (or not) a possible implementation by mechanical methods. However, definitions are, by definition, too restrictive. We could rephrase the question this way: what are the essential features an entity must possess in order to be called intelligent? Let us forget for a while what it means to be intelligent like humans. Let us think about a generic vision of intelligence. I believe that the answer to this question can shed some light on what we should do to build intelligent computers and also, as a side effect, help with the education of our children. Here are some requisites:

Perception of regularities

I believe that, at the core of intelligence, there must be a proficient mechanism for perceiving regularities. Our world, no matter how chaotic and random it sometimes seems, is filled with regularity. It is the apprehension of these regularities that I think is the first step toward intelligence.
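
As a minimal sketch of one way this could be operationalized (an assumption of mine, not a model of the brain), a regularity detector might simply count recurring patterns in a stream of experiences:

    # Sketch: count recurring pairs in a stream of "experiences".
    # Any pair that repeats is a candidate regularity worth remembering.
    from collections import Counter

    stream = ["sun", "light", "switch", "lamp", "sun", "light",
              "candle", "flame", "sun", "light", "switch", "lamp"]

    pair_counts = Counter(zip(stream, stream[1:]))
    regularities = [pair for pair, count in pair_counts.items() if count >= 2]
    print(regularities)  # includes ('sun', 'light') and ('switch', 'lamp')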

Conceptualization and Categorization

This is the age-old problem of concepts: what, for instance, is a chair? A cut tree trunk may fit this category, if it serves our intention to sit. The creation of concepts seems to be associated, again, with repetition over time: it is the perception of regularities among previously perceived regularities. When we notice something repeating itself, we start to feel the need to name that occurrence. A "generic dog" is a concept derived from our observation of innumerable examples of dogs, of which Fido, the dog, is just one.
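
A toy sketch of this idea (my own simplification, with invented features) forms a "generic dog" by averaging the observed examples and then judges new cases by their distance to that prototype:

    # Sketch: the "generic dog" concept emerges as the average of observed dogs.
    # Invented features: (weight_in_kg, barks, has_fur).
    observed_dogs = [(30.0, 1.0, 1.0), (8.0, 1.0, 1.0), (20.0, 1.0, 1.0)]

    def prototype(examples):
        return tuple(sum(values) / len(values) for values in zip(*examples))

    generic_dog = prototype(observed_dogs)

    def fits_concept(candidate, proto, threshold=15.0):
        return sum((a - b) ** 2 for a, b in zip(candidate, proto)) ** 0.5 <= threshold

    print(fits_concept((25.0, 1.0, 1.0), generic_dog))  # True: Fido fits the concept
    print(fits_concept((3.0, 0.0, 0.0), generic_dog))   # False: a rock does not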

Causal Discovery Mechanism

Causality is one of the most hotly debated areas in the philosophy of mind and is certainly one of the Achilles' heels of intelligence. When a child observes that daylight appears as a result of the appearance of the sun, or that a lamp glows when we turn on the switch, or that a flame appears when we light a candle, the innate perception of regularities in the child's brain starts to suggest causal mechanisms. Much of our scientific reasoning rests on carefully hypothesized causal mechanisms.
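
A crude sketch (again only an illustration of the principle, with invented event names) could hypothesize a causal link whenever one event reliably follows another:

    # Sketch: hypothesize "A causes B" when B reliably follows A in observation.
    from collections import defaultdict

    observations = [
        ("flip_switch", "lamp_glows"),
        ("flip_switch", "lamp_glows"),
        ("light_candle", "flame_appears"),
        ("flip_switch", "nothing"),       # noisy evidence: a burnt-out bulb
        ("flip_switch", "lamp_glows"),
    ]

    followers = defaultdict(list)
    for cause, effect in observations:
        followers[cause].append(effect)

    for cause, effects in followers.items():
        best = max(set(effects), key=effects.count)
        ratio = effects.count(best) / len(effects)
        if ratio >= 0.7:
            print(f"hypothesis: {cause} causes {best} ({ratio:.0%} of observations)")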

Analogic Reasoning

The best teachers are said to be the ones who use interesting analogies when explaining a new subject to their students. Why does analogy ease the understanding of new subjects? As Douglas Hofstadter proposes (Hofstadter 1995), much of human intellect can be seen as analogical reasoning in action. Analogical reasoning favors drawing on past experiences to help construct the road on which we settle newly acquired information.

Creativity

It seems that creativity is the definitive step toward intelligence: a brain capable not only of understanding the world that surrounds it but also of creatively proposing alterations to its environmental conditions in order to enhance its living situation. A good introduction to this subject can be found in Boden (1994). Intuition and creativity are often regarded as mystical, inexplicable traits of human thought. In my view, they are characteristics necessary to the intelligent intellect and, like consciousness, will sooner or later be understood by scientists. This is a clear indication that AI researchers must stay close to the findings of neuroscientists.

Learning and the Unconscious

The problem of understanding what intelligence is and how it works has been defying scientists and thinkers for centuries. Is there a reason for the challenge in understanding this mechanism? Much of the difficulty of grasping the principles behind intelligence rests on the problems we have in accessing our own psyche. It seems that we are unaware of many of our own brain functions.

Most of the initial work on AI was done by mirroring the kind of thinking that we do consciously, usually using some form of logic processing. There are now several indications that there is much more to thought than this "high-level" aspect (for instance, Wason's selection task suggests that humans have serious difficulties with logic; see Wason (1966) or Navega (1998) for more detail). In particular, a relatively recent area of psychology called "implicit learning" experiments with learning processes that do not happen consciously (a nice introduction to this subject can be found in Cleeremans 1996). These clues are enough to suggest that when we look inside "ourselves" we may be seeing just the tip of the iceberg, and that understanding what intelligence is requires much more information from disciplines such as Cognitive Psychology (see, for instance, Thagard 1996) and Cognitive Neuroscience (see Gazzaniga 1998).
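
To make the logical content of Wason's task concrete, here is a small sketch (my own illustration, not taken from the cited papers). Four cards show "A", "K", "4" and "7", and the rule to test is "if a card shows a vowel, its other side shows an even number"; only the cards that could falsify the rule need to be turned over:

    # Wason selection task: which visible faces must be turned over
    # to test the rule "vowel on one side implies even number on the other"?
    visible = ["A", "K", "4", "7"]

    def must_turn(face):
        is_vowel = face.isalpha() and face in "AEIOU"
        is_odd_number = face.isdigit() and int(face) % 2 == 1
        # A vowel could hide an odd number; an odd number could hide a vowel.
        return is_vowel or is_odd_number

    print([card for card in visible if must_turn(card)])  # ['A', '7']

Most subjects pick the vowel and the even number; the logically correct choice is the vowel and the odd number, which is one more reason to doubt that conscious, explicit logic is the engine of everyday thought.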

It is also important to observe that all the reasons raised against human-level intelligence in AI systems do not mean that an AI system could not develop another kind of intelligence: a different kind that uses different "sensory perceptions" and, through properly constructed mechanisms, is able to replicate the fundamental points behind intelligence. It could be built so as to reason about and understand enough of "our world" to be useful to us. It is certainly not the same thing as making a bird, but it could be a useful airplane!

Conclusion

My main area of research is AI, which can be seen as the attempt to put a little more intelligence into our extremely dumb machines of today. It is interesting, however, that this endeavor tells us a lot about our own intelligence and about how to better nurture the intelligence of our children. Our quest to understand the universe frequently crosses the quest to understand ourselves. What better way of accomplishing this than helping our children fulfill their full potential? With a better understanding of what it means to be intelligent and a strong and mature emotional foundation, I believe our children will be well prepared to face the challenges of the future. We all know that the next century will be full of them.

References

Boden, Margaret A. (1994) What is Creativity?. In Dimensions of Creativity, Bradford Book, MIT Press.

Cleeremans, Axel (1996) Principles for Implicit Learning. In D. Berry (Ed.), How implicit is implicit learning? (pp. 196-234), Oxford University Press.

CYC (1998) Cycorp web site. http://www.cyc.com

Dreyfus, Hubert L. (1992) What Computers Still Can't Do. The MIT Press, London, England. Revised edition of "What Computers Can't Do" (1979).

Gazzaniga, Michael S. and Ivry, Richard B. and Mangun, George R., (1998) Cognitive Neuroscience, The Biology of Mind. W. W. Norton & Company, Inc.

Harnad, Stevan (1990) The Symbol Grounding Problem. Physica D 42: 335-346.

Hofstadter, Douglas (1995) Fluid Concepts and Creative Analogies. BasicBooks, HarperCollins Publishers, Inc.

Lenat, Douglas B. and Guha, R. V. (1990), Building Large Knowledge-Based Systems. Addison-Wesley Publishing Company, Inc.

Mahesh, Kavi and Nirenburg, S. and Cowie, J. and Farwell, D. (1996) An Assessment of CYC for Natural Language Processing. Computing Research Laboratory, New Mexico State University http://crl.nmsu.edu/Research/Pubs/MCCS/Postscript/mccs-96-302.ps

Minsky, Marvin (1974) A Framework for Representing Knowledge. MIT Memo 306.

Minsky, Marvin (1986) The Society of Mind. Touchstone Book, Simon & Schuster.

Navega, Sergio C. (1998) Artificial Intelligence as Creative Pattern Manipulation: Recognition is not Enough (paper available from the author: snavega@attglobal.net)

Russell, Stuart, Norvig, Peter (1995) Artificial Intelligence, A Modern Approach, Prentice-Hall, Inc.

Searle, John R. (1980) Minds, Brains, and Programs. Behavioral and Brain Sciences 3, 417-24, also published in Boden, Margaret A. (ed) The Philosophy of Artificial Intelligence, Oxford Readings in Philosophy, (1990) Oxford University Press.

Thagard, Paul (1996) Mind, Introduction to Cognitive Science. Massachusetts Institute of Technology, Bradford Book.

Wason, Peter C. (1966) Reasoning. In B. M. Foss (ed.), New horizons in psychology. Harmondsworth, Penguin.

Whitten, David (1995) Unofficial, Unauthorized CYC FAQ http://www.mcs.com/~drt/software/cycfaq

Yuret, Deniz (1996) The Binding Roots of Symbolic AI: A brief review of the CYC project. MIT Artificial Intelligence Laboratory.