
Detailed Research Statement


Sorry, this document is not available yet.

It is our intention to publish here a detailed statement of the disciplines and the rationale behind the discoveries we make. Unfortunately, the first version of this document is not ready yet.

Meanwhile, what follows is a quick-and-dirty checklist of some of the items that will be covered (mostly outdated material, sorry).


Multidisciplinarity
I don't think we can develop intelligent computers and robots without a deep and serious multidisciplinary investigation of the matter. This includes, but is not limited to, Cognitive Science, Neuroscience, Philosophy, Evolutionary and Developmental Psychology, Linguistics, Computer Science, Mathematics, and Anthropology. There is a risk of getting lost in that sea of knowledge, but in my vision there is no easy shortcut: all those disciplines contribute fundamental insights. It is our responsibility to recognize what is relevant and discard what is not.

Basic Points of Intelligence
My main interest can be understood as the search for the fundamental principles behind intelligence. I reject the idea of merely producing intelligent behavior, which could, in principle, be achieved by developing purely reactive or pre-loaded systems. What is necessary, in my vision, is a way to develop machines that gradually become intelligent through their own efforts to explore the world. It will be clear later, however, that this is not a robotics initiative.

Problems With Typical AI
I'm a strong critic of the traditional approaches to AI, which include (but are not limited to) typical knowledge representation formalisms (first-order logic, semantic networks, conceptual graphs, scripts, frames, etc.), typical inference methods, search techniques and typical connectionist architectures. This document will, in the future, expose the reasons behind these criticisms.

Common Sense is not Intelligence
Common sense reasoning is frequently confused with intelligence. Some researchers say that in order to obtain the latter, one has to have the former. This seems to be wrong, and my point will be developed further in the main document. I believe that this confusion is responsible for some of the problems faced by the "strong AI" sect. Intelligence, in short, is the ability of a system to generate new knowledge.

The Body as Indispensable to Intelligence
Could we think about human intelligence without a body? It seems that this is not possible. Bodies are essential parts of the intelligent entity/world system. This vision follows, among many others, some of the ideas of Clark (1998). I will develop in the main document other reasons to believe in the foundational importance of this aspect. However, amazingly, one of my goals will be to challenge this very assumption. Much of my research involves ascertaining to what extent this can be done safely, and then how to do it wisely.

Bottom-up or Top-Down?
Since the official birth of AI, things have (roughly) settled around two competing paradigms: the "top-down" or symbolic model and the "bottom-up" or connectionist model. The fight continues to this day, but I will try to find another alternative. No, it is not hybrid systems, a relatively new and interesting contribution (see Sun 1996). It is a focus on an intermediate area where the symbolic and connectionist aspects appear to meld together. Some of the interesting ideas associated with a possible approach to this question were put forward by Jaeger (1994), with his Dynamic Symbol System approach.

Recognition of Similarity and of Anomaly
Recognition of similarity is among the general abilities that any intelligent organism possesses. What may be more difficult to grasp is the real importance of this ability at a more fundamental level. I think this ability is the starting point of our cognition. Although there are lots of philosophical doubts about the validity of this principle, it is clear that biological organisms evolved under a more general principle, that of "information economy" (Loewenstein 1999). This is the first step toward the suggestion that brains are information compressors, and compression requires recognition of similarity (Wolff 1982, 1993, 1998).
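
To make the link between similarity and compression concrete, here is a minimal sketch (a toy substitution scheme of my own, not Wolff's actual model): a sequence can only be compressed if the system first recognizes that two stretches of it are similar enough to be treated as the same chunk.

from collections import Counter

def compress_once(text, width=4):
    # Find the most frequent chunk of `width` characters; if it repeats,
    # replace it with a single stand-in symbol (a "learned" pattern).
    chunks = Counter(text[i:i + width] for i in range(len(text) - width + 1))
    chunk, count = chunks.most_common(1)[0]
    if count < 2:
        return text, None          # nothing similar enough to exploit
    return text.replace(chunk, "#"), ("#", chunk)

raw = "the cat sat on the mat and the cat saw the rat"
packed, rule = compress_once(raw)
print(len(raw), "->", len(packed), "characters, using rule", rule)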

Innate Feature Detectors
We are the result of evolution, over a very long period of time, and we are the result of learning during our existence since birth. The thin line that divides the features that belong to the former from the ones that constitute the latter is one of the important things we have to understand. The problems solved by the brain begin after an initial level of processing done by sensory mechanisms (Clark 1998).

Perception and Invariance
There is a lot to be learned from J. J. Gibson's affordances and high-level invariants. Perception is one of the subjects that appear to be closely related to intelligence. What seems to be important here is the progressive refinement of perceptual detectors. New experiences may not only create new detectors for new features, but very often refine old detectors, widening their scopes or constraining them to classify better.
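
As a deliberately crude illustration of what "widening and constraining" a detector's scope could mean (a toy of my own, not a model of real perceptual learning), imagine a detector for a single scalar feature whose scope is just an interval:

class IntervalDetector:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def detects(self, x):
        return self.lo <= x <= self.hi

    def refine(self, x, is_positive):
        if is_positive and not self.detects(x):
            # widen the scope to admit the missed instance
            self.lo, self.hi = min(self.lo, x), max(self.hi, x)
        elif not is_positive and self.detects(x):
            # constrain the scope so the false alarm falls outside
            if x - self.lo < self.hi - x:
                self.lo = x + 1e-6
            else:
                self.hi = x - 1e-6

d = IntervalDetector(0.4, 0.6)
for value, positive in [(0.75, True), (0.7, False), (0.3, True)]:
    d.refine(value, positive)
print(round(d.lo, 3), round(d.hi, 3))   # widened for positives, pulled back from the negative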

Cognition From Sensorimotor Activities
The importance of interaction to the development of cognition cannot be overestimated. Here we follow the ideas of Jean Piaget and more modern counterparts such as Karmiloff-Smith.

Interaction as Disambiguation
While there are still some doubts about the real role of interaction in cognition, it is reasonable to see its effect as that of helping disambiguation: reducing the number of possible identifications of a feature to a more manageable set. This may suggest that one of the driving forces behind interactive actions is the desire (often unconscious) to eliminate or test unlikely candidates, to ease our classification and understanding of the world.
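
A minimal sketch of this idea (the objects, probes and outcomes below are entirely made up for illustration): each exploratory action splits the current candidate interpretations by the outcome it would produce, and acting lets us discard the candidates that disagree with what we actually observe.

candidates = {"mug", "vase", "can", "bottle"}

# Outcome each probe would yield for each candidate interpretation.
probes = {
    "push it": {"mug": "slides", "vase": "tips", "can": "rolls", "bottle": "rolls"},
    "tap it":  {"mug": "dull",   "vase": "ring", "can": "ring",  "bottle": "ring"},
}

def best_probe(cands, probes):
    # Pick the probe whose worst-case surviving candidate set is smallest.
    def worst_case(outcomes):
        groups = {}
        for c in cands:
            groups.setdefault(outcomes[c], set()).add(c)
        return max(len(g) for g in groups.values())
    return min(probes, key=lambda p: worst_case(probes[p]))

probe = best_probe(candidates, probes)
observed = "rolls"    # pretend this is what the interaction actually revealed
candidates = {c for c in candidates if probes[probe][c] == observed}
print(probe, "->", candidates)    # 'push it' leaves {'can', 'bottle'} to test further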

Induction and the Birth of Concepts
Inductive processes, although often criticized, should be seen neither as a way to reason nor as a necessary step of the scientific method, but simply as an important component of the basic mechanism of intelligence.
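
As a small, hedged illustration (toy features, nothing more), induction can be pictured as letting a concept emerge from whatever its observed instances happen to share:

instances = [
    {"has_wings", "flies", "lays_eggs", "small"},
    {"has_wings", "flies", "lays_eggs", "large"},
    {"has_wings", "lays_eggs", "small"},      # a flightless exception
]

# The induced "concept" is the shared core of the instances seen so far;
# it is guessed from examples, not deduced from prior rules.
concept = set.intersection(*instances)
print(concept)    # {'has_wings', 'lays_eggs'}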

The Birth of Language
One of the most polemical disputes in Cognitive Science centers on the innateness of language. Since Chomsky, the innateness of language has been the position of choice for the majority of cognitive scientists. Pinker proposes evolutionary roots for the "language machine". But recent attacks on this position have shed new light on this interesting dispute (Elman et al., Deacon, Bates, Christiansen). This is an important debate to follow, and my position is aligned with the latter. Language appears to be a dynamic system, and brains are things "devised" to learn such systems, not to carry them innately.

The Reverse Way
One of the original aspects that my research is trying to produce is related to the "reverse way". We are usually tempted to see language as grounded in (and emerging from) sensorimotor interactions, which are the fundamental sources of the "models" of the world that we assemble in our minds (mainly during childhood). I propose to see language, in adults, also as another source of information to feed these models. Nothing strange so far. But let us think further. Several models of the world that we keep in our minds don't have a direct sensory counterpart. They were built through the analogical reutilization of similar (but distal) sensory patterns, driven only by language (try to "see" abstractions such as "nation", "justice", "economy", etc.). This is one of the points supporting my idea of challenging the need for a body for intelligence, because it allows us to think of an architecture that can be driven by language (plus a few more things...) to assemble interesting models of the world. But, in order to do this, some form of grounding is necessary. The most ambitious part of my research is the establishment of the characteristics and functional requirements necessary for this task to be successful. The main goal is to provide conventional knowledge-based systems with such a grounding, in order to significantly enhance their ability to acquire knowledge automatically.

The Binding Problem
Cognitive Neuroscience deals, among several problems, with one that is very interesting: how our mind "binds" together several aspects of our perception that are represented in different parts of the brain. The discovery of how this mechanism works can shed some light on the fundamental aspects of perception, representation and categorization.

Brain Plasticity and Lateralization
Brain development, lateralization and plasticity are processes that can show us something about the way our brain solves the problem of understanding the world with a limited amount of resources.

Implicit Learning and Unconscious Knowledge
Arthur Reber started this subdiscipline of Psychology in the late 1960s. Artificial grammar learning is a subject full of insights into how we deal with the problem of acquiring regularities from things we perceive unconsciously. This discipline is my main reason to criticize traditional, logic-based approaches to AI: those approaches try to model only what appears to be consciously available for the resolution of problems, and modeling only that leads to the traditional problems of AI, such as the frame problem.
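
A hedged sketch of the classic artificial grammar learning setup (the training and test strings below are toy examples, not Reber's actual grammar): after exposure, the sheer familiarity of letter chunks is enough to separate grammatical from ungrammatical test items, without any explicit rule ever being represented.

from collections import Counter

training = ["TSXS", "TSSXXVV", "PTVPS", "PVV", "TXXVPS"]

def bigrams(s):
    return [s[i:i + 2] for i in range(len(s) - 1)]

# Frequencies of the two-letter "chunks" seen during exposure.
chunk_freq = Counter(b for s in training for b in bigrams(s))

def familiarity(s):
    # Average training frequency of the string's bigrams (chunk strength).
    bs = bigrams(s)
    return sum(chunk_freq[b] for b in bs) / len(bs)

for test in ["TSXXVV", "XTPSVX"]:   # the first reuses familiar chunks, the second doesn't
    print(test, round(familiarity(test), 2))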

Spreading Activation and Neural Darwinism
William Calvin and Gerald Edelman are two names to be remembered. Their propositions of competing mechanisms for several "fronts" of spreading activation in the brain may say some important things about our ability to reason.
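
For readers unfamiliar with the basic notion, here is a minimal spreading activation sketch over a toy semantic network (the graph, decay factor and step count are my own illustrative assumptions, not Calvin's or Edelman's models): activation injected at one node spreads along links and decays with distance.

graph = {
    "dog":    ["bark", "pet", "wolf"],
    "pet":    ["cat", "dog"],
    "wolf":   ["dog", "forest"],
    "cat":    ["pet", "meow"],
    "bark":   ["dog"],
    "meow":   ["cat"],
    "forest": ["wolf"],
}

def spread(source, steps=2, decay=0.5):
    # Inject activation at the source and let it propagate along links,
    # decaying at every hop; competing "fronts" simply sum where they meet.
    activation = {source: 1.0}
    frontier = {source: 1.0}
    for _ in range(steps):
        nxt = {}
        for node, act in frontier.items():
            for nb in graph[node]:
                nxt[nb] = nxt.get(nb, 0.0) + act * decay
        for node, act in nxt.items():
            activation[node] = activation.get(node, 0.0) + act
        frontier = nxt
    return activation

print(sorted(spread("dog").items(), key=lambda kv: -kv[1]))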

Complex, Dynamic and Adaptive Systems
Thelen and Kelso develop models of the brain in which complex and adaptive modules self-organize to present optimum performance.

The Neural Code
How do neurons encode stimuli? Timing between spikes, average rate over a train of spikes, or temporal synchrony? Following the work of Rieke (1997), Maass (1999) and Singer (1994), among several others, may reveal some clues about this essentially important aspect. The prize? Some clues about the "representation of information" in the brain.
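
A purely illustrative sketch (synthetic spike times, not real recordings): two spike trains with the same mean firing rate can still differ sharply in their inter-spike intervals, which is one way to see why a pure rate code may throw information away.

# Two synthetic spike trains over one second, both with 100 spikes.
regular = [0.01 * i for i in range(1, 101)]                       # evenly spaced
bursty  = [0.1 * i + 0.002 * j for i in range(10) for j in range(10)]  # 10 tight bursts

def mean_rate(spikes, duration=1.0):
    return len(spikes) / duration

def intervals(spikes):
    return [b - a for a, b in zip(spikes, spikes[1:])]

for name, train in [("regular", regular), ("bursty", bursty)]:
    isis = intervals(train)
    print(name, "rate:", mean_rate(train), "Hz,",
          "ISI min/max:", round(min(isis), 4), round(max(isis), 4))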

Analogy and Metaphor
Cognitive psychology contributes much to our understanding of cognition. Analogy and metaphor, as seen by Lakoff, Johnson, Veale, Gentner, Hofstadter, Holyoak, Thagard and others, could reveal important characteristics of our mental life.

Creativity
Few would question that creativity is essential to intelligence. Hofstadter, Boden, Weisberg and several others treat creativity in suggestive ways. Although much is yet to be discovered, any attempt to produce intelligent systems must find a place for creativity.

Scientific Reasoning
Children can be seen as intuitive scientists (cf. Karmiloff-Smith), looking at the world and trying to find evidence to support their "discoveries". It is necessary to plant an artificial cognitive entity on solid ground, and it seems that this ground should be naturally skeptical, the way a scientist usually is.

References (partial list)

Clark, Andy (1998) Being There, Putting Brain, Body, and World Together Again. MIT Press.

CYC (1998) Cycorp web site. http://www.cyc.com

Dreyfus, Hubert L. and Dreyfus, Stuart (1986) Why Computers May Never Think Like People. Technology Review v. 89 (Jan 86). Also appeared in Ruggles III, Rudy L. (ed) (1997) Knowledge Management Tools. Butterworth-Heinemann.

Elman, J. L. (1995). Language as a dynamical system. In R.F. Port & T. van Gelder (Eds.), Mind as Motion: Explorations in the Dynamics of Cognition. Cambridge, MA: MIT Press. Pp.195-223.

Elman, J.L. (1999). Origins of language: A conspiracy theory. In B. MacWhinney (Ed.) Origins of Language. Hillsdale, NJ: Lawrence Erlbaum Associates.

Harnad, Stevan (1990) The Symbol Grounding Problem. Physica D 42: 335-346.

Hofstadter, Douglas (1995) Fluid Concepts and Creative Analogies. BasicBooks, HarperCollins Publishers, Inc.

Jaeger, Herbert (1994) Dynamic Symbol Systems, PhD Thesis, Technische Fakultät, Universität Bielefeld.

Karmiloff-Smith, Annette (1992) Beyond Modularity, A developmental perspective on cognitive science. MIT Press.

Landauer, Thomas K. and Dumais, Susan T. (1997) A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction and Representation of Knowledge. Psychological Review 104, p 211-240.

Lenat, Douglas B. and Guha, R. V. (1990), Building Large Knowledge-Based Systems. Addison-Wesley Publishing Company, Inc.

Loewenstein, Werner R. (1999) The Touchstone of Life. Oxford University Press.

LSA (1998), Latent Semantic Analysis Web Site, http://lsa.colorado.edu/content.html, Institute of Cognitive Science, Dept. of Psychology, University of Colorado at Boulder.

Maass, Wolfgang and Bishop, Christopher M. (1999) Pulsed Neural Networks. Bradford Book-MIT Press.

Mahesh, Kavi and Nirenburg, S. and Cowie, J. and Farwell, D. (1996) An Assessment of CYC for Natural Language Processing. Computing Research Laboratory, New Mexico State University http://crl.nmsu.edu/Research/Pubs/MCCS/Postscript/mccs-96-302.ps

Minsky, Marvin (1974) A Framework for Representing Knowledge. MIT Memo 306.

Rieke, Fred and Warland, David and de Ruyter van Steveninck, Rob and Bialek, William (1997) Spikes: Exploring the Neural Code. Bradford Book, MIT Press.

Schank, Roger C. (1990) Tell Me A Story, Narrative and Intelligence. Northwestern University Press (1995 edition).

Singer, Wolf (1994) The Role of Synchrony in Neocortical Processing and Synaptic Plasticity, in Models of Neural Networks II, Springer-Verlag New York, Inc

Sun, Ron (1996) Hybrid Connectionist-Symbolic Models: a report from the IJCAI'95 workshop on connectionist-symbolic integration

Wolff, J. Gerard (1982) Language acquisition, data compression and generalization. Language & Communication, Vol. 2, No. 1, pp. 57-89.

Wolff, J. Gerard (1993) Computing, cognition and information compression. AI Communications 6(2), 107-127.

Wolff, J. Gerard (1998) Parsing as information compression by multiple alignment, unification and search, SEECS technical report.

Yuret, Deniz (1996) The Binding Roots of Symbolic AI: A brief review of the CYC project. MIT Artificial Intelligence Laboratory.
