Selected Newsgroup Message

From: "Sergio Navega" <snavega@attglobal.net>
Subject: Re: Defect of Turing Test?
Date: 12 Nov 1999 00:00:00 GMT
Message-ID: <80fdl2$5l1$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <80d9jj$35l$1@mulga.cs.mu.OZ.AU>
X-Date: Thu, 11 Nov 1999 10:11:04 -0200
X-Mod: ?
Organization: Intelliwise Research and Training
Followup-To: comp.ai
Newsgroups: comp.ai

Peter Van Roy wrote in message <80d9jj$35l$1@mulga.cs.mu.OZ.AU>...
>Dear all,
>
>It seems to be that there is a fundamental defect of the Turing
>Test that renders it useless as a test for machine intelligence.
>Specifically, the Turing Test is an imitation game, where a
>machine is tested for its ability to sustain humanlike discourse
>[1].  The flaw in this approach is that a machine, being
>disembodied, cannot acquire enough human knowledge.  For example,
>the machine would have no way of knowing what it is like to have
>an itchy back, except by studying it "from the outside".  Loading
>the machine with all such knowledge would be extremely difficult,
>because it would require taps on all the sensory inputs to the
>brain for extended periods of time.
>

One can say that the TT is problematic as a test of intelligence,
although one can be comfortable with its assessment of 'machine
intelligence'. I take the latter to mean a "subset" of intelligence
that simplifies somewhat the kinds of problems one sends the machine
to solve.

There is some merit in what you say about disembodied computers,
but there's also a problem. You can talk over the phone with a
congenitally blind person and, if not told, you may have difficulty
detecting that the person is blind if he/she is intent on "hiding"
this disability from you. So, with a truly intelligent machine, it
may be possible to have a "conversation" over a terminal and not be
able to distinguish it from a human. All it takes is a *really*
intelligent machine, which is the problem we have to solve.

A potential candidate for such a machine would be MIT's Cog robot,
20 or 30 years from now. Just take the brain out of its body, link
it to a terminal, and you'll have the equivalent of a disembodied
computer talking like a human over a terminal.

>It seems to me that intelligence can better be tested by an
>approach that tests certain kinds of reasoning abilities that do
>not depend on being human (e.g., playing chess).

I doubt that chess playing is the ideal way to check for intelligence.

> Other abilities,
>which depend fundamentally on being human (e.g., related to
>"dreams" or "intuition" or "the human condition"), would not be
>testable at all.  It is not clear that abilities such as "reading
>English" or "writing English" are testable; reading or writing any
>significant text requires much human-specific knowledge.
>

I would say that the TT is flawed mostly because it is not
replicable with similar results: one person may give one kind of
assessment while another would probably give a quite different one.

It all boils down to subjective measures of intelligence, and this
occurs because people differ in their ability to judge someone
else's capacities. A layman is easily impressed by Eliza-like
programs, while for any of "us" that will not do.

I'm a bit skeptical of any assessment of intelligence, in humans or
machines. My idea of intelligence is directly linked to what
one does when one doesn't know how to solve a problem. Unknown
problems tend to have several possible solutions, often with
no clear "best". It seems that when we finally manage to create
an intelligent machine, everybody (or at least most of the
community) will agree that the machine is in fact intelligent,
although nobody will be able to say exactly why.

Regards,

________________________________________________________
Sergio Navega
Intelliwise Research
http://www.intelliwise.com/snavega


From: "Sergio Navega" <snavega@attglobal.net>
Subject: Re: Defect of Turing Test?
Date: 13 Nov 1999 00:00:00 GMT
Message-ID: <80i7c1$766$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <80d9jj$35l$1@mulga.cs.mu.OZ.AU> <80h1g2$ojd$1@mulga.cs.mu.OZ.AU>
X-Date: Fri, 12 Nov 1999 16:36:06 -0200
X-Mod: ?
Organization: Intelliwise Research and Training
Followup-To: comp.ai
Newsgroups: comp.ai

John Reidar Mathiassen wrote in message <80h1g2$ojd$1@mulga.cs.mu.OZ.AU>...
>Just some thoughts,
>
>Perhaps some computers can convince humans that they are intelligent by
>talking about itchy backs, the weather and the Alabama Elephants, but it
>would be far more interesting if it could discuss a topic with imagination
>and creativity, or discuss values pertinent to all living and/or intelligent
>beings, and thereby convince a human participant that it is intelligent.
>
>By the way I think the Turing test could be the wrong way to go in testing
>for intelligence. I think intelligence is a "seeing, and sort of knowing
>how it was done, is believing." thing. In my opinion there is obviously a
>difference between intelligent behaviour and intelligence. One can question
>if an entity is intelligent if it has embedded intelligent behaviour that
>the programmer has specified in the form of rules. If an entity learns by
>interacting in a complex environment, and thereby starts behaving
>intelligently, is it intelligent? If an entity programmed by a human (that
>is no NN, GA etc) seems to behave intelligently, is it then intelligent?
>What difference does the question of preprogrammed versus
>learnt-by-themselves have to say in determining if an entity is
>intelligent?
>

This is not a consensus view, but my position is that one can only
talk about intelligence if learning is an important part of the
process. I have other ways to support this idea, but I'll stick
with a relatively common one.

Take the CYC project, for instance. What we know today leads us to
believe that its intelligence was "built" by the manual introduction
of propositions and rules of thumb. Its designers expect that a
reasonable amount of "knowledge" will make the system "fly", that
is, learn the rest of the knowledge by itself. This is indeed
possible. But the chances of it not working are huge!

The big question is the predisposition to learn. Without that,
the system will not be able to acquire new knowledge automatically
in the future. But if the system is able to learn by itself, then
why worry about introducing hundreds of thousands of hand-crafted
rules? Why not start educating the system as if it were in junior
high school?

One of the greatest possibilities of an Artificial Intelligence
is to surpass its creators. Doing the same mental work that any
human can do is not as good as doing mental work that no human
could do. That would be a real improvement.

In order to go beyond its creators, the system must be as
independent as possible of any preconceptions and restricting
ideologies of its makers. This is not just a desirable feature,
it is a necessary one.

Intelligence is often linked to creativity. To be creative, one
must often break the "rules" and "standards" that one was taught.
It is this "spirit" that allows an organism to look for uncommon
(and valuable) solutions, which is a component of most creative
behaviors.

A system that was always "told" how to handle its knowledge
base will not be creative. The result of its processing will
be a mere transformation of antecedents into consequents.
No antecedents, no consequents. To be really intelligent, one
has to be able to generate antecedents, with quality and quantity.
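
Just to make the "no antecedents, no consequents" point concrete,
here is a minimal sketch in Python (the rules and facts are made up
for illustration; this is not any real system): a pure forward
chainer can only derive what its hand-written rules and supplied
facts already entail, and with no facts at all it derives nothing.

# Minimal forward-chaining sketch (hypothetical rules): a system that
# only applies rules it was "told" can never go beyond what its given
# antecedents already entail.

RULES = [
    ({"bird", "healthy"}, "can_fly"),          # if bird and healthy then can_fly
    ({"can_fly", "migratory"}, "travels_far"), # if can_fly and migratory then travels_far
]

def forward_chain(facts, rules):
    """Repeatedly fire every rule whose antecedents are all present."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(forward_chain({"bird", "healthy", "migratory"}, RULES))  # both consequents appear
print(forward_chain(set(), RULES))   # empty: no antecedents, no consequents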

_____________________________________________________
Sergio Navega
Intelliwise Research
http://www.intelliwise.com/snavega


From: dnk@OMIT.cs.mu.oz.au (David Kinny)
Subject: Re: Defect of Turing Test?
Date: 13 Nov 1999 00:00:00 GMT
Message-ID: <80iak0$8i6$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <80d9jj$35l$1@mulga.cs.mu.OZ.AU> <80h1g2$ojd$1@mulga.cs.mu.OZ.AU> <80i7c1$766$1@mulga.cs.mu.OZ.AU>
X-Date: 13 Nov 1999 11:11:11 +1100
X-Mod: ?
Organization: Computer Science, The University of Melbourne
Followup-To: comp.ai
Newsgroups: comp.ai

"Sergio Navega" <snavega@attglobal.net> writes:

>John Reidar Mathiassen wrote in message <80h1g2$ojd$1@mulga.cs.mu.OZ.AU>...
>>Just some thoughts,
>>
>>Perhaps some computers can convince humans that they are intelligent by
>>talking about itchy backs, the weather and the Alabama Elephants, but it
>>would be far more interesting if it could discuss a topic with imagination
>>and creativity, or discuss values pertinent to all living and/or intelligent
>>beings, and thereby convince a human participant that it is intelligent.
>>
>>By the way I think the Turing test could be the wrong way to go in testing
>>for intelligence. I think intelligence is a "seeing, and sort of knowing
>>how it was done, is believing." thing. In my opinion there is obviously a
>>difference between intelligent behaviour and intelligence. One can question
>>if an entity is intelligent if it has embedded intelligent behaviour that
>>the programmer has specified in the form of rules. If an entity learns by
>>interacting in a complex environment, and thereby starts behaving
>>intelligently, is it intelligent? If an entity programmed by a human (that
>>is no NN, GA etc) seems to behave intelligently, is it then intelligent?
>>What difference does the question of preprogrammed versus
>>learnt-by-themselves have to say in determining if an entity is
>>intelligent?
>>

>It is not consensual, but my position is that one can only talk
>about intelligence if learning is an important part of the process.
>I have other ways to support this idea, but I'll stick with a
>relatively common one.

>Take the CYC project, for instance. What we know today leads us to
>believe that its intelligence was "built" by the manual introduction
>of propositions and rules-of-thumb. It is expected by its
>designers that a reasonable amount of "knowledge" will make the
>system "fly", that is, learn the rest of the knowledge by itself.
>This is indeed possible. But the chances of not working are huge!

>The big question is the predisposition to learn. Without that,
>the system will not be able to acquire new knowledge automatically
>in the future. But if the system is able to learn by itself, then
>why worry about introducing hundreds of thousands of hand-crafted
>rules? Why not starting to educate the system as if it were
>in junior high-school?

Hi Sergio,

To me the obvious answer is that some rather huge corpus of knowledge
about the contents and behaviour of the world is a prerequisite for
the education process.  This is what embodied systems learn directly
from the world: naive geometry like up, down, bigger, smaller, inside
and outside, naive physics like support, falling, liquid flow, and an
enormous amount more, much of which is probably never verbalised.  You
can't learn this from books because such "common sense" isn't written
down; everyone is expected to know it, no matter how stupid they are.

Now embodied systems can undoubtedly learn such knowledge from scratch,
although we don't understand how humans do it or how to design systems
with the capability to do so.  With CYC the great hope was that a
painstakingly crafted KB would provide a base for a less flexible form
of learning somewhat more akin to junior-high education, or at least
for an experimental programme that could lead to the identification and
development of suitable learning mechanisms.

>One of the greatest possibilities of an Artificial Intelligence
>is to surpass its creators. To do the same mental work that any
>human will do is not so good as doing a mental work that no
>human could do. That's a real improvement.

>In order to go beyond its creators, the system must be as
>independent as possible of any preconceptions and restricting
>ideologies of its makers. This is not just a desirable feature,
>it is a necessary one.

>Intelligence is often linked to creativity. To be creative it is
>required that one often breaks the "rules" and "standards" that
>one was taught. It is this "spirit" that allows an organism to
>look for uncommon (and valuable) solutions, which is a component
>of most of the creative behaviors.

>A system that was always "told" how to handle its knowledge
>base will not be creative. The result of its processing will
>be a mere transformation of antecedents into consequents.
>No antecedents, no consequents. To be really intelligent, one
>has to be able to generate antecedents, with quality and quantity.

I agree with you that some forms of learning are needed in any
intelligent system, but there are many different capacities lumped
together under the term, and it isn't clear which types of learning
are the best focus of initial attempts to make artificial systems
more intelligent.  The sort of learning system that can be truly
creative appears to me to be well beyond our grasp for some time to
come, and I conclude that we need to develop a large number of more
primitive learning mechanisms and infrastructure before we understand
how to construct systems within which such learning can occur.

--
David

From: "Sergio Navega" <snavega@attglobal.net>
Subject: Re: Defect of Turing Test?
Date: 14 Nov 1999 00:00:00 GMT
Message-ID: <80l8lr$4iu$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <80d9jj$35l$1@mulga.cs.mu.OZ.AU> <80h1g2$ojd$1@mulga.cs.mu.OZ.AU> <80i7c1$766$1@mulga.cs.mu.OZ.AU> <80iak0$8i6$1@mulga.cs.mu.OZ.AU>
X-Date: Sat, 13 Nov 1999 10:26:40 -0200
X-Mod: ?
Organization: Intelliwise Research and Training
Followup-To: comp.ai
Newsgroups: comp.ai

David Kinny wrote in message <80iak0$8i6$1@mulga.cs.mu.OZ.AU>...
>"Sergio Navega" <snavega@attglobal.net> writes:
>
>>It is not consensual, but my position is that one can only talk
>>about intelligence if learning is an important part of the process.
>>I have other ways to support this idea, but I'll stick with a
>>relatively common one.
>
>>Take the CYC project, for instance. What we know today leads us to
>>believe that its intelligence was "built" by the manual introduction
>>of propositions and rules-of-thumb. It is expected by its
>>designers that a reasonable amount of "knowledge" will make the
>>system "fly", that is, learn the rest of the knowledge by itself.
>>This is indeed possible. But the chances of not working are huge!
>
>>The big question is the predisposition to learn. Without that,
>>the system will not be able to acquire new knowledge automatically
>>in the future. But if the system is able to learn by itself, then
>>why worry about introducing hundreds of thousands of hand-crafted
>>rules? Why not starting to educate the system as if it were
>>in junior high-school?
>
>Hi Sergio,
>
>To me the obvious answer is that some rather huge corpus of knowledge
>about the contents and behaviour of the world is a prerequisite for
>the education process.  This is what embodied systems learn directly
>from the world: naive geometry like up, down, bigger, smaller, inside
>and outside, naive physics like support, falling, liquid flow, and an
>enormous amount more, much of which is probably never verbalised.  You
>can't learn this from books because such "common sense" isn't written
>down; everyone is expected to know it, no matter how stupid they are.
>

Hi, David,

I agree entirely with what you wrote, but I think there's more to it.
A blind human is able to talk about the color of the sea or the
moonlight. That blind human does not "lose" the ability to talk
intelligently about sensory aspects that he/she is not provided
with.

The last question of my previous post was poorly put: it is not
enough to start at junior high if the system does not have the
kind of concepts you mentioned. But more than having those
concepts, I see the ability to *autonomously develop* such concepts
from more primitive ones as the fundamental requirement. This is
the core of the point, in my opinion.

I posed that question just to reveal more clearly the issue of
learning: the Cyc project spent a lot of time manually defining
things such as these:

#$NationalOrganization
#$LocalCustomerContactPoint
#$PubliclyHeldCorporation
#$DirectorOfOrganization
#$MaritalStatusOfPeople

and hundreds of thousands of others. These are not domain specific
definitions, these came from the most "general" part of their
ontology, the upper level. How come?

Each one of these concepts is in fact a highly complex structure
that human adults can understand only after a lot of effort. Each
one is supported by a huge number of lower level concepts, probably
reducible to the level of childish things like "give me", "take
that", "the sensory aspect of physical actions", "the sensory
aspect of position in space", etc.

My point is that an architecture like CYC can only succeed if
it is able to grow *by itself* (and from a reasonably low level
set of primitive concepts), a coherent structure of higher
level concepts based on the associations of lower level stuff.
This association demands the ability to construct new cognitive
concepts based on the perception that prior concepts are being
repeated in similar contexts.

With this, the system obtains not only the possibility of
developing the new concepts, but also (very important) the
ability to *associate* these concepts among themselves, which
is necessary to bring them into attention during a task
like natural language processing.
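
A toy sketch in Python of the kind of categorization mechanism I
have in mind (every name here is hypothetical; this is not Cyc's
machinery): when the same lower-level concepts keep co-occurring in
similar contexts, the recurring bundle is promoted to a new
higher-level concept and linked back to its parts.

from collections import Counter
from itertools import combinations

class ConceptLearner:
    """Promote recurring bundles of lower-level concepts to new concepts."""
    def __init__(self, threshold=3):
        self.pair_counts = Counter()   # how often two concepts co-occur
        self.concepts = {}             # new concept -> the parts it links
        self.threshold = threshold

    def observe(self, context):
        """context: the set of lower-level concepts active in one episode."""
        for a, b in combinations(sorted(context), 2):
            self.pair_counts[(a, b)] += 1
            if self.pair_counts[(a, b)] == self.threshold:
                name = "concept(%s+%s)" % (a, b)
                self.concepts[name] = {a, b}    # new node, linked to its parts
        return self.concepts

learner = ConceptLearner()
for _ in range(3):   # the same bundle keeps showing up in the "news"
    learner.observe({"corporation", "board-of-directors", "elected-by-public"})
print(learner.concepts)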

Now suppose that in the near future, after the deployment of the
Cyc system, some new kind of company surfaces: a publicly held
company that is driven by a board of directors elected by the
voting public (like the president of a nation). There's no such
thing today, and Cyc certainly does not have the corresponding
concept. What are the options? I can see two.

Cyc may have this concept manually introduced (as a "fix"):

#$PubliclyHeldCorporationVotedBoard

In this case, it's not Cyc that is intelligent, but the human
programmer(s) who categorized the appearance of the new concept
and introduced it there. What the programmer did is the important
thing!

The other option would be for Cyc itself to perceive the appearance
of this new concept from the "reading" of newspapers, newswire feeds,
conversation with humans, etc. This requires a categorization
and conceptualization mechanism, along with that association
strategy.

What I'm saying is that this cat&conc mechanism is the "core" of
this process. It is not something to do later, it is the first
thing to be developed. This will allow the system to load its KB
throughout the interactions with human operators (question/answers),
the stand-alone reading of textual material, etc.

Obviously, this is not a new idea; it has been tried for decades.
But we can't give up just because it's difficult.

>Now embodied systems can undoubtedly learn such knowledge from scratch,
>although we don't understand how humans do it or how to design systems
>with the capability to do so.  With CYC the great hope was that a
>painstakingly crafted KB would provide a base for a less flexible form
>of learning somewhat more akin to junior-high education, or at least
>for an experimental programme that could lead to the identification and
>development of suitable learning mechanisms.
>

I may agree with that strategy only if it is used to "test the
limits" of an approach that we know beforehand will present
problems. It's just like doing a lab experiment to show that a
hypothesis is wrong (and, in the process, to learn how to overcome
the difficulties).

>
>>A system that was always "told" how to handle its knowledge
>>base will not be creative. The result of its processing will
>>be a mere transformation of antecedents into consequents.
>>No antecedents, no consequents. To be really intelligent, one
>>has to be able to generate antecedents, with quality and quantity.
>
>I agree with you that some forms of learning are needed in any
>intelligent system, but there are many different capacities lumped
>together under the term, and it isn't clear which types of learning
>are the best focus of initial attempts to make artificial systems
>more intelligent.  The sort of learning system that can be truly
>creative appears to me to be well beyond our grasp for some time to
>come, and I conclude that we need to develop a huge amount of more
>primitive learning mechanisms and infrastucture before we understand
>how to construct systems within which such learning can occur.
>

There's an important question here that I'd like to bring into
discussion. What if this creative ability is a *necessary* condition
in order to get that learning strategy started? Let me try to
clarify this a bit.

The computational problem of interpreting sensory signals is
humongous. We receive a bunch of apparently disconnected signals,
with lots of noise and ambiguity. Nonetheless, any child is able
to use, for instance, visual perception with amazing results.
Somehow, the human brain is able to tame this complexity and
extract the relevant features from it. Then the task that's left
is "just" to associate a name (a symbol) with each relevant
feature, and we have the basis for language.

An interesting result from Computational Learning Theory concerns
the complexity of learning: without special knowledge of the
domain, any learning algorithm will probably perform no better
than a random, brute-force method. This is somewhat related to
the NFL (No Free Lunch) theorem. I find this very interesting.
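
For reference, a rough statement of the optimization version of
that result (Wolpert and Macready's No Free Lunch theorem, 1997),
where d^y_m denotes the sample of cost values obtained after m
distinct evaluations of a cost function f, and a_1, a_2 are any
two search algorithms:

  \sum_{f} P(d^{y}_{m} \mid f, m, a_{1}) = \sum_{f} P(d^{y}_{m} \mid f, m, a_{2})

Summed (in effect, averaged) over all possible cost functions, no
algorithm does better than any other; any real advantage has to
come from knowledge of the specific domain.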

When one studies the self-organization of the visual cortex,
what one sees happening is something like that. The cortex
adjusts itself, sometimes driven by noisy signals, but
certainly requiring external natural signals (a recent paper
in the journal Science confirmed this idea). The result is
a mechanism able to detect salient features of the visual
input: these are the orientation dominance columns. After that,
the visual cortex becomes an "expert" at identifying edges,
lines and orientations. Guess what appears to be the next step:
identifying higher-level features, which appear because of
correlations among the lower-level features. The problem of
extracting these higher-level features is similar to the one
before, and the neural mechanism is the same. I bet the
strategy used is also the same.

What is important here is that a random, self-organizing
mechanism was used to tame the complexity of the initial task
but, as knowledge of the domain started to accumulate, then
this knowledge was applied to *simplify* the task at hand,
raising the concerns of the whole architecture to a new level.
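
To illustrate the "same mechanism applied one level up" idea with
a deliberately crude toy (Python/NumPy; this is not a model of
cortex): the very same correlation-extraction step is applied
first to the raw inputs and then again to the features it produced.

import numpy as np

rng = np.random.default_rng(0)

def correlation_features(data, n_features):
    """Return the directions of strongest correlation in `data`
    (here simply the top principal components)."""
    data = data - data.mean(axis=0)
    cov = data.T @ data / len(data)
    _, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    return eigvecs[:, -n_features:].T          # strongest components last

# Level 0: noisy "sensory" input driven by two hidden correlated sources.
sources = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 8))
inputs = sources @ mixing + 0.1 * rng.normal(size=(500, 8))

# Level 1: extract low-level features from the raw input.
level1_filters = correlation_features(inputs, n_features=2)
level1_responses = inputs @ level1_filters.T

# Level 2: apply the *same* mechanism to the level-1 responses.
level2_filters = correlation_features(level1_responses, n_features=1)
print(level1_filters.shape, level2_filters.shape)   # (2, 8) (1, 2)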

This idea of using random explorations and giving successively
more attention to the ones that pay off is not new; Hofstadter
used something like it (the parallel terraced scan) in Copycat,
which was inspired by, among other sources, Selfridge's
Pandemonium and the Hearsay system.

There's a lot more to say about this, but to keep this short, I
propose that this very same strategy is used in the creation of
our high-level concepts. When a child learns what a car is, she
is probably going through a process in which random associations
are being tried over her past experiences. New experiences will
confirm or invalidate certain associations. Very often, children
make those "funny" generalizations. Learning is, then, the process
in which the surviving concepts are kept in memory and used as
the platform on which new concepts are built. Cyc should start
as a blind-deaf child, until someday it really understands
what a #$LocalCustomerContactPoint is.
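
And a tiny sketch of that "explore at random, then focus attention
on what pays off" strategy in Python (only in the spirit of the
parallel terraced scan, not Hofstadter's actual code): candidate
associations are sampled almost uniformly at first, and as their
estimated payoffs accumulate and the "temperature" drops, attention
concentrates on the survivors.

import math
import random

random.seed(1)
candidates = {"assoc_A": 0.2, "assoc_B": 0.6, "assoc_C": 0.4}  # hidden payoff rates
estimates = {name: 0.0 for name in candidates}                 # learned estimates
counts = {name: 0 for name in candidates}
temperature = 1.0

for step in range(200):
    # High temperature -> nearly random choice; low -> favour the best estimates.
    weights = [math.exp(estimates[n] / temperature) for n in candidates]
    name = random.choices(list(candidates), weights=weights)[0]
    reward = 1.0 if random.random() < candidates[name] else 0.0
    counts[name] += 1
    estimates[name] += (reward - estimates[name]) / counts[name]  # running mean
    temperature = max(0.05, temperature * 0.98)                   # gradual "cooling"

print(max(estimates, key=estimates.get))  # usually the best-paying association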

Regards,

_________________________________________________________
Sergio Navega
Intelliwise Research
http://www.intelliwise.com/snavega


From: "Sergio Navega" <snavega@attglobal.net>
Newsgroups: comp.ai
Subject: Re: Defect of Turing Test?
Followup-To: comp.ai
Date: 19 Nov 1999 09:07:19 +1100
Organization: Intelliwise Research and Training
Lines: 224
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
Message-ID: <811tan$3fm$1@mulga.cs.mu.OZ.AU>
References: <80d9jj$35l$1@mulga.cs.mu.OZ.AU> <80h1g2$ojd$1@mulga.cs.mu.OZ.AU> <80i7c1$766$1@mulga.cs.mu.OZ.AU> <80iak0$8i6$1@mulga.cs.mu.OZ.AU> <80l8lr$4iu$1@mulga.cs.mu.OZ.AU> <80v9i2$78s$1@mulga.cs.mu.OZ.AU>
NNTP-Posting-Host: mulga.cs.mu.oz.au
X-Date: Thu, 18 Nov 1999 08:52:40 -0200
X-Mod: ?

David Kinny wrote in message <80v9i2$78s$1@mulga.cs.mu.OZ.AU>...
>"Sergio Navega" <snavega@attglobal.net> writes:
>
>>The last question of my previous post was really misconstruct:
>>it is not enough to start in junior high if the system does not
>>have the kind of concepts you mentioned. But more than having
>>those concepts, I see the need of having the abilities to
>>*autonomously develop* such concepts from more primitive ones as
>>the fundamental ability. This is the core of the point,
>>in my opinion.
>
>I agree absolutely that some mechanism to develop new concepts from
>old ones is necessary.  I'm guessing that the complexity and modus
>operandum of such a mechanism may well depend on where you start from,
>i.e. how primitive the basis is.  You also need some way to revise,
>refine and extend existing concepts.  It seems to me that autonomously
>developing concepts from raw sensory input is substantially more
>difficult than bootstrapping off a well-structured base.
>

Indeed. Starting with a preconceived structure and feeding it
"off-the-shelf" concepts is easier than devising the algorithm
to let the machine build them by itself. Unfortunately, I don't
think there's any other way.

>>The other option would be for Cyc itself to perceive the appearance
>>of this new concept from the "reading" of newspapers, newswire feeds,
>>conversation with humans, etc. This requires a categorization
>>and conceptualization mechanism, along with that association
>>strategy.
>
>>What I'm saying is that this cat&conc mechanism is the "core" of
>>this process. It is not something to do later, it is the first
>>thing to be developed. This will allow the system to load its KB
>>throughout the interactions with human operators (question/answers),
>>the stand-alone reading of textual material, etc.
>
>I guess the position I'm taking is that it may be necessary to
>explore a whole lot of different approaches as to how such a
>mechanism might work before a suitable mechanism is understood
>and refined.  This research may be much easier if it's done on top
>of a suitable base.  (I'm not suggesting that the CYC base is
>necessarily a good choice.)  Once such a mechanism is developed
>it may then be possible to start from a much lower level and
>for the system to learn far more of its concepts autonomously.
>When that is possible things start to get really interesting.
>But it also raises the prospect of systems whose operation
>and capabilities are far harder for us to understand.  It will
>be rather ironic if AI succeeds in producing such systems but
>then can't explain to us how they work.
>

I guess there are two interesting points here. One is the
ability of the system to "explain" its line of reasoning. I
consider this a fundamental ability, one which I won't trade
for anything else. It is imperative that the system be able
to turn us into believers in its conclusions through reasonable
and rational explanations.

The other question is *our* ability to peek at the machine's
"guts" (so to speak) in order to see what structures it has
developed. I think this will not be possible. Neural networks
are naturally this way, but I'm proposing that even a (properly
designed) symbolic system will show this difficulty. This is
related to the fact that, in order to understand some
associations of concepts, one has to use some sort of links
(akin to semantic nets) in which the 'weight' is numeric and
the organization very complex (because it must incorporate
hundreds of different contexts). Our poor conscious brains will
probably not be able to follow the concepts involved in even
simple thoughts.

So I don't want to know what the system is doing internally,
as long as it can "convince" me that what it is doing is right.

>
>>When one studies the self-organization of the visual cortex,
>>what one sees happening is something like that. The cortex
>>adjusts itself, sometimes driven by noisy signals, but
>>certainly requiring external natural signals (a recent paper
>>in the Science magazine confirmed this idea). The result is
>>a mechanism able to detect salient features of the visual
>>input: these are the orientation dominance columns. After that,
>>the visual cortex becomes an "expert" in identifying edges,
>>lines and orientations. Guess what appears to be the next step:
>>to identify higher-level features, that appears because of
>>correlations among the lower level features. The problem to
>>extract these higher level features is similar to the one
>>before and the neural mechanism is the same. I bet the
>>strategy used is also the same.
>
>I'm not surprised at the similarity of the mechanisms at two
>adjacent levels, but not so convinced that you could expect this
>similarity of mechanism to persist as you continued higher and
>higher.  There may be some fundamental, abstract similarity,
>but the devil will be in the details of the differences.

>Consider the task of learning a new mathematical concept, say
>that of a "group".  Much of our most abstract concept learning
>seems deeply rooted in language, recognizing and understanding
>the role of definition, naming, context dependent reference, and
>abstraction itself.  It may still be based on feature extraction
>and pattern recognition but there seems to be a lot more going on
>as well.  One argument for quite different mechanisms is the fact
>that other animals don't seem to be able to do this at all.
>

I think you're right here; my previous paragraph leaves the
impression that this is all there is to it. In fact, it is just
part of the problem. I still think that our ability to group
things and form concepts from similar things is something that
happens similarly at the low and high levels of one's cognition.
But there is an important additional aspect: that of associating
concepts which are not similar, but related.

As an example of this, take an apple. All the visual impressions
one has of an apple are bound together to form a coherent
aspect of the object. But when one tastes an apple (which is,
by itself, a process that identifies and categorizes the very
different, taste-related patterns of sensory experience), there
is an additional step of constructing a "link" between these
concepts (taste to visual aspects). The sight of an apple is,
from that moment on, associated with the characteristic sweetness
of the apple (and later with the touch impressions, and then the
sound of someone eating it).

This process of "association with links" seems to be different
from the mere statistical correlations that are at the core
of the "first step". It is easy to see that a great deal of
our knowledge as adults is stored in the form of these
"second level links".

But here is the point which appears to be more related to
AI: modeling just this second level (as semantic networks
appear to do) is insufficient, because these links are
equivalent to logical relations among preexisting concepts.
This second level appears to be the approach of logic-based AI.

Thus, this seems to be associated with things like the frame
problem: this second level is not able to go beyond what is
explicitly stored in its links without computational
intractability. We have reasonable consequents only if we
have a good set of antecedents.

That's one hell of an important thing: when our second
level cannot find a "ready-made" solution, one has to
resort to the first level, in which what matters are the
similarities and primitive concepts. This may explain
the success of logic-based AI in specific, small domains
and its scalability problems and failures in transposing
knowledge across different domains.

>
>>There's a lot more to say about this, but in order to keep
>>this short, I propose that this very same strategy is used with
>>the creation of our high level concepts. When a child learns
>>what a car is, she is probably doing a process in which
>>random associations are being tried over her past experiences.
>>New experiences will confirm or invalidade certain
>>associations. Very often, children make those "funny"
>>generalizations. Learning is, then, the process in which the
>>surviving concepts are kept in memory and are used as the
>>platform in which new concepts are built. Cyc should start
>>as a blind-deaf child, until someday it will really understand
>>what a #$LocalCustomerContactPoint is.
>
>My reaction is that learning concepts "from first principles" is
>rather far away in terms of when it will be achieved, because we
>understand so little about how to structure a system that is
>able to efficiently self-organize on the basis of low-level input.

>I suspect that a system which can do it will need to incorporate a
>number of distinctly different learning mechanisms, that we won't
>easily find a single general purpose mechanism that solves all
>the problems.  I'm really not convinced that self-organization in
>the visual cortex operates in a similar way to the way children
>learn categories.  But I'd be happy to be proved completely wrong
>on this.

I may agree with you regarding a single mechanism for all
these processes: that seems unlikely. But I'm trying to follow
the idea of finding a "group of principles" common to all
these mechanisms. I bet such a thing exists. Among the proposed
candidates, I may cite the "cognitive economy" principle,
hierarchical structures, inductive generalizations, analogical
copying, and more.

Just to illustrate the kind of thing that can be used to support
this rationale, I invite you to take a look at this paper:

Hierarchical models of object recognition in cortex
Maximilian Riesenhuber and Tomaso Poggio
Nature Neuroscience, Vol 2, No. 11, Nov 1999
http://library.neurosci.nature.com/server-java/Propub/neuro/nn1199_1019.pdf

Nature Neuroscience has free access until Dec 31; all you have
to do is register for it. Fig. 2 in this paper is the sort of
hierarchical thing I'm referring to. I'm finding things like
this in language acquisition, reasoning, perception, etc.

>
>On the topic of analogy, you may find the following of interest.
>It's an abstract for a seminar being held here next week.
>Hopefully I'll have time to go.
>

Well, if you find a way to go to that seminar, it would be
interesting if you could post your impressions of it here.

Regards,

_______________________________________________________________
Sergio Navega
Intelliwise Research
http://www.intelliwise.com/snavega

 

From: "Sergio Navega" <snavega@attglobal.net>
Newsgroups: comp.ai
References: <80d9jj$35l$1@mulga.cs.mu.OZ.AU> <80h1g2$ojd$1@mulga.cs.mu.OZ.AU> <80i7c1$766$1@mulga.cs.mu.OZ.AU> <80iak0$8i6$1@mulga.cs.mu.OZ.AU> <80l8lr$4iu$1@mulga.cs.mu.OZ.AU> <80v9i2$78s$1@mulga.cs.mu.OZ.AU> <811tan$3fm$1@mulga.cs.mu.OZ.AU> <8125cm$8th$1@mulga.cs.mu.OZ.AU>
Subject: Re: Defect of Turing Test?
Date: Fri, 19 Nov 1999 07:40:46 -0200
Lines: 55
Organization: Intelliwise Research and Training
X-Newsreader: Microsoft Outlook Express 4.71.1712.3
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3

Seth Russell wrote in message <8125cm$8th$1@mulga.cs.mu.OZ.AU>...
>Sergio Navega wrote:
>
>> Indeed. Starting with a preconceived structure and feeding
>> "off-the-shelf" concepts is easier than devising the algorithm
>> to let the machine build them by itself. Unfortunately, I don't
>> think there's any other way.
>
>Actually there is another way.  Consider the boundary between
>your machine and its environment -  your proposal is to put an
>algorithm *inside* that boundary that will "learn" how to produce
>interesting transactions through the boundary.  Why not just
>eliminate your requirement that the learning "algorithm" must
>be fully contained inside of the boundary ?  In other words:
>socialize the learning algorithm.
>

What you suggest is reasonable if one is attempting to model
an approach such as a self-contained Artificial Life environment
or a community of integrated Intelligent Agents (like the ones
being planned to circulate through the Internet, cf. the work of
Pattie Maes). Then one has to be concerned not only with the
innards of each agent, but also with its environment, which in
these cases is somewhat controllable and predictable (at least
much more so than ours).

But in the case of a more general environment (such as an AI
system responsible for Knowledge Management inside a
corporation), we don't have this kind of external control
(for instance, we cannot impose structures or methods of
information representation on the external side of this
equation). We can design the agent to interact socially with
others (humans, other agents, or external storage methods),
but these "others" are not under our strict control; they are
free to use whatever methods they see fit, and they may use
totally different techniques and strategies.

This means that they can't be assumed to be an active (and
dependable!) part of the agent's learning algorithms; they
are just "partners for occasional interaction", often taking
conflicting positions. To survive (and be useful) in this
environment, the agent will have to use its own private methods,
representations and strategies: learn from the environment as
much as possible, while understanding that very often this
environment is not sound.

Regards,

__________________________________________________________
Sergio Navega
http://www.intelliwise.com/snavega

