Selected Newsgroup Message

Dejanews Thread

From: "Sergio Navega" <snavega@ibm.net>
Subject: Are Semantic Links Enough? (Was Re: Reasoning from Natural Language...)
Date: 12 Jan 1999 00:00:00 GMT
Message-ID: <369bad45.0@news3.ibm.net>
References: <77fb85$l7a$1@mochi.lava.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 12 Jan 1999 20:15:01 GMT, 129.37.183.89
Organization: SilWis
Newsgroups: comp.ai.nat-lang

Chaumont Devin wrote in message <77fb85$l7a$1@mochi.lava.net>...
>On Mon, 11 Jan 1999 21:58:20 -0800, "Kevin Kirchman" <wfn@pacbell.net>
>wrote:
>
>>>This para-knowledge I am talking about is found by analysing what I have
>>>called "state flow" between words.
>
>> Your para-knowledge is what I call inductive concepts, pertaining to
>> epistemological issues.
>
>Well I can't say I don't like that idea.  Thus when one hears a new
>combination of words, like "blue apples", within one's personal ontology is
>induced the semlink:
>
>apples can_be blue
>
>But because this semlink is never used again, it disappears by itself over
>time.  What causes the disappearance of semlinks is called "weighting,"
>and of this there seem to be two flavors: (1) how often used, and (2) when
>last used.  Things that never get used get swept away over time to make
>room for useful information.
>

Dear Chaumont,

Your idea of using a weight to specify the "strength" of a link is
indeed very useful and substantially improves the effectiveness of
the knowledge representation. However, I'm afraid it still leaves
one important point uncovered. I'll use your blue apple, although
in a different situation.
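
Just so we picture the same mechanism, here is a toy sketch (in
Python, with names and constants of my own invention, certainly not
your SEMLEX internals) of a link that keeps a use count and a
last-used time, and whose strength decays while it sits idle:

    import math, time

    class Semlink:
        def __init__(self, source, relation, target):
            self.source, self.relation, self.target = source, relation, target
            self.use_count = 1
            self.last_used = time.time()

        def touch(self):
            # Reinforce the link every time it is used again.
            self.use_count += 1
            self.last_used = time.time()

        def strength(self, half_life=30 * 24 * 3600):
            # How often used, discounted by how long ago it was last used.
            idle = time.time() - self.last_used
            return self.use_count * math.exp(-idle / half_life)

    # Links whose strength falls below some threshold are swept away:
    #   ontology = [link for link in ontology if link.strength() > 0.05]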

Suppose we're talking to a farmer who also happens to be an art lover
and appreciates Salvador Dali.

As a farmer, he will be talking daily about his apple trees. However,
as an art lover, he will often admire Dali's surrealist paintings.
He may, on a regular basis, look at the picture hanging on his wall
(he is a rich farmer ;-) and this picture happens to contain a blue
apple, very much in the taste of Dali.

In this situation, the farmer will receive periodic reinforcements of
two situations: the normal apple and the abnormal apple. Now suppose
we have to provide mechanisms for a robot that the farmer recently
bought. How is this robot supposed to understand what the farmer is
saying? The farmer will frequently mention both the apples of his
plantations and the abnormal apple of his picture. More than that,
according to the *context* of the conversation, the robot will have
to infer which kind of apple the farmer is talking about. When asked
about apples, the robot must have enough information about the
context of the conversation to decide whether we're talking about
the edible one or the "visually abnormal" one.

All this is to say that links to semantic nodes may not be enough
to capture situations in which the most important thing to represent
is the "pattern" of the semantic item in relation to other elements.

Obviously, this is a problem with most knowledge representations
in use today (including semantic networks, conceptual graphs,
first-order predicate calculus, etc.). What I'm advocating here is
that we ought to look at other methods that allow better
representation of these kinds of situations, besides being much more
psychologically plausible.

My example is a little bit contrived because I wanted to use your
apple. I have another situation which exposes the matter a little
bit more clearly.

Regards,
Sergio Navega.

From: Chaumont Devin <devil@lava.net>
Subject: Re: Are Semantic Links Enough?
Date: 12 Jan 1999 00:00:00 GMT
Message-ID: <77gh1v$cpa$1@mochi.lava.net>
X-Complaints-To: usenet@mochi.lava.net
X-Trace: mochi.lava.net 916178815 13098 199.222.42.2 (12 Jan 1999 22:06:55 GMT)
Organization: Access International
NNTP-Posting-Date: 12 Jan 1999 22:06:55 GMT
Newsgroups: comp.ai.nat-lang

On Tue, 12 Jan 1999 17:07:52 -0200, "Sergio Navega" <snavega@ibm.net>
wrote:

> Your idea of using a weight to specify the "strength" of a link is
> indeed very useful and substantially improves the effectiveness of
> the knowledge representation. However, I'm afraid it still leaves
> one important point uncovered. I'll use your blue apple, although
> in a different situation.
>
> Suppose we're talking to a farmer who also happens to be an art lover
> and appreciates Salvador Dali.
>
> As a farmer, he will be talking daily about his apple trees. However,
> as an art lover, he will often admire Dali's surrealist paintings.
> He may, on a regular basis, look at the picture hanging on his wall
> (he is a rich farmer ;-) and this picture happens to contain a blue
> apple, very much in the taste of Dali.
>
> In this situation, the farmer will receive periodic reinforcements of
> two situations: the normal apple and the abnormal apple. Now suppose
> we have to provide mechanisms for a robot that the farmer recently
> bought. How is this robot supposed to understand what the farmer is
> saying? The farmer will frequently mention both the apples of his
> plantations and the abnormal apple of his picture. More than that,
> according to the *context* of the conversation, the robot will have
> to infer which kind of apple the farmer is talking about. When asked
> about apples, the robot must have enough information about the
> context of the conversation to decide whether we're talking about
> the edible one or the "visually abnormal" one.
>
> All this is to say that links to semantic nodes may not be enough
> to capture situations in which the most important thing to represent
> is the "pattern" of the semantic item in relation to other elements.
>
> Obviously, this is a problem with most knowledge representations
> in use today (including semantic networks, conceptual graphs,
> first-order predicate calculus, etc.).

Yes, such problems are sometimes intractable for an ontology, which is a
very important but very specific kind of knowledge structure, but is not
the WHOLE data structure (does not constitute ALL the data structures)
inside a language machine.

In this case, however, the "blue apple" phenomenon might still be handled
by an ontology as follows:

1. Create a semnod for "The blue apple in the Dali picture on the manor
wall".  SEMLEX might assign this one #69152.

2. Apples can_be green.

3. apples can_be red.

4. apples can_not_be blue.

5. 69152 is an apple.

6. 69152 is-not red.

7. 69152 is blue.

This is because, besides words, it is also possible to link any phrase or
other external symbol to a semnod, because semnods can also have negative
sense (I use the high-order bit of the link type to indicate this), and
because exceptions can be indicated by making shortcut semlinks to them
from hyponyms.
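
In rough Python terms (a sketch only; SEMLEX itself is organized
quite differently, and I reuse the node number from the example
above), the lookup consults the shortcut links on the specific
semnod before falling back on the general links of its hypernym:

    # General links for the class, plus shortcut links on the exception node.
    semlinks = {
        "apple": {"can_be": {"green", "red"}, "can_not_be": {"blue"}},
        "69152": {"isa": "apple", "is": {"blue"}, "is_not": {"red"}},
    }

    def can_be(node, color):
        links = semlinks.get(node, {})
        # A shortcut semlink on the node itself overrides the general rule.
        if color in links.get("is", set()):
            return True
        if color in links.get("is_not", set()):
            return False
        # Otherwise fall back on the class-level links, following isa upward.
        while links:
            if color in links.get("can_not_be", set()):
                return False
            if color in links.get("can_be", set()):
                return True
            links = semlinks.get(links.get("isa"), {})
        return None  # unknown

    # can_be("69152", "blue") -> True, even though apples can_not_be blue.
    # can_be("apple", "blue") -> False.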

> What I'm advocating here is
> that we ought to look to other methods that allow better
> representation of this kind of situations, besides being much more
> psychologically plausible.

This has already been done using Interlinguish, which is a computer
implementation of Panlingua.  Once again I would remind you that Panlingua
cannot exist without an ontology (semantic links and nodes), but that it
is separate from the ontology.

> My example is a little bit contrived because I wanted to use your
> apple. I have another situation which exposes the matter a little
> bit more clearly.
>
> Regards,
Sergio Navega.

Too bad no example.  But before you give it I will tell you what it
probably is:

Knowledge that cannot be represented in the ontology itself must be
represented in Panlingua structures.  Certain kinds of knowledge, and
probably by far the most important class of knowledge, can be represented
in ontologies.  However, ontologies are limited to information that can be
expressed in single semlinks, for example: roses are red.  Entire thoughts
often involve many such links, which I have explained as "para-knowledge"
to Mr. Kirchman.  For example, in the simple sentence, "John loves Mary,"
we have two synlinks (syntactic links that never touch any part of the
ontology), three lexlinks (links from the three words to their three
semnods), and the part of the ontology that is also activated, which are
the following semlinks:

John can love.
Mary can-be loved.

So seven links are required to express this simple sentence, but only five
of these are Panlingua, namely the synlinks and the lexlinks, whereas the
other two (the semlinks) remain more-or-less permanently in place in the
ontology while Panlingua structures come and go.
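
To make the bookkeeping concrete, the seven links can be tabulated
roughly like this (a toy layout for this post only, not the actual
Panlingua record format):

    # Panlingua structure: comes and goes with the sentence "John loves Mary".
    synlinks = [("loves", "subject", "John"),      # two syntactic links
                ("loves", "object",  "Mary")]
    lexlinks = [("John",  "semnod_John"),          # three links from the words
                ("loves", "semnod_love"),          # to their semnods
                ("Mary",  "semnod_Mary")]

    # Ontology: stays more-or-less permanently in place.
    semlinks = [("semnod_John", "can",    "semnod_love"),   # John can love.
                ("semnod_Mary", "can_be", "semnod_love")]   # Mary can-be loved.

    assert len(synlinks) + len(lexlinks) + len(semlinks) == 7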

So as I have shown by this example, you can probably now understand that
knowledge employing single semlinks (roses are red) can be stored in an
ontology, whereas knowledge requiring more than one semlink ("John loves
Mary" required two) cannot, but can be represented in Panlingua, which is
capable of representing ANY THOUGHT THAT HAS EVER BEEN THOUGHT.

And because Panlingua can represent any thought that has ever been thought
or that will ever be thought, it is unnecessary and uneconomical to stray
outside our system and model in search of any other kind of knowledge
representation.

Hope this helps, because I like you--even if you are damned stubborn.

--CD.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Are Semantic Links Enough?
Date: 13 Jan 1999 00:00:00 GMT
Message-ID: <369cb593.0@news3.ibm.net>
References: <77gh1v$cpa$1@mochi.lava.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 13 Jan 1999 15:02:43 GMT, 129.37.182.57
Organization: SilWis
Newsgroups: comp.ai.nat-lang,comp.ai,comp.ai.philosophy

Chaumont Devin wrote in message <77gh1v$cpa$1@mochi.lava.net>...
>
>[snip]
>
>Yes, such problems are sometimes intractable for an ontology, which is a
>very important but very specific kind of knowledge structure, but is not
>the WHOLE data structure (does not constitute ALL the data structures)
>inside a language machine.
>
>In this case, however, the "blue apple" phenomenon might still be handled
>by an ontology as follows:
>
>1. Create a semnod for "The blue apple in the Dali picture on the manor
>wall".  SEMLEX might assign this one #69152.
>
>2. Apples can_be green.
>
>3. apples can_be red.
>
>4. apples can_not_be blue.
>
>5. 69152 is an apple.
>
>6. 69152 is-not red.
>
>7. 69152 is blue.
>
>This is because, besides words, it is also possible to link any phrase or
>other external symbol to a semnod, because semnods can also have negative
>sense (I use the high-order bit of the link type to indicate this), and
>because exceptions can be indicated by making shortcut semlinks to them
>from hyponyms.
>

Yes, that seems to be a solution to the problem of blue apples. The
problem is that this construction ("Dali's blue apple") does not add
anything relevant to the knowledge of the agent. The fact that we can
find one blue apple in a special circumstance should have lots of
consequences, like showing that it is a deviation from the standard
(real) apples, that this color has a function in an artificial
situation (a picture), that something different appears to be
the intention of Dali, etc. These *derivations* are what matter,
and they can only be obtained by an intelligent agent, not a
purely "representational" agent.

On the other hand, if we were to make explicit each "exception"
we encounter in the world, our machine would be assaulted by
an *enormous* quantity of specific and *useless* relations (the
slightly bent toe on Aunt Sally's right foot, Cousin Mary's
double-trunk tree in the south area of the second farm in
Wisconsin, Bob's scratched eyeglasses with a red spot on the left
lens, etc.).

This is not what brains are supposed to do. They are masters of
*reducing* complexity, not augmenting it. They reduce complexity
by intelligently combining similar information into categories.

But this is not the only problem with these relations. Another
problem is the inability to represent most abstract concepts and
to allow analogical reasoning with them. To illustrate this, I'll
list a hypothetical session with an intelligent computer,
starting below.

>
>Knowledge that cannot be represented in the ontology itself must be
>represented in Panlingua structures.  Certain kinds of knowledge, and
>probably by far the most important class of knowledge, can be represented
>in ontologies.  However, ontologies are limited to information that can be
>expressed in single semlinks, for example: roses are red.  Entire thoughts
>often involve many such links, which I have explained as "para-knowledge"
>to Mr. Kirchman.  For example, in the simple sentence, "John loves Mary,"
>we have two synlinks (syntactic links that never touch any part of the
>ontology), three lexlinks (links from the three words to their three
>semnods), and the part of the ontology that is also activated, which are
>the following semlinks:
>
>John can love.
>Mary can-be loved.
>
>So seven links are required to express this simple sentence, but only five
>of these are Panlingua, namely the synlinks and the lexlinks, whereas the
>other two (the semlinks) remain more-or-less permanently in place in the
>ontology while Panlingua structures come and go.
>
>So as I have shown by this example, you can probably now understand that
>knowledge employing single semlinks (roses are red) can be stored in an
>ontology, whereas knowledge requiring more than one semlink ("John loves
>Mary" required two) cannot, but can be represented in Panlingua, which is
>capable of representing ANY THOUGHT THAT HAS EVER BEEN THOUGHT.
>
>And because Panlingua can represent any thought that has ever been thought
>or that will ever be thought, it is unnecessary and uneconomical to stray
>outside our system and model in search of any other kind of knowledge
>representation.
>

I agree with you that synlinks and lexlinks are fine to represent
the relation "John loves Mary". But it is nothing more than this:
an arbitrary representation. In that regard, I think that using
the ASCII characters "John loves Mary" will do a similar job, that
is, to "code" that relation in syntactic terms. This is not
enough to allow *understanding*. Understanding is that which
reads this representation and allows all kinds of derivation
based on the meaning of the phrase. Intelligence is what stands
*behind* those links. Try asking the system if John cares about
Mary. Then ask if John would give her a bouquet of flowers.
Ask if John would be upset if Mary fell sick. All these things
demand understanding. Note that you may eventually be able to
code each of these phrases with links. That's a waste of space,
because there is also "Jill loves Robert", "Stephen loves Juliet"
and, often, "Daniel loves Raymond". If your system doesn't
understand the meaning of "love", you'd have to duplicate the
definitions in all instances of its use. But that's not all: what
about "Susan loves Mozart"? Should Susan be concerned if Mozart
fell sick? That's probably a question any expert system would
answer "yes" to, even after being informed that Mozart is long dead.

The second step (of about 5) to intelligence is perception of
regularity and perception of anomaly. This second step (and also
induction and analogical reasoning) is what I'll address in
my example below (I don't know why I like doing this
so much!). Lines starting with -> are entries from the operator,
lines starting with <- are responses from the system, all
other lines are my comments. The system was loaded with some
basic knowledge of movement, energy, flying as moving, etc.
(these primitive concepts should have been supported by
sensorimotor patterns, but that's another story...)

-> Birds can fly
<- Ok

-> Airplanes can fly
<- Ok

-> Birds have wings
<- Ok

<- Do airplanes have wings?
-> Yes
  Natural question, trying to see if similarities between
  birds and planes go that far.

<- Are airplanes similar to birds?
-> No
  Here the system is conjecturing an analogy between birds
  and airplanes because it perceived some regularities
  and tried inductive reasoning. We would answer "yes"
  if we were talking about "toto", the dog, and "fido",
  the dog. Even if we didn't answer, the system would keep
  this "doubt" internally to try to solve in the future.
  Note that we didn't say to the system *why* airplanes are
  different from birds, in spite of previous similarities.
  This will also be kept for future understanding.

-> Birds with broken wings cannot fly
<- Ok

<- Can airplane wings be broken?
-> Yes
  The system is being stubborn, trying to find other similarities
  between birds and airplanes. Our confirmation here raises its
  suspicion that birds and airplanes are similar according to
  a yet unknown criterion, although there seems to be a "no"
  somewhere. As this "no" is an anomaly, the system will keep
  one "eye" on it.

<- Can airplanes fly with broken wings?
-> No
  Another analogical reasoning, this one reinforces again
  similarities between birds and airplanes.

-> Airplanes need fuel to fly
<- Ok

<- Do birds need fuel to fly?
-> No
  This answer should cause "confusion". Something important
  appears to be lurking because birds and airplanes are
  very similar in one regard but not in another. What is
  this essential difference? The system already concludes
  that they have distinct categorization, although they're
  very similar.
  Needing something implies conditions for an action to take
  place. The system may know this because of analogies to
  previous situations (which defined flying as a movement
  operation and movement requiring action). Knowing such
  a previous model helps the system in recognizing that
  something is missing here for birds (a pattern needs
  completion). By the way, every high level concept that
  the system learns during this session may be used in
  analogical reasoning in future experiences. Besides,
  its concept of "flying" will incorporate some of the
  things learned here (refinement of past knowledge).

<- What is necessary for birds to fly?
-> Birds need food to fly
<- Ok
  This answer is supposed (both by the system and the human
  operator) to be relative to the context of the conversation.
  Taken in isolation "what's necessary for birds to fly" would
  have dozens of possible answers (it must be in an air filled
  environment, etc.). But we humans choose the one who makes
  sense regarding our current context.

<- Is food the bird's fuel?
-> Yes!
  A surprising question.
  Sometimes, questions reveal understanding.
  Again, analogical reasoning and induction, confirmed by the
  operator, which allows the system to draw one interesting
  causal model inside itself. This causal model (although
  still primitive and lacking detail) must, obviously, be
  represented and subject to change in the future according
  to new experiences. Aren't these weak causal models the
  way children start understanding the world?
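
By the way, the mechanism behind a question like "Are airplanes
similar to birds?" does not have to be mysterious. At bottom it
amounts to noticing how many facts two concepts share and
conjecturing an analogy once the overlap is large enough. A toy
sketch (names and threshold are mine, purely illustrative):

    def shared_facts(ontology, a, b):
        # Facts stated about each concept, e.g. ("can", "fly"), ("have", "wings").
        return ontology.get(a, set()) & ontology.get(b, set())

    def conjecture_analogy(ontology, a, b, threshold=2):
        overlap = shared_facts(ontology, a, b)
        if len(overlap) >= threshold:
            # Enough regularity: ask the operator, and keep the answer
            # (or the open doubt) around for future reasoning.
            return "Are " + a + " similar to " + b + "?"
        return None

    ontology = {"birds":     {("can", "fly"), ("have", "wings")},
                "airplanes": {("can", "fly"), ("have", "wings")}}
    # conjecture_analogy(ontology, "airplanes", "birds")
    #   -> "Are airplanes similar to birds?"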

I could go on and list a hypothetical session teaching the
system what is "life". I could draw from the present session
to explain that airplanes use engines and birds use muscle
and, after several other abstract concepts, could finally reach
a point where I could explain to the system what life is and the
essential difference between a mechanical device and a living
organism.

Life is a typical word that cannot be translated into a single
link. It may require a bunch of experiences (including the whole
session we had about birds and airplanes) to grasp
the *essential features*. It is necessary to lay
all preconditions (patterns) to allow the system to
*recognize* if something is alive or not. Being able to
grasp those essential concepts is what is needed for the
system to understand such high-level analogies as "My car
seems to be alive!".

You may think that my example shows the essential points of
what I consider an intelligent system. Nothing could be farther
from the truth. This is just the second step of a sequence
I envision, in which all elements are necessary. The
establishment and refinement of these additional steps,
in a biologically and psychologically plausible fashion,
is the object of my current work. Granted, lots of things
remain to be discovered, but I'm working hard.

>Hope this helps, because I like you--even if you are damned stubborn.
>
>--CD.
>

I too like you, although in terms of stubbornness, I grant you
the winning place :-)

Regards,
Sergio Navega.

From: Chaumont Devin <devil@lava.net>
Subject: Re: Are Semantic Links Enough?
Date: 13 Jan 1999 00:00:00 GMT
Message-ID: <77j4ul$jqh$1@mochi.lava.net>
X-Complaints-To: usenet@mochi.lava.net
X-Trace: mochi.lava.net 916264725 20305 199.222.42.2 (13 Jan 1999 21:58:45 GMT)
Organization: Access International
NNTP-Posting-Date: 13 Jan 1999 21:58:45 GMT
Newsgroups: comp.ai.nat-lang

On Wed, 13 Jan 1999 11:15:40 -0200, "Sergio Navega" <snavega@ibm.net> wrote:

> The fact that we can
> find one blue apple in a special circumstance should have lots of
> consequences, like showing that it is a deviation from the standard
> (real) apples, that this color has a function in an artificial
> situation (a picture), that something different appears to be
> the intention of Dali, etc. These *derivations* are what matter,
> and they can only be obtained by an intelligent agent, not a
> purely "representational" agent.

Let me make it clear that I have not so far dabbled in any kind of
processing this abstracted.  But at the same time let me say that the
representational system for such processing should not be any different than
the representational system for anything else.  When you write me these
words about Dali's apple, do you have to use some foreign language because
they are at a different level of abstraction, or do you just use the same
old English we all know?  You use English because there is really nothing
you know of in the universe that cannot somehow be expressed in English, and
that is precisely the way Panlingua works.  The difference between Panlingua
and English is not one of representational potential (what each can or
cannot represent), but of automational tractability (what processing has to
be done to each kind of representation before you can do anything with it
using a machine).

> On the other hand, if we were to make explicit each "exception"
> we encounter in the world, our machine would be assaulted by
> an *enormous* quantity of specific and *useless* relations (the
> slightly bent toe on Aunt Sally's right foot, Cousin Mary's
> double-trunk tree in the south area of the second farm in
> Wisconsin, Bob's scratched eyeglasses with a red spot on the left
> lens, etc.).

> This is not what brains are supposed to do. They are masters of
> *reducing* complexity, not augmenting it. They reduce complexity
> by intelligently combining similar information into categories.

Yes, but we are trained to perceive paintings as something special--a
communication of something mysterious.  Give a machine this knowledge and it
should be possible to make that machine treat paintings in a similar
fashion.  But this is not particularly important to our present discussion,
because it involves a higher level of abstraction (a lot more higher-level
psychological coding and problem-solving capability).  My focus at present
is simply to provide a clean approximation of Panlingua and to get to it and
from it to English.  Once this has been done, it will open up a whole new
vista of research upon just such problems as you are discussing.

My problem with this NewsGroup at this time is that people would seem to
like, as it were, to busy themselves with the roof and superstructure
without even knowing what building materials they will use and without any
thought at all about the walls.

> But this is not the only problem with these relations. Another
> problem is the inability to represent most abstract concepts and
> to allow analogical reasoning with them. To illustrate this, I'll
> list a hypothetical session with an intelligent computer,
> starting below.

>>
>>Knowledge that cannot be represented in the ontology itself must be
>>represented in Panlingua structures.  Certain kinds of knowledge, and
>>probably by far the most important class of knowledge, can be represented
>>in ontologies.  However, ontologies are limited to information that can be
>>expressed in single semlinks, for example: roses are red.  Entire thoughts
>>often involve many such links, which I have explained as "para-knowledge"
>>to Mr. Kirchman.  For example, in the simple sentence, "John loves Mary,"
>>we have two synlinks (syntactic links that never touch any part of the
>>ontology), three lexlinks (links from the three words to their three
>>semnods), and the part of the ontology that is also activated, which are
>>the following semlinks:
>>
>>John can love.
>>Mary can-be loved.
>>
>>So seven links are required to express this simple sentence, but only five
>>of these are Panlingua, namely the synlinks and the lexlinks, whereas the
>>other two (the semlinks) remain more-or-less permanently in place in the
>>ontology while Panlingua structures come and go.
>>
>>So as I have shown by this example, you can probably now understand that
>>knowledge employing single semlinks (roses are red) can be stored in an
>>ontology, whereas knowledge requiring more than one semlink ("John loves
>>Mary" required two) cannot, but can be represented in Panlingua, which is
>>capable of representing ANY THOUGHT THAT HAS EVER BEEN THOUGHT.
>>
>>And because Panlingua can represent any thought that has ever been thought
>>or that will ever be thought, it is unnecessary and uneconomical to stray
>>outside our system and model in search of any other kind of knowledge
>>representation.
>>

> I agree with you that synlinks and lexlinks are fine to represent
> the relation "John loves Mary". But it is nothing more than this:
> an arbitrary representation.

I think you are very wrong on this score because you fail to see that it is
NOT arbitrary, but really the one and only way this can actually be done at
a minimum level.  Show me any other representational system and I will show
you how that representational system is nothing but some kind of mask over
the face of mother Panlingua.  The difference between what I have explained
above and any arbitrary representational system is that mine fits into a
complete theoretical framework covering ALL the internal workings of
language from the ontology up, whereas other ideas of language and their
representational systems are disjoint, taking a little from here or there
but not knowing how it links to the rest.  As an example take Link Grammar,
which is exceptionally good for people working in the dark.  It seems to
explain a great deal, and reflects Panlingua in many ways, but it fails to
connect because in fact it is NOT Panlingua, but only an adulterated and
hackneyed little part of the theoretical whole, distorted and blown out of
proportion.

> In that regard, I think that using
> the ASCII characters "John loves Mary" will do a similar job, that
> is, to "code" that relation in syntactic terms.

And that is precisely where it ends.  English is intractable, and no further
processing can occur until it has been disambiguated.  Why is this so
difficult to understand?

> This is not
> enough to allow *understanding*. Understanding is that which
> reads this representation and allows all kinds of derivation
> based on the meaning of the phrase.

Precisely.  But how can you make an understanding machine if you can't even
understand the difference between a string of English words and a
representation that has been locked into the whole linguistic apparatus and
is ready for internal processing?

> Intelligence is what stands
> *behind* those links.

Very good, but this is just like saying "Intelligence is that weightless,
massless something stored on computer disks."  It means nothing much of
anything.  However the bits stored electromagnetically on a spinning disk
can be processed by a computer, while shoving reams of paper into a computer
won't work, and that's the difference I am trying to make plain to you.

> Try asking the system if John cares about
> Mary. Then ask if John would give her a bouquet of flowers.

Okay.  A lot of this can be worked out using ontologies.  But the rest is
simply a matter for cyclopedic references (in the same Panlingua) about
human nature.

> Ask if John would be upset if Mary fell sick. All these things
> demand understanding.

So what is this "understanding" you demand, if not just further processing
of the facts, and how can you process these facts unless they are first put
in some kind of form you can process?  Otherwise they are just like trying
to push reams of paper into a computer.

> Note that you may eventually be able to
> code each of these phrases with links.

No "eventual" about it at all.  I have already shown you how this can be
done NOW, but you chose to ignore my words.

> That's a waste of space,
> because "Jill loves Robert", "Stephen loves Juliet" and, often,
> "Daniel loves Raymond". If your system don't understand the
> meaning of "love", you'd have to duplicate the definitions in
> all instances of its use. But that's not all, what about
> "Susan loves Mozart"? Should Susan be concerned if Mozart
> fell sick? That's probably what any expert system would
> answer "yes", even being informed that Mozart is long dead.

Sergio, I give up because we have already been over and over this same
ground, and you refuse to understand the explanations I have given.  Instead
of moving on, you remain forever stuck in this same rut.

> The second step (of about 5) to intelligence is perception of
> regularity and perception of anomaly. This second step (and also
> induction and analogical reasoning) is what I'll address in
> my example below (I don't know why I like doing this
> so much!). Lines starting with -> are entries from the operator,
> lines starting with <- are responses from the system, all
> other lines are my comments. The system was loaded with some
> basic knowledge of movement, energy, flying as moving, etc.
> (these primitive concepts should have been supported by
> sensorimotor patterns, but that's another story...)

You say these things glibly without explaining what you mean by "primitive
concept", and I have challenged the use of this term on this NewsGroup many
times.  You also fail to say what you mean by "the system was loaded," etc.,
all of which covers a tremendous amount of ground which you simplemindedly
take for granted.

-> Birds can fly
<- Ok

-> Airplanes can fly
<- Ok

-> Birds have wings
<- Ok

<- Do airplanes have wings?
-> Yes
  Natural question, trying to see if similarities between
  birds and planes go that far.

<- Are airplanes similar to birds?
-> No
>   Here the system is conjecturing an analogy between birds
>   and airplanes because it perceived some regularities
>   and tried inductive reasoning.

I think you are wrong.  The most rudimentary use of an ontology would tell
the system that both birds and airplanes have wings and that both birds and
airplanes can fly.  This is no great "inductive" problem as you make out at
all, but just an examination of a few minor semlinks in an ontological black
box.

> We would answer "yes"
>   if we were talking about "toto", the dog, and "fido",
>   the dog. Even if we didn't answer, the system would keep
>   this "doubt" internally to try to solve in the future.
>   Note that we didn't say to the system *why* airplanes are
>   different from birds, in spite of previous similarities.
>   This will also be kept for future understanding.

Bosh!  Any developed ontology should tell you that birds are made of
biological cells whereas airplanes are made of metal, that birds eat whereas
airplanes refuel, etc.

-> Birds with broken wings cannot fly
<- Ok

<- Can airplane wings be broken?
-> Yes
>   The system is being stubborn, trying to find other similarities
>   between birds and airplanes. Our confirmation here raises its
>   suspicion that birds and airplanes are similar according to
>   a yet unknown criterion, although there seems to be a "no"
>   somewhere. As this "no" is an anomaly, the system will keep
>   one "eye" on it.

Your hypothetical system is pretty stupid, and makes an issue out of things
that are non-issues at this stage in our development of the technology.  You
must know better, so my question is "Why?"

<- Can airplanes fly with broken wings?
-> No

>   Another analogical reasoning, this one reinforces again
>   similarities between birds and airplanes.

I don't think so.  Both machines and people have to be told or shown that
things with broken wings can't fly.  This is not analogical reasoning, but
mere observation.

BTW, Sergio, I have to thank you because all of this has just uncovered a
bug in SEMLEX.  Here is a session I just tried:

airplanes can fly
Kitty isa airplane
thing_with_broken_wing can not fly
Kitty isa thing_with_broken_wing
can Kitty fly

The answer was "Yes".

Thanks for making me find this out.  The solution is simple--just make the
ontology search further to make sure there are no negative links before
saying "Yes", and yet not so simple because a good deal of complicated
programming may be involved.  The good thing about it is that it CAN be
done.
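
In outline, the fix amounts to something like this (a simplified
sketch in Python of what the SEMLEX code will have to do, not the
code itself):

    positive = {("airplane", "fly")}                  # airplanes can fly
    negative = {("thing_with_broken_wing", "fly")}    # ...can not fly
    isa = {"Kitty": ["airplane", "thing_with_broken_wing"]}

    def ancestors(node):
        # The node itself plus everything reachable through isa links.
        seen, stack = set(), [node]
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(isa.get(n, []))
        return seen

    def can(node, action):
        chain = ancestors(node)
        # Search the whole chain for negative links before saying "Yes".
        if any((n, action) in negative for n in chain):
            return "No"
        if any((n, action) in positive for n in chain):
            return "Yes"
        return "Unknown"

    # can("Kitty", "fly") -> "No", as it should be.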

-> Airplanes need fuel to fly
<- Ok

I should point out at this time that unless a new semlink type is coined
(requires-for-operation), we have just moved beyond the ontology and into a
Panlingua structure.  But no, this is obviously an ontology kind of
knowledge, simple and straightforward, so perhaps I will now create a new
link type of RFO, meaning requires for operation.  Thanks again.

<- Do birds need fuel to fly?
-> No
>   This answer should cause "confusion". Something important
>   appears to be lurking because birds and airplanes are
>   very similar in one regard but not in another. What is
>   this essential difference? The system already concludes
>   that they have distinct categorization, although they're
>   very similar.
>   Needing something implies conditions for an action to take
>   place. The system may know this because of analogies to
>   previous situations (which defined flying as a movement
>   operation and movement requiring action). Knowing such
>   a previous model helps the system in recognizing that
>   something is missing here for birds (a pattern needs
>   completion).

The ontology is there.  You can buy SEMLEX for $100, and I have provided a
way for your own programs to use its internal functions.  All the things you
have spoken about thus far can evidently be done using just an ontology, so
why don't you purchase a copy and experiment around with this "high-level"
stuff for yourself?

> By the way, every high level concept that
>   the system learns during this session may be used in
>   analogical reasoning in future experiences. Besides,
>   its concept of "flying" will incorporate some of the
>   things learned here (refinement of past knowledge).

<- What is necessary for birds to fly?
-> Birds need food to fly
<- Ok

>   This answer is supposed (both by the system and the human
>   operator) to be relative to the context of the conversation.
>   Taken in isolation, "what's necessary for birds to fly" would
>   have dozens of possible answers (it must be in an air-filled
>   environment, etc.). But we humans choose the one that makes
>   sense given the current context.

This is a parsing problem.  I am by no means there yet with my work on
Child, but I know how it must be done.  The parser looks at link weights and
recent parsings (the Panlingua log), and uses these to infer which links
should be chosen in ambiguous situations.
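
Roughly (again a sketch, not the Child parser itself): each
candidate reading is scored from the weights of the links it would
use, plus a bonus for links that also appear in recent entries of
the Panlingua log, and the parser keeps the best-scoring reading:

    def score(reading, link_weights, recent_log, recency_bonus=1.0):
        # reading: the set of links a candidate parse would commit to.
        total = sum(link_weights.get(link, 0.0) for link in reading)
        total += recency_bonus * sum(1 for link in reading if link in recent_log)
        return total

    def choose_reading(candidates, link_weights, recent_log):
        # Pick the reading best supported by link weights and recent parsings.
        return max(candidates, key=lambda r: score(r, link_weights, recent_log))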

<- Is food the bird's fuel?
-> Yes!

This last is NOT a parsing problem.  It must be handled by looking at a
Panlingua cyclopedic structure.  Computers will be able to do these things
once we have made such structures readily available for them (and their
programmers) to work on.  This is what I mean when I say that making it
possible to get automatically to and from English to Panlingua will open up
huge new vistas of endeavor.

>   A surprising question.
>   Sometimes, questions reveal understanding.
>   Again, analogical reasoning and induction, confirmed by the
>   operator, which allows the system to draw one interesting
>   causal model inside itself. This causal model (although
>   still primitive and lacking detail) must, obviously, be
>   represented and subject to change in the future according
>   to new experiences. Aren't these weak causal models the
>   way children start understanding the world?

All this is very good, but without a precise idea of how it may be done you
are just dreaming and rambling.  I have shown you the beginnings of this
path, but you prefer to ignore it in favor of your dreaming.  This is the
problem I have currently with comp.ai.nat-lang.  It is kind of a Santa
Claus wish forum, and not a place where much of the nitty-gritty
problem solving is being discussed.  What if people went to airplane
factories and just sat around discussing all the wonderful things that might
be done if people could only fly instead of actually building airplanes?  I
would recommend that they open a comp.ai.airplane-fly!

> I could go on and list a hypothetical session teaching the
> system what is "life". I could draw from the present session
> to explain that airplanes use engines and birds use muscle
> and, after several other abstract concepts, could finally reach
> a point where I could explain to the system what life is and the
> essential difference between a mechanical device and a living
> organism.

There you go!  Airplane fly!

> Life is a typical word that cannot be translated into a single
> link. It may require a bunch of experiences (including the whole
> session we had about birds and airplanes) to grasp
> the *essential features*.

Not really.  Instead it is only necessary to tell SEMLEX what all living
things in general can and cannot do.

> It is necessary to lay
> all preconditions (patterns) to allow the system to
> *recognize* if something is alive or not.

Not really.  Just test the ontology for all the semlinks that tell what
living things can and cannot do and see whether the subject can and cannot
do the same.  This should be no great problem for a good computer
programmer.
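
For example (a sketch only; the real test would run over SEMLEX's
own link types, and the particular abilities listed here are just
illustrations):

    def looks_alive(subject_can, living_can, living_can_not):
        # The subject should be able to do what living things in general can do,
        # and should not be able to do what living things cannot.
        return living_can <= subject_can and not (living_can_not & subject_can)

    living_can = {"eat", "grow", "die"}
    living_can_not = {"refuel"}
    # looks_alive({"eat", "grow", "die", "fly"}, living_can, living_can_not) -> True
    # looks_alive({"fly", "refuel"}, living_can, living_can_not)             -> False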

> Being able to
> grasp those essential concepts is what is needed for the ...

Here you use the words, "grasp" and "concepts", both of which remain
undefined.  Why does a system need concepts, and if it can answer correctly
then why does it need to "grasp" anything?  To me this is just more
anthropocentric dreaming, not hard computer science, and not even reality.

> ... system to understand such high-level analogies as "My car
> seems to be alive!".

Not really.  If a car has a lot of the same properties as a living thing,
and if it can do a lot of the same things, a simple computer program can be
written to use the links of the ontology in order to come up with or else to
verify or dispute such analogies.  This is no great technical problem or
unattainable achievement.  To YOU it seems difficult, but this is only
because you have failed to understand the things that I have said.

> You may think that my example shows the essential points of
> what I consider an intelligent system. Nothing could be farther
> from the truth. This is just the second step of a sequence
> I envision, in which all elements are necessary. The
> establishment and refinement of these additional steps,
> in a biologically and psychologically plausible fashion,
> is the object of my current work. Granted, lots of things
> remain to be discovered, but I'm working hard.

No you are not.  Otherwise you would have already "grasped" what I have been
telling you for the past year or more.  Instead you keep putting up simple
problems as if they were something really intractable and profound.  Please
read what I have written and move on!

>>Hope this helps, because I like you--even if you are damned stubborn.
>>
>>--CD.
>>

> I too like you, although in terms of stubbornness, I grant you
> the winning place :-)

This is only because you have subjectively created false semlinks in your
ontology surrounding the semnod you employ to represent ME, and because you
steadfastly refuse to look through my 'scope or even to look at yourself in
the mirror!  Although you yourself may fail to understand the reason, yet I
do.  It is because of my user name, "devil", to which symbol your brain has
been conditioned to react.  But once I have figured out how to directly
access and rearrange the semlinks of the human mind, we will simply strap
you onto this device like a sliding stretcher and slide your head into this
great globe, and when you come out, you will be a new man!  Gone will be all
the difficult contradictions of the past, and freed of these old hindrances,
for the first time in your life you will see 20-20, and this will flood your
entire mind with a sense of well being and great joy!

> Regards,
> Sergio Navega.

Your friend,
--CD.

