Selected Newsgroup Message

Dejanews Thread

From: "Sergio Navega" <snavega@ibm.net>
Subject: AI and Skepticism
Date: 01 Mar 1999 00:00:00 GMT
Message-ID: <36da9ae5@news3.us.ibm.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 1 Mar 1999 13:49:25 GMT, 166.72.29.156
Organization: SilWis
Newsgroups: comp.ai.philosophy

I offer here the thought that one AI system must be built
over the model of a "generic skeptic". With so many
opportunities to delude oneself, anything less than a
highly skeptical architecture will be an easy prey of
mysticism.

I propose the use of something based on the scientific
method as the core of the "belief engine" of the AI.
This is a model which constructs its knowledge based
on the evidences provided by its "senses" (whatever
they be) and with coherence with internal causal models
of the world that the system develops.

However, this is not a reference to purely rationalist
ways of thinking (such as the logicist's proposals of AI
or even the ideas of Tversky & Kahneman). This is an idea
which uses the skepticism of the scientist and the ideas of
"falsification" to handle new hypotheses and, also
important, to handle the claims that the AI encounters
during its "life", interacting with people and situations.

The scientific method I'm referring here is not the
traditional Popperian view. He is too much against
inductive forms of reasoning to be useful to AI.
I think it is better to have a mixture in which
inductive methods are used to generate tentative
hypotheses in a way that allows the confirmation by
suitable interactions (questions, for example).

The main goal is to protect the AI system from mystical,
pseudo-scientific things such as, for instance, astrology.

This leads me to think about one of the possible tests to
an AI: to expose the system to astrology as if it were true
and then see if it "fights" with that idea.

I guess that if an AI system is resistant to accept those
nonsense for free, then it may really be useful to mankind.
After all, who wants a gullible computer?

Sergio Navega.

From: houlepn@my-dejanews.com
Subject: Re: AI and Skepticism
Date: 02 Mar 1999 00:00:00 GMT
Message-ID: <7bfng1$rkc$1@nnrp1.dejanews.com>
References: <36da9ae5@news3.us.ibm.net>
X-Http-Proxy: 1.0 x15.dejanews.com:80 (Squid/1.1.22) for client 207.96.163.34
Organization: Deja News - The Leader in Internet Discussion
X-Article-Creation-Date: Tue Mar 02 03:55:49 1999 GMT
Newsgroups: comp.ai.philosophy
X-Http-User-Agent: Mozilla/4.05 [en] (Win95; U)

"Sergio Navega" <snavega@ibm.net> wrote:

> I offer here the thought that one AI system must be built
> over the model of a "generic skeptic". With so many
> opportunities to delude oneself, anything less than a
> highly skeptical architecture will be an easy prey of
> mysticism.

Indeed.  However can't we view both irrational belief systems and
scientific theories as local minima in the search for simplicity and
effectiveness?  Humans had to settle for some imperfect simplifying
hypothesis about the world in order to manage with it's complexity and
identify some useful features and regularities of the environment.
Maybe natural selection was (an analogous of Okham's razor) nature
also used to cut fearless philosophers out of existence.  One get stuck
in an outdated paradigm the same way as species get 'stuck' in their niche
(like sharks and dinosaurs).  While you can get extinct if your niche
disappears (as for dinosaurs) you are even more likely to disappear if you
voluntarily venture out of it.  A rational machine would have the advantage
of being able of coming back 'alive' from failed trips in search of a
lower minimum but my point is that the skepticism pulling it toward
the absolute minimum is the same one keeping it away from the irrationality
mountains surrounding its provisory paradigmatic valley.

> I propose the use of something based on the scientific
> method as the core of the "belief engine" of the AI.
> This is a model which constructs its knowledge based
> on the evidences provided by its "senses" (whatever
> they be) and with coherence with internal causal models
> of the world that the system develops.
>
> However, this is not a reference to purely rationalist
> ways of thinking (such as the logicist's proposals of AI
> or even the ideas of Tversky & Kahneman). This is an idea
> which uses the skepticism of the scientist and the ideas of
> "falsification" to handle new hypotheses and, also
> important, to handle the claims that the AI encounters
> during its "life", interacting with people and situations.
>
> The scientific method I'm referring here is not the
> traditional Popperian view. He is too much against
> inductive forms of reasoning to be useful to AI.
> I think it is better to have a mixture in which
> inductive methods are used to generate tentative
> hypotheses in a way that allows the confirmation by
> suitable interactions (questions, for example).

Again, the AI might tend to become extremely clever in
explaining away non-fitting data.  It should be able to
build a whole new paradigm anew and compare it to the old
one (in a way no human can do.  Humans tends to avoid cognitive
dissonances and they have a hierarchy of beliefs where the deepest
levels are very difficult to modify.)  I would have much hope for
an artificial mind that can rewire itself at the deepest level
(If this sound like suicide thats another reason why humans don't
do it ;) and then compare its overall performance variations with an
independent module applying simple objective criteria (such as Okham's
razor).  This would allow it to judge between two local minima (witch
one represents the best paradigm).  Of course the hard job for the system
is in finding the heuristics pointing to new tentative minima.

> The main goal is to protect the AI system from mystical,
> pseudo-scientific things such as, for instance, astrology.
>
> This leads me to think about one of the possible tests to
> an AI: to expose the system to astrology as if it were true
> and then see if it "fights" with that idea.

That would be a great test!

> I guess that if an AI system is resistant to accept those
> nonsense for free, then it may really be useful to mankind.
> After all, who wants a gullible computer?

The gullible of course ;)

Pierre-Normand Houle

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own   

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: AI and Skepticism
Date: 02 Mar 1999 00:00:00 GMT
Message-ID: <36dc0230@news3.us.ibm.net>
References: <36da9ae5@news3.us.ibm.net> <7bfng1$rkc$1@nnrp1.dejanews.com>
X-Notice: should be reported to postmaster@ibm.net
X-MIMEOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 2 Mar 1999 15:22:24 GMT, 129.37.182.252
Organization: SilWis
Newsgroups: comp.ai.philosophy

houlepn@my-dejanews.com wrote in message
<7bfng1$rkc$1@nnrp1.dejanews.com>...
>"Sergio Navega" <snavega@ibm.net> wrote:
>
>> I offer here the thought that one AI system must be built
>> over the model of a "generic skeptic". With so many
>> opportunities to delude oneself, anything less than a
>> highly skeptical architecture will be an easy prey of
>> mysticism.
>
>Indeed.  However can't we view both irrational belief systems and
>scientific theories as local minima in the search for simplicity and
>effectiveness?  Humans had to settle for some imperfect simplifying
>hypothesis about the world in order to manage with it's complexity and
>identify some useful features and regularities of the environment.
>Maybe natural selection was (an analogous of Okham's razor) nature
>also used to cut fearless philosophers out of existence.  One get stuck
>in an outdated paradigm the same way as species get 'stuck' in their niche
>(like sharks and dinosaurs).  While you can get extinct if your niche
>disappears (as for dinosaurs) you are even more likely to disappear if you
>voluntarily venture out of it.  A rational machine would have the advantage
>of being able of coming back 'alive' from failed trips in search of a
>lower minimum but my point is that the skepticism pulling it toward
>the absolute minimum is the same one keeping it away from the irrationality
>mountains surrounding its provisory paradigmatic valley.
>

There is a significant difference here. A skeptical machine may indeed
be stuck temporarily in a "mystical local minimum". But it will have to
revise that as soon as new evidence gets in. A mystical entity is the
one who distorts the evidence to fit in its mystical causal model.
A skeptical entity must also be able to be skeptic of its own models.

But there's a danger here. If the machine starts with a training in
which only mystical models were presented, it may have some difficulty
in getting out of that loop. It is the equivalent of a child being
born in a family of astrologers. It may take some decades for this
person to get rid of it (if he/she is able at all).

>> I propose the use of something based on the scientific
>> method as the core of the "belief engine" of the AI.
>> This is a model which constructs its knowledge based
>> on the evidences provided by its "senses" (whatever
>> they be) and with coherence with internal causal models
>> of the world that the system develops.
>>
>> However, this is not a reference to purely rationalist
>> ways of thinking (such as the logicist's proposals of AI
>> or even the ideas of Tversky & Kahneman). This is an idea
>> which uses the skepticism of the scientist and the ideas of
>> "falsification" to handle new hypotheses and, also
>> important, to handle the claims that the AI encounters
>> during its "life", interacting with people and situations.
>>
>> The scientific method I'm referring here is not the
>> traditional Popperian view. He is too much against
>> inductive forms of reasoning to be useful to AI.
>> I think it is better to have a mixture in which
>> inductive methods are used to generate tentative
>> hypotheses in a way that allows the confirmation by
>> suitable interactions (questions, for example).
>
>Again, the AI might tend to become extremely clever in
>explaining away non-fitting data.  It should be able to
>build a whole new paradigm anew and compare it to the old
>one (in a way no human can do.  Humans tends to avoid cognitive
>dissonances and they have a hierarchy of beliefs where the deepest
>levels are very difficult to modify.)

I think you're right and pointed out a significant problem but
I believe that the AI will get rid of it the way we did. If we
were interested in building one AI to launch in Mars and see what
it managed to do, unsupervised, after 4 decades, I believe we
would find a *monster*.

Our advantage is that we have science, and science is a collective
work. It is rarely the outcome of a single man, it is the product
of discussions and debates. The useful AI will have to fit into that
model, being able to "listen" to different arguments and, based on that,
eventually revise its beliefs and be prepared to defend them. The AI
must be capable of interacting with the "society" around it to benefit
from the diversity of viewpoints, besides contributing with its insight.

> I would have much hope for
>an artificial mind that can rewire itself at the deepest level
>(If this sound like suicide thats another reason why humans don't
>do it ;) and then compare its overall performance variations with an
>independent module applying simple objective criteria (such as Okham's
>razor).  This would allow it to judge between two local minima (witch
>one represents the best paradigm).  Of course the hard job for the system
>is in finding the heuristics pointing to new tentative minima.
>

I agree that something like this is necessary. But I suspect that this
alone will be useless. Without interaction with diverging ideas, the
system may find a confortable way to rest its misconceptions and
fit the whole world into that.

With this I say that our knowledge of science that we have today
is the result of a lot of social interactions among scientists and
no single, isolated scientist (even if he could have lived 10,000 years)
could have produced these models. This means I put two characteristics
to avoid a mystical machine: a skeptical belief engine and the
active presence in a society of ideas.

>> The main goal is to protect the AI system from mystical,
>> pseudo-scientific things such as, for instance, astrology.
>>
>> This leads me to think about one of the possible tests to
>> an AI: to expose the system to astrology as if it were true
>> and then see if it "fights" with that idea.
>
>That would be a great test!
>

The test is valid even if the machine "buys" astrology (which is
what I think will happen during the first tests). We would have
to tell it later that it was wrong, and tell it *why* it was wrong.
Then, our hope would be that it will recognize the important points
of this lesson and don't be prey of future nonsenses.

Regards,
Sergio Navega.

From: houlepn@ibm.net
Subject: Re: AI and Skepticism
Date: 03 Mar 1999 00:00:00 GMT
Message-ID: <7bi23k$s7p$1@nnrp1.dejanews.com>
References: <36da9ae5@news3.us.ibm.net> <7bfng1$rkc$1@nnrp1.dejanews.com> <36dc0230@news3.us.ibm.net>
X-Http-Proxy: 1.0 x7.dejanews.com:80 (Squid/1.1.22) for client 207.96.163.34
Organization: Deja News - The Leader in Internet Discussion
X-Article-Creation-Date: Wed Mar 03 01:09:15 1999 GMT
Newsgroups: comp.ai.philosophy
X-Http-User-Agent: Mozilla/4.05 [en] (Win95; U)

"Sergio Navega" <snavega@ibm.net> wrote:

> houlepn@ibm.net (Pierre-Normand Houle) wrote

>>> [S. NAVEGA]
>>> I offer here the thought that one AI system must be built
>>> over the model of a "generic skeptic". With so many
>>> opportunities to delude oneself, anything less than a
>>> highly skeptical architecture will be an easy prey of
>>> mysticism.
>>
>> Indeed.  However can't we view both irrational belief systems and
>> scientific theories as local minima in the search for simplicity and
>> effectiveness?  Humans had to settle for some imperfect simplifying
>> hypothesis about the world in order to manage with it's complexity and
>> identify some useful features and regularities of the environment.
>> Maybe natural selection was (an analogous of Okham's razor) nature
>> also used to cut fearless philosophers out of existence.  One get stuck
>> in an outdated paradigm the same way as species get 'stuck' in their niche
>> (like sharks and dinosaurs).  While you can get extinct if your niche
>> disappears (as for dinosaurs) you are even more likely to disappear if you
>> voluntarily venture out of it.  A rational machine would have the advantage
>> of being able of coming back 'alive' from failed trips in search of a
>> lower minimum but my point is that the skepticism pulling it toward
>> the absolute minimum is the same one keeping it away from the irrationality
>> mountains surrounding its provisory paradigmatic valley.
>
> There is a significant difference here. A skeptical machine may indeed
> be stuck temporarily in a "mystical local minimum". But it will have to
> revise that as soon as new evidence gets in. A mystical entity is the
> one who distorts the evidence to fit in its mystical causal model.
> A skeptical entity must also be able to be skeptic of its own models.

Indeed I would expect the AI to take into account the new evidence but not
necessarily by revising its causal models.  Many times it makes more sense
to adjust the facts to the theory than the other way around.  This is why we
don't revise our belief in the laws of physics every time we attend to a magic
show of fall victim of sensory illusions.  When the orbits of Uranus and
later Neptune were found not to follow Newton's law the facts were adjusted
by postulating unseen planets (Neptune and Pluto respectively).  When an
anomaly was found in the orbit of Mercury, the planet Vulcan was postulated
but this one was never to be found.  Only in retrospect with the possession
of Einstein's general relativity are we able to see that the falsification
of Newton's theory is warranted.

> But there's a danger here. If the machine starts with a training in
> which only mystical models were presented, it may have some difficulty
> in getting out of that loop. It is the equivalent of a child being
> born in a family of astrologers. It may take some decades for this
> person to get rid of it (if he/she is able at all).

Maybe astrology has such a hold on so many minds because of an intrinsic
weaknesses of the human mind:  We are very bad at estimating probabilities.
This gives rise to many false corroborations of astrological predictions.
I expect machines to be more easily immunizable against such nonsense.
I am more concerned about how the AI will be able to overthrow more 'rational' theories such as maybe: Newtonian mechanics, creationism, Marxism, Freudian psychology...   which are all 'hard' to falsify with simple data. (Most
falsifications I can think of are already formulated in the framework
of an already formulated alternate paradigm.  Since the believers do not
accept (or know) the alternate paradigm, they do not take the falsifying
evidence at face value)

>>> I propose the use of something based on the scientific
>>> method as the core of the "belief engine" of the AI.
>>> This is a model which constructs its knowledge based
>>> on the evidences provided by its "senses" (whatever
>>> they be) and with coherence with internal causal models
>>> of the world that the system develops.
>>>
>>> However, this is not a reference to purely rationalist
>>> ways of thinking (such as the logicist's proposals of AI
>>> or even the ideas of Tversky & Kahneman). This is an idea
>>> which uses the skepticism of the scientist and the ideas of
>>> "falsification" to handle new hypotheses and, also
>>> important, to handle the claims that the AI encounters
>>> during its "life", interacting with people and situations.
>>>
>>> The scientific method I'm referring here is not the
>>> traditional Popperian view. He is too much against
>>> inductive forms of reasoning to be useful to AI.
>>> I think it is better to have a mixture in which
>>> inductive methods are used to generate tentative
>>> hypotheses in a way that allows the confirmation by
>>> suitable interactions (questions, for example).
>>
>> Again, the AI might tend to become extremely clever in
>> explaining away non-fitting data.  It should be able to
>> build a whole new paradigm anew and compare it to the old
>> one (in a way no human can do.  Humans tends to avoid cognitive
>> dissonances and they have a hierarchy of beliefs where the deepest
>> levels are very difficult to modify.)
>
> I think you're right and pointed out a significant problem but
> I believe that the AI will get rid of it the way we did. If we
> were interested in building one AI to launch in Mars and see what
> it managed to do, unsupervised, after 4 decades, I believe we
> would find a *monster*.
>
> Our advantage is that we have science, and science is a collective
> work. It is rarely the outcome of a single man, it is the product
> of discussions and debates. The useful AI will have to fit into that
> model, being able to "listen" to different arguments and, based on that,
> eventually revise its beliefs and be prepared to defend them. The AI
> must be capable of interacting with the "society" around it to benefit
> from the diversity of viewpoints, besides contributing with its insight.

I would expect the AI to really transcend human condition if it could
emulate a scientific community rather than a single human scientist.
Just as science often progresses because of the young generation adopting
the more promising ideas held by a minority in the old generation (rather
than Hoyles surrendering to Gamowes, Wilberforces to Huxleys and Einsteins
to Bohrs)  I would like the AI to build many paradigms, compare them
and kill the losers.  I think it requires less intelligence to judge
alternate paradigms as it does to build them.  This might not be easy to
implement in machines that have no emotional commitment to a single
monolithic self as humans do.

<snip (agreed stuff)>

>>> The main goal is to protect the AI system from mystical,
>>> pseudo-scientific things such as, for instance, astrology.
>>>
>>> This leads me to think about one of the possible tests to
>>> an AI: to expose the system to astrology as if it were true
>>> and then see if it "fights" with that idea.
>>
>> That would be a great test!
>
> The test is valid even if the machine "buys" astrology (which is
> what I think will happen during the first tests). We would have
> to tell it later that it was wrong, and tell it *why* it was wrong.
> Then, our hope would be that it will recognize the important points
> of this lesson and don't be prey of future nonsenses.

This would be a great way of falsifying our own ideas about epistemology
and the philosophy of science if the machine still buy astrology or
some 'deeper' nonsense!

Pierre-Normand Houle

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own   

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: AI and Skepticism
Date: 03 Mar 1999 00:00:00 GMT
Message-ID: <36dd5d5f@news3.us.ibm.net>
References: <36da9ae5@news3.us.ibm.net> <7bfng1$rkc$1@nnrp1.dejanews.com> <36dc0230@news3.us.ibm.net> <7bi23k$s7p$1@nnrp1.dejanews.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 3 Mar 1999 16:03:43 GMT, 200.229.243.104
Organization: SilWis
Newsgroups: comp.ai.philosophy

houlepn@ibm.net wrote in message <7bi23k$s7p$1@nnrp1.dejanews.com>...
>"Sergio Navega" <snavega@ibm.net> wrote:
>
>> houlepn@ibm.net (Pierre-Normand Houle) wrote
>
>>>> [S. NAVEGA]
>>>> I offer here the thought that one AI system must be built
>>>> over the model of a "generic skeptic". With so many
>>>> opportunities to delude oneself, anything less than a
>>>> highly skeptical architecture will be an easy prey of
>>>> mysticism.
>>>
>>> Indeed.  However can't we view both irrational belief systems and
>>> scientific theories as local minima in the search for simplicity and
>>> effectiveness?  Humans had to settle for some imperfect simplifying
>>> hypothesis about the world in order to manage with it's complexity and
>>> identify some useful features and regularities of the environment.
>>> Maybe natural selection was (an analogous of Okham's razor) nature
>>> also used to cut fearless philosophers out of existence.  One get stuck
>>> in an outdated paradigm the same way as species get 'stuck' in their niche
>>> (like sharks and dinosaurs).  While you can get extinct if your niche
>>> disappears (as for dinosaurs) you are even more likely to disappear if you
>>> voluntarily venture out of it.  A rational machine would have the advantage
>>> of being able of coming back 'alive' from failed trips in search of a
>>> lower minimum but my point is that the skepticism pulling it toward
>>> the absolute minimum is the same one keeping it away from the irrationality
>>> mountains surrounding its provisory paradigmatic valley.
>>
>> There is a significant difference here. A skeptical machine may indeed
>> be stuck temporarily in a "mystical local minimum". But it will have to
>> revise that as soon as new evidence gets in. A mystical entity is the
>> one who distorts the evidence to fit in its mystical causal model.
>> A skeptical entity must also be able to be skeptic of its own models.
>
>Indeed I would expect the AI to take into account the new evidence but not
>necessarily by revising its causal models.  Many times it makes more sense
>to adjust the facts to the theory than the other way around.  This is why we
>don't revise our belief in the laws of physics every time we attend to a magic
>show of fall victim of sensory illusions.

However, the number of believers in astrology, numerology, graphology and
other "rubbishologies" is very, very large. Humans have some serious deficits
in this area, although I don't ascribe those deficits to our cognition or
causal reasoning, but to our emotions.

> When the orbits of Uranus and
>later Neptune were found not to follow Newton's law the facts were adjusted
>by postulating unseen planets (Neptune and Pluto respectively).  When an
>anomaly was found in the orbit of Mercury, the planet Vulcan was postulated
>but this one was never to be found.  Only in retrospect with the possession
>of Einstein's general relativity are we able to see that the falsification
>of Newton's theory is warranted.
>

On the former case, I don't think we had adjusted the facts, I think
we adjusted our models. The facts remained pretty much the same, that
Uranus and Neptune's orbit had anomalies. But I agree that there is
a limit in which we challenge our theories (Einstein's case) but that
will have to be treated as a "revolution". I guess one AI system will
have to understand what it means to challenge its long term models and
eventually (although at first reluctantly) surrender to inevitable
evidences.

>> But there's a danger here. If the machine starts with a training in
>> which only mystical models were presented, it may have some difficulty
>> in getting out of that loop. It is the equivalent of a child being
>> born in a family of astrologers. It may take some decades for this
>> person to get rid of it (if he/she is able at all).
>
>Maybe astrology has such a hold on so many minds because of an intrinsic
>weaknesses of the human mind:  We are very bad at estimating probabilities.
>This gives rise to many false corroborations of astrological predictions.
>I expect machines to be more easily immunizable against such nonsense.

I think humans "buy" astrology primarily because we have a 'wish to
believe'. When you talk to an astrologer, you see that he/she really
believes that stuff and refuses to even take into consideration
alternative (and much more sound) hypotheses to explain the "hits"
(although they are very fast at forgetting the much greater quantity
of "misses"). They really want to believe.

An AI system will not (hopefully) have those "desires to believe", that
I assign to emotional origins. While this can be good in the present
context, it may be *very* bad, if the system starts reasoning that
human life is not that important...the greatest danger is in the
AI convincing us that this may be true :-)

>I am more concerned about how the AI will be able to overthrow more
'rational'
>theories such as maybe: Newtonian mechanics, creationism, Marxism, Freudian
>psychology...  which are all 'hard' to falsify with simple data. (Most
>falsifications I can think of are already formulated in the framework
>of an already formulated alternate paradigm.  Since the believers do not
>accept (or know) the alternate paradigm, they do not take the falsifying
>evidence at face value)
>

From the ones you cited I find Freud's and Marx's the hardest because
they are models with really few empirical points to verify (creationism
is, by the way, the one I think is the easiest to falsify). On the hard
cases, I would say that a society of ideas is worthwhile: the AI will
have to consider other proposed models and judge them based on criteria
such as simplicity, explanatory power, generality, etc (I left the 'etc'
to the philosophers of science). I guess that even the criteria will
have to be learned (one method could be to reason analogically in
relation to other good theories that the system have). Eventually,
the system will come up with Carl Sagan's wonderful phrase
("Extraordinary claims must be backed by extraordinary evidences").

>>[snip]
>> Our advantage is that we have science, and science is a collective
>> work. It is rarely the outcome of a single man, it is the product
>> of discussions and debates. The useful AI will have to fit into that
>> model, being able to "listen" to different arguments and, based on that,
>> eventually revise its beliefs and be prepared to defend them. The AI
>> must be capable of interacting with the "society" around it to benefit
>> from the diversity of viewpoints, besides contributing with its insight.
>
>I would expect the AI to really transcend human condition if it could
>emulate a scientific community rather than a single human scientist.
>Just as science often progresses because of the young generation adopting
>the more promising ideas held by a minority in the old generation (rather
>than Hoyles surrendering to Gamowes, Wilberforces to Huxleys and Einsteins
>to Bohrs)  I would like the AI to build many paradigms, compare them
>and kill the losers.  I think it requires less intelligence to judge
>alternate paradigms as it does to build them.  This might not be easy to
>implement in machines that have no emotional commitment to a single
>monolithic self as humans do.
>

Or don't kill the losers, but let them die by themselves (I'm ready
to accept astrology, as soon as an independent and authoritative group
of scientists come with double-blind reports, propose a causal model
and publish that in Nature or Science ;-) (oh, I was forgetting,
I'll start believing after one year from the publication, provided
no serious paper against the original reports were issued...).

><snip (agreed stuff)>
>
>>>> The main goal is to protect the AI system from mystical,
>>>> pseudo-scientific things such as, for instance, astrology.
>>>>
>>>> This leads me to think about one of the possible tests to
>>>> an AI: to expose the system to astrology as if it were true
>>>> and then see if it "fights" with that idea.
>>>
>>> That would be a great test!
>>
>> The test is valid even if the machine "buys" astrology (which is
>> what I think will happen during the first tests). We would have
>> to tell it later that it was wrong, and tell it *why* it was wrong.
>> Then, our hope would be that it will recognize the important points
>> of this lesson and don't be prey of future nonsenses.
>
>This would be a great way of falsifying our own ideas about epistemology
>and the philosophy of science if the machine still buy astrology or
>some 'deeper' nonsense!
>

Yes, and I am excited by the possibility of having a skeptical mechanical
and intelligent partner able to confront my beliefs! Imagine how useful
that machine could be to any scientist! I propose that the hardware of
that "dream machine" is sitting in front of us right now (that's the
most I can do with my 'wish to believe').

Regards,
Sergio Navega.

From: houlepn@ibm.net
Subject: Re: AI and Skepticism
Date: 04 Mar 1999 00:00:00 GMT
Message-ID: <7bkm17$5ub$1@nnrp1.dejanews.com>
References: <36da9ae5@news3.us.ibm.net> <7bfng1$rkc$1@nnrp1.dejanews.com> <36dc0230@news3.us.ibm.net> <7bi23k$s7p$1@nnrp1.dejanews.com> <36dd5d5f@news3.us.ibm.net>
X-Http-Proxy: 1.0 x8.dejanews.com:80 (Squid/1.1.22) for client 207.96.163.34
Organization: Deja News - The Leader in Internet Discussion
X-Article-Creation-Date: Thu Mar 04 01:01:33 1999 GMT
Newsgroups: comp.ai.philosophy
X-Http-User-Agent: Mozilla/4.05 [en] (Win95; U)

"Sergio Navega" <snavega@ibm.net> wrote:

> houlepn@ibm.net (Pierre-Normand Houle) wrote:

>>>>> [S. NAVEGA]
>>>>> I offer here the thought that one AI system must be built
>>>>> over the model of a "generic skeptic". With so many
>>>>> opportunities to delude oneself, anything less than a
>>>>> highly skeptical architecture will be an easy prey of
>>>>> mysticism.
>>>>
>>>> [PNH]
>>>> Indeed.  However can't we view both irrational belief systems and
>>>> scientific theories as local minima in the search for simplicity and
>>>> effectiveness?  Humans had to settle for some imperfect simplifying
>>>> hypothesis about the world in order to manage with it's complexity
>>>> and identify some useful features and regularities of the environment.
>>>> Maybe natural selection was (an analogous of Okham's razor) nature
>>>> also used to cut fearless philosophers out of existence.  One get
>>>> stuck in an outdated paradigm the same way as species get 'stuck' in
>>>> their niche (like sharks and dinosaurs).  While you can get extinct
>>>> if your niche disappears (as for dinosaurs) you are even more likely
>>>> to disappear if you voluntarily venture out of it.  A rational machine
>>>> would have the advantage of being able of coming back 'alive' from
>>>> failed trips in search of a lower minimum but my point is that the
>>>> skepticism pulling it toward the absolute minimum is the same one
>>>> keeping it away from the irrationality mountains surrounding its
>>>> provisory paradigmatic valley.
>>>
>>> [S. NAVEGA]
>>> There is a significant difference here. A skeptical machine may indeed
>>> be stuck temporarily in a "mystical local minimum". But it will have to
>>> revise that as soon as new evidence gets in. A mystical entity is the
>>> one who distorts the evidence to fit in its mystical causal model.
>>> A skeptical entity must also be able to be skeptic of its own models.
>>
>> [PNH]
>> Indeed I would expect the AI to take into account the new evidence but
>> not necessarily by revising its causal models.  Many times it makes
>> more sense to adjust the facts to the theory than the other way around.
>> This is why we don't revise our belief in the laws of physics every
>> time we attend to a magic show of fall victim of sensory illusions.
>
> [S. NAVEGA]
> However, the number of believers in astrology, numerology, graphology
> and other "rubbishologies" is very, very large. Humans have some serious
> deficits in this area, although I don't ascribe those deficits to our
> cognition or causal reasoning, but to our emotions.

[PNH]
In the case of humans I don't think one can separate easily cognition
from emotion.  Humans acquire good thinking habits by developing taste
for 'beautiful' ideas, effective reasoning patterns and distaste for
'ugly' ones.  These sets of likes and dislikes pretty much define
scientific, political, artistic etc. paradigms.  This entanglement
of reasoning ability and emotions might be often overlooked due to the
fact that we are a social animal and rejecting the paradigm of the group
we've chosen to belong to is a treachery that might compromise our
survival.  This might be another reason nature has made us so wary of
cognitive dissonances.  When arguing with somebody and failing to convince
him of our viewpoint we are tempted to attribute his stubbornness to mere
commitment to an irrational position due to some personal flaw.  We are
overlooking the fact that his distaste for our own rational beliefs are
no more irrational than our own fondness for them.  Our emotions make us
choose rationality.

>> [PNH]
>> When the orbits of Uranus and
>> later Neptune were found not to follow Newton's law the facts were adjusted
>> by postulating unseen planets (Neptune and Pluto respectively).  When an
>> anomaly was found in the orbit of Mercury, the planet Vulcan was postulated
>> but this one was never to be found.  Only in retrospect with the possession
>> of Einstein's general relativity are we able to see that the falsification
>> of Newton's theory is warranted.
>
> [S. NAVEGA]
> On the former case, I don't think we had adjusted the facts, I think
> we adjusted our models. The facts remained pretty much the same, that

But only in retrospect can we untangle the facts from the model.  The
model can always be stretched to accommodate any new fact with the
adjunction of some ad hock hypothesis.  The model provides a mean for
translating the sensory input (or raw knowledge) into a more useful
form.  We call the elements of this model-dependent representation
'facts'.

> Uranus and Neptune's orbit had anomalies. But I agree that there is
> a limit in which we challenge our theories (Einstein's case) but that
> will have to be treated as a "revolution". I guess one AI system will
> have to understand what it means to challenge its long term models and
> eventually (although at first reluctantly) surrender to inevitable
> evidences.

Agreed, and I really feel the rational engine really has to be killed
or at least 'brainwashed' by the meta-reasoning engine.  I view the
theorizing entity (the 'facts' manager) and the judging entity (the
'brain' surgeon) as two quite separate conceptual entities.  (Although
in concrete physical implementations they can be arbitrarily entangled,
as natural evolution likes to do with parallel systems while reusing
the existing hardware)

>>> [S. NAVEGA]
>>> But there's a danger here. If the machine starts with a training in
>>> which only mystical models were presented, it may have some difficulty
>>> in getting out of that loop. It is the equivalent of a child being
>>> born in a family of astrologers. It may take some decades for this
>>> person to get rid of it (if he/she is able at all).
>>
>> [PNH]
>> Maybe astrology has such a hold on so many minds because of an
>> intrinsic weaknesses of the human mind:  We are very bad at estimating
>> probabilities.  This gives rise to many false corroborations of
>> astrological predictions.  I expect machines to be more easily
>> immunizable against such nonsense.
>
> [S. NAVEGA]
> I think humans "buy" astrology primarily because we have a 'wish to
> believe'. When you talk to an astrologer, you see that he/she really
> believes that stuff and refuses to even take into consideration
> alternative (and much more sound) hypotheses to explain the "hits"
> (although they are very fast at forgetting the much greater quantity
> of "misses"). They really want to believe.
>
> An AI system will not (hopefully) have those "desires to believe", that
> I assign to emotional origins. While this can be good in the present
> context, it may be *very* bad, if the system starts reasoning that
> human life is not that important...the greatest danger is in the
> AI convincing us that this may be true :-)

[PNH]
If we can agree that 'desires' are just ascriptions we make on autonomous
goal oriented systems (including humans) and not any extra magical stuff
then I agree that we can dispense with the AI 'desires' not to change
it's mind (fear of cognitive dissonances) or not to betray it's friends
(animal sociability) but not with it's 'desire' to espouse good
scientific principles.  I don't think any of these goal is more
intrinsically 'emotional' for being irrational.

I agree with everything you said in the remaining of this post.

Regards,
Pierre-Normand Houle

>> I am more concerned about how the AI will be able to overthrow more
>> 'rational' theories such as maybe: Newtonian mechanics, creationism,
>> Marxism, Freudian psychology...  which are all 'hard' to falsify with
>> simple data. (Most falsifications I can think of are already formulated
>> in the framework of an already formulated alternate paradigm.  Since
>> the believers do not accept (or know) the alternate paradigm, they do
>> not take the falsifying evidence at face value)
>
> From the ones you cited I find Freud's and Marx's the hardest because
> they are models with really few empirical points to verify (creationism
> is, by the way, the one I think is the easiest to falsify). On the hard
> cases, I would say that a society of ideas is worthwhile: the AI will
> have to consider other proposed models and judge them based on criteria
> such as simplicity, explanatory power, generality, etc (I left the 'etc'
> to the philosophers of science). I guess that even the criteria will
> have to be learned (one method could be to reason analogically in
> relation to other good theories that the system have). Eventually,
> the system will come up with Carl Sagan's wonderful phrase
> ("Extraordinary claims must be backed by extraordinary evidences").
>
>>> [snip]
>>> Our advantage is that we have science, and science is a collective
>>> work. It is rarely the outcome of a single man, it is the product
>>> of discussions and debates. The useful AI will have to fit into that
>>> model, being able to "listen" to different arguments and, based on that,
>>> eventually revise its beliefs and be prepared to defend them. The AI
>>> must be capable of interacting with the "society" around it to benefit
>>> from the diversity of viewpoints, besides contributing with its insight.
>>
>> I would expect the AI to really transcend human condition if it could
>> emulate a scientific community rather than a single human scientist.
>> Just as science often progresses because of the young generation adopting
>> the more promising ideas held by a minority in the old generation (rather
>> than Hoyles surrendering to Gamowes, Wilberforces to Huxleys and Einsteins
>> to Bohrs)  I would like the AI to build many paradigms, compare them
>> and kill the losers.  I think it requires less intelligence to judge
>> alternate paradigms as it does to build them.  This might not be easy to
>> implement in machines that have no emotional commitment to a single
>> monolithic self as humans do.
>
> Or don't kill the losers, but let them die by themselves (I'm ready
> to accept astrology, as soon as an independent and authoritative group
> of scientists come with double-blind reports, propose a causal model
> and publish that in Nature or Science ;-) (oh, I was forgetting,
> I'll start believing after one year from the publication, provided
> no serious paper against the original reports were issued...).
>
>>>>> <snip (agreed stuff)>
>>>>>
>>>>> The main goal is to protect the AI system from mystical,
>>>>> pseudo-scientific things such as, for instance, astrology.
>>>>>
>>>>> This leads me to think about one of the possible tests to
>>>>> an AI: to expose the system to astrology as if it were true
>>>>> and then see if it "fights" with that idea.
>>>>
>>>> That would be a great test!
>>>
>>> The test is valid even if the machine "buys" astrology (which is
>>> what I think will happen during the first tests). We would have
>>> to tell it later that it was wrong, and tell it *why* it was wrong.
>>> Then, our hope would be that it will recognize the important points
>>> of this lesson and don't be prey of future nonsenses.
>>
>> This would be a great way of falsifying our own ideas about epistemology
>> and the philosophy of science if the machine still buy astrology or
>> some 'deeper' nonsense!
>
> Yes, and I am excited by the possibility of having a skeptical mechanical
> and intelligent partner able to confront my beliefs! Imagine how useful
> that machine could be to any scientist! I propose that the hardware of
> that "dream machine" is sitting in front of us right now (that's the
> most I can do with my 'wish to believe').

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own   

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: AI and Skepticism
Date: 04 Mar 1999 00:00:00 GMT
Message-ID: <36ded773@news3.us.ibm.net>
References: <36da9ae5@news3.us.ibm.net> <7bfng1$rkc$1@nnrp1.dejanews.com> <36dc0230@news3.us.ibm.net> <7bi23k$s7p$1@nnrp1.dejanews.com> <36dd5d5f@news3.us.ibm.net> <7bkm17$5ub$1@nnrp1.dejanews.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 4 Mar 1999 18:56:51 GMT, 166.72.21.2
Organization: SilWis
Newsgroups: comp.ai.philosophy

houlepn@ibm.net wrote in message <7bkm17$5ub$1@nnrp1.dejanews.com>...
>"Sergio Navega" <snavega@ibm.net> wrote:
>
>> houlepn@ibm.net (Pierre-Normand Houle) wrote:
>>>[big snip]
>>> [PNH]
>>> Indeed I would expect the AI to take into account the new evidence but
>>> not necessarily by revising its causal models.  Many times it makes
>>> more sense to adjust the facts to the theory than the other way around.
>>> This is why we don't revise our belief in the laws of physics every
>>> time we attend to a magic show of fall victim of sensory illusions.
>>
>> [S. NAVEGA]
>> However, the number of believers in astrology, numerology, graphology
>> and other "rubbishologies" is very, very large. Humans have some serious
>> deficits in this area, although I don't ascribe those deficits to our
>> cognition or causal reasoning, but to our emotions.
>
>[PNH]
>In the case of humans I don't think one can separate easily cognition
>from emotion.  Humans acquire good thinking habits by developing taste
>for 'beautiful' ideas, effective reasoning patterns and distaste for
>'ugly' ones.  These sets of likes and dislikes pretty much define
>scientific, political, artistic etc. paradigms.  This entanglement
>of reasoning ability and emotions might be often overlooked due to the
>fact that we are a social animal and rejecting the paradigm of the group
>we've chosen to belong to is a treachery that might compromise our
>survival.  This might be another reason nature has made us so wary of
>cognitive dissonances.  When arguing with somebody and failing to convince
>him of our viewpoint we are tempted to attribute his stubbornness to mere
>commitment to an irrational position due to some personal flaw.  We are
>overlooking the fact that his distaste for our own rational beliefs are
>no more irrational than our own fondness for them.  Our emotions make us
>choose rationality.
>

I think you're right and your text is thoughtful.
I also like to play with the brain (seen as a perceiver and knowledge
builder) in the middle of a "war" between the external world and
our inner emotional drives. Someway this brain develops concepts
of "beautiful" and taste based on the association it does between
the external views and its internal emotional needs. This would
put personal preferences in the realm of the learned things and
primitive, babyhood experiences may count a lot.

That would explain why so many people like unreasonably spicy food
(as is common in Mexico and India). With this hypothesis, I want
to say that the "pleasure center" is the same, but the "pleasure
item" is something developed through experiences.

>> Uranus and Neptune's orbit had anomalies. But I agree that there is
>> a limit in which we challenge our theories (Einstein's case) but that
>> will have to be treated as a "revolution". I guess one AI system will
>> have to understand what it means to challenge its long term models and
>> eventually (although at first reluctantly) surrender to inevitable
>> evidences.
>
>Agreed, and I really feel the rational engine really has to be killed
>or at least 'brainwashed' by the meta-reasoning engine.  I view the
>theorizing entity (the 'facts' manager) and the judging entity (the
>'brain' surgeon) as two quite separate conceptual entities.  (Although
>in concrete physical implementations they can be arbitrarily entangled,
>as natural evolution likes to do with parallel systems while reusing
>the existing hardware)
>

This is a very interesting idea. The meta-reasoning may propose
the new model as tentative to the reasoning mechanism. The goal is,
before abandoning the long-term beliefs, check if the new model
is better than the old. This may require some time (spanning
days or even months) but is something that appears to happen
with us when we're being "tempted" by a different viewpoint.
After some time, we abandon the old model in favor of the new
or construct arguments to falsify the new.

Regards,
Sergio Navega.

From: daryl@cogentex.com (Daryl McCullough)
Subject: AI and the Scientific Method: (Was Re: AI and Skepticism)
Date: 02 Mar 1999 00:00:00 GMT
Message-ID: <7bhpn8$fka@edrn.newsguy.com>
References: <36da9ae5@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy

Sergio says...

>I propose the use of something based on the scientific
>method as the core of the "belief engine" of the AI.
>This is a model which constructs its knowledge based
>on the evidences provided by its "senses" (whatever
>they be) and with coherence with internal causal models
>of the world that the system develops.

I agree wholeheartedly! As a matter of fact, I was
thinking of posting the same suggestion.

There are conceptual problems involved in implementing
the scientific method in an AI. In particular, it's not
clear how to implement a method for generating plausible
hypotheses for observed phenomena. However, it definitely
seems to be the case that very often a *failure* of intelligence
in a computer program can be analyzed in terms of the program's
inability to notice patterns in its input and come up with
plausible hypotheses for explaining the pattern.

For xampl, this paragraph dos not mak any sns. Howvr, most humans
would notic xactly whr th pculiaritis occurrd and hypothsiz
an xplanation for thos pculiartis. Mayb thr is somthing wrong
with my  kyboard; mayb on of th kys don't work corrctly.

Figuring out what is going on in the above paragraph is not
hard for a human, but would be next to impossible for a
computer program that was not specifically programmed for
that sort of thing. For a truly robust artificially intelligent
program, there needs to be some kind of background process (maybe
implemented as a collection of agents?) that looks for
strange and possibly significant patterns in the program's
inputs, and tries to hypothesize explanations for those
patterns. Then the hypotheses could be subjected to tests,
in the manner of the scientific method, and if they turn
out to be contradicted by future data, they can be modified
or discarded.

>The scientific method I'm referring here is not the
>traditional Popperian view. He is too much against
>inductive forms of reasoning to be useful to AI.
>I think it is better to have a mixture in which
>inductive methods are used to generate tentative
>hypotheses in a way that allows the confirmation by
>suitable interactions (questions, for example).

I don't think that Popper had anything against
induction as a means of *generating* hypotheses.
It is just that he didn't give any special status
to induction. A hypothesis formed by induction
over a finite number of instances should be
tested in the same way that any other hypothesis
is.

>The main goal is to protect the AI system from mystical,
>pseudo-scientific things such as, for instance, astrology.

I think it is much more than that. In any example of
learning, for example, learning how to speak a new language,
the learner must constantly make new hypotheses about
the subject he is learning, and must constantly adjust
those hypotheses in the light of new information. I think
that some variant of the scientific method is appropriate
for *any* kind of learning.

Daryl McCullough
CoGenTex, Inc.
Ithaca, NY

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: AI and the Scientific Method: (Was Re: AI and Skepticism)
Date: 03 Mar 1999 00:00:00 GMT
Message-ID: <36dd5d5c@news3.us.ibm.net>
References: <36da9ae5@news3.us.ibm.net> <7bhpn8$fka@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 3 Mar 1999 16:03:40 GMT, 200.229.243.104
Organization: SilWis
Newsgroups: comp.ai.philosophy

Daryl McCullough wrote in message <7bhpn8$fka@edrn.newsguy.com>...
>Sergio says...
>
>>I propose the use of something based on the scientific
>>method as the core of the "belief engine" of the AI.
>>This is a model which constructs its knowledge based
>>on the evidences provided by its "senses" (whatever
>>they be) and with coherence with internal causal models
>>of the world that the system develops.
>
>I agree wholeheartedly! As a matter of fact, I was
>thinking of posting the same suggestion.
>
>There are conceptual problems involved in implementing
>the scientific method in an AI. In particular, it's not
>clear how to implement a method for generating plausible
>hypotheses for observed phenomena. However, it definitely
>seems to be the case that very often a *failure* of intelligence
>in a computer program can be analyzed in terms of the program's
>inability to notice patterns in its input and come up with
>plausible hypotheses for explaining the pattern.
>
>For xampl, this paragraph dos not mak any sns. Howvr, most humans
>would notic xactly whr th pculiaritis occurrd and hypothsiz
>an xplanation for thos pculiartis. Mayb thr is somthing wrong
>with my  kyboard; mayb on of th kys don't work corrctly.
>
>Figuring out what is going on in the above paragraph is not
>hard for a human, but would be next to impossible for a
>computer program that was not specifically programmed for
>that sort of thing. For a truly robust artificially intelligent
>program, there needs to be some kind of background process (maybe
>implemented as a collection of agents?) that looks for
>strange and possibly significant patterns in the program's
>inputs, and tries to hypothesize explanations for those
>patterns. Then the hypotheses could be subjected to tests,
>in the manner of the scientific method, and if they turn
>out to be contradicted by future data, they can be modified
>or discarded.
>

Dear Daryl,

Thank you, thank you for coming up with these observations.
It gives me a chance of exposing points that I consider
to be in the cornerstone of intelligence.

You seem to have considered the perception of a pattern in
your cryptic paragraph as being important to intelligence.
I don't think it is important. I think it is *fundamental*
for intelligence.

In my vision of intelligence, there's a mechanism capable of
perceiving patterns (I'll left for another post my answer to
the complaints of some philosophers that in order to perceive
a pattern you must have some sort of similarity criterion
already present).

You provided a suggestion that there should be background
processes looking for strange patterns. I say that this
is how I believe our brain operates, but instead of threads
of processes, I put something like spreading activation.
The big thing here is that we seem to use, in this process,
a lot of activations from different domains (in your example,
we humans obtain good performance because we use, simultaneously,
patterns in phonological areas, syntax areas, semantic
constraints and even visual aspects such as a capital letter
after a period). Let me take the first line you proposed:

>For xampl, this paragraph dos not mak any sns. Howvr, most humans

Starting with 'For xampl' (because of the comma, that is
a visual delimiter) the agent would spread the activations
that would sprout because of the phonological origin of 'For':

- for delivery...
- four aces...
- forbidden...
- for instance...
- for your information...
- for example...
- fourth quarter...
- four little pigs...
- forever (Batman)
- therefore...
.....and a hundred more

Phonological patterns certainly help here. Now the 'xampl'

- amplifier
- amperage
- X-files
- ample
- example
- amplitude
......and a hundred more

Obviously, I'm not proposing here a method to activate these things.
I don't know how this works (yet). But I see no other way around
this. Note that some words may fire other spreading waves, releasing
even more candidates. It is a storm of activations that may use
a significant amount of resources.

Eventually, the waves of activation of "For example..." and "example"
"meet" one another in a certain point, and the result of this collision
is a "click" that, if strong enough, will be perceived by consciousness.
(I think that psychology's subdiscipline Implicit Learning is relevant
to suggest how this aspect may work).

Obviously, if the words were not truncated, these waves would find
one another much faster and the parsing would be smoother. As they are
truncated, it could take a few hundreds of milliseconds more (priming
tests on subjects in laboratory confirm situations similar to this one).

Now after one discovers the correct parse of that phrase, it is time
to perceive that what's missing in those words is constant, the letter
'e'. Then, it is time to conjecture about the *cause* of that and
a stuck keyboard (or a poster trying to make up a point) would be
reasonable explanations.

All this is utterly hypothetical, but I wouldn't be wasting my
time with this stuff if I hadn't enough clues from neuroscience and
cognitive psychology to back me up (the literature about spreading
activation is reasonable and there's at least one approach, that
of William Calvin in his "The Cerebral Code", that uses a comparable
paradigm).

My suspicion? I think that all human cognition (reasoning,
memory recall, analogical reasoning, visual perception,
creativity, etc, etc) work from these basic principles (and just a
few more). My quest is to find a solid, coherent, empirically
testable and grounded set of principles in which to build a
cognitive architecture.

>>The scientific method I'm referring here is not the
>>traditional Popperian view. He is too much against
>>inductive forms of reasoning to be useful to AI.
>>I think it is better to have a mixture in which
>>inductive methods are used to generate tentative
>>hypotheses in a way that allows the confirmation by
>>suitable interactions (questions, for example).
>
>I don't think that Popper had anything against
>induction as a means of *generating* hypotheses.
>It is just that he didn't give any special status
>to induction. A hypothesis formed by induction
>over a finite number of instances should be
>tested in the same way that any other hypothesis
>is.
>

That may be true, I may have an image of Popper in my mind
that may be too harsh with him. But for me, there's only
two ways for coming up with hypotheses: you make one
tentative induction over some observations or you pick an
incomplete pattern in your mind and search for something that
can turn that pattern into a complete unit.

That means that we're pattern completion machines. I think
our scientific curiosity is an impulse to fill incomplete
patterns in our minds. Obviously, in the moment we fill one
pattern, another one(s) (incomplete) will appear, prompting
us to restart the process all over again. How boring life
would be if it wasn't this way.

>>The main goal is to protect the AI system from mystical,
>>pseudo-scientific things such as, for instance, astrology.
>
>I think it is much more than that. In any example of
>learning, for example, learning how to speak a new language,
>the learner must constantly make new hypotheses about
>the subject he is learning, and must constantly adjust
>those hypotheses in the light of new information. I think
>that some variant of the scientific method is appropriate
>for *any* kind of learning.
>

I am happy with those conclusions, provided that we make a
distinction between tentative models and models that had
a lot of confirmation. The latter should be much less "revisable"
than the former.

As an example, consider the "U" shaped effect in children
learning the past tense of verbs. During the initial phases,
the child learns correctly some irregular verbs (go, went).

After some time, they start perceiving a rule in the regular
verbs (walk, walked) and this leads them to present worse
performance on the already learned items (they say go, goed
and the literature even says of cases like go, wented).
It seems attractive to keep the rule working prioritarily.
It is a painful process to correct this misunderstanding in
children, but once corrected, he/she will not have further
problems.

Regards,
Sergio Navega.

From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: AI and the Scientific Method: (Was Re: AI and Skepticism)
Date: 03 Mar 1999 00:00:00 GMT
Message-ID: <7bjnmp$p62@edrn.newsguy.com>
References: <36da9ae5@news3.us.ibm.net> <7bhpn8$fka@edrn.newsguy.com> <36dd5d5c@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy

Sergio Navega says...

[stuff deleted]

Sergio, I don't have time to respond to the points you made, but
I just wanted to say how happy I am that someone is discussing
something interesting and real in comp.ai.philosophy, as opposed
to the usual tedious and petty arguments.

For now, however, I just want to say that I am curious as to what
your response would be to the claim

>in order to perceive a pattern you must have some
>sort of similarity criterion already present

Certainly, the pattern recognizers that we know how to implement
(such as neural nets) have a built-in notion of similarity of
patterns. (This notion prevents one-layer neural nets from
being able to recognize fairly simple patterns, such as the
exclusive-or.)

Daryl McCullough
CoGenTex, Inc.
Ithaca, NY

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: AI and the Scientific Method: (Was Re: AI and Skepticism)
Date: 03 Mar 1999 00:00:00 GMT
Message-ID: <36ddb6b3@news3.us.ibm.net>
References: <36da9ae5@news3.us.ibm.net> <7bhpn8$fka@edrn.newsguy.com> <36dd5d5c@news3.us.ibm.net> <7bjnmp$p62@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 3 Mar 1999 22:24:51 GMT, 200.229.243.237
Organization: SilWis
Newsgroups: comp.ai.philosophy

Daryl McCullough wrote in message <7bjnmp$p62@edrn.newsguy.com>...
>Sergio Navega says...
>
>[stuff deleted]
>
>Sergio, I don't have time to respond to the points you made, but
>I just wanted to say how happy I am that someone is discussing
>something interesting and real in comp.ai.philosophy, as opposed
>to the usual tedious and petty arguments.
>

Believe me, I'm also happy for having the opportunity to
exercise my ramblings.

>For now, however, I just want to say that I am curious as to what
>your response would be to the claim
>
>>in order to perceive a pattern you must have some
>>sort of similarity criterion already present
>
>Certainly, the pattern recognizers that we know how to implement
>(such as neural nets) have a built-in notion of similarity of
>patterns. (This notion prevents one-layer neural nets from
>being able to recognize fairly simple patterns, such as the
>exclusive-or.)
>

You will recall that that was a claim I've heard being uttered
by philosophers. I don't have such beliefs.

Their doubt is that to judge something similar to another
(say, two groups of spikes entering a neural network) you
must have some kind of comparison scheme, where you discard
certain characteristics in favor of others. But the center
of the doubt is another. Lets pretend that a philosopher
such as Wittgenstein were here and made the following statement:

  How would you explain the way we build "concepts"? Concepts,
  we should agree, are things that must fit into a single
  category, a "dog" is a concept where fido, the dog, is just
  one exemplar. How are you able to distinguish a dog from
  a cat without using some sort of similarity criteria? Ok,
  you may have learned what are those criteria, but to learn
  them you would have to have used other criterias to group
  each accessory concept (furryness, meowiness, etc) and
  those concepts would also need some kind of similarity
  criteria and this would go on recursively. Where will we
  stop? On universal, innate knowledge, obviously.

This is my problem with the philosopher's point of view.
Something on these lines is also used to try to justify
innateness of language (grammar in particular). These
arguments don't make sense to me, mostly because I see a
way to get out of the loop without innate knowledge.
Interestingly, this way out have a lot to do with the
general mechanisms of intelligence.

Pretend we're trying to explain how someone recognizes an
object such as a pencil. To simplify, lets assume it is
a black pencil over a white background. Our eyes capture the
image and send, through the optic nerve, to the lateral
geniculate nucleus. This is what I call innate mechanism,
something that was there when we're born. And there, several
kinds of processing happen. Later, on another part of this
circuit we find detectors of lines, movement, color, etc.

This level provides to our brain the necessary elements to
be compared. Then, I may learn that a pencil is something
with two large vertical edges (my feature detectors tell me
what is an edge and what is "vertical") and a beak composed
of two lines that meet in a region of dark color and so on.
These are the *primitives* that our brain uses to make
all sort of comparisons and classifications. It is not
strange, for me, to think of a *symbolic* representation
of some sort that codes these essential primitives in
such a way that a symbolic pattern machine could go on
from here (but this is another story...)

Without those feature detectors, we would have to do our
neural networks care about the statistical processing of the
input signal. This is a complex problem, something that
would add an unnecessary level of complexity to our brain.
And would probably have extinguished our species.

But then, who designed those feature detectors? Evolution did!
Nature had billions of years to come up with an eye, and our
vision is not so different from that of a mouse. This happens
because we (humans and mouse) live in pretty much the same
world (comparable lighting conditions, same atmosphere, same
noise level, same gravitational pull, etc). These sensory
mechanisms had to be similar and they evolved to simplify
the work that the brain of the animal had to do over it, so
that the brain wouldn't have to concern with statistical
aspects of the input signal.

When an animal is in a forest, it does not have time to
think what to do if a predator jumps in. The animal must
perceive movement as fast as possible. There's no time
to process statistically a moving image. That's why nature
evolved motion detectors and that's why evolution put
greater sensibility in our peripheral vision (when looking
to a 60hz monitor in front of you, you may not perceive
its flickering, but if you look elsewhere in a way as
to leave the monitor in your peripheral visual area, you'll
perceive the flicker).

So the great problem of the philosophers (where do concepts
come from) is easily solved once we understand that evolution
did the hard work and left our senses ready to produce data
for a mechanism interested only in patterns.

Regards,
Sergio Navega.

From: Christoffer Vig <christoffer.vig@filosofi-stud.uio.no>
Subject: Re: AI and the Scientific Method: (Was Re: AI and Skepticism)
Date: 04 Mar 1999 00:00:00 GMT
Message-ID: <36DE7EF9.EFDD34C2@filosofi-stud.uio.no>
Content-Transfer-Encoding: 7bit
References: <36da9ae5@news3.us.ibm.net> <7bhpn8$fka@edrn.newsguy.com> <36dd5d5c@news3.us.ibm.net> <7bjnmp$p62@edrn.newsguy.com> <36ddb6b3@news3.us.ibm.net>
X-Accept-Language: en
Content-Type: text/plain; charset=us-ascii
Organization: Crazy Horse
Mime-Version: 1.0
Reply-To: christoffer.vig@filosofi-stud.uio.no
Newsgroups: comp.ai.philosophy

Sergio Navega wrote:
>
> Daryl McCullough wrote in message <7bjnmp$p62@edrn.newsguy.com>...
> >Sergio Navega says...
> >>in order to perceive a pattern you must have some
> >>sort of similarity criterion already present
> >
> >Certainly, the pattern recognizers that we know how to implement
> >(such as neural nets) have a built-in notion of similarity of
> >patterns. (This notion prevents one-layer neural nets from
> >being able to recognize fairly simple patterns, such as the
> >exclusive-or.)
> >
>
> You will recall that that was a claim I've heard being uttered
> by philosophers. I don't have such beliefs.
>
> Their doubt is that to judge something similar to another
> (say, two groups of spikes entering a neural network) you
> must have some kind of comparison scheme, where you discard
> certain characteristics in favor of others. But the center
> of the doubt is another. Lets pretend that a philosopher
> such as Wittgenstein were here and made the following statement:
>
>   How would you explain the way we build "concepts"? Concepts,
>   we should agree, are things that must fit into a single
>   category, a "dog" is a concept where fido, the dog, is just
>   one exemplar. How are you able to distinguish a dog from
>   a cat without using some sort of similarity criteria? Ok,
>   you may have learned what are those criteria, but to learn
>   them you would have to have used other criterias to group
>   each accessory concept (furryness, meowiness, etc) and
>   those concepts would also need some kind of similarity
>   criteria and this would go on recursively. Where will we
>   stop? On universal, innate knowledge, obviously.
>
> This is my problem with the philosopher's point of view.
> Something on these lines is also used to try to justify
> innateness of language (grammar in particular). These
> arguments don't make sense to me, mostly because I see a
> way to get out of the loop without innate knowledge.
> Interestingly, this way out have a lot to do with the
> general mechanisms of intelligence.
>
> Pretend we're trying to explain how someone recognizes an
> object such as a pencil. To simplify, lets assume it is
> a black pencil over a white background. Our eyes capture the
> image and send, through the optic nerve, to the lateral
> geniculate nucleus. This is what I call innate mechanism,
> something that was there when we're born. And there, several
> kinds of processing happen. Later, on another part of this
> circuit we find detectors of lines, movement, color, etc.
>
[snipped!] (i had to)
> So the great problem of the philosophers (where do concepts
> come from) is easily solved once we understand that evolution
> did the hard work and left our senses ready to produce data
> for a mechanism interested only in patterns.



Thank you for putting your arguments clearly. I am a philosophy student,
and have come to the conclusion (open for revision however) that there
are two kinds of philosophers: Platonists and Aristotelians
(peripatetics). (Many hold views that are a mixture of the two). Innate
concepts are a platonic way of explaining thought. Aristotelians are
empirical, there is nothing in the intellect that has not its origin in
the senses - no innate ideas at all. The intellect is a power to
abstract generality (universals)  from experience. Now what about the
similarity criteria that you speak about? You and most other people
realize that humans are not purely intellectual: they have a body as
well. The kind of philosophers you refer to, are very mentalistic,
whereas it seems to me that you put a too strong emphasis on the
physical side (Im not sure about that). Ok. I shall try to give a short
description of how I think concepts  are formed, based on the views of
Aristotle and Aquinas (although I will not pretend that what I say here
is exactly what these people wrote).

The first thing we experience is being. This sounds very strange. But as
I understand it, our first experience when we are newborn is separating
out a part of reality as significant to ourselves. The first aspect of
reality that is significant is of course: food and comfort =mother. So
we single out being from non-being in the sense significant value from
nonsignificant, mother from everything else around us (as babies). There
is a drive, a passion in the human being, that makes it seek somethings
rather than others, these things after a while receive names - the first
words are often "yes" "no" "mama" expressing attraction or repulsion.

Because we by nature are interested in only certain parts of reality
(those that can be eaten and so on..) we single out and concentrate
attention on the relevant aspects. And this accounts for the similarity
of patterns: Food looks like something that we come to recognize after a
while, we abstract the general features of the edible and call it food.
Pencils are useful in more complex activities of reason, but the basic
mechanism is the same: By natural instinct we concentrate attention to
certain things, these things have similar shapes (and so on) and this
builds up a pattern recognition system.

I guess I dont really object to what you say, except that you generalise
about philosophers - all philosophers are not Platonists. (never forget
that now, OK?)

It would be interesting to read what you think about the general
mechanisms of intelligence.

--

Christoffer Vig

Errare humanum est

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: AI and the Scientific Method: (Was Re: AI and Skepticism)
Date: 04 Mar 1999 00:00:00 GMT
Message-ID: <36ded771@news3.us.ibm.net>
References: <36da9ae5@news3.us.ibm.net> <7bhpn8$fka@edrn.newsguy.com> <36dd5d5c@news3.us.ibm.net> <7bjnmp$p62@edrn.newsguy.com> <36ddb6b3@news3.us.ibm.net> <36DE7EF9.EFDD34C2@filosofi-stud.uio.no>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 4 Mar 1999 18:56:49 GMT, 166.72.21.2
Organization: SilWis
Newsgroups: comp.ai.philosophy

Christoffer Vig wrote in message <36DE7EF9.EFDD34C2@filosofi-stud.uio.no>...
>Sergio Navega wrote:
>>
>> So the great problem of the philosophers (where do concepts
>> come from) is easily solved once we understand that evolution
>> did the hard work and left our senses ready to produce data
>> for a mechanism interested only in patterns.
>
>Thank you for putting your arguments clearly. I am a philosophy student,
>and have come to the conclusion (open for revision however) that there
>are two kinds of philosophers: Platonists and Aristotelians
>(peripatetics). (Many hold views that are a mixture of the two). Innate
>concepts are a platonic way of explaining thought. Aristotelians are
>empirical, there is nothing in the intellect that has not its origin in
>the senses - no innate ideas at all.

If you're a philosophy student, then you're ahead of me. Philosophy is
for me a pretty new subject and by no means my goal (I'm a physicist
working with AI). But that does not prevent me from keeping some
visions. I'm closer to the Aristotelian view, although I like to
side myself with the british empiricists (Locke, Hume, Bacon). That
leaves me in the position of a skeptical for any kind of innate
knowledge.

> The intellect is a power to
>abstract generality (universals)  from experience. Now what about the
>similarity criteria that you speak about? You and most other people
>realize that humans are not purely intellectual: they have a body as
>well. The kind of philosophers you refer to, are very mentalistic,
>whereas it seems to me that you put a too strong emphasis on the
>physical side (Im not sure about that). Ok. I shall try to give a short
>description of how I think concepts  are formed, based on the views of
>Aristotle and Aquinas (although I will not pretend that what I say here
>is exactly what these people wrote).
>

In fact, our body acts as the first constraints on the establishment
of those comparison criteria. I go a little bit further in saying that
our body (which means, our sensorimotor experiences) provide the ground
in which our cognition is planted. One of my investigation areas is
to search in what extent this grounding can be substituted by some
artificial constructs in order to allow the evolution of cognition
on systems without body. Granted, this is a *very* radical and
hypothetical idea, but I'm collecting good signs that it can be
a worthwhile attempt.

>
>The first thing we experience is being. This sounds very strange. But as
>I understand it, our first experience when we are newborn is separating
>out a part of reality as significant to ourselves. The first aspect of
>reality that is significant is of course: food and comfort =mother. So
>we single out being from non-being in the sense significant value from
>nonsignificant, mother from everything else around us (as babies). There
>is a drive, a passion in the human being, that makes it seek somethings
>rather than others, these things after a while receive names - the first
>words are often "yes" "no" "mama" expressing attraction or repulsion.
>

I'm not sure I understood you here. If you meant that the first thing
we experience (when baby) is being, as a distinct entity from the
rest of the world, then I disagree. Babies take some time until they
recognize themselves as entities distinct from the rest (mother, toys,
etc). But if you mean perception, meaning selective and learned
extraction of information from what's sensed, then I agree.

>
>Because we by nature are interested in only certain parts of reality
>(those that can be eaten and so on..) we single out and concentrate
>attention on the relevant aspects. And this accounts for the similarity
>of patterns: Food looks like something that we come to recognize after a
>while, we abstract the general features of the edible and call it food.
>Pencils are useful in more complex activities of reason, but the basic
>mechanism is the same: By natural instinct we concentrate attention to
>certain things, these things have similar shapes (and so on) and this
>builds up a pattern recognition system.
>

I mostly agree, although I'd say that the pattern recognition system is
functionally already built, only that it is more capable of inferring
more useful knowledge as time passes. Your introduction of the drives
such as hunger is interesting, because we've got to remember that
the baby starts life with two kind of "forces" impinging on its brain:
the external world (mother, milk, toys, gravity, etc) and the
internal world (hunger, thirst, pain, discomfort, curiosity, desire,
etc). I think it is not unwise to consider that our psyche is the
result of the interaction of the brain (considered as being made only
of the neocortex) with both systems simultaneously, and those
strange things that appear to happen to our cognition (like preferences
for a color, taste, etc) may be the result of a perceptual mechanism
*molded* by those (sometimes conflicting) forces.

>
>I guess I dont really object to what you say, except that you generalise
>about philosophers - all philosophers are not Platonists. (never forget
>that now, OK?)

Absolutely right. There are philosophers and philosophers. I have a lot
of names I admire (Dennett, Chalmers, Fodor on occasion, etc). In this
newsgroup I like the texts of Weinstein. In fact, I don't have to agree
with all of them to recognize important and thoughtful points.

Regards,
Sergio Navega.

From: "Kyle Pierce" <kyle.pierce@mci.com>
Subject: Re: AI and Skepticism
Date: 03 Mar 1999 00:00:00 GMT
Message-ID: <YldD2.243$oe3.29429@PM01NEWS>
References: <36da9ae5@news3.us.ibm.net>
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.2106.4
X-Trace: PM01NEWS 920478200 166.37.23.42 (Wed, 03 Mar 1999 16:23:20 GMT)
NNTP-Posting-Date: Wed, 03 Mar 1999 16:23:20 GMT
Newsgroups: comp.ai.philosophy

Sergio Navega wrote in message <36da9ae5@news3.us.ibm.net>...
>
>The main goal is to protect the AI system from mystical,
>pseudo-scientific things such as, for instance, astrology.
>
>This leads me to think about one of the possible tests to
>an AI: to expose the system to astrology as if it were true
>and then see if it "fights" with that idea.
>

To me, what you propose seems similar to something like:  asking someone
with a severe intellectual impairment to intellegently present philosophical
arguments that effectively establish scientific truths in a way that
excludes "pseudoscience".  I say this as a computer scientist with long-term
AI interests, who is also a student of the Western intellectual tradition in
all of its remarkable variety.  In other words, I believe that even
pre-enlightenment thought was intelligent.  I hope this will not mark me as
a heretic.  I enjoy struggling to understand cultures and ways of thinking
that seem very different from my own.

The pertinent philosophical arguments (pertaining to science vs. astrology,
for example) have been made by intelligent people of various persuasions,
over the centuries.  Ok, at this point, I could go on about how very far we
are from having any sort of AI system that could engage in philosophical
argumentation.

Instead, why don't we do a thought experiment in which some simple starting
position is established by making explicit most of the crucial assumptions
of that philosophical position.  My goal here is to learn (and to
communicate) something about the scale of this kind of problem.

Kyle Pierce

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: AI and Skepticism
Date: 03 Mar 1999 00:00:00 GMT
Message-ID: <36dda2aa@news3.us.ibm.net>
References: <36da9ae5@news3.us.ibm.net> <YldD2.243$oe3.29429@PM01NEWS>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 3 Mar 1999 20:59:22 GMT, 166.72.21.94
Organization: SilWis
Newsgroups: comp.ai.philosophy

Kyle Pierce wrote in message ...
>
>Sergio Navega wrote in message <36da9ae5@news3.us.ibm.net>...
>>
>>The main goal is to protect the AI system from mystical,
>>pseudo-scientific things such as, for instance, astrology.
>>
>>This leads me to think about one of the possible tests to
>>an AI: to expose the system to astrology as if it were true
>>and then see if it "fights" with that idea.
>>
>
>
>To me, what you propose seems similar to something like:  asking someone
>with a severe intellectual impairment to intellegently present
philosophical
>arguments that effectively establish scientific truths in a way that
>excludes "pseudoscience".

I couldn't see the resemblance of your point and mine. My idea was to
try to expose the system to a new paradigm and observe what are its
reactions. I would be glad to see the system reacting with the kind
of ingenuity and skepticism that we find in a child. Obviously, if we
are persuasive enough I think that the system will swallow astrology,
or anything nonsensical, for that matter.

But then, when the system is exposed, later, to a critic of astrology
it would have to perceive that it had a bad model and that a lot of
interactions with humans may have a strong incompatible aspect.
I would expect the system to be able to explain to me what are the
arguments of the astrologers and then explain why he (the system) is
not convinced of this position anymore. The explanation capacity and
the ability to keep conflicting theories in one's mind (asserting one
as the "right" and the other as "wrong") is part of what I'd like to
have in such a system.

> I say this as a computer scientist with long-term
>AI interests, who is also a student of the Western intellectual tradition
in
>all of its remarkable variety.  In other words, I believe that even
>pre-enlightenment thought was intelligent.  I hope this will not mark me as
>a heretic.  I enjoy struggling to understand cultures and ways of thinking
>that seem very different from my own.
>

If with pre-enlightenment you want to say "intuitive" and perceptive,
you'll probably find surprising that I agree with you. I see as necessary
that an AI system (or any intelligent system, for that matter) is
capable of using a lot of intuitive correlations (whatever that may
mean) to help in its reasoning process (the meaning of intuition in
such a context is something that I'm yet to discover).

>The pertinent philosophical arguments (pertaining to science vs. astrology,
>for example) have been made by intelligent people of various persuasions,
>over the centuries.  Ok, at this point, I could go on about how very far we
>are from having any sort of AI system that could engage in philosophical
>argumentation.
>
>Instead, why don't we do a thought experiment in which some simple starting
>position is established by making explicit most of the crucial assumptions
>of that philosophical position.  My goal here is to learn (and to
>communicate) something about the scale of this kind of problem.
>

I don't know if this is relevant next to what you've said, but I don't
make a straight correlation between intelligence and skepticism. I'm
currently spending some time studying "skeptic literature" (and laughing
a lot) and it is clear that even intelligent persons may be prey of
pseudo-science (in one story, a group of physics PhDs are easily
deceived by a clever "psychic", which was latter debunked by an
specialist). What I had proposed originally is a *complement* to
an intelligent mechanism: it is the introduction of something in that
architecture that accounts for the "skeptical nature" of the human
skeptics. This sounds difficult (to determine and to implement).

Regards,
Sergio Navega.

From: "Kyle Pierce" <kyle.pierce@mci.com>
Subject: Re: AI and Skepticism
Date: 04 Mar 1999 00:00:00 GMT
Message-ID: <%zCD2.318$oe3.36319@PM01NEWS>
References: <36da9ae5@news3.us.ibm.net> <YldD2.243$oe3.29429@PM01NEWS> <36dda2aa@news3.us.ibm.net>
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.2106.4
X-Trace: PM01NEWS 920581499 166.37.23.42 (Thu, 04 Mar 1999 21:04:59 GMT)
NNTP-Posting-Date: Thu, 04 Mar 1999 21:04:59 GMT
Newsgroups: comp.ai.philosophy

Sergio Navega wrote in message <36dda2aa@news3.us.ibm.net>...
>If with pre-enlightenment you want to say "intuitive" and perceptive,
>you'll probably find surprising that I agree with you. I see as necessary
>that an AI system (or any intelligent system, for that matter) is
>capable of using a lot of intuitive correlations (whatever that may
>mean) to help in its reasoning process (the meaning of intuition in
>such a context is something that I'm yet to discover).

Yes, I am also interested in the important role of intuition.  Actually,
though, by "pre-enlightenment thought" I was referring instead to what may
have been a very different mentality from the one we moderns share (to at
least some degree).  It was certainly a less empirically-minded view, and
one that seems to have been more devoted to the study of first principles.
In fact, this may be a primary distinction between the medieval mind and
that of moderns.  If you read Aquinas, for example, you probably wouldn't
find his thinking especially "intuitive", but you might well be struck by
his lack of empiricism.  Given this failing, why is it that people still
write dissertations on his thinking?  Why is Scholasticism of interest to
anyone at all anymore?  Maybe because we have otherwise lost that whole way
of looking at things?

To me, the failures of prevailing agendas for AI have been profound enough
to call into question even the fundamental empirical bias that we naturally
bring to that work.  Maybe we need to do some radical examination of our
human roots, in acknowledgment of the enormity of our ignorance.  Maybe we
need to get some perspective on our own agendas by a better understanding of
where we have come from.  We might have to go back a long way to achieve
adequate perspective.  And in the process, we might find that there are very
different agendas worth pursuing.

What if our own search space for proper starting points and assumptions is
too limited, and too culture-bound?  We tend to talk as if "AI" could be an
umbrella for almost any ideas about intelligence.  And yet the ideas that
drive most agendas seem to be cut from the same cloth.  These are basically
ideas about creating an infrastructure for cognition from scratch, based on
the modeling of observable cognitive behaviors.  What else is there, we ask,
since this seems to be the limit of the world as we know it.

What if persistent and creative "archeological" work could uncover some sort
of existing infrastructure?  Not that I have any idea what such a thing
could look like, but we just don't know.  Hence we would do well not to
reject out of hand the possibility that such knowledge -- patterns of
language development, whatever -- might be yet undiscovered.  Maybe we don't
have to create an infrastructure for cognition from scratch.  This doesn't
necessarily get into issues of "innate" knowledge, though it certainly
could.  Given the paucity of AI deliverables that we have seen thus far, I
don't mind going out on some long limbs in search of clues!

Kyle Pierce

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: AI and Skepticism
Date: 04 Mar 1999 00:00:00 GMT
Message-ID: <36deffbc@news3.us.ibm.net>
References: <36da9ae5@news3.us.ibm.net> <YldD2.243$oe3.29429@PM01NEWS> <36dda2aa@news3.us.ibm.net> <%zCD2.318$oe3.36319@PM01NEWS>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 4 Mar 1999 21:48:44 GMT, 166.72.29.148
Organization: SilWis
Newsgroups: comp.ai.philosophy

Kyle Pierce wrote in message
>
>Sergio Navega wrote in message <36dda2aa@news3.us.ibm.net>...
>>If with pre-enlightenment you want to say "intuitive" and perceptive,
>>you'll probably find surprising that I agree with you. I see as necessary
>>that an AI system (or any intelligent system, for that matter) is
>>capable of using a lot of intuitive correlations (whatever that may
>>mean) to help in its reasoning process (the meaning of intuition in
>>such a context is something that I'm yet to discover).
>
>
>Yes, I am also interested in the important role of intuition.  Actually,
>though, by "pre-enlightenment thought" I was referring instead to what may
>have been a very different mentality from the one we moderns share (to at
>least some degree).  It was certainly a less empirically-minded view, and
>one that seems to have been more devoted to the study of first principles.
>In fact, this may be a primary distinction between the medieval mind and
>that of moderns.  If you read Aquinas, for example, you probably wouldn't
>find his thinking especially "intuitive", but you might well be struck by
>his lack of empiricism.  Given this failing, why is it that people still
>write dissertations on his thinking?  Why is Scholasticism of interest to
>anyone at all anymore?  Maybe because we have otherwise lost that whole way
>of looking at things?

I confess that I'm uninformed about Aquinas. By your description, if
his writings are not intuitive neither empirically based, then what sort
of ideas are these? Could you give a short example?

>
>To me, the failures of prevailing agendas for AI have been profound enough
>to call into question even the fundamental empirical bias that we naturally
>bring to that work.  Maybe we need to do some radical examination of our
>human roots, in acknowledgment of the enormity of our ignorance.  Maybe we
>need to get some perspective on our own agendas by a better understanding
of
>where we have come from.  We might have to go back a long way to achieve
>adequate perspective.  And in the process, we might find that there are
very
>different agendas worth pursuing.
>

I agree with your stance of looking for new positions to make new
evaluations
of things that we take for granted. But I guess we will differ terribly on
how we choose to do this. The way I'm choosing is understanding enough
the workings of the brain to comprehend why some kinds of reasonings
(even those which I disagree) find a way inside someone else's mind.

>What if our own search space for proper starting points and assumptions is
>too limited, and too culture-bound?  We tend to talk as if "AI" could be an
>umbrella for almost any ideas about intelligence.  And yet the ideas that
>drive most agendas seem to be cut from the same cloth.  These are basically
>ideas about creating an infrastructure for cognition from scratch, based on
>the modeling of observable cognitive behaviors.  What else is there, we
ask,
>since this seems to be the limit of the world as we know it.
>

Your idea, if I grasped it correctly, is exciting. I see now AI as a good
way to experiment with those different visions of the world. In trying to
make an automated "mind" we will be forced to reconsider our most cherished
paradigms. This should happen because the computer will not have the same
kind of fluency with human-like concepts. We will have to "emulate" in
our minds what the computer is doing in order to understand why it is
questioning a principle that we've presented to it. During that process,
I will not be surprised if we find problems in our own basic "axioms".
My god, I'm talking like a philosopher. Is this contagious? :-)

Regards,
Sergio Navega.

From: "Kyle Pierce" <kyle.pierce@mci.com>
Subject: Re: AI and Skepticism
Date: 04 Mar 1999 00:00:00 GMT
Message-ID: <7eED2.322$oe3.36760@PM01NEWS>
References: <36da9ae5@news3.us.ibm.net> <YldD2.243$oe3.29429@PM01NEWS> <36dda2aa@news3.us.ibm.net> <%zCD2.318$oe3.36319@PM01NEWS> <36deffbc@news3.us.ibm.net>
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.2106.4
X-Trace: PM01NEWS 920588291 166.37.23.42 (Thu, 04 Mar 1999 22:58:11 GMT)
NNTP-Posting-Date: Thu, 04 Mar 1999 22:58:11 GMT
Newsgroups: comp.ai.philosophy

Sergio Navega wrote in message <36deffbc@news3.us.ibm.net>...
>Kyle Pierce wrote in message
>Yes, I am also interested in the important role of intuition.  Actually,
>>though, by "pre-enlightenment thought" I was referring instead to what may
>>have been a very different mentality from the one we moderns share (to at
>>least some degree).  It was certainly a less empirically-minded view, and
>>one that seems to have been more devoted to the study of first principles.
>>In fact, this may be a primary distinction between the medieval mind and
>>that of moderns.  If you read Aquinas, for example, you probably wouldn't
>>find his thinking especially "intuitive", but you might well be struck by
>>his lack of empiricism.  Given this failing, why is it that people still
>>write dissertations on his thinking?  Why is Scholasticism of interest to
>>anyone at all anymore?  Maybe because we have otherwise lost that whole
way
>>of looking at things?
>
>
>I confess that I'm uninformed about Aquinas. By your description, if
>his writings are not intuitive neither empirically based, then what sort
>of ideas are these? Could you give a short example?

Forgive me, I was using the term "intuition" inconsistently -- both formally
and informally at once.  In Kant's Critique of Pure Reason, he contrasts
intuition (or direct apprehension) with reason or inference, and he also
contrasts the empirical object with the pure object.  So one can then
distinguish between empirical intuition (as of a sensation), and pure
intuition (as of a mathematical axiom).  That is the formal notion.  Thomas
Aquinas is not a special case, but I will try to find a good quote to
illustrate my point -- though it might take me awhile.  The point being,
that Aquinas often gives substantially more weight to the "pure object"
(e.g., a first principle) than to the empirical object.

I'm glad to hear that philosophical thinking might be contagious!  Wish I
had more time to respond now.

Regards,
Kyle Pierce

From: "Nick Tsocanos" <njt@avana.net>
Subject: Re: AI and Skepticism
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <36df7859.0@news.avana.net>
X-NNTP-Posting-Host: rac7-07.avana.net
References: <36da9ae5@news3.us.ibm.net>
Organization: Avana Communications
Newsgroups: comp.ai.philosophy
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.3110.3

>The main goal is to protect the AI system from mystical,
>pseudo-scientific things such as, for instance, astrology.
>
Why would you want to 'protect' it from these things? If it were truly
intelligent, it should be able to decide itself if these things were
credible or not.

>This leads me to think about one of the possible tests to
>an AI: to expose the system to astrology as if it were true
>and then see if it "fights" with that idea.
>
Ok. But what if decides it likes the idea? Or, the idea even infiltrates
it's entire knowledge base? What if the AI decides to use this idea as one
of it's postulates? What if the computer even believes (gulp) in the
existence of some Super-Being? What if, even further, it decides it IS the
Super-Being?

Further, what IF, the computer can 'see' a true propistion, but not be able
to prove it finitely? Would the AI go insane trying to prove it?

Does the fact that an AI pass the Turing test imply that an AI is equivalent
to a Human in all repects? Does that mean, the AI might be able to be
deceived like a human? The AI might even act irrationally? Have true
emotions? Or AI only in the rational form?

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: AI and Skepticism
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <36dfeab9@news3.us.ibm.net>
References: <36da9ae5@news3.us.ibm.net> <36df7859.0@news.avana.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 5 Mar 1999 14:31:21 GMT, 166.72.21.221
Organization: SilWis
Newsgroups: comp.ai.philosophy

Nick Tsocanos wrote in message <36df7859.0@news.avana.net>...
>>The main goal is to protect the AI system from mystical,
>>pseudo-scientific things such as, for instance, astrology.
>>
>Why would you want to 'protect' it from these things? If it were truly
>intelligent, it should be able to decide itself if these things were
>credible or not.
>

Your point seems correct, but just apparently. I suspect that belief
in astrology or any other "new age" hocus-pocus is largely independent
of intelligence. I'm very close to bright, intelligent people who
are astrologers and believe firmly on what they do.

This makes me think that skepticism is a "state of mind", something
that must be equally learned by either dull or intelligent people.

>>This leads me to think about one of the possible tests to
>>an AI: to expose the system to astrology as if it were true
>>and then see if it "fights" with that idea.
>>
>Ok. But what if decides it likes the idea? Or, the idea even infiltrates
>it's entire knowledge base? What if the AI decides to use this idea as one
>of it's postulates? What if the computer even believes (gulp) in the
>existence of some Super-Being? What if, even further, it decides it IS the
>Super-Being?
>

I guess that any astrologer who buys, in the future, an AI program, will
expect it to "believe" in astrology. If I ever manage to write a good
AI program (sometime in the future...), I know beforehand that astrologers
will not be my primary customers :-)

>Further, what IF, the computer can 'see' a true propistion, but not be able
>to prove it finitely? Would the AI go insane trying to prove it?
>

Nothing the computer (or even we humans) believe today can be proved
definitely. The challenge is to find the perfect balance between the
activities of searching for an specific answer and assuming that it
is not really that significant and the wisest thing to do is to move
our efforts toward another area.

>Does the fact that an AI pass the Turing test imply that an AI is
equivalent
>to a Human in all repects? Does that mean, the AI might be able to be
>deceived like a human? The AI might even act irrationally? Have true
>emotions? Or AI only in the rational form?
>

An AI will never fully understand our human condition. No human can
fully understand the conditions of another human. Each one is a
separate entity, because driven by particular sets of experiences.
The AIs, made of silicon computers, will have an additional level
of difficulty, that of don't "understanding" our biological
conditions and its effects in our psyche.

Regards,
Sergio Navega.

From: "Nick Tsocanos" <njt@avana.net>
Subject: Re: AI and Skepticism
Date: 07 Mar 1999 00:00:00 GMT
Message-ID: <36e23405.0@news.avana.net>
X-NNTP-Posting-Host: ctone2-09.avana.net
References: <36da9ae5@news3.us.ibm.net> <36df7859.0@news.avana.net> <36dfeab9@news3.us.ibm.net>
Organization: Avana Communications
Newsgroups: comp.ai.philosophy
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.3110.3

<snip>
>Your point seems correct, but just apparently. I suspect that belief
>in astrology or any other "new age" hocus-pocus is largely independent
>of intelligence. I'm very close to bright, intelligent people who
>are astrologers and believe firmly on what they do.
>
>This makes me think that skepticism is a "state of mind", something
>that must be equally learned by either dull or intelligent people.
>
I'll buy that!

<snip>

>I guess that any astrologer who buys, in the future, an AI program, will
>expect it to "believe" in astrology. If I ever manage to write a good
>AI program (sometime in the future...), I know beforehand that astrologers
>will not be my primary customers :-)
>
:)

>>Further, what IF, the computer can 'see' a true propistion, but not be
able
>>to prove it finitely? Would the AI go insane trying to prove it?
>>
>
>Nothing the computer (or even we humans) believe today can be proved
>definitely. The challenge is to find the perfect balance between the
>activities of searching for an specific answer and assuming that it
>is not really that significant and the wisest thing to do is to move
>our efforts toward another area.
>
My only true worry is, what if the AI decides it is the super-being because,
it is not limited to the frailties of human condition. That would be a
problem. If the AI were built on skepticism, it would be skeptical as to
what purpose humans serve it!

>>Does the fact that an AI pass the Turing test imply that an AI is
>equivalent
>>to a Human in all repects? Does that mean, the AI might be able to be
>>deceived like a human? The AI might even act irrationally? Have true
>>emotions? Or AI only in the rational form?
>>
>
>
>An AI will never fully understand our human condition. No human can
>fully understand the conditions of another human. Each one is a
>separate entity, because driven by particular sets of experiences.
>The AIs, made of silicon computers, will have an additional level
>of difficulty, that of don't "understanding" our biological
>conditions and its effects in our psyche.
>
I like that. Which makes me wonder, the possibilities of an AI becoming
greater than the humans whom it is suppose to serve.

I would say, then, an AI must be grounded on the unconditional postulate
that, humans must be considered as better than equals, because humans kill
one another. Then the computer AI must be constrained somehow, or it too
will kill us off, I reckon.

If an AI = Human Consciousness, I would suggest, AI is capable of anything a
human is, even genocide.

But this isn't really that bad, because, humans still do this. So maybe an
AI might be the best thing to happen to humans. We'd now have something to
focus on other than ourselves? It wouldn't solve the problem, but shift it
to something else...

Just some observations, anyways.


Back to Menu of Messages               Sergio's Homepage

Any Comments? snavega@attglobal.net