From: "Sergio Navega" <snavega@attglobal.net>
Subject: Re: Emotion in Robots
Date: 11 Jan 2000 00:00:00 GMT
Message-ID: <387b9738_4@news3.prserv.net>
References: <387A4B75.4995919@infolink.com.br> <85dpv2$nko@ux.cs.niu.edu> <85e8du$1cg4$1@newssvr04-int.news.prodigy.com> <85fort$r94@ux.cs.niu.edu> <387B80C2.CCE1AFF4@cruzio.com>
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: abuse@prserv.net
X-Trace: 11 Jan 2000 20:48:56 GMT, 32.101.186.24
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

andi babian wrote in message <387B80C2.CCE1AFF4@cruzio.com>...
>
>Neil W Rickert wrote:
>>
>> When we get to the stage that desktop computers have 10^99 bytes of
>> memory, and operate at a speed of 10^99 MHz, our current methods will
>> prove to be no more successful at solving the AI problem than they
>> are today.
>>
>> The problem is not one of cost.  The problem is with the naive notion
>> that intelligence = logic.
>
>Well what is it then?  And why won't bigger, faster computers be helpful?
>Is it that the theory is wrong but current computers are adequate, or
>would a faster computer work given a suitable theory, or are computers
>the wrong type of thing to handle intelligence?  Are computers not suitably
>'universal' in some way?
>

The issue is really thought-provoking, and I guess each one of us will
have a different answer to it. I could see the question from several
angles (probably all of them non-consensual), but I'll choose just one.

If one proposes that intelligence is logic, then what one is saying is
that intelligence is deduction. There are dozens of ways to criticize
this, but let's just take the 'memory storage' view. A deductive scheme
will try to see the world through a series of intertwined
antecedent/consequent pairs. Each "thing" that happens in the world
would have to be "translated" into a series (possibly a huge number)
of propositions, along with logical relations among them.
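
To make the point concrete, here is a minimal sketch (my illustration, not anything from the post) of such a deductive scheme: world "things" encoded as propositions, rules as antecedent/consequent pairs, and forward chaining to derive consequences. The predicates (`block`, `on`, `above`, `supported_by`) are invented for the example.

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules (antecedents -> consequent) until no new
    proposition can be derived, then return all derived facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            # Fire a rule only if all its antecedents already hold
            # and its consequent is new.
            if consequent not in facts and all(a in facts for a in antecedents):
                facts.add(consequent)
                changed = True
    return facts

# Even a trivial two-block scene already takes several propositions.
facts = {"block(a)", "block(b)", "on(a, b)"}
rules = [
    ({"on(a, b)"}, "above(a, b)"),
    ({"above(a, b)", "block(b)"}, "supported_by(a, b)"),
]
print(forward_chain(facts, rules))
```

Note how every fact about the scene must be spelled out in advance; nothing here observes the world or invents a rule.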

Unfortunately, when one is dealing with 'block worlds', simplified
worlds with few (and known) rules, it's easy to devise a program
that behaves appropriately. I say unfortunately, because this is
misleading: it drives the researcher into thinking that this
process is scalable. It isn't. The growth factor of the number
of rules necessary to perform correctly is certainly of
exponential order.
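
A back-of-the-envelope illustration of that growth (an assumption of this sketch, not a derivation from the post): if a world is described by n independent boolean properties, there are 2**n distinct states, and a rule system that must respond correctly to each combination needs coverage that grows at the same exponential rate.

```python
def world_states(n_properties: int) -> int:
    """Number of distinct world states for n independent boolean properties."""
    return 2 ** n_properties

# A 'block world' with 3 properties is easy; a natural domain with
# dozens of relevant properties is not.
for n in (3, 10, 30):
    print(n, world_states(n))
```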

But that's not the biggest problem. The big issue is that the
entity, when facing the natural world, *does not know* the rules
of this world. It must discover them before it can use them in
deductions! And deduction itself is useless for this purpose.
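
A toy illustration (again my sketch, not the author's proposal) of why rule *discovery* is a different process from deduction: candidate rules can only be induced from observed (situation, outcome) pairs, for instance by counting regularities. The observations and threshold below are invented for the example.

```python
from collections import Counter

def induce_rules(observations, min_support=2):
    """Return (condition, outcome) pairs seen at least min_support times.
    Induction goes from particular observations to general rules --
    the opposite direction from deduction."""
    counts = Counter(observations)
    return {pair for pair, n in counts.items() if n >= min_support}

obs = [
    ("drop(glass)", "breaks(glass)"),
    ("drop(glass)", "breaks(glass)"),
    ("drop(ball)", "bounces(ball)"),  # seen once: not yet a rule
]
print(induce_rules(obs))
```

Only after a regularity has been induced this way could a deductive scheme start using it as an antecedent/consequent pair.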

So if intelligence were deduction, brains would perform slower the
more they knew, not better. But that's not what happens: experts
solve problems (in general) much faster and more reliably than
novices. Brains don't use logic (in case anyone still doubts this,
consult any chauvinist pig; I bet he would say a bit more about this
in relation to women's brains ;-)

Regards,
Sergio Navega.

