Selected Newsgroup Message

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Hahahaha
Date: 12 Feb 1999 00:00:00 GMT
Message-ID: <36c42790@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36C35972.53CA3705@jhuapl.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 12 Feb 1999 13:07:28 GMT, 166.72.29.191
Organization: SilWis
Newsgroups: comp.ai

james d. hunter wrote in message <36C35972.53CA3705@jhuapl.edu>...
>John Ownby wrote:
>>
>
>[...]
>
> > Sorry for the ramble.  Please don't think I'm taking a jab at
>academia, because  that's not my intention.
> > It's just that we need to get people to understand that this is just
>another  technology, not some
> > mystical form of rocket science.  Just my $.02.
>
>  That's true. AI is not quite brain surgery.

Yes, that's right, AI is not quite brain surgery.

It is *much* worse!

In brain surgery you make slight alterations and incisions (mostly
driven by empirical knowledge) and let the system heal itself,
because of its fantastic self-adaptive nature. In AI, you've got to
get it started *from zero*, which demands knowledge of how intelligence
really works. The problem is that, after all these decades
of research, we've got only minor clues about what makes some piece
of meat (or a bunch of chips) act intelligently.

Regards,
Sergio Navega.

From: "james d. hunter" <jim.hunter@jhuapl.edu>
Subject: Re: Hahahaha
Date: 12 Feb 1999 00:00:00 GMT
Message-ID: <36C500CE.CE6364E2@jhuapl.edu>
Content-Transfer-Encoding: 7bit
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36C35972.53CA3705@jhuapl.edu> <36c42790@news3.us.ibm.net>
Content-Type: text/plain; charset=us-ascii
X-Complaints-To: usenet@houston.jhuapl.edu
X-Trace: houston.jhuapl.edu 918880461 10800 128.244.27.28 (13 Feb 1999 04:34:21 GMT)
Organization: jhu/apl
Mime-Version: 1.0
Reply-To: jim.hunter@spam.free.jhuapl.edu.
NNTP-Posting-Date: 13 Feb 1999 04:34:21 GMT
Newsgroups: comp.ai

Sergio Navega wrote:
>
> james d. hunter wrote in message <36C35972.53CA3705@jhuapl.edu>...
> >John Ownby wrote:
> >>
> >
> >[...]
> >
> > > Sorry for the ramble.  Please don't think I'm taking a jab at
> >academia, because  that's not my intention.
> > > It's just that we need to get people to understand that this is just
> >another  technology, not some
> > > mystical form of rocket science.  Just my $.02.
> >
> >  That's true. AI is not quite brain surgery.
>
> Yes, that's right, AI is not quite brain surgery.
>
> It is *much* worse!
>
> In brain surgery you make slight alterations and incisions (mostly
> driven by empirical knowledge) and let the system heal itself,
> because of its fantastic self-adaptive nature. In AI, you've got to
> get it started *from zero*, which demands knowledge of how intelligence
> really works. The problem is that, after all these decades
> of research, we've got only minor clues about what makes some piece
> of meat (or a bunch of chips) act intelligently.
>

  You're definitely not wrong, but I don't think the situation is
  all that dismal. How intelligence "works" is really not
  based on AI research. It's more based on centuries of
  philosophy, math, science, and engineering.

  We pretty much can already make machines act intelligently.
  They can do specific jobs and tasks as well as or better
  than humans. Really all that's left is "blending" the pieces
  together.

  i.e., a human neurosurgeon of course knows his brain
  theory pretty well, but in a pinch he could probably
  also fix a leg, teach, etc.

  ---
  Jim

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Hahahaha
Date: 13 Feb 1999 00:00:00 GMT
Message-ID: <36c594c8@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36C35972.53CA3705@jhuapl.edu> <36c42790@news3.us.ibm.net> <36C500CE.CE6364E2@jhuapl.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 13 Feb 1999 15:05:44 GMT, 166.72.21.182
Organization: SilWis
Newsgroups: comp.ai

james d. hunter wrote in message <36C500CE.CE6364E2@jhuapl.edu>...
>Sergio Navega wrote:
>>
>> james d. hunter wrote in message <36C35972.53CA3705@jhuapl.edu>...
>> >John Ownby wrote:
>> >>
>> >
>> >[...]
>> >
>> > > Sorry for the ramble.  Please don't think I'm taking a jab at
>> >academia, because  that's not my intention.
>> > > It's just that we need to get people to understand that this is just
>> >another  technology, not some
>> > > mystical form of rocket science.  Just my $.02.
>> >
>> >  That's true. AI is not quite brain surgery.
>>
>> Yes, that's right, AI is not quite brain surgery.
>>
>> It is *much* worse!
>>
>> In brain surgery you make slight alterations and incisions (mostly
>> driven by empirical knowledge) and let the system heal itself,
>> because of its fantastic self-adaptive nature. In AI, you've got to
>> get it started *from zero*, which demands knowledge of how intelligence
>> really works. The problem is that, after all these decades
>> of research, we've got only minor clues about what makes some piece
>> of meat (or a bunch of chips) act intelligently.
>>
>
>  You're definitely not wrong, but I don't think the situation is
>  all that dismal. How intelligence "works" is really not
>  based on AI research. It's more based on centuries of
>  philosophy, math, science, and engineering.
>

>  We pretty much can already make machines act intelligently.
>  They can do specific jobs and tasks as well as or better
>  than humans. Really all that's left is "blending" the pieces
>  together.
>

The programs and machines we have today are nothing more than
"intelligence freezers": they accumulate the intelligence
of their designers, but are not able to produce intelligent
behaviors by themselves. Put any of these machines in a totally
new, unprogrammed situation and their behavior will be
miserable, even if the machine is run for a million years
in a row. That's a dull machine, to me.

Regards,
Sergio Navega.

From: "james d. hunter" <jim.hunter@jhuapl.edu>
Subject: Re: Hahahaha
Date: 13 Feb 1999 00:00:00 GMT
Message-ID: <36C629FE.61689D22@jhuapl.edu>
Content-Transfer-Encoding: 7bit
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36C35972.53CA3705@jhuapl.edu> <36c42790@news3.us.ibm.net> <36C500CE.CE6364E2@jhuapl.edu> <36c594c8@news3.us.ibm.net>
Content-Type: text/plain; charset=us-ascii
X-Complaints-To: usenet@houston.jhuapl.edu
X-Trace: houston.jhuapl.edu 918956542 18066 128.244.27.28 (14 Feb 1999 01:42:22 GMT)
Organization: jhu/apl
Mime-Version: 1.0
Reply-To: jim.hunter@spam.free.jhuapl.edu.
NNTP-Posting-Date: 14 Feb 1999 01:42:22 GMT
Newsgroups: comp.ai

Sergio Navega wrote:
>
> james d. hunter wrote in message <36C500CE.CE6364E2@jhuapl.edu>...
> >Sergio Navega wrote:
> >>
> >> james d. hunter wrote in message <36C35972.53CA3705@jhuapl.edu>...
> >> >John Ownby wrote:
> >> >>
> >> >
> >> >[...]
> >> >
> >> > > Sorry for the ramble.  Please don't think I'm taking a jab at
> >> >academia, because  that's not my intention.
> >> > > It's just that we need to get people to understand that this is just
> >> >another  technology, not some
> >> > > mystical form of rocket science.  Just my $.02.
> >> >
> >> >  That's true. AI is not quite brain surgery.
> >>
> >> Yes, that's right, AI is not quite brain surgery.
> >>
> >> It is *much* worse!
> >>
> >> In brain surgery you make slight alterations and incisions (mostly
> >> driven by empirical knowledge) and let the system heal itself,
> >> because of its fantastic self-adaptive nature. In AI, you've got to
> >> get it started *from zero*, which demands knowledge of how intelligence
> >> really works. The problem is that, after all these decades
> >> of research, we've got only minor clues about what makes some piece
> >> of meat (or a bunch of chips) act intelligently.
> >>
> >
> >  You're definitely not wrong, but I don't think the situation is
> >  all that dismal. How intelligence "works" is really not
> >  based on AI research. It's more based on centuries of
> >  philosophy, math, science, and engineering.
> >
>
> >  We pretty much can already make machines act intelligently.
> >  They can do specific jobs and tasks as well as or better
> >  than humans. Really all that's left is "blending" the pieces
> >  together.
> >
>
> The programs and machines we have today are nothing more than
> "intelligence freezers": they accumulate the intelligence
> of their designers, but are not able to produce intelligent
> behaviors by themselves. Put any of these machines in a totally
> new, unprogrammed situation and their behavior will be
> miserable, even if the machine is run for a million years
> in a row. That's a dull machine, to me.

  That could be true, but unfortunately you have
  also described humans pretty well.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Hahahaha
Date: 15 Feb 1999 00:00:00 GMT
Message-ID: <36c82327@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36C35972.53CA3705@jhuapl.edu> <36c42790@news3.us.ibm.net> <36C500CE.CE6364E2@jhuapl.edu> <36c594c8@news3.us.ibm.net> <36C629FE.61689D22@jhuapl.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MIMEOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 15 Feb 1999 13:37:43 GMT, 200.229.240.162
Organization: SilWis
Newsgroups: comp.ai

james d. hunter wrote in message <36C629FE.61689D22@jhuapl.edu>...
>Sergio Navega wrote:
>>
>> james d. hunter wrote in message <36C500CE.CE6364E2@jhuapl.edu>...
>>
>> The programs and machines we have today are nothing more than
>> "intelligence freezers": they accumulate the intelligence
>> of their designers, but are not able to produce intelligent
>> behaviors by themselves. Put any of these machines in a totally
>> new, unprogrammed situation and their behavior will be
>> miserable, even if the machine is run for a million years
>> in a row. That's a dull machine, to me.
>
>  That could be true, but unfortunately you have
>  also described humans pretty well.

I think I missed your highly philosophical argument here.

Just in case: throw a man onto a desert island. If that island has
enough resources to sustain biological life (fruits, trees, animals),
even a Wall Street yuppie will be able to get along. He/she will be
able to devise a shelter, hunt for fish, gather fruit, and
survive. We are survival machines, and additionally we've got
all the machinery necessary for abstract thinking. I'm not saying that
our computers should be able to do the same sort of things. I'll
be happy if they perceive my daily routine of work and anticipate
some things to help me.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Hahahaha
Date: 18 Feb 1999 00:00:00 GMT
Message-ID: <36cc1ea2@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36C35972.53CA3705@jhuapl.edu> <36c42790@news3.us.ibm.net> <36C500CE.CE6364E2@jhuapl.edu> <36c594c8@news3.us.ibm.net> <36c973d3.0@seralph9>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 18 Feb 1999 14:07:30 GMT, 166.72.21.242
Organization: SilWis
Newsgroups: comp.ai

Kostas Kostiadis wrote in message <36c973d3.0@seralph9>...
>
>>Sergio Navega:
>>The programs and machines we have today are nothing more than
>>"intelligence freezers": they accumulate the intelligence
>>of their designers, but are not able to produce intelligent
>>behaviors by themselves. Put any of these machines in a totally
>
>
>NOT TRUE.  A number of machine LEARNING techniques do
>exist that start from very little and learn from their interaction
>with their environment.  Pretty much like we do...
>

Indeed. I should have said "several of today's machines". My
generalization was unfortunate.

But even learning alone is not a good example of what intelligence
should be. My PKZIP compressor can be said to "learn" things about the
file it is compressing. I guess that arriving at a good conceptualization
of intelligence is really tough. When I said "intelligence
freezers" I was thinking specifically of Expert Systems, which,
oddly, are still one of the "cash cows" of today's AI business.
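The compressor analogy can be made concrete. Here is a minimal sketch (in modern Python, purely for illustration) of an adaptive order-0 model: it "learns" the byte statistics of the file as it scans it, so its estimated cost per byte falls over time, yet nothing it learns carries over to any other file or task.

```python
from collections import Counter
from math import log2

def adaptive_cost(data: bytes) -> float:
    """Estimated bits to encode `data` with an adaptive order-0 model.

    The model 'learns' the byte statistics as it scans, the way an
    adaptive compressor does -- but what it learns never transfers
    to any other file or task.
    """
    counts = Counter()
    total = 0
    bits = 0.0
    for b in data:
        # Laplace-smoothed probability from everything seen so far
        p = (counts[b] + 1) / (total + 256)
        bits += -log2(p)
        counts[b] += 1
        total += 1
    return bits

sample = b"ab" * 8
# cost per byte drops below the naive 8 bits as statistics accumulate
print(adaptive_cost(sample) / len(sample))
```

Repetitive files get cheaper to encode as the model adapts, which is exactly the narrow sense in which such a program "learns".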

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Hahahaha
Date: 12 Feb 1999 00:00:00 GMT
Message-ID: <36c42795@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 12 Feb 1999 13:07:33 GMT, 166.72.29.191
Organization: SilWis
Newsgroups: comp.ai,comp.ai.philosophy

John Ownby wrote in message
<9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net>...
>[snip]
>People who scoff at AI today don't realize that the
>spell checker they take for granted today would have
>been considered AI just a few decades ago.  It's true
>that AI has been  oversold in the past, just the same
>as futurists were predicting 20 hour work weeks before
>the turn of the century 30 years ago!  I don't
>have anything against the academic community, but
>I think the academic community has played a role
>in hampering the acceptance and progress of AI over
>the years via the process of mystifying it with
>modern AI jargon.  It has almost become too painful
>to attempt to translate most AI papers from "AI
>Speak" into something intelligible.
[snip]

I can't say that I agree in full with "spider's" original
criticism (the origin of this thread). But I see some points of
relevance. I don't consider today's spell checkers to be examples
of AI (neither today nor when they were originally developed).

Ask *any* AI researcher what the practical applications of AI
in use today are and they will limit themselves
to reciting the old stuff about neural nets in credit checking or
stock trade prediction, expert systems in hospitals and
configuration/maintenance enterprises, speech recognition
packages, or natural language access to databases. With few
exceptions, all acclaimed AI applications are old hat.
I even doubt that they can really be considered intelligent.

Then you may hear someone saying that when an AI
application leaves the lab for industry it is usually
considered software engineering, and that's why we don't
have good examples of AI. Rubbish! We don't have AI because
*none* of those applications were intelligent in the first
place!

So what is intelligent software? How can we know that,
after leaving the lab, the software will still be considered
intelligent?

In my vision, intelligent software is software that *improves*
its performance automatically the more it runs (exactly
the opposite of what happens with most of the software
classified as AI), just like a baby growing into an adult.

Not merely improvement through *fixed*, previously designed algorithms
(like neural nets in finance), but improvement in which the
software, by itself, detects ways to perform better,
solves problems with methods analogized from past situations,
and develops new heuristics and tests them in the "mind's eye".

It should learn from the user, even without the user's awareness.
It should perceive things that the user does and attempt to
correlate them with its current status (a sort of "sensory perception").
It should often propose new operations to the user and, based
on what the user answers ("ok, do it"; "no, don't do that!"),
adjust its "knowledge" of its environment (the computer it's
running on, the position of the application's open windows,
the user's focus of attention, its native concepts of
what a file is, etc.).

An AI program should, after some time, "know" how its user
works and start doing things for the user. Reasonable things,
because it should have learned what "reasonable" is.

My Outlook Express mail reader is dumb. My Netscape browser
is as intelligent as a door (a locked one ;-). All software I run
today is similarly dull. I want to turn on my computer in the
morning and have it dial in to the Internet and check my
e-mail automatically. Not because I put the mail reader in the
"startup" folder, but because EVERY SINGLE DAY that's the FIRST
thing I do. I can go on for CENTURIES doing this and my computer
will NEVER PERCEIVE it, and will never help me. That's what I
call DUMB software.

Now try to think about *all* the repetitive operations
that you do on a daily basis. Try to think how nice it
would be if an intelligent program could detect
the redundant things you do and correlate them with what you've
done in the past. Try to think of "teaching" the system to
do certain things in certain situations (even
using abstract concepts, if you can "ground" those abstractions
in the system's "sensory perceptions"). Any recognition of this
sort I would easily consider to be the dawn of intelligence.
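The kind of routine perception described above can be sketched in a few lines, assuming a hypothetical log of named user actions (the action names and thresholds here are invented for illustration, not taken from any real assistant):

```python
from collections import Counter

def suggest_routines(log, min_len=2, max_len=4, min_count=3):
    """Count repeated action subsequences and keep the frequent ones.

    `log` is a list of action names in the order the user performed
    them; sequences seen at least `min_count` times are candidates
    for automation.
    """
    grams = Counter()
    for n in range(min_len, max_len + 1):
        for i in range(len(log) - n + 1):
            grams[tuple(log[i:i + n])] += 1
    return [(seq, c) for seq, c in grams.most_common() if c >= min_count]

morning = ["dial_internet", "check_email", "read_news"]
log = morning * 4 + ["write_report"]
for seq, count in suggest_routines(log):
    print(f"seen {count}x: {' -> '.join(seq)}")
```

After four identical mornings, the full dial/check/read sequence surfaces as a candidate routine the program could offer to run by itself.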

AI is a failure today because we have failed to recognize
the *initial* capacities that all intelligent entities must
possess: something like recognition of repeating situations,
grouping of those situations into categories, and elaboration
of (even primitive) causal models. We've spent too much time
thinking about what these things mean for us humans, and
we forget that a dog can, in a limited fashion, do things
like that. A dog is much more intelligent than any of today's
computers. Software able to do this in a reasonable manner
will finally put AI in the hands of all of us and start a new
era for intelligent systems. So it is not that our computers
cannot be intelligent. It is *we* who are being dull in
not recognizing how to do it.
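The first of those initial capacities (recognizing that new situations repeat old ones, and grouping them into categories) can be sketched with a toy "leader clustering" pass; the numeric feature vectors and the distance threshold here are made up for illustration:

```python
def categorize(observations, threshold=1.0):
    """Group situations into categories by distance to a running prototype.

    Each observation is a numeric feature vector; it joins the nearest
    existing category if close enough, otherwise founds a new one.
    """
    categories = []  # each entry: (prototype vector, member count)
    labels = []
    for obs in observations:
        best, best_d = None, None
        for idx, (proto, count) in enumerate(categories):
            d = sum((a - b) ** 2 for a, b in zip(obs, proto)) ** 0.5
            if best_d is None or d < best_d:
                best, best_d = idx, d
        if best is not None and best_d <= threshold:
            proto, count = categories[best]
            # move the prototype slightly toward the new observation
            categories[best] = ([p + (o - p) / (count + 1)
                                 for p, o in zip(proto, obs)], count + 1)
            labels.append(best)
        else:
            categories.append(([*obs], 1))
            labels.append(len(categories) - 1)
    return labels

points = [(0, 0), (0.2, 0.1), (5, 5), (5.1, 4.9), (0.1, 0.2)]
print(categorize(points))  # two categories emerge: [0, 0, 1, 1, 0]
```

This is only the bottom rung of the ladder the paragraph describes; the categories would still need causal models built on top of them.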

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Hahahaha
Date: 13 Feb 1999 00:00:00 GMT
Message-ID: <36c594ce@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36c42795@news3.us.ibm.net> <B0EA919C70D90318.8EA65E9738BE4D61.0D7BD081DF9DF0EE@library-proxy.airnews.net> <36C4C137.802D801F@ils.nwu.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 13 Feb 1999 15:05:50 GMT, 166.72.21.182
Organization: SilWis
Newsgroups: comp.ai

Aaron Khoo wrote in message <36C4C137.802D801F@ils.nwu.edu>...
>
>I think most AI researchers would agree that while learning is one aspect
>of intelligence, it is not the ONE criterion for it. There are people who
>work exclusively on learning systems, but IMHO, there's plenty of other
>problems to solve before we even think about the ability to learn. Sometimes,
>in fact, I think it's arguable whether or not learning is necessarily present
>for a system to appear "intelligent".

Oh, boy, I disagree. I may accept that concentrating efforts just on
learning aspects is not enough. But without learning, we can't have
any kind of intelligent behavior greater than the one presented by
a pocket calculator.

>I'm not going to attempt to define
>Intelligence, since better minds than mine have tried (and not necessarily
>succeeded). However, I fail to see why we're knocking AI research for
>not having succeeded in its ultimate goal, instead of applauding it for the
>intermediate successes that it's accomplished in
>its short lifespan (compared to the physical sciences).
>

AI's lifespan is really short, and we couldn't expect much more.
But we've been thinking about intelligence in humans for centuries.
If one don't want to go as far as Aristotle, we've got to go at
least around 1700, with Locke and Hume. We're thinking about
intelligence for a great time and it is time now to see what's
the real nature of the problem.

>[snip]
>We're making the mistake of judging the current accomplishments of AI research
>against the level of what humans are capable of. It took an Einstein centuries
>after Newton to discover the theory of relativity, yet we expect AI researchers
>to develop HAL in 40 years? Human intelligence is a difficult, almost mystical
>thing, involving areas spanning psychology to computer science to neuroscience.
>And we're expecting researchers to replicate it in a few short decades? Many
>people fail to appreciate just how difficult it is even to deal with the simplest
>

This is part of the problem of AI research. Scientists tried to
make computers as intelligent as adults and compared them with
the kind of reasoning a mathematician does when proving
a theorem. This is nonsense. Things ought to start at the "baby"
level.

And why do I say so? Because intelligence is what makes
a baby evolve into an adult. Without that mechanism, adults
will not reason correctly when faced with a new problem in
their lives (which is much like what a baby has to
face during its initial years of life). You see, most of
what we do on a daily basis (perception, recognition, problem
solving, analogical reasoning, etc.) are operations that
a baby uses to acquire knowledge from the world.
Starting a system with preloaded knowledge (such as the CYC
project) is just a way of saying that the designers of
the program are intelligent, not the machine.

>things we take for granted. If you grew up in America, nobody thinks you're
>smart if you understand English, but Natural Language Processing systems
>are still clumsy at best (and that's being kind). Such simple things as object
>recognition, learning to walk, common sense understanding, etc, these are
>all really hard. Ironically, it is the problems that humans find difficult (and
>hence tend to be well defined) that tend to be easier for computers to deal with.
>
>Give AI a break. It IS a relatively young field (and so is the rest of the
>computer
>science) compared to many other fields of science. Sure, there'll be stumbles
>and falls along the way (just like every other field). But there's definite
>legitimacy
>and accomplishment in what has been done. Let's give credit where credit is
>due.
>

On the net results, I think I agree. We've had so many brilliant minds
thinking about this issue in the last four decades that we may say that
huge progress has been made. We ought to continue that tradition, but
without freely accepting everything that was said in the past. I think
we ought to be skeptical of every single bit of information that came
from AI research. That's the way not only to understand it better, but
also to eventually perceive where it went wrong.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Hahahaha
Date: 13 Feb 1999 00:00:00 GMT
Message-ID: <36c594cc@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36c42795@news3.us.ibm.net> <B0EA919C70D90318.8EA65E9738BE4D61.0D7BD081DF9DF0EE@library-proxy.airnews.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 13 Feb 1999 15:05:48 GMT, 166.72.21.182
Organization: SilWis
Newsgroups: comp.ai

John Ownby wrote in message ...
>
>"Sergio Navega" <snavega@ibm.net> wrote:
>>
>>In my vision, intelligent software is the one that *improves*
>>its performance automatically, the more it runs (that's exactly
>>the opposite of what happens with most of the softwares
>>classified as AI), just like a baby when growing to adult.
>(snip)
>>
>You make a good point.  My definition of AI, however, is a little
>broader.  To me AI is the replication of some human attribute.
>Learning is one, but there is also reasoning, motion, speaking,
>and hearing.  Of all these, learning is probably one of the tougher
>ones.  All of my AI work has been targeted toward replicating the
>human reasoning process.  And, although considered crude by
>some standards, a rule based approach has worked well for most of
>the applications I have developed thus far.
>

I think lots of researchers share your vision of what AI should be.
I don't. AI should concern itself with the "platform independent"
aspects of intelligence. If we take human behaviors as our goal,
then we will certainly fail to succeed. The reason is that most
of what we know about human-like intelligence points toward a
*strong* interaction with the world. Without that level of interaction,
without the "particulars" of biologically human actions on the
world, machines will not be able to have comparable intelligence.

However, there's nothing to stop us from having a "machine-like"
intelligence. That's the concept we should focus on: what are the
essential mechanisms necessary to transform an entity into
an intelligent entity. We must find the basic principles.

>What I've found is that replicating the reasoning process is generally
>very difficult, but it is doable -- just requires tons and tons of attention
>to a great deal of detail.  I tried to explain this a couple of weeks ago
>to a group of international agricultural experts.  I told them that it's
>like tying your shoes.  You can tie your shoes, you can show someone
>how to tie their shoes, but it gets really hard when you try to build
>a system that's designed to tell someone how to tie shoes!  It's doable,
>but difficult.
>

I'd say that it is very, very difficult. But doesn't that seem
strange? Any five-year-old kid can be taught how to do it. And
riding a bicycle? Once you learn, it is so easy. The point is
that most of the things we learn during these activities are not
logical or propositional expressions, but sensorimotor patterns.
And most of our abstract conceptualizations are *grounded* in those
patterns. Human cognition is utterly dependent on these patterns.
A machine will never understand most of our metaphors unless
it has some kind of similar grounding.

>I suspect machine learning is like that --- certainly doable, but also
>certainly difficult.  But, the key word here is doable!
>

I'm also an optimist. The question is that there are so many ways
to pursue this goal that we can easily hit a dead end (in fact,
that's what has been happening with AI since its beginning). So what's
the solution? In my humble opinion: to study human cognition and
neurobiology, and extract from these disciplines the fundamental
aspects of intelligence. Then we'll be able to reproduce them
in machines.

>And you're right about once an AI application gets into production it
>no longer being considered AI but "software engineering."  Neena Buck
>wrote a great article in the January 13, 1997 edition of Computerworld.
>The title of the article was, "Just Don't Call It AI."  It's a great article,
>even though they degraded it by putting my picture on the first page...
>
>People aren't quite as afraid of software reengineering as they are of
>AI!  Go figure....
>
>John Ownby

I guess that one good way to really test whether some software can
be called AI is to see if its users still think the software is
intelligent after using it for, say, six months. An intelligent
entity will show on a daily basis why it can still be called
intelligent.

Regards,
Sergio Navega.

