Selected Newsgroup Message

Dejanews Thread

From: "Kostas Kostiadis" <kkosti@essex.ac.uk>
Subject: Re: Hahahaha
Date: 16 Feb 1999 00:00:00 GMT
Message-ID: <36c97540.0@seralph9>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36c42795@news3.us.ibm.net>
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.3155.0
X-Trace: 16 Feb 1999 13:40:16 GMT, 155.245.163.106
Newsgroups: comp.ai,comp.ai.philosophy

From: "Sergio Navega" <snavega@ibm.net>
>In my vision, intelligent software is software that *improves*
>its performance automatically the more it runs (that's exactly
>the opposite of what happens with most of the software
>classified as AI), just like a baby growing into an adult.
>

This is the EXACT definition behind all machine learning
techniques...  I have personally built software that improves
with experience...

If people misclassify some software as AI for various reasons,
that does not mean that AI software does not exist.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Hahahaha
Date: 18 Feb 1999 00:00:00 GMT
Message-ID: <36cc1e9d@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36c42795@news3.us.ibm.net> <36c97540.0@seralph9>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 18 Feb 1999 14:07:25 GMT, 166.72.21.242
Organization: SilWis
Newsgroups: comp.ai,comp.ai.philosophy

Kostas Kostiadis wrote in message <36c97540.0@seralph9>...
>>In my vision, intelligent software is software that *improves*
>>its performance automatically the more it runs (that's exactly
>>the opposite of what happens with most of the software
>>classified as AI), just like a baby growing into an adult.
>>
>
>
>This is the EXACT definition behind all machine learning
>techniques...  I have personally built software that improves
>with experience...
>
>If people misclassify some software as AI for various reasons,
>that does not mean that AI software does not exist.
>
>

Kostas, you're right to point out the insufficiency of definitions
of intelligence based only on learning. And I think I can
agree with the good results of some ML programs. What I doubt
is that those approaches represent valid steps toward *general*
forms of intelligence (like ours). Unfortunately, a system that
does well in one domain generally fails miserably when put in
another domain.

Regards,
Sergio Navega.

From: "Kostas Kostiadis" <kkosti@essex.ac.uk>
Subject: Re: Hahahaha
Date: 19 Feb 1999 00:00:00 GMT
Message-ID: <36cd5b30.0@seralph9>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36c42795@news3.us.ibm.net> <36c97540.0@seralph9> <36cc1e9d@news3.us.ibm.net>
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.3155.0
X-Trace: 19 Feb 1999 12:38:08 GMT, 155.245.163.106
Newsgroups: comp.ai,comp.ai.philosophy

Sergio Navega wrote in message <36cc1e9d@news3.us.ibm.net>...
>
>Kostas, you're right to point out the insufficiency of definitions
>of intelligence based only on learning. And I think I can
>agree with the good results of some ML programs. What I doubt
>is that those approaches represent valid steps toward *general*
>forms of intelligence (like ours). Unfortunately, a system that
>does well in one domain generally fails miserably when put in
>another domain.
>

Unfortunately this generality has not been reached yet...  This will
take some time.  However, you have to keep in mind that certain
systems are built for certain purposes.  The performance of a
system is measured for the environment it has been designed for.

You can't use a boat on a motorway, in the same way that you
can't use a car in the ocean.  One has been designed for the
road, one has been designed for the sea.  (James Bond vehicles
are excluded from this example :-) ).

The generality that you are referring to, i.e. a system that can
perform well in many different domains, is still far from possible.
We've got a long way to go before we can build Mr. Data.

However, there are certain domains that allow portability of
higher-level designs to other domains.  The domain I am working on
is such a domain.  Part of the project involves implementing
a higher-level decision mechanism for a complex, dynamic,
hostile environment.  I know that another university working
on the same project is porting their higher-level decision mechanism
to an automated combat pilot for a project funded by
SAAB Military Aircraft.  This transfer was feasible with
very minor low-level implementation changes.  So I suppose
achieving generality depends on the perspective...
Do you want to build something that will work everywhere (i.e. Mr. Data),
or do you want to design a higher-level set of behaviours that will
work equally well on various different systems?

Kostas.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Hahahaha
Date: 19 Feb 1999 00:00:00 GMT
Message-ID: <36cdbeb1@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36c42795@news3.us.ibm.net> <36c97540.0@seralph9> <36cc1e9d@news3.us.ibm.net> <36cd5b30.0@seralph9>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 19 Feb 1999 19:42:41 GMT, 166.72.29.71
Organization: SilWis
Newsgroups: comp.ai,comp.ai.philosophy

Kostas Kostiadis wrote in message <36cd5b30.0@seralph9>...
>
>Sergio Navega wrote in message <36cc1e9d@news3.us.ibm.net>...
>>
>>Kostas, you're right to point out the insufficiency of definitions
>>of intelligence based only on learning. And I think I can
>>agree with the good results of some ML programs. What I doubt
>>is that those approaches represent valid steps toward *general*
>>forms of intelligence (like ours). Unfortunately, a system that
>>does well in one domain generally fails miserably when put in
>>another domain.
>>
>
>Unfortunately this generality has not been reached yet...  This will
>take some time.  However, you have to keep in mind that certain
>systems are built for certain purposes.  The performance of a
>system is measured for the environment it has been designed for.
>

In a way, that is true. The human brain was "designed" by evolution
to work specifically in the world we live in. What is amazing is that
our brain is very flexible, capable of using knowledge developed
in one area to help solve problems in another area. Then, the
question I'm trying to answer is this: how can we devise learning
mechanisms that can develop fruitfully in other domains?

>The generality that you are referring to, i.e. a system that can
>perform well in many different domains, is still far from possible.
>We've got a long way to go before we can build Mr. Data.
>

Sometimes Mr. Data may represent an unachievable ideal, sometimes
an inspiration. I've heard that Rodney Brooks (Cog robot, MIT) was
very influenced by Arthur C. Clarke's HAL-9000. I try to keep in my
mind the challenge of doing something like that, although in practice
the methods may be very different.

>However, there are certain domains that allow portability of
>higher-level designs to other domains.  The domain I am working on
>is such a domain.  Part of the project involves implementing
>a higher-level decision mechanism for a complex, dynamic,
>hostile environment.  I know that another university working
>on the same project is porting their higher-level decision mechanism
>to an automated combat pilot for a project funded by
>SAAB Military Aircraft.

I've heard something about SOAR being employed in military
navigation systems (something like IFOR-SOAR); is this what you're
referring to? I have a certain admiration for SOAR and also for ACT-R.

> This transfer was feasible with
>very minor low-level implementation changes.  So I suppose
>achieving generality depends on the perspective...
>Do you want to build something that will work everywhere (i.e. Mr. Data),
>or do you want to design a higher-level set of behaviours that will
>work equally well on various different systems?
>

For now, I would be very happy to discover the methods
that allow one system to achieve this reuse of knowledge in different
domains. This seems to call upon *very* general principles, and I'm
hunting with perseverance for what these principles could be.

Regards,
Sergio Navega.

From: juola@mathcs.duq.edu (Patrick Juola)
Subject: Re: Hahahaha
Date: 19 Feb 1999 00:00:00 GMT
Message-ID: <7akj0f$ppm$1@quine.mathcs.duq.edu>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36cc1e9d@news3.us.ibm.net> <36cd5b30.0@seralph9> <36cdbeb1@news3.us.ibm.net>
Organization: Duquesne University, Pittsburgh PA  USA
Newsgroups: comp.ai,comp.ai.philosophy

In article <36cdbeb1@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net> wrote:
>Kostas Kostiadis wrote in message <36cd5b30.0@seralph9>...
>>
>>Sergio Navega wrote in message <36cc1e9d@news3.us.ibm.net>...
>>>
>>>Kostas, you're right to point out the insufficiency of definitions
>>>of intelligence based only on learning. And I think I can
>>>agree with the good results of some ML programs. What I doubt
>>>is that those approaches represent valid steps toward *general*
>>>forms of intelligence (like ours). Unfortunately, a system that
>>>does well in one domain generally fails miserably when put in
>>>another domain.
>>>
>>
>>Unfortunately this generality has not been reached yet...  This will
>>take some time.  However, you have to keep in mind that certain
>>systems are built for certain purposes.  The performance of a
>>system is measured for the environment it has been designed for.
>
>
>In a way, that is true. The human brain was "designed" by evolution
>to work specifically in the world we live in. What is amazing is that
>our brain is very flexible, capable of using knowledge developed
>in one area to help solve problems in another area.

This comes close to being a circular consequence of the various
anthropic principles; what is amazing is that we can do what we
have evolved to be able to do.

Our measure of 'flexibility' is, of course, ourselves -- we don't
notice the things that are *completely* beyond our capacities,
whether physical or mental.  And when we gauge other systems in
terms of their 'flexibility', we implicitly downgrade them when
we have capacity that they don't have, while not giving them
appropriate credit for the things they can do that we can't.

Here again, Deepest Blue is a good example.  Somehow a chess
computer isn't 'flexible' because all it does is play chess better
than any human.  Why aren't we 'inflexible' because we don't have
anything close to its memory and/or attentional capacity?  I could
'easily' develop a system much more flexible than any human by
simply setting up a computer to run random programs for random
lengths of time.  "Eventually," this system will solve any
computable problem.  The only constraint on doing this is the
fact that humans have a seventy-year lifespan.  But doesn't this
mean that we're not as flexible as we thought?
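The "run random programs for random lengths of time" construction is, in effect, dovetailing: interleave the execution of every candidate program so that each one eventually receives arbitrarily many steps, and no single non-halting program blocks the search. A minimal, hypothetical sketch over toy "programs" (the generator functions below are illustrative stand-ins, not a real program enumeration):

```python
from itertools import count

def dovetail(programs, goal, max_budget=50):
    """Dovetailed execution: at budget b, restart each of the first b
    programs and run it for b steps.  Returns the index of the first
    program observed to output `goal`, or None within max_budget.
    Restarting wastes work, but keeps the sketch simple."""
    for budget in range(1, max_budget + 1):
        for i, make_prog in enumerate(programs[:budget]):
            # Run at most `budget` steps of this (possibly infinite) program.
            for _step, out in zip(range(budget), make_prog()):
                if out == goal:
                    return i
    return None

# Toy "programs": zero-argument functions returning (possibly infinite) iterators.
progs = [
    lambda: count(0),       # outputs 0, 1, 2, ...
    lambda: count(0, -1),   # outputs 0, -1, -2, ...
    lambda: iter(int, 1),   # outputs 0 forever (never reaches the goal)
]
print(dovetail(progs, goal=5))  # found by the counting-up program
```

The catch, of course, is exactly the one raised above: "eventually" hides an astronomical constant, which is why this is flexibility only in a degenerate sense.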

        -kitten

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Hahahaha
Date: 20 Feb 1999 00:00:00 GMT
Message-ID: <36ceb2d7@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36cc1e9d@news3.us.ibm.net> <36cd5b30.0@seralph9> <36cdbeb1@news3.us.ibm.net> <7akj0f$ppm$1@quine.mathcs.duq.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 20 Feb 1999 13:04:23 GMT, 129.37.182.179
Organization: SilWis
Newsgroups: comp.ai,comp.ai.philosophy

Patrick Juola wrote in message <7akj0f$ppm$1@quine.mathcs.duq.edu>...
>In article <36cdbeb1@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net>
wrote:
>>
>>In a way, that is true. The human brain was "designed" by evolution
>>to work specifically in the world we live in. What is amazing is that
>>our brain is very flexible, capable of using knowledge developed
>>in one area to help solve problems in another area.
>
>This comes close to being a circular consequence of the various
>anthropic principles; what is amazing is that we can do what we
>have evolved to be able to do.
>
>Our measure of 'flexibility' is, of course, ourselves -- we don't
>notice the things that are *completely* beyond our capacities,
>whether physical or mental.  And when we gauge other systems in
>terms of their 'flexibility', we implicitly downgrade them when
>we have capacity that they don't have, while not giving them
>appropriate credit for the things they can do that we can't.
>
>Here again, Deepest Blue is a good example.  Somehow a chess
>computer isn't 'flexible' because all it does is play chess better
>than any human.  Why aren't we 'inflexible' because we don't have
>anything close to its memory and/or attentional capacity?  I could
>'easily' develop a system much more flexible than any human by
>simply setting up a computer to run random programs for random
>lengths of time.  "Eventually," this system will solve any
>computable problem.  The only constraint on doing this is the
>fact that humans have a seventy-year lifespan.  But doesn't this
>mean that we're not as flexible as we thought?
>

I can agree with that, given our current status. But in the long
term, things may change a bit, when genetic engineering and
anthropomorphic replacement parts become available. This is
somewhat strange, because until now we've been essentially
driven by evolutionary (selfish-gene-like) traits. Using
the intelligence that evolution gave us for the purpose of
propagating our genes, we've designed methods for the
survival of the weak, and we will beat, sooner or later,
our short life-span. Without some crazy robot-like species
to exterminate us, I guess we'll have a pretty good future.

Regards,
Sergio Navega.

From: "Kostas Kostiadis" <kkosti@essex.ac.uk>
Subject: Re: Hahahaha
Date: 22 Feb 1999 00:00:00 GMT
Message-ID: <36d134cb.0@seralph9>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36c42795@news3.us.ibm.net> <36c97540.0@seralph9> <36cc1e9d@news3.us.ibm.net> <36cd5b30.0@seralph9> <36cdbeb1@news3.us.ibm.net>
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.3155.0
X-Trace: 22 Feb 1999 10:43:23 GMT, 155.245.163.106
Newsgroups: comp.ai,comp.ai.philosophy

>I've heard something about SOAR being employed in military
>navigation systems (something like IFOR-SOAR); is this what you're
>referring to? I have a certain admiration for SOAR and also for ACT-R.
>

Yeap, this is true... What I was referring to is a project at Linkoping
University, Sweden.  They are using their own architecture, and they say that
their emphasis on concurrency is one of the main differences between
their system and Soar or dMARS.

You can go to my site:

http://privatewww.essex.ac.uk/~kkosti/

and follow the RoboCup link from there.  You will find tons of bibliography
on RoboCup and the fields it entails (e.g. multi-agent collaboration,
strategy acquisition, real-time planning and reasoning, sensor fusion,
strategic decision making, intelligent robot control, machine learning,
etc.).

If you want any additional info, let me know...
What I've done so far is an adaptive decision mechanism based on
RL (Q-learning), with minimal opponent modelling (i.e. learn from
opponent mistakes, adapt, and exploit weaknesses...).  I am currently
working on parallel computing to improve a real-time agent's responsiveness
(using POSIX threads for the moment...)
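For readers unfamiliar with Q-learning, here is a minimal tabular sketch on a toy chain world (purely illustrative, not the RoboCup agent described above). The core is the standard update Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)): values are learned from experience alone, with no model of the environment.

```python
import random

def q_learning_chain(n_states=4, episodes=2000, alpha=0.5, gamma=0.9,
                     epsilon=0.3, seed=0):
    """Tabular Q-learning on a toy chain world: states 0..n_states-1,
    actions 0 (left) / 1 (right), reward 1.0 for reaching the last state."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]   # q[state][action]
    for _ in range(episodes):
        s = 0
        for _step in range(1000):               # cap episode length
            if s == n_states - 1:               # terminal state reached
                break
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] >= q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: bootstrap on the best next-state value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning_chain()
# Greedy policy in the non-terminal states; "right" (1) leads to the reward.
policy = [0 if q[s][0] >= q[s][1] else 1 for s in range(3)]
print(policy)
```

The same update rule ports across domains; what changes is the state/action representation and the reward signal, which is one reason such higher-level mechanisms transfer as discussed above.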

Cheers,
Kostas.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Hahahaha
Date: 22 Feb 1999 00:00:00 GMT
Message-ID: <36d1cc08@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36c42795@news3.us.ibm.net> <36c97540.0@seralph9> <36cc1e9d@news3.us.ibm.net> <36cd5b30.0@seralph9> <36cdbeb1@news3.us.ibm.net> <36d134cb.0@seralph9>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 22 Feb 1999 21:28:40 GMT, 129.37.182.177
Organization: SilWis
Newsgroups: comp.ai,comp.ai.philosophy

Kostas Kostiadis wrote in message <36d134cb.0@seralph9>...
>
>>I've heard something about SOAR being employed in military
>>navigation systems (something like IFOR-SOAR); is this what you're
>>referring to? I have a certain admiration for SOAR and also for ACT-R.
>>
>
>
>Yeap, this is true... What I was referring to is a project at Linkoping
>University, Sweden.  They are using their own architecture, and they say that
>their emphasis on concurrency is one of the main differences between
>their system and Soar or dMARS.
>
>You can go to my site:
>
>http://privatewww.essex.ac.uk/~kkosti/
>
>and follow the RoboCup link from there.  You will find tons of bibliography
>on RoboCup and the fields it entails (e.g. multi-agent collaboration,
>strategy acquisition, real-time planning and reasoning, sensor fusion,
>strategic decision making, intelligent robot control, machine learning,
>etc.).
>

Thanks for the pointer, I'll have a look at it. Meanwhile, good luck
with your work!

Regards,
Sergio Navega.

