Selected Newsgroup Message

Dejanews Thread

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Hahahaha
Date: 15 Feb 1999 00:00:00 GMT
Message-ID: <36c8232b@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36c42795@news3.us.ibm.net> <36C5C8D0.504D4DE1@clickshop.com>
X-Notice: should be reported to postmaster@ibm.net
X-MIMEOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 15 Feb 1999 13:37:47 GMT, 200.229.240.162
Organization: SilWis
Newsgroups: comp.ai,comp.ai.philosophy

Seth Russell wrote in message <36C5C8D0.504D4DE1@clickshop.com>...
>Sergio Navega wrote:
>
>> In my vision, intelligent software is the one that *improves*
>> its performance automatically, the more it runs (that's exactly
>> the opposite of what happens with most of the software
>> classified as AI), just like a baby growing into an adult.
>
>Yes I agree: robust intelligence must "improve its performance automatically",
>it must evolve like natural intelligence.  My question is:  what is the
>difference between the environments in which natural and artificial intelligence
>evolved?  Answers: 1) the artificial cybernetic environment is not nearly as
>stable as the natural environment and 2)  the relationship of the entity to its
>environment and its culture is currently far less defined.

Hi, Seth,
I'd say that AI and NI (Natural Intelligence) are different and are the
same.

They are different:
Because of our biological constraints, no robot will ever grasp our way of
seeing the world. Some concepts will never be fully understood by robots,
because they will not have bodies like our own. And the influence of
bodily experiences on intelligence is dramatic.

They are the same:
Both schemes (AI and NI) have the same concern: to create knowledge from
incoming raw data and to interact with the "world". They differ in that
"world". Humans must interact with the real world. Robots may interact
with that same world, although with different perceptions because of
their sensory and mechanical differences.

However, computers may be intelligent and may model a *different*
world, a "cybernetic world" in which they interact with other
computers, with lots of humans at keyboards, and with pre-stored texts.
Those computers will assemble a "world view" that is essentially
different from ours (not comparable even to the one developed by robots).
The question is whether this new "world view" they build will be useful
to us (I think it will).

> In fact it is
>questionable whether there is a culture (parents, previous learned knowledge
>etc.) in which a population could evolve.  So, we are in a situation where   each
>ai entity would need to achieve robust intelligence on its own - well lots of
>luck there - tis not the way human intelligence happened.
>
>So where does that train of thought lead us?  Well I would say, work on the
>environment, make the cybernetic environment parental to the baby growing
>entities, make it a place where their successful mutations can stick and be
>propagated, and create a stable enough space where evolution over time can have
>a chance to take place.  But ai researchers don't  want to hear that message,
>because it means that their individual work efforts are far less important than
>what they collectively agree to do.
>

I see two points here. First, it is reasonable to think of this environment
being assembled in a colossal network such as the Internet. Any intelligent
computer linked to it should be able to get some "knowledge" that may, as
I said, be very useful to us.

But the natural evolution of a network of such machines is a different
matter. It is a matter concerning the propagation of these computers as
"live" entities, as an independent species. I haven't thought enough about
this subject to form an opinion. I'm too concerned with the first
step.

Regards,
Sergio Navega.

From: Seth Russell <sethruss@clickshop.com>
Subject: What is the best environment for a robust ai program?
Date: 16 Feb 1999 00:00:00 GMT
Message-ID: <36CA3BB7.96019DF7@clickshop.com>
Content-Transfer-Encoding: 7bit
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36c42795@news3.us.ibm.net> <36C5C8D0.504D4DE1@clickshop.com> <36c8232b@news3.us.ibm.net>
To: Sergio Navega <snavega@ibm.net>
X-Accept-Language: en
Content-Type: text/plain; charset=us-ascii
Organization: http://www.clickshop.com
Mime-Version: 1.0
Newsgroups: comp.ai,comp.ai.philosophy

Sorry am changing the topic ...

Sergio Navega wrote:

> But the natural evolution of a network of such machines is a different
> matter. It is a matter concerning the propagation of these computers as
> "live" entities, as an independent species. I haven't thought enough about
> this subject to form an opinion. I'm too concerned with the first
> step.

Yep, not a whole lot of discussions have taken place in these groups about such
topics.  The discussions have been primarily limited to the Alife and the mobile
agent factions. But I don't understand how you can design an entity without
understanding the environment in which that entity must live and survive ...
which is why I think you have the cart before the horse.

--
Seth
The Public Domain Knowledge Bank
http://www.clickshop.com/pdkb/pdkb.html
In search of the fabric of artificial mind?
see http://plato.clickshop.com/pdkb/links.html
Thinking about how AI could work?
see http://www.clickshop.com/ai/conjecture.htm
And then on to the AI Jump List ...

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: What is the best environment for a robust ai program?
Date: 17 Feb 1999 00:00:00 GMT
Message-ID: <36cad856@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36c42795@news3.us.ibm.net> <36C5C8D0.504D4DE1@clickshop.com> <36c8232b@news3.us.ibm.net> <36CA3BB7.96019DF7@clickshop.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 17 Feb 1999 14:55:18 GMT, 166.72.29.7
Organization: SilWis
Newsgroups: comp.ai,comp.ai.philosophy

Seth Russell wrote in message <36CA3BB7.96019DF7@clickshop.com>...
>Sorry am changing the topic ...
>
>Sergio Navega wrote:
>
>> But the natural evolution of a network of such machines is a different
>> matter. It is a matter concerning the propagation of these computers as
>> "live" entities, as an independent species. I haven't thought enough about
>> this subject to form an opinion. I'm too concerned with the first
>> step.
>
>Yep, not a whole lot of discussions have taken place in these groups about such
>topics.  The discussions have been primarily limited to the Alife and the mobile
>agent factions. But I don't understand how you can design an entity without
>understanding the environment in which that entity must live and survive ...
>which is why I think you have the cart before the horse.
>

Think about any kind of environment. Say, the one seen by an ant as it
walks on a table. Or the behavior of traffic through an Internet router.
Or the statistical flow of ASCII messages from a newswire feed service.

All of these environments are completely different from one another.
However, you can try to design an intelligent mechanism able to
"extract" what is possible from any of them. I don't
need a full-fledged environment to design an intelligent mechanism.
I need *any* kind of environment, and I need my intelligent
agent to be capable of acting intelligently in that environment, up to
the limit of what the environment is able to provide.

This is not a return to the blocks-world idea. This is a way to see
intelligence in a general way, something that will allow the agent
to extract the *most* from the environment it is immersed in. Once this
mechanism is developed, I bet we can transpose it to another, more
complex environment and go on developing its characteristics much more
easily than by starting directly from the complex one.

Regards,
Sergio Navega.

From: Seth Russell <sethruss@clickshop.com>
Subject: Re: What is the best environment for a robust ai program?
Date: 17 Feb 1999 00:00:00 GMT
Message-ID: <36CB62CA.86B4C3A8@clickshop.com>
Content-Transfer-Encoding: 7bit
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36c42795@news3.us.ibm.net> <36C5C8D0.504D4DE1@clickshop.com> <36c8232b@news3.us.ibm.net> <36CA3BB7.96019DF7@clickshop.com> <36cad856@news3.us.ibm.net>
To: Sergio Navega <snavega@ibm.net>
X-Accept-Language: en
Content-Type: text/plain; charset=us-ascii
Organization: http://www.clickshop.com
Mime-Version: 1.0
Newsgroups: comp.ai,comp.ai.philosophy

Sergio Navega wrote:

> All of these environments are completely different from one another.
> However, you can try to design an intelligent mechanism able to
> "extract" what is possible from any of them. I don't
> need a full-fledged environment to design an intelligent mechanism.
> I need *any* kind of environment, and I need my intelligent
> agent to be capable of acting intelligently in that environment, up to
> the limit of what the environment is able to provide.

Yes, and I am saying that this pattern you call "capable to act intelligently in
this environment" is not a pattern that exists exclusively inside of the agent.
Nor is it a pattern that would generalize instantly to another environment.
Rather it is a *reflection* of the intelligent patterns in a particular
environment inside of an agent.  Such reflections are most easily formed by the
process of evolution and/or parenting from the environment.  But that is just my
intuition, and I can't prove it.  Who knows, maybe I'm full of shit ... maybe
there is this Golden Chalice of AI .... maybe you will find it ....

--
Seth
The Public Domain Knowledge Bank
http://www.clickshop.com/pdkb/pdkb.html
In search of the fabric of artificial mind?
see http://plato.clickshop.com/pdkb/links.html
Thinking about how AI could work?
see http://www.clickshop.com/ai/conjecture.htm
And then on to the AI Jump List ...

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: What is the best environment for a robust ai program?
Date: 18 Feb 1999 00:00:00 GMT
Message-ID: <36cc1e9f@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36c42795@news3.us.ibm.net> <36C5C8D0.504D4DE1@clickshop.com> <36c8232b@news3.us.ibm.net> <36CA3BB7.96019DF7@clickshop.com> <36cad856@news3.us.ibm.net> <36CB62CA.86B4C3A8@clickshop.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 18 Feb 1999 14:07:27 GMT, 166.72.21.242
Organization: SilWis
Newsgroups: comp.ai,comp.ai.philosophy

Seth Russell wrote in message <36CB62CA.86B4C3A8@clickshop.com>...
>Sergio Navega wrote:
>
>> All of these environments are completely different from one another.
>> However, you can try to design an intelligent mechanism able to
>> "extract" what is possible from any of them. I don't
>> need a full-fledged environment to design an intelligent mechanism.
>> I need *any* kind of environment, and I need my intelligent
>> agent to be capable of acting intelligently in that environment, up to
>> the limit of what the environment is able to provide.
>
>Yes, and I am saying that this pattern you call "capable to act intelligently in
>this environment" is not a pattern that exists exclusively inside of the agent.

Yes, that may be so.

>Nor is it a pattern that would generalize instantly to another environment.

Not instantly, but it will ease things a lot. Think of a tribesman from
the African jungle being put in the center of New York (a good script for
a movie). He may have lots of difficulties, but once "pressed" by the
local "authorities", he will climb and walk over cars, hide in buildings,
throw things at policemen; he will be tough to catch. He is reusing things
(ideas, motor patterns, visual abilities, spatial perception, etc.) that
he acquired in the jungle to perform interestingly in a completely
different environment. This is reuse of knowledge in another domain.
This is an intelligent guy.

>Rather it is a *reflection* of the intelligent patterns in a particular
>environment inside of an agent.  Such reflections are most easily formed by the
>process of evolution and/or parenting from the environment.  But that is just my
>intuition, and I can't prove it.  Who knows, maybe I'm full of shit ... maybe
>there is this Golden Chalice of AI .... maybe you will find it ....
>

Please keep your vision of the problem, or else we may forget how
important it is.

Regards,
Sergio Navega.

