
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Question
Date: 28 Dec 1998 00:00:00 GMT
Message-ID: <36877c7f.0@news3.ibm.net>
References: <3686BB92.AFB85B2D@domain.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 28 Dec 1998 12:41:35 GMT, 129.37.182.41
Organization: SilWis
Newsgroups: comp.ai.philosophy

Christoph Schock wrote in message <3686BB92.AFB85B2D@domain.com>...
>Hi,
>
>a friend of mine and I would like to build an AI system.
>But we aren't informed at all about the current
>solutions, so I would be very glad if someone could
>tell me more about the nature of existing AIs.
>
>I'd like to program a system that can learn simple
>associations between states of its environment. So
>it should perhaps learn to navigate in a simple 2D
>landscape (bitmap). Perhaps it will also be possible
>to teach the intelligence in some way. If it
>understands simple (written) words and can save links
>between them, I could tell the system easy sentences
>about how to behave.
>
>I know my English is terrible, but I hope you understand
>me.
>
>Now, I know something about:
>- LCS systems, which save all thoughts as links between
>  possible inputs and useful outputs. These links can be
>  abstracted quite well.
>- Genetic algorithms, which can't do much more than
>  search for useful values for functions. They can also
>  find a good architecture and weights for neural nets.
>- Neural nets; I understand how they work but not
>  what they are capable of.
>
>What are current AI systems made of? Are they a
>combination of the listed points, or are there more
>"things" in their architecture?
>

Christoph,

AI things are not easy. I suggest that you start with a
well-defined problem and try to solve it before thinking
about more ambitious projects. First of all, forget (for now)
about learning neural nets, genetic algorithms, fuzzy
logic and all that. That learning would fill your mind
with a lot of useless things and leave no room
for the *really* important fundamental concepts.
Here's what I suggest to grasp what these
fundamental concepts are:

Devise a simple two-dimensional world with some walls and
structures. You may put food in one corner. Build a simple
agent with two concerns in mind:
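A sketch of such a world might look like this (this is my own
construction in Python; the names GridWorld-style names WALL, EMPTY,
FOOD and make_world are illustrative, not from the post):

```python
# Hypothetical sketch of the two-dimensional world described above:
# a small grid with border walls and food placed in one corner.

WALL, EMPTY, FOOD = "#", ".", "F"

def make_world(width=10, height=8):
    """Build a grid of cells with walls around the border."""
    grid = [[EMPTY] * width for _ in range(height)]
    for x in range(width):                 # top and bottom walls
        grid[0][x] = grid[height - 1][x] = WALL
    for y in range(height):                # left and right walls
        grid[y][0] = grid[y][width - 1] = WALL
    grid[1][width - 2] = FOOD              # food in one corner
    return grid
```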

a) Innate predispositions
- A need to feed from time to time. This means the
agent always has a goal of feeding itself, but when
its energy is low (meaning it is hungry), this goal
becomes the priority. Add some mechanism of pleasure
and pain.

- Curiosity. The agent should be curious to explore
regions of its world, initially in a random manner,
later in a "thoughtful" manner (checking for unexplored
regions in an internal cognitive map).

- A perceptual system. The agent should have perceptions
such as the distance to the wall in front of it, the smell
of "things" nearby (increasing with proximity to
the smelly object, to help locate food), lighting
conditions, and other sensory inputs.

- A cognitive map. Make the agent assemble in its mind
a "map" of its surroundings. This map could be made of
a series of "snapshots" from its "visual" system. Its vision
would always be looking at a small "window" of this map.
The representation may be simple, capturing only essential
features (positions of lines, planes, etc.).
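The perceptual "window" and the cognitive map could be sketched like
this (an assumed design of mine, not from the post: the agent sees a
3x3 patch around itself and pastes each snapshot into an internal map
of everything it has explored so far):

```python
# Hypothetical sketch: local perception plus a cognitive map built
# from snapshots, with a query for unexplored territory (curiosity).

def perceive(grid, x, y, radius=1):
    """Return the small window of cells the agent can currently 'see'."""
    return {(dx, dy): grid[y + dy][x + dx]
            for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)}

class CognitiveMap:
    def __init__(self):
        self.known = {}          # (x, y) -> last seen cell content

    def integrate(self, x, y, window):
        """Paste the current snapshot into the internal map."""
        for (dx, dy), cell in window.items():
            self.known[(x + dx, y + dy)] = cell

    def unexplored_neighbors(self, x, y):
        """Adjacent cells the agent has never seen -- candidates for
        'thoughtful' exploration."""
        return [(x + dx, y + dy)
                for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                if (x + dx, y + dy) not in self.known]
```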

b) Learning system
- Let the agent store the things its perceptual system
captures. But instead of simply memorizing them,
try to do some "consolidation", grouping similar
things (for example, when it walks toward a wall
and hits its nose on it, feeling pain, it should
remember that pain is caused by heading into a wall).

- A small amount of randomness may be useful (an
influence of about 10%) to give the agent an
escape from "looping conditions".
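One way to sketch this "consolidation" (my interpretation, not the
post's): instead of storing raw episodes, keep counts of which outcome
follows which situation, so repeated pain near walls condenses into a
single association, and the ~10% randomness provides the escape hatch:

```python
import random
from collections import defaultdict

class AssociativeMemory:
    """Hypothetical consolidation sketch: tallies of feature -> outcome."""

    def __init__(self, noise=0.10):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.noise = noise                      # ~10% randomness, as suggested

    def record(self, feature, outcome):
        self.counts[feature][outcome] += 1      # e.g. ("wall_ahead", "pain")

    def expect(self, feature):
        """Most frequent outcome for a feature, with a small random
        chance of no expectation (an escape from looping conditions)."""
        outcomes = self.counts[feature]
        if not outcomes or random.random() < self.noise:
            return None                         # no expectation: act freely
        return max(outcomes, key=outcomes.get)
```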

Interesting experiments:

Put food always in the same place during the "day"
(maximum lighting condition) and in another place during
the "night". The agent should learn where to look for
food when it feels hungry (feels hungry -> moves to
increase the pleasant smell -> finds food).

During the night, put the food in another place. The agent
will be confused (it will try to find food in the known
position) but (due to curiosity) will explore
and find the food's other position. The next day, when it
starts feeling hungry, it should go to the position
of the "night snack". It will not find food there, but
it should remember where it found food during the day.
It should go there and find food. After a few
rounds of this process (about two or three), the
agent should "learn" that under lighted conditions
(day) food stays in one place (represented in its
internal cognitive map) and during the night in another
place.

------------------------------------------------
**a generic algorithm that allows this kind
of learning is the most important thing in your
simulation**
------------------------------------------------
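A minimal version of such a generic inductive algorithm could be as
simple as conditional frequency counting (again my own sketch, and the
coordinates are invented): tally where food was found under each
lighting condition and predict the most frequent place per condition.

```python
from collections import defaultdict

class FoodInducer:
    """Hypothetical induction sketch: condition -> place tallies."""

    def __init__(self):
        self.observations = defaultdict(lambda: defaultdict(int))

    def observe(self, lighting, place):
        self.observations[lighting][place] += 1   # e.g. ("day", (8, 1))

    def predict(self, lighting):
        """Best guess for where food is under this lighting condition."""
        places = self.observations[lighting]
        return max(places, key=places.get) if places else None

inducer = FoodInducer()
for _ in range(3):                 # two or three day/night rounds
    inducer.observe("day", (8, 1))
    inducer.observe("night", (1, 6))
```

After those few rounds, `predict("day")` and `predict("night")` return
different places, which is exactly the day/night rule the experiment
wants the agent to induce.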

You will notice that most intelligent acts that can
be ascribed to this simple agent will be the result
of a similar kind of reasoning: something we call
*induction*, which I think is the first fundamental
aspect of intelligence. If you develop a good algorithm
for this kind of reasoning, be prepared to amaze
yourself and your friends with the agent's "discoveries".

Now, let your imagination run wild and devise different
experiments. With time you should see that you
can "communicate" with the agent by specifying words that
the agent should understand *based on a similar
inductive reasoning*. During this word-learning period,
you will see your agent essentially as a child: you'll
have to repeat the "lessons" several times, and sometimes
it will do all sorts of "strange" things. Learning words
this way is what we can call the "inductive grounding" of
words in the sensory experiences of the agent.
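The same counting trick can sketch this "inductive grounding" (my
interpretation; the words and feature names are invented): whenever a
word is uttered, tally the sensory features active at that moment, and
take a word's meaning to be the feature that co-occurs with it most.

```python
from collections import defaultdict

class WordGrounder:
    """Hypothetical grounding sketch: word -> co-occurring features."""

    def __init__(self):
        self.cooccur = defaultdict(lambda: defaultdict(int))

    def hear(self, word, active_features):
        """One 'lesson': the word is heard while these features are active."""
        for f in active_features:
            self.cooccur[word][f] += 1

    def meaning(self, word):
        """The sensory feature most strongly associated with the word."""
        feats = self.cooccur[word]
        return max(feats, key=feats.get) if feats else None

g = WordGrounder()
for _ in range(3):                 # repeat the lesson, as with a child
    g.hear("food", {"smell_strong", "near_corner"})
g.hear("food", {"smell_strong"})   # food without the corner this time
```

The extra lesson breaks the tie: "food" ends up grounded in the smell,
not the incidental location.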

I wish I had time to do such an experiment; it could be
great fun. Good luck.

Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Question
Date: 29 Dec 1998 00:00:00 GMT
Message-ID: <3688c6b1.0@news3.ibm.net>
References: <3686BB92.AFB85B2D@domain.com> <36877c7f.0@news3.ibm.net> <769itv$609$1@hades.acadiau.ca>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 29 Dec 1998 12:10:25 GMT, 129.37.182.237
Organization: SilWis
Newsgroups: comp.ai.philosophy

David W. Murphy wrote in message <769itv$609$1@hades.acadiau.ca>...
>Sergio Navega <snavega@ibm.net> wrote:
>
>> Christoph,
>
>> AI things are not easy. I suggest that you start with a
>> very defined problem and try to solve it before thinking
>> on more ambitious projects. First of all, forget (for now)
>> about learning neural nets, genetic algorithms, fuzzy
>> logic and all that. This learning would fill your mind
>> with a lot of useless things and will leave no space
>> for the *really* important fundamental concepts.
>> Here's what I suggest to grasp what are these
>> fundamental concepts:
>
>> Devise a simple bidimensional world with some walls and
>> structures. You may put food in one corner. Make a simple
>> agent with two concerns in mind:
>
><chop>
>
>> I wish I had time to do such an experiment, it could be
>> very fun. Good Luck.
>
> I'm just about to start work on something very similar to this. Some other
>factors that may be introduced:
>
>1) the entity's 'hunger' for food keeps it from 'exploring' too far away
>from a known food source.
>

This seems to be a good policy when we're dealing with a single entity.
But it is not if we are devising a colony. Exploration of new and
potentially beneficial resources depends on the existence of "mad"
entities: those that defy this "natural" law and risk venturing
into uncharted territories. Columbus is the example that comes to
my mind.
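In simulation terms (a sketch of my own, with invented parameters): give
most agents a short roaming range tied to their hunger, but seed a small
fraction of "mad" explorers with a much higher curiosity parameter.

```python
import random

def make_colony(n=20, mad_fraction=0.1, seed=42):
    """Hypothetical colony: most agents stay near known food, but a few
    'mad' ones get a high curiosity and a long exploration range."""
    rng = random.Random(seed)
    colony = []
    for i in range(n):
        mad = rng.random() < mad_fraction
        colony.append({"id": i,
                       "curiosity": 0.9 if mad else 0.1,
                       "max_range": 50 if mad else 5})
    return colony
```

Whether the colony ever discovers a distant food source then hinges on
those few high-curiosity outliers.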

>2) modifiers on the entity's actions based on environmental conditions
>(i.e. rain) and have it notice and learn from the positive effect of being
>in a sheltered area when it rains.
>

Good idea. I believe that once you have a good inductive learning
mechanism, all sorts of adaptations to strange environmental
conditions will emerge. This is pretty much what we humans (and
other animals) do for a living.

Regards,
Sergio Navega.

