Selected Newsgroup Message

From: "Sergio Navega" <snavega@ibm.net>
Subject: Why HLUTs are not intelligent
Date: 13 Mar 1999 00:00:00 GMT
Message-ID: <36ea8c14@news3.us.ibm.net>
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com> <36e92939@news3.us.ibm.net> <7ccpk5$o25$1@nnrp1.dejanews.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 13 Mar 1999 16:02:28 GMT, 129.37.183.236
Organization: SilWis
Newsgroups: comp.ai.philosophy

This is a response to Pierre-Normand Houle's message
<7ccpk5$o25$1@nnrp1.dejanews.com>

Pierre, I must again thank you for your answers; they were
important enough to let me realize another collection of new
points about this subject that I had to address. In fact, I
started commenting on each of your paragraphs when I suddenly
realized what I now think is the "root" of the question
(I already said that, but a tree has several roots!).

So instead of answering your well-put comments, I decided
to rewrite the argument from a new perspective.

That perception now makes me agree with most of your comments
about HLUTs and disagree with the "definition" of the problem
we're in (it's funny how often a discussion boils down
to disagreement over "definitions").

I hope you'll have another dime of patience left to read
what I'll write, and I'll be glad to know whether you agree
that I touched on the important points.

The fundamental point, which I now accept, is that a HLUT such
as the one you're proposing *can produce intelligent behavior,
in any circumstance*. I'm publicly saying this now.

----
*But it cannot be considered to be intelligent*.
----

The subtle semantic difference between these phrases is the *root*
of our dispute, and I really hope to be clear in the next paragraphs.

Presenting intelligent behavior, as Neil Rickert said recently, is
an external aspect, one that is very different from the output of
the HLUT. Although this is the main point Neil uses to criticize
the HLUT thought experiment, I'll do it differently.

*To be intelligent* is not only to present intelligent behavior,
but *also* to grow an intelligent representation of the world
one is in. Is this difference such a big deal? Is this really
such a different concept? Isn't this just a syntactically different
way of saying the same thing? My answer is no.

In my view, that's the essential difference. But to fully
understand what I'm proposing, I'll suggest a more intuitive
way of seeing things.

Can a HLUT capture a Gaussian probability distribution?
It obviously can! And to any desired accuracy! You take
the curve and insert it, point by point, into a table. What
can we say this table captured? It captured the *essence* of
a Gaussian distribution, in such a way as to replicate
the *behavior* of the phenomenon represented by that curve,
given any incoming address.
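
A minimal sketch of that tabulation in Python (the step size,
range, and names are invented for illustration):

    import math

    STEP = 0.01                      # the "desired accuracy"
    table = {}
    for k in range(-500, 501):       # covers x in [-5.0, +5.0]
        x = k * STEP
        table[k] = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

    def density(x):
        # quantize the incoming "address" and read the stored value
        k = max(-500, min(500, round(x / STEP)))
        return table[k]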

Now take all the possible distributions for the way a human
being moves a finger. It is a probability distribution
with a definite shape (probably non-Gaussian) that the HLUT
will be able to simulate to *any desired degree of precision*.

As the person gets older and the stiffness of his finger
muscles alters (changing the probability curve to be modeled),
so does the response of the HLUT, because it uses as its address
not only the current sensory inputs, but also a history of
them, which is enough to address a *different* area of the
HLUT (one which may hold a different probability distribution
for the finger).
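
In code, the point is just that the lookup key is the whole
history, not the latest input alone; a toy sketch with
invented entries:

    # The same current input lands in a different region of the
    # table depending on the history that precedes it.
    hlut = {
        ("grasp",): "quick, precise finger motion",
        ("decades pass", "grasp"): "slower, stiffer finger motion",
    }

    def respond(history):
        return hlut[tuple(history)]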

This is what I have seen from Pierre's previous post,
and this is what makes me agree that such a HLUT will behave
intelligently.

In that same way, any kind of output that a human may
produce can be conveniently reproduced by that HLUT, down
to the utterances a man makes to the puppy dog that did
"that" thing in the living room.

This is all very hard to imagine, but once you get used
to the way mathematicians do induction, it is not
difficult to accept the logical existence of that
table.

----
It is also not difficult to see how that HLUT can
present intelligent behavior comparable to that of the
human being it is simulating.
----

So in this case I may seem to be agreeing with Pierre,
Daryl, Balter and others. But...

The question is whether this HLUT could be said **to be**
intelligent, and my next words will try to refine
what that means.

Thought Experiment
------------------

Suppose we take the HLUT of Sergio to a different universe
(all you guys who are discussing HLUTs cannot say that
this is an "unreasonable" thought experiment!).

Suppose, also, that this universe works with different
laws of physics than the one we're in. As an example,
in this contrived universe, if you drop a rock it will
not fall to the ground; it will rise to a height
of 4 meters and stabilize there (don't ask me why,
it's just the way that universe works!).

It is obvious (I hope you all agree!) that the HLUT
of Sergio in the previous universe (ours) will be *useless*
in this universe. All the laws of physics are different;
none of the "models" the HLUT has correspond to the
circumstances of this new universe. So that HLUT
is useless in this universe.

Ok, I hear you guys saying: no problem, we can find another
HLUT that is the counterpart of Sergio in this new universe.

-----
The point is that Sergio's brain need not be
*altered* to work in this new universe! And that
happens because it is *intelligent*!
-----

The second HLUT will present the intelligent behaviors
of Sergio in the second universe. But it will fail
miserably if put in the first universe. Sergio's brain,
however, will be intelligent in the first, second
and *any other universe* that can be imagined (by the
way, don't even think about joining both HLUTs into a
single one; besides the fact that you could not
differentiate which entry to use, there are *infinitely
many* different universes, and HLUTs are said to be *finite*).
We can think of Sergio's brain working in infinitely
many universes, but not HLUTs.

HLUTs may only provide *intelligent behaviors*, but they
cannot *be intelligent*. Human brains not only provide
intelligent behaviors, but they are *also* considered
intelligent in any environment, because they are
able to *come up with their own models of that universe*,
extracted from experiences and interactions with
that universe. A HLUT can't do that! So a HLUT *is not*
equivalent to a human brain!

Executive Summary
-----------------
Only brains are able to be finite in size and temporal
existence and also to present intelligent behavior in an
infinitely large number of universes.

Regards,
Sergio Navega.

From: modlin@concentric.net
Subject: Re: Why HLUTs are not intelligent
Date: 13 Mar 1999 00:00:00 GMT
Message-ID: <7cej3n$1q0@journal.concentric.net>
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com> <36e92939@news3.us.ibm.net> <7ccpk5$o25$1@nnrp1.dejanews.com> <36ea8c14@news3.us.ibm.net>
Organization: Concentric Internet Services
Reply-To: modlin@concentric.net
Newsgroups: comp.ai.philosophy

In <36ea8c14@news3.us.ibm.net>, "Sergio Navega" <snavega@ibm.net> writes:
[snip]
>Thought Experiment
>------------------
>
>Suppose we take the HLUT of Sergio to a different universe
>(all you guys who are discussing HLUTs cannot say that
>this is an "unreasonable" thought experiment!).
>
>Suppose, also, that this universe works with different
>laws of physics than the one we're in. As an example,
>in this contrived universe, if you drop a rock it will
>not fall to the ground; it will rise to a height
>of 4 meters and stabilize there (don't ask me why,
>it's just the way that universe works!).
>
>It is obvious (I hope you all agree!) that the HLUT
>of Sergio in the previous universe (ours) will be *useless*
>in this universe. All the laws of physics are different;
>none of the "models" the HLUT has correspond to the
>circumstances of this new universe. So that HLUT
>is useless in this universe.
>
>Ok, I hear you guys saying: no problem, we can find another
>HLUT that is the counterpart of Sergio in this new universe.
>
>-----
>The point is that Sergio's brain need not be
>*altered* to work in this new universe! And that
>happens because it is *intelligent*!
>-----

Sorry, Sergio.  You still haven't got the point. No new HLUT
is needed.

Let's assume that you are right, and Sergio's brain can work
in this new universe, ignoring the fact that the different
physical laws will probably mess up his neurons badly...

Now think about it.  How does Sergio's brain know that
something is different in this odd place?

You got it.  He sees things happening differently, he gets
DIFFERENT INPUTS.

By hypothesis, the HLUT has the right responses for all
those different inputs.

Remember, it has entries for EVERY COMBINATORIAL POSSIBILITY
of inputs.  Not just the ones that make sense.  Not just
the ones that could actually happen in this universe.
Every combination of Sergio's sensory inputs is listed.

Please give it up... you really are on the wrong track
when you try to say there are external behaviors which
could distinguish between Sergio and Sergio-HLUT.  There
simply cannot be any such, by the hypothesis that there
is an HLUT.

(As I wrote to you privately, I agree that we shouldn't
define intelligence behaviorally.  But that is an entirely
separate matter, which you can't discuss reasonably until
you stop saying silly things about this dumb HLUT issue.)

Bill Modlin

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Why HLUTs are not intelligent
Date: 15 Mar 1999 00:00:00 GMT
Message-ID: <36ed2b12@news3.us.ibm.net>
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com> <36e92939@news3.us.ibm.net> <7ccpk5$o25$1@nnrp1.dejanews.com> <36ea8c14@news3.us.ibm.net> <7cej3n$1q0@journal.concentric.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 15 Mar 1999 15:45:22 GMT, 200.229.243.133
Organization: SilWis
Newsgroups: comp.ai.philosophy

modlin@concentric.net wrote in message
<7cej3n$1q0@journal.concentric.net>...
>
>Sorry, Sergio.  You still haven't got the point. No new HLUT
>is needed.
>

You're right, Bill. It is the same HLUT. I had again missed the point,
and I thank you for having the patience to show me. That came as
a thunderbolt to me, because I was convinced that it made sense. This
whole story, as silly as it appears, was very useful to me. Some
intuitive concepts of mine needed a refurbishment.

Would it be too much to ask you to read my last thoughts about all
this? They're in the post "What I Think HLUTs Mean", which I guess
pretty much sums up what I've learned about HLUTs, and I would
appreciate any comment you may have.

Regards,
Sergio Navega.

From: rickert@cs.niu.edu (Neil Rickert)
Subject: Re: Why HLUTs are not intelligent
Date: 18 Mar 1999 00:00:00 GMT
Message-ID: <7crbe9$ikv@ux.cs.niu.edu>
References: <36ea8c14@news3.us.ibm.net> <7cej3n$1q0@journal.concentric.net>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy

modlin@concentric.net writes:

Bill gives a clear demonstration of his ideological commitments:

>By hypothesis, the HLUT has the right responses for all
>those different inputs.

In my simple minded way of looking at things, an hypothesis is
something proposed that could be true or false, and is subject to
empirical investigation.

Sergio gave an argument against the truth of this hypothesis.  And
how does Bill respond:

>Please give it up... you really are on the wrong track
>when you try to say there are external behaviors which
>could distinguish between Sergio and Sergio-HLUT.  There
>simply cannot be any such, by the hypothesis that there
>is an HLUT.

Bill says that arguments against the hypothesis are a waste of time,
because it is true by hypothesis.

In my book, that makes it a dogma, rather than a hypothesis.  This
dogma appears to be an article of faith in the religion of
conventional AI thinking.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Why HLUTs are not intelligent
Date: 18 Mar 1999 00:00:00 GMT
Message-ID: <36f137ca@news3.us.ibm.net>
References: <36ea8c14@news3.us.ibm.net> <7cej3n$1q0@journal.concentric.net> <7crbe9$ikv@ux.cs.niu.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 18 Mar 1999 17:28:42 GMT, 129.37.182.147
Organization: SilWis
Newsgroups: comp.ai.philosophy

Neil Rickert wrote in message <7crbe9$ikv@ux.cs.niu.edu>...
>modlin@concentric.net writes:
>
>Bill gives a clear demonstration of his ideological commitments:
>
>>By hypothesis, the HLUT has the right responses for all
>>those different inputs.
>
>In my simple minded way of looking at things, an hypothesis is
>something proposed that could be true or false, and is subject to
>empirical investigation.
>
>Sergio gave an argument against the truth of this hypothesis.  And
>how does Bill respond:
>
>>Please give it up... you really are on the wrong track
>>when you try to say there are external behaviors which
>>could distinguish between Sergio and Sergio-HLUT.  There
>>simply cannot be any such, by the hypothesis that there
>>is an HLUT.
>
>Bill says that arguments against the hypothesis are a waste of time,
>because it is true by hypothesis.
>
>In my book, that makes it a dogma, rather than a hypothesis.  This
>dogma appears to be an article of faith in the religion of
>conventional AI thinking.
>

I managed to reach the same conclusion. My suggestion?
Why not found the Church of the HLUT?
I have handy the first topic for a good parable for the
Sunday morning sessions: The Church of the HLUT hereby
declares that "Our main beliefs are inscrHLUTable".
Sorry guys, couldn't resist... :-]

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Why HLUTs are not intelligent
Date: 22 Mar 1999 00:00:00 GMT
Message-ID: <36f64e2b@news3.us.ibm.net>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MIMEOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 22 Mar 1999 14:05:31 GMT, 166.72.21.174
Organization: SilWis
Newsgroups: comp.ai.philosophy

Daryl McCullough wrote in message <7d3lm3$3en@edrn.newsguy.com>...
>rickert@cs.niu.edu (Neil Rickert) says...
>
>>Then the proposers of the hypothesis acted childishly
>>by saying that I was not supposed to examine those particular
>>consequences.
>
>Nobody said that. What people said was that you were wrong,
>the HLUT does *not* have the consequence that the world is
>deterministic.
>

This assertion depends on the way you define HLUTs.

Defining HLUTs in such a way as to give only *one* output for
*each* input condition <input,history> is the *same thing* as
saying that our brain is deterministic. Do you acknowledge that?

Regards,
Sergio Navega.

From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: Why HLUTs are not intelligent
Date: 22 Mar 1999 00:00:00 GMT
Message-ID: <7d5pds$ogb@edrn.newsguy.com>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy

Sergio says...
>This assertion depends on the way you define HLUTs.
>
>Defining HLUTs in such a way as to give only *one* output for
>*each* input condition <input,history> is the *same thing* as
>saying that our brain is deterministic. Do you acknowledge that?

If we are only interested in an HLUT that behaves intelligently
(or appears to behave intelligently, if you believe that an HLUT
can't possibly be intelligent) and behaves in a Sergio-like way
(or Einstein-like, or whatever case we are talking about), then
it is not necessary for the HLUT to be nondeterministic, even if
the brain of Sergio (or Einstein) is nondeterministic.

Here's the idea: Let IO be the set of all pairs <i,o> such that
i is a possible input history (of 150 years length) for Sergio,
and o is a possible output history (of 150 years length) for
Sergio, such that o represents a possible intelligent,
Sergio-like response to input history i. Now, let IO' be a
subset of IO with the following properties:

1. For each input history i, there is exactly one output
history o such that <i,o> is in IO'.
2. For any two input histories i1 and i2 that are identical
up to some time t, there are output histories o1 and o2
such that <i1,o1> and <i2,o2> are in IO', and o1 and o2
are identical up to time t.

So, IO' is deterministic, even though IO is not. And
by definition of IO, IO' produces an intelligent,
Sergio-like response to every possible input history.

Now, make an HLUT based on IO', and it will be deterministic,
but it will produce an intelligent, Sergio-like response
to every possible input sequence.
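
One way to picture IO' in code (a sketch, with an invented
stand-in for the nondeterministic relation IO): fix one choice
at every history, so that input histories agreeing up to time t
automatically get outputs agreeing up to time t.

    def possible_next_outputs(ih, oh):
        # stand-in for IO: the set of Sergio-like next outputs
        # after input history ih and output history oh
        return {"nod", "smile", "speak"}

    def io_prime_next(ih, oh):
        # IO' keeps exactly one of them; any fixed rule will do.
        # Because the choice depends only on the shared history,
        # property 2 (prefix agreement) holds by construction.
        return min(possible_next_outputs(ih, oh))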

Another route to the same conclusion is this: Make a table
of all 4-tuples <ih,oh,o_next,p>, where ih is an input history
up to some time t, oh is an output history up to time t, and
p is the probability that the next output from Sergio would
be o_next. Then, it seems to me that a machine could behave
in a Sergio-like way (even including probabilities) by repeating
the following process:

    1. Receive an input, and update ih with that input.
    2. Given ih and oh, use a random number generator
    to decide what output to produce, according to the
    probabilities in the table.
    3. Update oh with that output.

Now, replace the random number generator with a precomputed
table of a trillion random numbers. With this change, the
resulting system (with the table of random numbers) becomes
deterministic, and the whole shebang can be replaced by
an HLUT.
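
A sketch of this second construction, with toy histories and
probabilities standing in for the real 4-tuple table, and a
short fixed list standing in for the trillion precomputed
random numbers:

    # prob_table[(ih, oh)] lists (o_next, p) pairs: the 4-tuples
    # <ih, oh, o_next, p> grouped by history. All entries invented.
    prob_table = {
        (("hello",), ()): [("smile", 0.7), ("frown", 0.3)],
        (("hello", "hello"), ("smile",)):
            [("say: you already told me that", 1.0)],
    }

    # With these numbers fixed in advance, the machine below is
    # deterministic, so the whole shebang could be one big table.
    random_numbers = [0.42, 0.91, 0.13, 0.77]

    def run(inputs):
        ih, oh = (), ()
        for step, i in enumerate(inputs):
            ih = ih + (i,)               # 1. receive an input
            r = random_numbers[step % len(random_numbers)]
            acc = 0.0                    # 2. pick an output according
            for o_next, p in prob_table[(ih, oh)]:  # to the table
                acc += p
                if r <= acc:
                    break
            oh = oh + (o_next,)          # 3. update the output history
        return oh

    print(run(["hello", "hello"]))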

Daryl McCullough
CoGenTex, Inc.
Ithaca, NY

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Why HLUTs are not intelligent
Date: 22 Mar 1999 00:00:00 GMT
Message-ID: <36f6a273@news3.us.ibm.net>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 22 Mar 1999 20:05:07 GMT, 200.229.242.155
Organization: SilWis
Newsgroups: comp.ai.philosophy

Daryl McCullough wrote in message <7d5pds$ogb@edrn.newsguy.com>...
>Sergio says...
>>This assertion depends on the way you define HLUTs.
>>
>>Defining HLUTs in such a way as to give only *one* output for
>>*each* input condition <input,history> is the *same thing* as
>>saying that our brain is deterministic. Do you acknowledge that?
>
>If we are only interested in an HLUT that behaves intelligently
>(or appears to behave intelligently, if you believe that an HLUT
>can't possibly be intelligent) and behaves in a Sergio-like way
>(or Einstein-like, or whatever case we are talking about), then
>it is not necessary for the HLUT to be nondeterministic, even if
>the brain of Sergio (or Einstein) is nondeterministic.
>

Daryl, would you be patient with me and help me follow your argument
step by step? I may again be missing something.

>Here's the idea: Let IO be the set of all pairs <i,o> such that
>i is a possible input history (of 150 years length) for Sergio,
>and o is a possible output history (of 150 years length) for
>Sergio, such that o represents a possible intelligent,
>Sergio-like response to input history i.

Ok, this means that table IO has lots of duplicated entries
(one entry i for lots of distinct outputs o). This is what
I had in mind, and this is the table that I agree can contain
not only my very specific behavior of now but any possible
behavior in its entries.

>Now, let IO' be a
>subset of IO with the following properties:
>
>1. For each input history i, there is exactly one output
>history o such that <i,o> is in IO'.
>2. For any two input histories i1 and i2 that are identical
>up to some time t, there are output histories o1 and o2
>such that <i1,o1> and <i2,o2> are in IO', and o1 and o2
>are identical up to time t.
>

With these conditions, you seem to be removing the redundancy
from table IO. That means that for each i you have *only one*
corresponding output o, which is the very same thing I was
questioning from the beginning.

>So, IO' is deterministic, even though IO is not.

Agreed.

>And
>by definition of IO, IO' produces an intelligent,
>Sergio-like response to every possible input history.
>

>Now, make an HLUT based on IO', and it will be deterministic,
>but it will produce an intelligent, Sergio-like response
>to every possible input sequence.
>

I got lost here. By your definition of IO', it does
not have repeated entries. Then, given the same input sequence
and history, it will produce *the same* output, always. That
does not happen with Sergio's brain, because it is not
deterministic. Then, IO' cannot be seen as behaving like
Sergio. Am I right in my interpretation?

>Another route to the same conclusion is this: Make a table
>of all 4-tuples <ih,oh,o_next,p>, where ih is an input history
>up to some time t, oh is an output history up to time t, and
>p is the probability that the next output from Sergio would
>be o_next. Then, it seems to me that a machine could behave
>in a Sergio-like way (even including probabilities) by repeating
>the following process:
>
>    1. Receive an input, and update ih with that input.
>    2. Given ih and oh, use a random number generator
>    to decide what output to produce, according to the
>    probabilities in the table.
>    3. Update oh with that output.
>
>Now, replace the random number generator with a precomputed
>table of a trillion random numbers. With this change, the
>resulting system (with the table of random numbers) becomes
>deterministic, and the whole shebang can be replaced by
>an HLUT.
>

This example is easier to understand, although I would prefer
to simplify it with Pierre's suggestion of randomly selecting
one output from the possible (duplicated) entries. And then I
would accept your suggestion of replacing the random number
generator with a huge (although finite) table containing a zillion
pre-calculated random numbers, taking, for each transaction,
the next entry via a circular pointer, which will act just like
a fixed-accuracy random number generator.

Although this will represent a deterministic, fixed table, it
carries the concept of selection of one output from a number of
possibilities, which was what I was trying to establish in the
beginning (meaning the nonrepeatability of behaviors given exactly
equal initial conditions). So am I wrong in saying that my
original idea is still valid?

Regards,
Sergio Navega.

From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: Why HLUTs are not intelligent
Date: 22 Mar 1999 00:00:00 GMT
Message-ID: <7d6dd0$5d7@edrn.newsguy.com>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy

Sergio says...
>
>Daryl McCullough wrote in message <7d5pds$ogb@edrn.newsguy.com>

[stuff deleted]

>>So, IO' is deterministic, even though IO is not.
>
>Agreed.
>
>>And
>>by definition of IO, IO' produces an intelligent,
>>Sergio-like response to every possible input history.
>
>>Now, make an HLUT based on IO', and it will be deterministic,
>>but it will produce an intelligent, Sergio-like response
>>to every possible input sequence.
>
>I got lost here. By your definition of IO', it does
>not have repeated entries. Then, given the same input sequence
>and history, it will produce *the same* output, always. That
>does not happen with Sergio's brain, because it is not
>deterministic. Then, IO' cannot be seen as behaving like
>Sergio. Am I right in my interpretation?

You need to be careful about what you mean. The system
I am describing doesn't have a "reset" button, any more
than the real Sergio does. The issue is this: Suppose
that someone is confronted with a person who looks just
like Sergio, but he doesn't know whether it is really
Sergio, or some HLUT programmed to simulate Sergio.
What test can he perform on this being that claims
to be Sergio to find out which one is really the
case? It's no fair cutting the being open.

You say: repeat the same input twice, and see
whether the being responds the same way both
times. But since the HLUT uses as its index
the entire *history* of inputs ever received,
it can tell the difference between the first
time an input was made and the second time
it was made, and so the HLUT can do something
different the second time. (It *will* do
something different the second time, if
the Sergio would have.)

The point is that by observing behavior
alone, you can't tell the difference between
a nondeterministic system and a deterministic
system, unless you have the ability to "reset"
the being to its initial state. There is no
such ability to reset a human, and there would
be no such button on the HLUT, either.

[Second approach described, using a table of random numbers]
>Although this will represent a deterministic, fixed table, it
>carries the concept of selection of one output from a number of
>possibilities, which was what I was trying to establish in the
>beginning (meaning the nonrepeatability of behaviors given exactly
>equal initial conditions).

I'm not sure exactly what you mean about nonrepeatability.
Look at your own case. If I say the same thing to you twice,
you will respond differently the second time than you did
the first time. Although nondeterminism could very well
play a small role, the biggest role is played by your *memory*
of having heard the same thing earlier. The first time,
you might say "Oh, that's interesting.", while the second time,
you will say something like "Yes, I know! You already
told me that!". The difference between these two responses
is *not* nondeterminism. You aren't just flipping an internal
coin to decide whether you say "Oh, that's interesting" or
"Yes I know!", you are basing your second output not only
on the last input, but also on the memory of previous
outputs.

Without the ability to "reset" somebody to the state in
which they were a baby, it is impossible to put a person
in the same memory state twice. So it is impossible to
probe the extent to which their behavior is nondeterministic.

Okay, just to show that, unlike Neil Rickert, I don't
only consider evidence that supports my own views:
There is a slim possibility that you could use something
like the EPR experiment of quantum mechanics to test
the difference between nondeterminism and determinism.
Bell's inequality showed a statistical difference between
systems that have quantum-like nondeterminism and
hidden-variables. If this sort of experiment could
be applied to humans and HLUTs, perhaps you could
statistically prove that the HLUT was a hidden-variables
theory, while the human was truly nondeterministic.
I don't really know how such an experiment could
work on humans, but I just wanted to mention the
possibility, for the sake of thoroughness.

Daryl McCullough
CoGenTex, Inc.
Ithaca, NY

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Why HLUTs are not intelligent
Date: 23 Mar 1999 00:00:00 GMT
Message-ID: <36f79fbe@news3.us.ibm.net>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net> <7d6dd0$5d7@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 23 Mar 1999 14:05:50 GMT, 200.229.240.123
Organization: SilWis
Newsgroups: comp.ai.philosophy

Daryl McCullough wrote in message <7d6dd0$5d7@edrn.newsguy.com>...
>Sergio says...
>>
>>Daryl McCullough wrote in message <7d5pds$ogb@edrn.newsguy.com>
>
>[stuff deleted]
>
>>>So, IO' is deterministic, even though IO is not.
>>
>>Agreed.
>>
>>>And
>>>by definition of IO, IO' produces an intelligent,
>>>Sergio-like response to every possible input history.
>>
>>>Now, make an HLUT based on IO', and it will be deterministic,
>>>but it will produce an intelligent, Sergio-like response
>>>to every possible input sequence.
>>
>>I got lost here. By your definition of IO', it does
>>not have repeated entries. Then, given the same input sequence
>>and history, it will produce *the same* output, always. That
>>does not happen with Sergio's brain, because it is not
>>deterministic. Then, IO' cannot be seen as behaving like
>>Sergio. Am I right in my interpretation?
>
>You need to be careful about what you mean. The system
>I am describing doesn't have a "reset" button, any more
>than the real Sergio does.

Well, now it's my turn to be surprised by the arguments
you're using. You see, from the beginning I was fighting
HLUTs using what my common sense said. I learned enough
about HLUTs during this discussion to understand
that it is an experiment where, to do any kind of reasoning,
you *can't* use any kind of "real world" constraint.

A reset button for Sergio is just as unlikely as an omniscient
HLUT, with all entries preloaded, guessing all the future.
To accept the possibility of the latter, I think we should
accept the possibility of the former.

>The issue is this: Suppose
>that someone is confronted with a person who looks just
>like Sergio, but he doesn't know whether it is really
>Sergio, or some HLUT programmed to simulate Sergio.
>What test can he perform on this being that claims
>to be Sergio to find out which one is really the
>case? It's no fair cutting the being open.
>

Yes, to "open" the guy would be unfair ;-)
But judging HLUTs just by external behaviors will run
us into some of Neil's stronger arguments against the
whole story. External behaviors are one thing, while
*internal* outputs to the motor system are another.
I can only accept HLUTs with the latter.

Sticking with external, subjective aspects of behavior
is not useful, because if we think a bit we'll see that
this test will be just a fancy version of the Turing test,
where, I believe, most arguments against it would apply.
This is pretty much returning to the GOFAI thesis.

>You say: repeat the same input twice, and see
>whether the being responds the same way both
>times. But since the HLUT uses as its index
>the entire *history* of inputs ever received,
>it can tell the difference between the first
>time an input was made and the second time
>it was made, and so the HLUT can do something
>different the second time. (It *will* do
>something different the second time, if
>the Sergio would have.)

Yes, and if we repeat that again, we'll have another
set of input histories, and so on, until we have
what I believe is *the real* HLUT, a HLUT where each
input would be associated with a huge number of
*possible* outputs, randomly selected to make it
different in successive runs. The reason for this
is to be like Sergio's behavior: it is always different,
from run to run (more about this later).

Some posts ago, Pierre-Normand Houle proposed such
a HLUT and gave an idea of an algorithm to select
the entries randomly. At the time, I swallowed that
argument, but I think there's a point which was
underestimated: the size of that HLUT, although
still finite, would have to be *much, much* larger
than the already large HLUT that was proposed.

>
>The point is that by observing behavior
>alone, you can't tell the difference between
>a nondeterministic system and a deterministic
>system, unless you have the ability to "reset"
>the being to its initial state. There is no
>such ability to reset a human, and there would
>be no such button on the HLUT, either.
>

As I said, a reset button is as unlikely as the HLUT having
all entries filled from the beginning, with the future
in it.

>[Second approach described, using a table of random numbers]
>>Although this will represent a deterministic, fixed table, it
>>carries the concept of selection of one output from a number of
>>possibilities, which was what I was trying to establish in the
>>beginning (meaning the nonrepeatability of behaviors given exactly
>>equal initial conditions).
>
>I'm not sure exactly what you mean about nonrepeatability.
>Look at your own case. If I say the same thing to you twice,
>you will respond differently the second time than you did
>the first time. Although nondeterminism could very well
>play a small role, the biggest role is played by your *memory*
>of having heard the same thing earlier. The first time,
>you might say "Oh, that's interesting.", while the second time,
>you will say something like "Yes, I know! You already
>told me that!". The difference between these two responses
>is *not* nondeterminism. You aren't just flipping an internal
>coin to decide whether you say "Oh, that's interesting" or
>"Yes I know!", you are basing your second output not only
>on the last input, but also on the memory of previous
>outputs.

Yes, I agree with all that, but that was not what I was saying.
The nondeterminism I was mentioning is "deeper" in the system's
innards. Please read the end of this post, where I develop
my point further.

>
>Without the ability to "reset" somebody to the state in
>which they were a baby, it is impossible to put a person
>in the same memory state twice. So it is impossible to
>probe the extent to which their behavior is nondeterministic.
>

I'm glad to hear this, because this is coherent with my
thought that it is *impossible* to pre-fill a table with
all possible entries knowing beforehand *all the future*.
But I have been beaten up so much because of such "impossible"
thinking that I had to accept that it could be done, at least
in the "mind's world". So I don't see why I cannot accept
that I can't rewind the universe to a prior state and
"run it all again". It is just coherent with the former
arguments about HLUTs.

>Okay, just to show that, unlike Neil Rickert, I don't
>only consider evidence that supports my own views:
>There is a slim possibility that you could use something
>like the EPR experiment of quantum mechanics to test
>the difference between nondeterminism and determinism.
>Bell's inequality showed a statistical difference between
>systems that have quantum-like nondeterminism and
>hidden-variables. If this sort of experiment could
>be applied to humans and HLUTs, perhaps you could
>statistically prove that the HLUT was a hidden-variables
>theory, while the human was truly nondeterministic.
>I don't really know how such an experiment could
>work on humans, but I just wanted to mention the
>possibility, for the sake of thoroughness.
>

I commend you for thinking this way; it shows that you're
open to thinking against your own arguments, which is,
in my opinion, one of the ways of practicing science
inside one's own "skull". I confess that your argument
may be even better than the one I'm trying to develop
(at least, it is very different).

My primary reason for being reluctant to accept HLUTs is
that I see a clear role for randomness inside our
brain. Far from being a confirmed thing, this is
the result of a few neurophysiological investigations
of random spikes among neurons. One could ask, then,
what need the workings of our brain would have for
randomness. I'm finding a *fundamental* role for it.
I'm finding that, without it, we wouldn't be considered
intelligent, on a wide definition of the word. The
thing goes like this.

When we analyze creative behavior (intimately associated
with intelligence), we'll find that it demands the
creation of "unlikely" things. Often, what creativity
generates is an improbable association with very high
value. Often, what it generates is the "missing link"
that turns one hypothesis into a plausible theory.
Kekule, a century ago, found the structure of benzene
during a "dream" where he saw a snake running after its
tail. This image, like many others we have, does not seem
to be the product of a formal, step-by-step, deterministic
process. It seems to be the end result of a series of
steps where random fluctuations alter subtle aspects of
our "rational" thinking, making our intuition explore
things that our formal education would forbid. These
alterations are *fundamental*, because they force us to
explore areas of our thought that we would not explore,
if we were only rational and deterministic beings.

I say that creativity is the direct result of these
fluctuations. I say that not only scientists and
artists are creative. To understand the world, a
child *must* use this kind of "irrational" mechanism
to allow him or her to *discover* non-obvious aspects of
the world. A child must be creative to understand the
categorization of objects, in which thousands of
characteristics may be potential candidates for
conceptualization. I propose random mechanisms as a
dash that spices up thought.

So, anything that goes against the need for randomness
as a help in our daily discoveries is received with
skepticism by me, unless it proposes a substitute
mechanism with the same functional abilities.

As a final comment, I could say that if Albert Einstein
were put again in this world, in the exact same initial
conditions ("rewinding the universe"), I cannot guarantee
that he would discover again the Theory of Relativity.
Maybe he could have discovered the unification of
gravitation and quantum mechanics! Only his complete
HLUT would know!

Regards,
Sergio Navega.

From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: Why HLUTs are not intelligent
Date: 23 Mar 1999 00:00:00 GMT
Message-ID: <7d8mkn$aos@edrn.newsguy.com>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net> <7d6dd0$5d7@edrn.newsguy.com> <36f79fbe@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy

Sergio says...
>>You need to be careful about what you mean. The system
>>I am describing doesn't have a "reset" button, any more
>>than the real Sergio does.

[stuff deleted]

>A reset button for Sergio is just as unlikely as an omniscient
>HLUT, with all entries preloaded, guessing all the future.
>To accept the possibility of the latter, I think we should
>accept the possibility of the former.

Yes, if you like, we can certainly consider the possibility
of a Sergio with a reset button. In that case, one of Sergio's
possible inputs would be to push the reset button. And the
HLUT would have to store a Sergio-like response to this input.

Of course, hitting "reset" on the HLUT wouldn't *really*
reset it, it would just cause the HLUT to behave as if it
were the real Sergio after having his reset button pushed.
I never claimed that there were no *physical* differences
between the HLUT and Sergio.

>>The issue is this: Suppose
>>that someone is confronted with a person who looks just
>>like Sergio, but he doesn't know whether it is really
>>Sergio, or some HLUT programmed to simulate Sergio.
>>What test can he perform on this being that claims
>>to be Sergio to find out which one is really the
>>case? It's no fair cutting the being open.

>Yes, to "open" the guy would be unfair ;-)
>But judging HLUTs just by external behaviors will run
>us into some of Neil's stronger arguments against the
>whole story.

Neil doesn't have any arguments.

>External behaviors are one thing, while
>*internal* outputs to the motor system are another.
>I can only accept HLUTs with the latter.

I'm not exactly sure what you mean.

>Sticking with external, subjective aspects of behavior
>is not useful,

I don't know how many times I have to say this, but
I have *never* suggested that the HLUT was "useful"
in any way. I'm not *advocating* building an HLUT,
I'm not suggesting that an HLUT is a useful way to
think about AI. So why don't I drop this discussion?
Good question. I really should.

>because if we think a bit we'll see that
>this test will be just a fancy version of the Turing test,
>where, I believe, most arguments against it would apply.
>This is pretty much returning to the GOFAI thesis.

Of course!

>Yes, and if we repeat that again, we'll have another
>set of input histories, and so on, until we have
>what I believe is *the real* HLUT, a HLUT where each
>input would be associated with a huge number of
>*possible* outputs, randomly selected to make it
>different in successive runs. The reason for this
>is to be like Sergio's behavior: it is always different,
>from run to run (more about this later).

Sorry Sergio, but as they say "You only go around once
in life" (unless you're a Hindu). You never get more
than one run.

>Some posts ago, Pierre-Normand Houle proposed such
>a HLUT and gave an idea of an algorithm to select
>the entries randomly. At the time, I swallowed that
>argument, but I think there's a point which was
>underestimated: the size of that HLUT, although
>still finite, would have to be *much, much* larger
>than the already large HLUT that was proposed.

Since nobody is proposing building an HLUT,
what difference does it make how big it is?

It's as if somebody said: "Imagine that you
have a mountain made of solid gold", and
you replied "Can't we just imagine that we
have a small hill made of solid gold? A
mountain would be way too expensive."

>>Without the ability to "reset" somebody to the state in
>>which they were a baby, it is impossible to put a person
>>in the same memory state twice. So it is impossible to
>>probe the extent to which their behavior is nondeterministic.
>
>I'm glad to hear this, because this is coherent with my
>thought that it is *impossible* to pre-fill a table with
>all possible entries knowing beforehand *all the future*.

I don't see what knowing the future has to do with it.
I can write an HLUT for playing tic-tac-toe:

    If you play an X in the upper left, I will play O in the center.
    If you play an X in the upper middle, I will play O in the lower left.
    If you play an X in the upper right, I will play O in the center.
    ...

I don't need to be able to predict what you are going to do in
order to write a table saying what *I* will do in each possible
case.
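
As a sketch (keeping only the three cells Daryl wrote out), the
table is a literal mapping, written down without predicting
which move the opponent will actually pick:

    # Daryl's tic-tac-toe HLUT as a literal lookup table.
    response = {
        "upper left":   "center",
        "upper middle": "lower left",
        "upper right":  "center",
        # ... the remaining openings, elided as in the post
    }

    def reply(x_move):
        return response[x_move]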

>But I have been beaten up so much because of such "impossible"
>thinking that I had to accept that it could be done, at least
>in the "mind's world". So I don't see why I cannot accept
>that I can't rewind the universe to a prior state and
>"run it all again". It is just coherent with the former
>arguments about HLUTs.

Well, you need to say what it means for two things to be
"behaviorally indistinguishable". If you set fire to an
HLUT, is it supposed to burn the same as a human does?
By "behaviorally indistinguishable", I always meant
"In any situation that Sergio is likely to be found in,
the robot using an HLUT will react in an intelligent,
Sergio-like way". You know, at parties, at relatives'
funerals and weddings, that sort of thing.

But if you insist---yes, rewinding the universe to
the state it was in 5 minutes ago will indeed reveal
the difference between Sergio and an HLUT. (Assuming
that we get *out* of the universe first. Otherwise,
our memories will get rewound, as well, and we won't
remember whether the HLUT repeated itself or not.)

[stuff about EPR deleted]

>My primary reason for being reluctant to accept HLUTs is
>that I see a clear role for randomness inside our
>brain.

Well, the HLUT is not supposed to capture *how*
humans do what they do---it *definitely* doesn't
produce behavior in the same way as humans do.

>When we analyze creative behavior (intimately associated
>with intelligence), we'll find that it demands the
>creation of "unlikely" things. Often, what creativity
>generates is an improbable association with very high
>value. Often, what it generates is the "missing link"
>that turns one hypothesis into a plausible theory.
>Kekule, a century ago, found the structure of benzine
>during a "dream" where he saw a snake running after its
>tail. This image, as many others we have, do not seem
>to be product of a formal, step by step, deterministic
>process.

Oh, I certainly agree with that. A nondeterministic
algorithm is potentially more powerful than a deterministic
one, in the sense that a nondeterministic algorithm
can come up with a correct answer instantly, while the
best the deterministic algorithm can do is to go step by step
through all the possibilities until it hits one that
works.

>I say that creativity is the direct result of these
>fluctuations. I say that not only scientists and
>artists are creative. To understand the world, a
>child *must* use this kind of "irrational" mechanism
>to allow him or her to *discover* non-obvious aspects of
>the world. A child must be creative to understand the
>categorization of objects, in which thousands of
>characteristics may be potential candidates for
>conceptualization. I propose random mechanisms as a
>dash that spices up thought.
>
>So, anything that goes against the need for randomness
>as a help in our daily discoveries is received with
>skepticism by me, unless it proposes a substitute
>mechanism with the same functional abilities.

Well, I think that that's being overly protective
of your ideas. For an idea to be great, it isn't
necessary that it be the *only* idea that could
possibly work. It is good enough that it be *an*
idea that works. The *possibility* of implementing
intelligence in a different way does not negate
the fruitfulness of implementing intelligence
your way.

>As a final comment, I could say that if Albert Einstein
>were put again in this world, in the exact same initial
>conditions ("rewinding the universe"), I cannot guarantee
>that he would discover again the Theory of Relativity.

Probably not, but somebody else would have.

>Maybe he could have discovered the unification of
>gravitation and quantum mechanics! Only his complete
>HLUT would know!

Cheers!

Daryl McCullough
CoGenTex, Inc.
Ithaca, NY

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Why HLUTs are not intelligent
Date: 23 Mar 1999 00:00:00 GMT
Message-ID: <36f80eb9@news3.us.ibm.net>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net> <7d6dd0$5d7@edrn.newsguy.com> <36f79fbe@news3.us.ibm.net> <7d8mkn$aos@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 23 Mar 1999 21:59:21 GMT, 166.72.21.59
Organization: SilWis
Newsgroups: comp.ai.philosophy

Daryl McCullough wrote in message <7d8mkn$aos@edrn.newsguy.com>...
>
>[I snipped most agreed stuff to keep the post short, after
>all, we're all tired of this HLUT stuff :-)]
>
>Sergio says...
>>Yes, and if we repeat that again, we'll have another
>>set of input histories, and so on, until we have
>>what I believe is *the real* HLUT, a HLUT where each
>>input would be associated with a huge number of
>>*possible* outputs, randomly selected to make it
>>different in successive runs. The reason for this
>>is to be like Sergio's behavior: it is always different,
>>from run to run (more about this later).
>
>Sorry Sergio, but as they say "You only go around once
>in life" (unless you're a Hindu). You never get more
>than one run.
>

I'm not a Hindu, so I can accept that without complaining :-).

If it is difficult to see the "rewinding story", let me
propose another way of seeing it. Suppose that my behavior
will be to pick up that pencil on the table. The pencil
is close to the edge. Due to a false (random) spike, one
of my neurons fires, and this ends up as a small
"misposition" of my finger when it touches the pencil,
enough for me to be unable to grasp it before it starts
rolling and falls to the floor.

I know, by the very definition of the HLUT, that it represents
all behaviors, including this one. And I know that, because
the pencil rolls differently when it falls than if I managed
to get it, this will represent a *different* input vector
which could be enough to address a different part of the
HLUT and then give the different behavior, corresponding
to the pencil falling. So, no problem so far.

The problem happens when that neural noise causes something
that influences the behavior but, because of my own
inability to perceive any difference (say, I drop a sheet
of paper when passing a colleague's desk because
of a random twist of my arm), causes a perceivable difference
in behavior known only to *others*, not to me. Then I wouldn't
be able to have a different address to look up in the HLUT,
but my behavior would be *different*, as seen by outside
observers. I may even be categorized as accident-prone (by the
way, the real Sergio is ;-) without ever knowing it. That's what
I call a significantly different behavior.

If this problem is never reported to me, I'd have shown a
different behavior in two situations (with and without
the neural noise responsible for the "problems"), perceivable
only by a third person who is watching me.

When I proposed this some weeks ago, somebody nailed
me (I guess it was Pierre-Normand, an equally tough
defender of HLUTs!) insisting that the HLUT had,
**by definition**, all possible behaviors that I'd have,
including that random twist of my arm. Geeeeez, help!!

Ok, I swallowed that, but that means that for each
input vector, the HLUT would have to have lots of
different outputs, accounting for all the possible
(I know, finite) variations that could arise
due to noise, and that the HLUT would have to select
randomly among these values to present an accident-prone,
Sergio-like behavior. Then, my conclusion is that it
would have to be nondeterministic, as Sergio's brain is,
which is the point I was addressing in the beginning.

>>
>>So, anything that goes against the need for randomness
>>as a help in our daily discoveries is received with
>>skepticism by me, unless it proposes a substitute
>>mechanism with the same functional abilities.
>
>Well, I think that that's being overly protective
>of your ideas. For an idea to be great, it isn't
>necessary that it be the *only* idea that could
>possibly work. It is good enough that it be *an*
>idea that works. The *possibility* of implementing
>intelligence in a different way does not negate
>the fruitfulness of implementing intelligence
>your way.
>

I would put it differently. Given the evidence that the human
brain is nondeterministic, I'll accept only one of two
hypotheses: that the brain is proven to be deterministic,
contrary to what we thought (a revision of my previous knowledge),
or that the proposed mechanism, behaviorally similar to our brain,
includes something nondeterministic.

Regards,
Sergio Navega.

From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: Why HLUTs are not intelligent
Date: 24 Mar 1999 00:00:00 GMT
Message-ID: <7dau65$b5o@edrn.newsguy.com>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net> <7d6dd0$5d7@edrn.newsguy.com> <36f79fbe@news3.us.ibm.net> <7d8mkn$aos@edrn.newsguy.com> <36f80eb9@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy

Sergio says...

[randomness causing differences in behavior
between the real Sergio and the HLUT]

>When I proposed this some weeks ago, somebody nailed
>me (I guess it was Pierre-Normand, an equally tough
>defender of HLUTs!) insisting that the HLUT had,
>**by definition**, all possible behaviors that I'd have,
>including that random twist of my arm. Geeeeez, help!!
>
>Ok, I swallowed that, but that means that for each
>input vector, the HLUT would have to have lots of
>different outputs, accounting for every possible
>(I know, finite) variations that would be possible
>due to noise, and that the HLUT would have to select
>randomly among these values to present an accident-prone,
>Sergio-like behavior.

No, that's not necessary. Consider an HLUT simulation
of a pair of dice. Dicelike behavior has the following
characteristics: as you throw the dice again and again,
approximately 1/36 of the time, the results is 2, 2/36
of the time, the result is 3, 3/36 of the time, the result
is 4, etc. There are other characteristics of being
"dicelike" behavior, such as having no detectable pattern
that persists. Anyway, an HLUT can have, prerecorded,
a huge list of numbers, each between 2 and 12. Each
time the dice are rolled, the HLUT just outputs another
answer on its list.

Unlike the dice, the HLUT isn't really random, but
nobody will notice the lack of randomness, because
its outputs will *look* random.
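
A sketch of that dice HLUT, with a short hand-written list
standing in for the huge prerecorded one:

    # A deterministic "pair of dice": every throw just reads the
    # next prerecorded entry, yet the outputs look random.
    rolls = [7, 4, 9, 7, 2, 11, 6, 8, 5, 7, 10, 3]
    position = 0

    def throw_dice():
        global position
        result = rolls[position % len(rolls)]  # the real list would
        position += 1                          # be too long to wrap
        return result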

The same thing could be true of an HLUT simulation
of Sergio. Rather than storing every possible output
that Sergio could possibly make, the HLUT could just
pick *one* possibility, and make sure that, over
the long run, its outputs have the same distribution
of possibilities that the real Sergio's outputs have.

Daryl McCullough
CoGenTex, Inc.
Ithaca, NY

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Why HLUTs are not intelligent
Date: 24 Mar 1999 00:00:00 GMT
Message-ID: <36f94962@news3.us.ibm.net>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net> <7d6dd0$5d7@edrn.newsguy.com> <36f79fbe@news3.us.ibm.net> <7d8mkn$aos@edrn.newsguy.com> <36f80eb9@news3.us.ibm.net> <7dau65$b5o@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MIMEOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 24 Mar 1999 20:21:54 GMT, 166.72.21.27
Organization: SilWis
Newsgroups: comp.ai.philosophy

Daryl McCullough wrote in message <7dau65$b5o@edrn.newsguy.com>...
>Sergio says...

>
>>Ok, I swallowed that, but that means that for each
>>input vector, the HLUT would have to have lots of
>>different outputs, accounting for all the possible
>>(I know, finite) variations that could arise
>>due to noise, and that the HLUT would have to select
>>randomly among these values to present an accident-prone,
>>Sergio-like behavior.
>
>No, that's not necessary. Consider an HLUT simulation
>of a pair of dice. Dicelike behavior has the following
>characteristics: as you throw the dice again and again,
>approximately 1/36 of the time, the result is 2, 2/36
>of the time, the result is 3, 3/36 of the time, the result
>is 4, etc. There are other characteristics of being
>"dicelike" behavior, such as having no detectable pattern
>that persists. Anyway, an HLUT can have, prerecorded,
>a huge list of numbers, each between 2 and 12. Each
>time the dice are rolled, the HLUT just outputs another
>answer on its list.
>

I agree that the HLUT of a pair of dice can be constructed
with a finite number of entries that can account for the
maximum number of throws one can make in a lifetime, in such
a way as to duplicate precisely the probabilistic and
distributional behavior of those dice. Say one can throw
a pair of dice once every 10 seconds. A table that can
represent all the throws one can make in a lifetime at
such a rate will certainly be *indistinguishable*, by *any*
statistical method, from the real dice. That's ok;
if a human HLUT uses a similar method, it will work for me,
even using a "pseudo-random" number.

But I will point out a significant difference between dice
and brains below.

>Unlike the dice, the HLUT isn't really random, but
>nobody will notice the lack of randomness, because
>its outputs will *look* random.
>

I agree entirely. Take a HLUT containing a fixed
representation of Einstein: it may behave as
Einstein could have, even if it *doesn't discover*
relativity, and this works because *nobody*, during
Einstein's life, could have *predicted*
that he would discover relativity. So even if that
Einstein didn't discover it, nobody would complain.

That means that his behavior will be credible, but
will not be the only possibility, if we are in the
position of the omniscient viewer. That's not
unreasonable, as a HLUT demands the existence of such
a guy to propose the situation in the first place.
Denying the possibility of a privileged viewer able
to evaluate the argument is the same as transforming
the whole story of the deterministic HLUT argument
into a tautology.

>The same thing could be true of an HLUT simulation
>of Sergio. Rather than storing every possible output
>that Sergio could possibly make, the HLUT could just
>pick *one* possibility, and make sure that, over
>the long run, its outputs have the same distribution
>of possibilities that the real Sergio's outputs have.
>

The question, Daryl, is that you're asking me to accept
the hypothesis of the HLUT, which cannot be built with the
matter of the universe, while saying that I can't
propose to evaluate the behaviors over more than one run,
from a privileged vantage point. To accept your premise, you should
accept mine, because both are equally likely (or better,
unlikely). There's nothing in your premise that reduces
the likelihood of mine.

But let me take your excellent idea of dice and transform
those dice into the equivalent of what I think a brain is.

Suppose that you have a die with the following preparation.
Internally, this die has a radioactive element that
emits alpha particles. A small Geiger counter receives those
particles and, as a function of the time since the last particle,
displaces a mass closer to one of the sides of the die.

In that way, the die will keep some probability of falling
on any of its 6 sides, but that probability will be a
*curve* emphasizing the face that has the mass closest to it.

The random nature of the alpha particles *alters* the probabilistic
distribution among the numbers. The brain I'm thinking of is,
obviously, even more complex: it learns to mold that curve
as a function of previous experiences, in such a way that,
for instance, when the Geiger counter says to place the mass
next to the number 4, it puts it next to 5 because of some
heuristic.
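
A rough sketch of that prepared die (Python again; the decay rate,
the bias strength and the "4 goes to 5" heuristic are all numbers
I made up just to illustrate the idea):

    import random

    def alpha_interval():
        # Exponential waiting time between alpha particles, standing
        # in for the radioactive source (rate chosen arbitrarily).
        return random.expovariate(1.0)

    def throw_prepared_die():
        # The timing since the last particle picks the face the mass
        # moves toward; a "brain-like" heuristic reroutes 4 to 5.
        favored = int(alpha_interval() * 6) % 6 + 1
        if favored == 4:
            favored = 5
        # The mass skews the distribution toward the favored face.
        weights = [3.0 if face == favored else 1.0 for face in range(1, 7)]
        return random.choices(range(1, 7), weights=weights)[0]

    print([throw_prepared_die() for _ in range(10)])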

A HLUT will obviously capture that sort of thing. But to be
behaviorally (and probabilistically) indistinguishable from
that "die brain", it will have to use several possible outputs
with a probabilistic distribution among them, mimicking the
possibilities of learning of that brain.

I draw a more useful conclusion on this line in my recent
answer to Pierre-Normand.

Regards,
Sergio Navega.

From: houlepn@ibm.net
Subject: Re: Why HLUTs are not intelligent
Date: 24 Mar 1999 00:00:00 GMT
Message-ID: <7d9c3q$spl$1@nnrp1.dejanews.com>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net> <7d6dd0$5d7@edrn.newsguy.com> <36f79fbe@news3.us.ibm.net>
X-Http-Proxy: 1.0 x1.dejanews.com:80 (Squid/1.1.22) for client 207.96.163.34
Organization: Deja News - The Leader in Internet Discussion
X-Article-Creation-Date: Wed Mar 24 00:37:14 1999 GMT
Newsgroups: comp.ai.philosophy
X-Http-User-Agent: Mozilla/4.51 [en] (Win98; I)

"Sergio Navega" <snavega@ibm.net> wrote:

> Yes, and if we repeat that again, we'll have another
> set of input histories, and so on, until we have
> what I believe is *the real* HLUT, a HLUT where each
> input would be associated to a huge number of
> *possible* outputs, randomly selected to make it
> different in successive runs. The reason for this
> is to be like Sergio's behavior: it is always different,
> from run to run (more about this later).
>
> Some posts ago, Pierre-Normand Houle proposed such
> a HLUT and gave an idea of an algorithm to select
> the entries randomly. At the time, I swallowed that
> argument, but I think there's a point which was
> underestimated: the size of that HLUT, although
> still finite, would have to be *much, much* larger
> than the already large HLUT that was proposed.

To think of the non deterministic HLUT as much larger
than the deterministic HLUT begs the question.  The
size of the deterministic HLUT is of the order of
(10^10)^(10^10) = 10^(10^11).  Assuming one can
discriminate between just as many different motor outputs
as sensory inputs, then the size of the non deterministic
HLUT is just: (10^10)^(2*10^10) = 10^(2*10^11).  So the
non deterministic HLUT for Sergio living 80 years is just
as big as the deterministic HLUT for Sergio living 40 years.
I tried to convey this point in my DeepBlue HLUT:  Once
the functional organization of Sergio's brain is understood,
the part of the code responsible for noise addition will
probably be less than 0.01% of the code for emulating the
deterministic part.

> My primary reason for being reluctant to accept HLUTs
> is that I see a clear place for randomness inside our
> brain.

Then you still believe the 10^(10^10) HLUT/FSA is fair
enough but 10^(2*10^10) HLUT/FSA is too big to swallow?

Regards,
Pierre-Normand Houle

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own   

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Why HLUTs are not intelligent
Date: 24 Mar 1999 00:00:00 GMT
Message-ID: <36f9314b@news3.us.ibm.net>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net> <7d6dd0$5d7@edrn.newsguy.com> <36f79fbe@news3.us.ibm.net> <7d9c3q$spl$1@nnrp1.dejanews.com>
X-Notice: should be reported to postmaster@ibm.net
X-MIMEOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 24 Mar 1999 18:39:07 GMT, 166.72.21.178
Organization: SilWis
Newsgroups: comp.ai.philosophy

houlepn@ibm.net wrote in message <7d9c3q$spl$1@nnrp1.dejanews.com>...
>"Sergio Navega" <snavega@ibm.net> wrote:
>
>> Some posts ago, Pierre-Normand Houle proposed such
>> a HLUT and gave an idea of an algorithm to select
>> the entries randomly. At the time, I swallowed that
>> argument, but I think there's a point which was
>> underestimated: the size of that HLUT, although
>> still finite, would have to be *much, much* larger
>> than the already large HLUT that was proposed.
>
>To think of the non deterministic HLUT as much larger
>than the deterministic HLUT begs the question.  The
>size of the deterministic HLUT is of the order of
>(10^10)^(10^10) = 10^(10^11).  Assuming one can
>discriminate between just as many different motor outputs
>as sensory inputs, then the size of the non deterministic
>HLUT is just: (10^10)^(2*10^10) = 10^(2*10^11).  So the
>non deterministic HLUT for Sergio living 80 years is just
>as big as the deterministic HLUT for Sergio living 40 years.
>I tried to convey this point in my DeepBlue HLUT:  Once
>the functional organization of Sergio's brain is understood,
>the part of the code responsible for noise addition will
>probably be less than 0.01% of the code for emulating the
>deterministic part.

Randomness inside one intelligent brain is very different from
randomness in a dumb machine. Deepblue is not intelligent.
The importance of this thought, for me, is the only thing that
moves me to keep thinking about this HLUT business, and most of
what I'll say next can be subsumed under this initial phrase.

That 0.01% (if it can be thought to be so small) may be
responsible for a lot of changes, if those changes compound
cumulatively through time. That's the big problem!

What I'll try to show is that this cumulative process is
enough not only to make one behavior diverge substantially, but
to provide an enormous number of other possible behaviors. The
nondeterministic HLUT, by definition, has stored all behaviors,
so it will have all possible diversions. What I'm saying is that
the size of this HLUT is much, much greater than the size of the
deterministic one, and understanding this is understanding where
I want to put randomness.

>
>> My primary reason for being reluctant to accept HLUTs
>> is that I see a clear place for randomness inside our
>> brain.
>
>Then you still believe the 10^(10^10) HLUT/FSA is fair
>enough but 10^(2*10^10) HLUT/FSA is too big to swallow?
>

I don't agree with your estimate of the size of
nondeterministic HLUTs. And, sincerely, any kind of HLUT
is not easily swallowed by me ;-)

Ok, let's reduce our nomenclature first. D-HLUTs are deterministic
HLUTs. ND-HLUTs are nondeterministic HLUTs. The points on which
we seem to agree are these:

a) D-HLUTs have only one output for each input (including history).
They are big, but they have only one output for each input.

b) D-HLUTs represent one (and only one) possible "line of
behaviors" of Einstein, for instance. That's the issue I raised
in the previous post and, by now, I assume to be reasonably
understood by all.

c) ND-HLUTs have, for each input, lots of outputs, and the
selection from those possible outputs is done through
a linearly distributed, random process (flat curve).

d) The probabilistic aspects of outputs (one output being more
probable than the "neighbors") may be done through repetition of
that entry. In this way, a linearly random selection process will
pick that output *more often* than the less probable ones. This
is enough to account for the most likely outputs, given one
input and history, and this is enough to put Einstein-HLUT into a
very probable path of discovering relativity. I devised this
method to avoid using *any other kind of algorithm*, keeping our
attention focused on the *retrieval of entries* from a table,
and nothing more. I find this reasonable (see the small numeric
sketch just after these premises).

e) The ND-HLUT is able to represent *all* possible lines of
behavior of Einstein, including those unlikely routes in which
he does *not* discover relativity (also others in which he
discovers quantum electrodynamics, for example).

f) It is clear that ND-HLUTs are greater than D-HLUTs, although we're
in doubt about how much greater.

Please acknowledge agreement with these premises, as all my
arguments below derive from these things.
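
First, the small numeric sketch promised in d) (Python; the outputs
and the repetition counts are invented): picking uniformly from a
list with repeated entries reproduces any desired discrete
distribution.

    import random
    from collections import Counter

    # An output repeated k times is k times as likely to be picked
    # by a flat (uniform) selection process.
    register = ["o1"] + ["o2"] * 4 + ["o3"] * 2 + ["o4"]  # 8 entries

    picks = Counter(random.choice(register) for _ in range(80000))
    for output, count in sorted(picks.items()):
        print(output, round(count / 80000.0, 3))
    # prints roughly: o1 0.125, o2 0.5, o3 0.25, o4 0.125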

Now, what I'll try to argue is this:

g) The size of ND-HLUTs is *much, much* greater than the size
of D-HLUTs (this is vague enough that we may have problems finding
out what that much, much greater is; suffice it to say that it is
not on the order of a 40-year to 80-year life span, as you
proposed; it is much greater).

This is the issue I'll focus below.

Given one sequence of inputs, I'd like to see, first, the
process used by one ND-HLUT and then the process used by one brain
to find out the corresponding behaviors (outputs).

The ND-HLUT will take one input, join it with the history
of previous experiences and use it to address one output
register. This "register" is not a single entry, but contains
a certain (finite) number of "possible outputs", enough to
account for all possible outputs of Einstein, with repeated
outputs, as I said earlier, having greater probability
of selection than the outputs that are more unlikely, so
as to reflect Einstein's "behavioral tendencies".

A random selection method will pick the desired output. As
I said, I concocted this method to keep the process in the
HLUT as a *table retrieval*, without any other sort of algorithm
(I accept here Daryl's suggestion of looking up a table
where all random numbers of a determinate accuracy are listed
sequentially, a finite table). Then, everything is a table,
as this is what HLUT proposers seem to assume.

Now for Einstein's brain. The brain will use those inputs to
feed its "intelligence mechanism". Here, I want to split what
can happen in two possibilities:

a) The input will go through its intelligence and will produce
one output that may be even the one produced by one D-HLUT and
that, obviously, is perfectly explainable also by the ND-HLUT.
Nothing extraordinary happens in this possibility.

b) The input will go through its intelligence but, due to a
random glitch in one of Einstein's inner neurons, will produce
a sequence of thoughts that will end up producing *another*
output, different than that of a). The ND-HLUT will, by
definition, possess that output, WITH THE VERY SAME PROBABILITY
as Einstein's producing that output. ND-HLUTs will, then, keep
following in the same probabilistic line, as it does this
*by the force of definition*.

So far, nothing wrong. This is largely what I've learned
from you guys, since my introduction to HLUTs a few
weeks ago.

The question arises when we think of what happened to
Einstein's brain as being subject to condition a) or to
condition b). If Einstein's brain goes through a), then the
future probabilities of its outputs will show nothing different.

But under condition b), things can be a "little bit different".
Einstein's brain, because of that glitch, will be a little
bit *different* (a creative thought, perhaps), in such a way as
to *ALTER ALL FUTURE PROBABILISTIC DISTRIBUTIONS OF ITS OUTPUTS*.
It is as if Einstein were transformed into a *new* kind of man,
because of that glitch (the power of one idea can transform
a man!).

One future output that, on the original ND-HLUT, had a
very probable chance of being selected after condition a), may not
have now, after condition b), because Einstein's brain would
think **differently**.

This can happen because that glitch may be responsible for
the sequence of thoughts that drove Einstein to discover
relativity, for instance.

Yes, yes, yes, yes, I know that, *by definition*, the ND-HLUT
will have all Einstein's outputs and all POSSIBLE DISTRIBUTIONS
OF PROBABILITIES, including those resulting from the ALTERATION
OF DISTRIBUTION! (I learned that one can't fight against such
a contrived "definition" :-).

But the probability distribution, which in our case was being
handled by repetition of possible outputs, will be different
FOR EACH FUTURE OUTPUT "REGISTER". That means that, if what
happened was situation a), we will have to have one future
sequence of probabilistic distributions of *ALL* future outputs,
determined in one way, but if what happened was event b), we will
have to have a completely different sequence of future
distributions.

But again, ND-HLUTs, *by definition*, have it all. So, ND-HLUTs
will have *both* possible sequences of tables, which implies
the *duplication* of their size (one complete future route for
option a, another complete future route for option b). Selection
from one of those paths may be done by including the output in the
history of inputs, increasing the size of the historical input.
Please note that THIS IS NOT AN ARGUMENT INVALIDATING ND-HLUTS!
(there's nothing we can think of to invalidate tautological HLUTs :-)

So what is the big deal? Doubling the size is not the much,
much larger size I was trying to demonstrate. Or is it?

The question is that even at 0.01% variation, *each* of
those tables will *further split* into two a moment later (maybe
a second later), and then each of the four resulting tables
will again split and so on. Recall that each split doubles *all*
future entries.

How many times will that thing split in a finite lifetime?
Obviously, a finite number. But that is a *geometric increase*,
and given the conservative rate of 1 split each minute,
it is easy to reach numbers that are very, very large. So that's why
I say that ND-HLUTs are much, much, much larger than the D-HLUT.
Granted, this conclusion is the nearest thing I can think
of to an argument about the number of angels that can dance
on the head of a pin.
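
Just to show how fast this compounds, a small computation (Python;
the one-split-per-minute rate is the conservative figure above, and
the 80-year lifetime is my own round number):

    import math

    # One split per minute over an 80-year lifetime, each split
    # doubling all future entries of the table.
    splits = 80 * 365 * 24 * 60          # about 4.2 * 10^7 splits
    factor = splits * math.log10(2)      # log10 of the 2^splits factor
    print(f"multiply the table by about 10^{factor:.3g}")
    # -> about 10^(1.3 * 10^7), from this effect alone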

I only spent time writing this because I firmly believe in
the role of randomness in intelligent mechanisms.

Regards,
Sergio Navega.

From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: Why HLUTs are not intelligent
Date: 24 Mar 1999 00:00:00 GMT
Message-ID: <7dbea5$9tg@edrn.newsguy.com>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net> <7d6dd0$5d7@edrn.newsguy.com> <36f79fbe@news3.us.ibm.net> <7d9c3q$spl$1@nnrp1.dejanews.com> <36f9314b@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy

Sergio says...

>g) The size of ND-HLUTs is *much, much* greater than the size
>of D-HLUTs (this is vague enough that we may have problems finding
>out what that much, much greater is; suffice it to say that it is
>not on the order of a 40-year to 80-year life span, as you
>proposed; it is much greater).

Oh, goody! Finally, we get away from philosophy, and get to
something that only requires math.

A deterministic HLUT can be represented by a set of pairs
<ih,o> where ih is an input history, and o is an output.
Since there is one entry per input history, the number of
entries will be Card(IH), the cardinality of the number of
input histories. Each entry will have a size at least equal
to log(Card(O)), where O is the set of possible outputs. So
the total size will be about Card(IH) * log(Card(O)). Card(IH)
is computed from Card(I), the number of possible inputs, and
T, then number of discrete time intervals as follows:
Card(IO) = Card(I)^T. So the total number of entries is:
Card(I)^T * log(Card(0)).

A nondeterministic HLUT can be represented as a set of triples,
<ih,oh,p>, where ih is an input history, oh is an output history,
and p is the probability that that input history ih will result
in output history oh. There will be one entry for each input
history and for each output history, so the total number of
entries is given by: Card(IH) * Card(OH), where IH is the set
of input histories, and OH is the set of output histories.
The size of each entry will depend on the precision of the
probability p, so let L be the length of the probability
(how many bits it takes to represent it). Then, the total
size of the nondeterministic HLUT will be:

Card(IH) * Card(OH) * L

As before, Card(IH) = Card(I)^T, and similarly, Card(OH) = Card(O)^T.
So, the total size is Card(I)^T * Card(O)^T * L.

If we assume that Card(O) = Card(I) (at least to a rough approximation),
we have

    Deterministic case: size = Card(I)^T * log(Card(O))
    Nondeterministic case: size = Card(I)^(2T) * L

So, I agree with houlepn, that roughly, the nondeterministic
case for time T/2 is comparable to the deterministic case for
time T.
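
As a sanity check, the two formulas can be evaluated for toy values
(Python; the values of Card(I), Card(O), T and L below are arbitrary
small numbers, not estimates for a person):

    import math

    card_i = 4   # toy number of possible inputs per time step
    card_o = 4   # toy number of possible outputs per time step
    T = 10       # toy number of discrete time steps
    L = 16       # bits used to store one probability p

    det = card_i ** T * math.log2(card_o)    # Card(I)^T * log(Card(O))
    nondet = card_i ** T * card_o ** T * L   # Card(I)^T * Card(O)^T * L
    det_2T = card_i ** (2 * T) * math.log2(card_o)

    print(f"deterministic, T steps:    {det:.3g} bits")
    print(f"nondeterministic, T steps: {nondet:.3g} bits")
    print(f"deterministic, 2T steps:   {det_2T:.3g} bits")
    # the last two agree to within the small factor L / log(Card(O))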

Daryl McCullough
CoGenTex, Inc.
Ithaca, NY

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Why HLUTs are not intelligent
Date: 24 Mar 1999 00:00:00 GMT
Message-ID: <36f966d5@news3.us.ibm.net>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net> <7d6dd0$5d7@edrn.newsguy.com> <36f79fbe@news3.us.ibm.net> <7d9c3q$spl$1@nnrp1.dejanews.com> <36f9314b@news3.us.ibm.net> <7dbea5$9tg@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 24 Mar 1999 22:27:33 GMT, 166.72.21.170
Organization: SilWis
Newsgroups: comp.ai.philosophy

Daryl McCullough wrote in message <7dbea5$9tg@edrn.newsguy.com>...
>Sergio says...
>
>>g) The size of ND-HLUTs is *much, much* greater than the size
>>of D-HLUTs (this is vague enough that we may have problems finding
>>out what that much, much greater is; suffice it to say that it is
>>not on the order of a 40-year to 80-year life span, as you
>>proposed; it is much greater).
>
>Oh, goody! Finally, we get away from philosophy, and get to
>something that only requires math.
>

I don't like philosophy either, but I also happen to dislike
math ;-)

>A deterministic HLUT can be represented by a set of pairs
><ih,o> where ih is an input history, and o is an output.
>Since there is one entry per input history, the number of
>entries will be Card(IH), the cardinality of the set of
>input histories. Each entry will have a size at least equal
>to log(Card(O)), where O is the set of possible outputs. So
>the total size will be about Card(IH) * log(Card(O)). Card(IH)
>is computed from Card(I), the number of possible inputs, and
>T, the number of discrete time intervals, as follows:
>Card(IH) = Card(I)^T. So the total size is:
>Card(I)^T * log(Card(O)).
>

Well put, I agree entirely.

>A nondeterministic HLUT can be represented as a set of triples,
><ih,oh,p>, where ih is an input history, oh is an output history,
>and p is the probability that that input history ih will result
>in output history oh.

Wait a minute. I can't recognize that as the nondeterministic HLUT
(ND-HLUT) I was discussing with Pierre. What you propose seems to be
unusable. Say I take one specific entry ih and say that the
probability p for a determinate output (what we feed to the motor
system is only *one* output, not a history) is 0.4. Where is the
entry ih with probability of output 0.6? Without repeated inputs ih,
identical among themselves but with different probabilities for the
output o, how am I supposed to make it work? And what happens
when I have two identical ih with equal probability but with
different outputs (as this is what could happen because of
noise inside a brain)?

>There will be one entry for each input
>history and for each output history, so the total number of
>entries is given by: Card(IH) * Card(OH), where IH is the set
>of input histories, and OH is the set of output histories.

This does not reflect what I proposed as a ND-HLUT.

>The size of each entry will depend on the precision of the
>probability p, so let L be the length of the probability
>(how many bits it takes to represent it). Then, the total
>size of the nondeterministic HLUT will be:
>
>Card(IH) * Card(OH) * L
>
>As before, Card(IH) = Card(I)^T, and similarly, Card(OH) = Card(O)^T.
>So, the total size is Card(I)^T * Card(O)^T * L.
>

Math is ok and well put, but that's not my ND-HLUT.

>If we assume that Card(O) = Card(I) (at least to a rough approximation),
>we have
>
>    Deterministic case: size = Card(I)^T * log(Card(O))
>    Nondeterministic case: size = Card(I)^(2T) * L
>
>So, I agree with houlepn, that roughly, the nondeterministic
>case for time T/2 is comparable to the deterministic case for
>time T.
>

I have no problem agreeing with the math, I think this is a clear
way to expose HLUTs, but what you found was the size of a different
table, one that will not address the problems I raised in Pierre's
post. What I had proposed is a table in which there are several
potential outputs with equal probability of being selected and
with repeated entries for those more likely.

That will make the ND-HLUT, only because of the repetition of entries,
much larger than a D-HLUT. I know that my post to Pierre may sound
a bit tangled and too philosophical, but I'd like to know your
opinion about the arguments I raised there.

Regards,
Sergio Navega.

From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: Why HLUTs are not intelligent
Date: 24 Mar 1999 00:00:00 GMT
Message-ID: <7dbsdu$8g0@edrn.newsguy.com>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net> <7d6dd0$5d7@edrn.newsguy.com> <36f79fbe@news3.us.ibm.net> <7d9c3q$spl$1@nnrp1.dejanews.com> <36f9314b@news3.us.ibm.net> <7dbea5$9tg@edrn.newsguy.com> <36f966d5@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy

Sergio says...
>
>Daryl McCullough wrote
>>A nondeterministic HLUT can be represented as a set of triples,
>><ih,oh,p>, where ih is an input history, oh is an output history,
>>and p is the probability that that input history ih will result
>>in output history oh.
>
>Wait a minute. I can't recognize that as the nondeterministic HLUT
>(ND-HLUT) I was discussing with Pierre. What you propose seems to be
>unusable. Say I take one specific entry ih and say that the
>probability p for a determinate output (what we feed to the motor
>system is only *one* output, not a history) is 0.4. Where is the
>entry ih with probability of output 0.6?

I can explain it. First a bit of notation:
If oh is the sequence <o_1,o_2,...,o_n>, and o is
another output, then oh + o is the sequence
<o_1,o_2,...,o_n,o>. That is, oh + o is the result
of appending o to the end of oh.

Okay, the idea is this: The nondeterministic HLUT has,
for each possible input history ih and output history oh,
exactly one entry of the form <ih,oh,p>, where p is
a probability. The "state" of the HLUT is simply a
record of all inputs and outputs made so far.

Now, for each input history ih and output history oh
and output o, define P(ih,oh,o) to be that
number p such that <ih,oh + o,p> is an entry in the
table.

So our nondeterministic HLUT works like this:

     1. Input i.
     2. Update ih to get ih + i.
     3. Pick an output o with probability P(ih,oh,o)/C,
     where C = sum over all o of P(ih,oh,o).
     4. Output o.
     5. Update oh to get oh + o.
     6. Go to 1.

The reason for dividing by C is to insure that the
probabilities add up to 1.
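
In code, the loop above looks like this (Python; the tiny table P
and the input stream are invented, and P is stored sparsely as a
dict instead of a full table, just to keep the sketch short):

    import random

    # P[(ih, oh, o)]: unnormalized probability that history (ih, oh)
    # is followed by output o.  Tuples stand in for histories.
    P = {
        (("i0",), (), "a"): 0.4, (("i0",), (), "b"): 0.6,
        (("i0", "i1"), ("a",), "a"): 1.0, (("i0", "i1"), ("a",), "b"): 3.0,
        (("i0", "i1"), ("b",), "a"): 2.0, (("i0", "i1"), ("b",), "b"): 2.0,
    }
    OUTPUTS = ["a", "b"]

    ih, oh = (), ()
    for i in ["i0", "i1"]:                    # 1. input i
        ih = ih + (i,)                        # 2. update ih
        weights = [P[(ih, oh, o)] for o in OUTPUTS]
        o = random.choices(OUTPUTS, weights=weights)[0]  # 3. pick o; the
        print("output:", o)                   # 4.  weights are divided
        oh = oh + (o,)                        # 5.  by C automatically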

>>There will be one entry for each input
>>history and for each output history, so the total number of
>>entries is given by: Card(IH) * Card(OH), where IH is the set
>>of input histories, and OH is the set of output histories.
>
>This does not reflect what I proposed as a ND-HLUT.

Well, it is *a* way to do a ND-HLUT.

>I know that my post to Pierre may sound
>a bit tangled and too philosophical, but I'd like to know your
>opinion about the arguments I raised there.

I'll have to reread it. I latched onto the part that
seemed amenable to a mathematical treatment, and ignored
the philosophical part.

Daryl McCullough
CoGenTex, Inc.
Ithaca, NY

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Why HLUTs are not intelligent
Date: 25 Mar 1999 00:00:00 GMT
Message-ID: <36fa2d65@news3.us.ibm.net>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net> <7d6dd0$5d7@edrn.newsguy.com> <36f79fbe@news3.us.ibm.net> <7d9c3q$spl$1@nnrp1.dejanews.com> <36f9314b@news3.us.ibm.net> <7dbea5$9tg@edrn.newsguy.com> <36f966d5@news3.us.ibm.net> <7dbsdu$8g0@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 25 Mar 1999 12:34:45 GMT, 200.229.240.218
Organization: SilWis
Newsgroups: comp.ai.philosophy

Daryl McCullough wrote in message <7dbsdu$8g0@edrn.newsguy.com>...
>Sergio says...
>>
>>Daryl McCullough wrote
>>>A nondeterministic HLUT can be represented as a set of triples,
>>><ih,oh,p>, where ih is an input history, oh is an output history,
>>>and p is the probability that that input history ih will result
>>>in output history oh.
>>
>>Wait a minute. I can't recognize that as the nondeterministic HLUT
>>(ND-HLUT) I was discussing with Pierre. What you propose seems to be
>>unusable. Say I take one specific entry ih and say that the
>>probability p for a determinate output (what we feed to the motor
>>system is only *one* output, not a history) is 0.4. Where is the
>>entry ih with probability of output 0.6?
>
>I can explain it. First a bit of notation:
>If oh is the sequence <o_1,o_2,...,o_n>, and o is
>another output, then oh + o is the sequence
><o_1,o_2,...,o_n,o>. That is, oh + o is the result
>of appending o to the end of oh.
>

Ok, agreed. So I guess you're using oh together with
ih as a huge address to retrieve one entry from the table.

>Okay, the idea is this: The nondeterministic HLUT has,
>for each possible input history ih and output history oh,
>exactly one entry of the form <ih,oh,p>, where p is
>a probability. The "state" of the HLUT is simply a
>record of all inputs and outputs made so far.
>

If I understood you correctly, you're proposing the following:

[<input to table>, <output to motor action>]

[    <ih,oh>     ,         <o, p>            ]

That's the problem. I can't see how this is supposed to
work under the conditions that I put forward in Pierre's message.
For each input ih and oh, we have to retrieve *several*
outputs o, not just one, and each particular output must have
its own probability p. That's what I'm calling a nondeterministic
table: one in which the output is not fixed, but variable
within some boundary.

What I'm proposing, then, is this:

   input to table      output from table
[    <ih, oh>     ,          <ov>           ]

Where ov is a vector of possible outputs in that specific
circumstance (pre-calculated, of course, just like
everything in HLUTs).

<ov> = {o1, o2, o3, o4, o5, o6, o7....on}

One will "randomly" pick one of these entries in <ov> to put
into the motor control. As I said in the message to Pierre, the
probability is accounted for by repetition of the most probable
outputs, just like this:

<ov> = {o1, o2, o2, o2, o2, o3, o4 ,o4, o4, o5, o5....}

In this way, I can use a table of pre-computed "random" numbers
(the way you suggested some posts ago, a very good idea) to pick
one of the entries, and the probability of picking o2, in the
above case, will be greater than for all the other entries. This is a
clear way to keep the whole process *just as table retrievals*,
without any kind of "summing" algorithms or anything special.
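
In code, the whole process could look like this (Python; the
histories, the outputs and the precomputed "random" index table
are all invented for the sketch):

    # Each (ih, oh) address holds a vector <ov> of possible outputs,
    # with the more probable outputs repeated.
    ND_HLUT = {
        (("i0",), ()): ["o1", "o2", "o2", "o2", "o2", "o3", "o4", "o4"],
    }

    # Daryl's suggestion: a finite, precomputed table of "random"
    # indices (here 0..7), consumed sequentially, so that nothing
    # beyond table retrieval is ever used.
    RANDOM_TABLE = [5, 0, 3, 7, 2, 6, 1, 4]
    position = 0

    def retrieve(ih, oh):
        global position
        ov = ND_HLUT[(ih, oh)]          # first table lookup
        idx = RANDOM_TABLE[position]    # second table lookup
        position = position + 1
        return ov[idx]                  # over many retrievals, "o2"
                                        # comes out half of the time

    print(retrieve(("i0",), ()))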

This is what I consider to be a fair comparison between the
deterministic HLUTs and the nondeterministic ones (as D-HLUTs
are purely table retrievals too).

>Now, for each input history ih and output history oh
>and output o, define P(ih,oh,o) to be that
>number p such that <ih,oh + o,p> is an entry in the
>table.
>

>So our nondeterministic HLUT works like this:
>
>     1. Input i.
>     2. Update ih to get ih + i.
>     3. Pick an output o with probability P(ih,oh,o)/C,
>     where C = sum over all o of P(ih,oh,o).
>     4. Output o.
>     5. Update oh to get oh + o.
>     6. Go to 1.
>
>The reason for dividing by C is to insure that the
>probabilities add up to 1.
>

I understand this, but this method will not address the
problem I raised in Pierre's message.

>>>There will be one entry for each input
>>>history and for each output history, so the total number of
>>>entries is given by: Card(IH) * Card(OH), where IH is the set
>>>of input histories, and OH is the set of output histories.
>>
>>This does not reflect what I proposed as a ND-HLUT.
>
>Well, it is *a* way to do a ND-HLUT.
>
>>I know that my post to Pierre may sound
>>a bit tangled and too philosophical, but I'd like to know your
>>opinion about the arguments I raised there.
>
>I'll have to reread it. I latched onto the part that
>seemed amenable to a mathematical treatment, and ignored
>the philosophical part.
>

Please, Daryl, I'd like to know your opinion about my arguments
there. After all, they are not *so* philosophical in nature (I
can assure you, I can't produce very philosophical arguments;
I'm more of a realist observer, which is one of my difficulties
here in c.a.p ;-).

Remember, though, that I'm not discussing the possibility of
the HLUT. I have accepted that. What is at stake now is just
the size of the beasts.

However, my justification for thinking about this is just to
exercise my recently acquired knowledge about this matter
(something for which I must thank you and Pierre).

My intention, as you should be aware, is to extract something
useful from the HLUT thing (a very difficult task, indeed).

Regards,
Sergio Navega.

From: Pierre-Normand Houle <houlepn@ibm.net>
Subject: Re: Why HLUTs are not intelligent
Date: 26 Mar 1999 00:00:00 GMT
Message-ID: <7dek0i$fee$1@nnrp1.dejanews.com>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net> <7d6dd0$5d7@edrn.newsguy.com> <36f79fbe@news3.us.ibm.net> <7d9c3q$spl$1@nnrp1.dejanews.com> <36f9314b@news3.us.ibm.net>
X-Http-Proxy: 1.0 x14.dejanews.com:80 (Squid/1.1.22) for client 207.96.163.34
Organization: Deja News - The Leader in Internet Discussion
X-Article-Creation-Date: Fri Mar 26 00:22:51 1999 GMT
Newsgroups: comp.ai.philosophy
X-Http-User-Agent: Mozilla/4.51 [en] (Win98; I)

"Sergio Navega" <snavega@ibm.net> wrote:

> houlepn@ibm.net

> >> Some posts ago, Pierre-Normand Houle proposed such
> >> a HLUT and gave an idea of an algorithm to select
> >> the entries randomly. At the time, I swallowed that
> >> argument, but I think there's a point which was
> >> underestimated: the size of that HLUT, although
> >> still finite, would have to be *much, much* larger
> >> than the already large HLUT that was proposed.
> >
> > To think of the non deterministic HLUT as much larger
> > than the deterministic HLUT begs the question.  The
> > size of the deterministic HLUT is of the order of
> > (10^10)^(10^10) = 10^(10^11).  Assuming one can
> > discriminate between just as many different motor outputs
> > as sensory inputs, then the size of the non deterministic
> > HLUT is just: (10^10)^(2*10^10) = 10^(2*10^11).  So the
> > non deterministic HLUT for Sergio living 80 years is just
> > as big as the deterministic HLUT for Sergio living 40 years.
> > I tried to convey this point in my DeepBlue HLUT:  Once
> > the functional organization of Sergio's brain is understood,
> > the part of the code responsible for noise addition will
> > probably be less than 0.01% of the code for emulating the
> > deterministic part.
>
> Randomness inside one intelligent brain is very different from
> randomness in a dumb machine.

I am not quite sure about that.

> Deepblue is not intelligent.

Agreed but I am not sure this is only due to lack of
intrinsic randomness and this is what this discussion
is about.

> That 0.01% (if it can be thought to be so small) may be
> responsible for a lot of changes, if those changes compound
> cumulatively through time. That's the big problem!

Sure, just as tiny changes in sensory inputs compound
cumulatively through time.  I am not saying that stochastic
processes play no role in human brains, only that they
contribute marginally to making the HLUT bigger.

> What I'll try to show is that this cumulative process is
> enough not only to make one behavior diverge substantially, but
> to provide an enormous number of other possible behaviors.

I have no argument over that.

> The
> nondeterministic HLUT, by definition, has stored all behaviors,
> so it will have all possible diversions. What I'm saying is that
> the size of this HLUT is much, much greater than the size of the
> deterministic one, and understanding this is understanding where
> I want to put randomness.

And I hope understanding how it is not is understanding where
not to put randomness ;-)

> >> My primary reason for being reluctant to accept HLUTs
> >> is that I see a clear place for randomness inside our
> >> brain.
> >
> > Then you still believe the 10^(10^10) HLUT/FSA is fair
> > enough but 10^(2*10^10) HLUT/FSA is too big to swallow?
> >
>
> I don't agree with your estimate of the size of
> nondeterministic HLUTs. And, sincerely, any kind of HLUT
> is not easily swallowed by me ;-)

The HLUT was intended as an argument not to swallow FSAs and
I think this argument fails.  I will refine my estimate below.

> Ok, let's reduce our nomenclature first. D-HLUTs are deterministic
> HLUTs. ND-HLUTs are nondeterministic HLUTs. The points on which
> we seem to agree are these:
>
> a) D-HLUTs have only one output for each input (including history).
> They are big, but they have only one output for each input.

Right.

> b) D-HLUTs represent one (and only one) possible "line of
> behaviors" of Einstein, for instance. That's the issue I raised
> in the previous post and, by now, I assume to be reasonably
> understood by all.

One possible "line"?  I'd rather say one tree for all possible
lines of behavior resulting from all possible input histories.

> c) ND-HLUTs have, for each input, lots of outputs, and the
> selection from those possible outputs is done through
> a linearly distributed, random process (flat curve).
>
> d) The probabilistic aspects of outputs (one output being more
> probable than the "neighbors") may be done through repetition of
> that entry. In this way, a linearly random selection process will
> pick that output *more often* than the less probable ones. This
> is enough to account for the most likely outputs, given one
> input and history, and this is enough to put Einstein-HLUT into a
> very probable path of discovering relativity. I devised this
> method to avoid using *any other kind of algorithm*, keeping our
> attention focused on the *retrieval of entries* from a table,
> and nothing more. I find this reasonable.

I have no objection.

> e) The ND-HLUT is able to represent *all* possible lines of
> behavior of Einstein, including those unlikely routes in which
> he does *not* discover relativity (also others in which he
> discovers quantum electrodynamics, for example).

Well, there were also many routes in which he did not discover
relativity in the D-HLUT too.  The only difference in the ND-HLUT
is that the tree branches also at 'output nodes' and not only
at 'input nodes'.  Thus you have twice as many branching levels.
This has the same impact on the HLUT as doubling Einstein's
longevity, assuming there are about as many motor neurons as
sensory neurons in the brain.

There is also a small overhead due to the branch
redundancy you put there to account for varying
probabilities.  Let's assume there are 10^8 motor neurons and
just as many sensory (binary) neurons.  The tree splits into
2^(10^8) = 10^30,000,000 branches at each node.  Let's say
you want a time resolution of 1/10,000 second and a 100
year longevity.  This leads to 2*100*365*24*60*60*10,000
< 10^14 periods (branching levels).  The HLUT thus has
(10^30,000,000)^(10^14) = 10^(3 * 10^21) < 10^(10^22)
entries, which is more than my earlier estimate because
here I assumed finer motor control and sensory
discrimination.

Now this is still assuming equiprobability, which is most
unreasonable if meaningful outputs are to emerge from the
sea of senseless outputs.  So, let's duplicate some 'motor'
branches up to an average of 10^30,000,000 times so that
some of them can 'emerge' from this sea of nonsense.  The
resulting ND-HLUT now has

(10^30,000,000 * 10^30,000,000)^(10^14) =
(10^60,000,000)^(10^14) = 10^(6 * 10^21) < 10^(10^22)
entries.

Let's compare this to the

(10^30,000,000)^(10^14/2) = 10^(1.5 * 10^21) entries of the
deterministic HLUT.

I would claim that if one of these HLUTs is big enough, then
so are the other two.
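
For what it is worth, these exponents are easy to check in log
space (Python; the neuron counts, time resolution and longevity
are the assumptions stated above, kept exact instead of rounded):

    import math

    neurons = 10 ** 8                    # binary motor (= sensory) neurons
    log10_branches = neurons * math.log10(2)        # ~3.0 * 10^7 per level
    levels_det = 100 * 365 * 24 * 60 * 60 * 10000   # ~3.2 * 10^13
    levels_nondet = 2 * levels_det                  # outputs branch too

    print(f"log10(entries), deterministic:    {levels_det * log10_branches:.3g}")
    print(f"log10(entries), nondeterministic: {levels_nondet * log10_branches:.3g}")
    # both exponents land near 10^21, i.e. tables of about
    # 10^(10^21) entries either way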

> f) It is clear that ND-HLUTs are greater than D-HLUTs, although we're
> in doubt about how much greater.
>
> Please acknowledge agreement with these premises, as all my
> arguments below derive from these things.
>
> Now, what I'll try to argue is this:
>
> g) The size of ND-HLUTs is *much, much* greater than the size
> of D-HLUTs (this is vague enough that we may have problems finding
> out what that much, much greater is; suffice it to say that it is
> not on the order of a 40-year to 80-year life span, as you
> proposed; it is much greater).

Right.  It is more like doubling the size of the motor cortex in
your scheme.  In my scheme there are no duplicate branches and the
probability for each output is explicitly given in the table.
There are just twice as many branchings.  In any case, the number
of entries is still of the order of 10^(10^21).

> Given one sequence of inputs, I'd like to see, first, the
> process used by one ND-HLUT and then the process used by one brain
> to find out the corresponding behaviors (outputs).
>
> The ND-HLUT will take one input, join it with the history
> of previous experiences and use it to address one output
> register. This "register" is not a single entry, but contains
> a certain (finite) number of "possible outputs", enough to
> account for all possible outputs of Einstein, with repeated
> outputs, as I said earlier, having greater probability
> of selection than the outputs that are more unlikely, so
> as to reflect Einstein's "behavioral tendencies".
>
> A random selection method will pick the desired output. As
> I said, I concocted this method to keep the process in the
> HLUT as a *table retrieval*, without any other sort of algorithm
> (I accept here Daryl's suggestion of looking up a table
> where all random numbers of a determinate accuracy are listed
> sequentially, a finite table). Then, everything is a table,
> as this is what HLUT proposers seem to assume.
>
> Now for Einstein's brain. The brain will use those inputs to
> feed its "intelligence mechanism". Here, I want to split what
> can happen in two possibilities:
>
> a) The input will go through its intelligence and will produce
> one output that may be even the one produced by one D-HLUT and
> that, obviously, is perfectly explainable also by the ND-HLUT.
> Nothing extraordinary happens in this possibility.
>
> b) The input will go through its intelligence but, due to a
> random glitch in one of Einstein's inner neurons, will produce
> a sequence of thoughts that will end up producing *another*
> output, different than that of a). The ND-HLUT will, by
> definition, possess that output, WITH THE VERY SAME PROBABILITY
> as Einstein's producing that output. ND-HLUTs will, then, keep
> following in the same probabilistic line, as it does this
> *by the force of definition*.
>
> So far, nothing wrong. This is largely what I've learned
> from you guys, since my introduction to HLUTs a few
> weeks ago.
>
> The question arises when we think of what happened to
> Einstein's brain as being subject to condition a) or to
> condition b). If Einstein's brain goes through a), then the
> future probabilities of its outputs will show nothing different.
>
> But under condition b), things can be a "little bit different".
> Einstein's brain, because of that glitch, will be a little
> bit *different* (a creative thought, perhaps), in such a way as
> to *ALTER ALL FUTURE PROBABILISTIC DISTRIBUTIONS OF ITS OUTPUTS*.

Right.  As Einstein goes through life, the conditional
probability of his future actions changes.  The same happens
as you progress through the HLUT's tree.  As you climb it,
the branches you do not take are trimmed and the conditional
probabilities for the accessible nodes above change.

> It is as if Einstein were transformed into a *new* kind of man,
> because of that glitch (the power of one idea can transform
> a man!).
>
> One future output that, on the original ND-HLUT, had a
> very probable chance of being selected after condition a), may not
> have now, after condition b), because Einstein's brain would
> think **differently**.

Sure.

> This can happen because that glitch may be responsible for
> the sequence of thoughts that drove Einstein to discover
> relativity, for instance.

Fine.

> Yes, yes, yes, yes, I know that, *by definition*, the ND-HLUT
> will have all Einstein's outputs and all POSSIBLE DISTRIBUTIONS
> OF PROBABILITIES, including those resulting from the ALTERATION
> OF DISTRIBUTION! (I learned that one can't fight against such
> a contrived "definition" :-).

There is no alteration of the HLUT.  As you go through the HLUT,
the past history is built and the conditional probability
of visiting nodes ahead changes.

> But the probability distribution, which in our case was being
> handled by repetition of possible outputs, will be different
> FOR EACH FUTURE OUTPUT "REGISTER". That means that, if what
> happened was situation a), we will have to have one future
> sequence of probabilistic distributions of *ALL* future outputs,
> determined in one way, but if what happened was event b), we will
> have to have a completely different sequence of future
> distributions.

And this is done automatically, because the accessible entries of
the HLUT have to be consistent with the past history; as this
history is being built, all inconsistent entries are trimmed from
the 'accessible subset' of the HLUT.

> But again, ND-HLUTs, *by definition*, have it all. So, ND-HLUTs
> will have *both* possible sequences of tables, which implies
> the *duplication* of their size (one complete future route for
> option a, another complete future route for option b). Selection
> from one of those paths may be done by including the output in the
> history of inputs, increasing the size of the historical input.
> Please note that THIS IS NOT AN ARGUMENT INVALIDATING ND-HLUTS!
> (there's nothing we can think of to invalidate tautological HLUTs :-)
>
> So what is the big deal? Doubling the size is not the much,
> much larger size I was trying to demonstrate. Or is it?

You are doubling the number of branching levels.  The same
is achieved by doubling longevity.

> The question is that even at 0.01% variation, *each* of
> those tables will *further split* into two a moment later (maybe
> a second later), and then each of the four resulting tables
> will again split and so on. Recall that each split doubles *all*
> future entries.
>
> How many times will that thing split in a finite lifetime?
> Obviously, a finite number. But that is a *geometric increase*,

Nothing new.  We already had this from the 'input nodes'.

> and given the conservative rate of 1 split each minute,
> it is easy to reach numbers that are very, very large. So that's why
> I say that ND-HLUTs are much, much, much larger than the D-HLUT.
> Granted, this conclusion is the nearest thing I can think
> of to an argument about the number of angels that can dance
> on the head of a pin.

Quite so ;-)

> I only spent time writing this because I firmly believe in
> the role of randomness in intelligent mechanisms.

OK, randomness from internal sources must be accounted for in
the design of intelligent machines but why is it necessary?

Pierre-Normand Houle

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own   

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Why HLUTs are not intelligent
Date: 26 Mar 1999 00:00:00 GMT
Message-ID: <36fbc977@news3.us.ibm.net>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net> <7d6dd0$5d7@edrn.newsguy.com> <36f79fbe@news3.us.ibm.net> <7d9c3q$spl$1@nnrp1.dejanews.com> <36f9314b@news3.us.ibm.net> <7dek0i$fee$1@nnrp1.dejanews.com>
X-Notice: should be reported to postmaster@ibm.net
X-MIMEOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 26 Mar 1999 17:52:55 GMT, 200.229.243.252
Organization: SilWis
Newsgroups: comp.ai.philosophy

[to keep the post short, I'm cutting the agreed stuff, which is large]

Pierre-Normand Houle wrote in message <7dek0i$fee$1@nnrp1.dejanews.com>...
>"Sergio Navega" <snavega@ibm.net> wrote:
>>
>> Randomness inside one intelligent brain is very different from
>> randomness in a dumb machine.
>
>I am not quite sure about that.
>

The main reason I hypothesize that is because of things like
categorization. Suppose you have a table-driven system, like
a computer running a table lookup program.

Suppose you add a random noise source to that system like, for
instance, the random alteration of one bit of the address lines.
What that system will present as output is a random entry that
probably bears no relation to the problem being solved by that
computer.
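
A toy illustration (Python; the table contents and the choice of
which address bit flips are arbitrary):

    import random

    # A computer answering queries through a plain lookup table.
    table = {addr: f"answer to query {addr}" for addr in range(16)}

    def lookup(addr, noisy=False):
        if noisy:
            addr ^= 1 << random.randrange(4)  # flip one random address bit
        return table[addr]

    print(lookup(5))               # the intended entry
    print(lookup(5, noisy=True))   # an unrelated entry: 4, 7, 1 or 13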

Now do the same thing inside an intelligent brain. What it will
probably do is to provoke one alteration that will be confined in
a certain part of one specific "conceptual space" of that brain.

Instead of provoking a generally "mad" response as in the former
case, this random glitch may provoke only a "slight deviation",
because it will be *circumscribed* to the current conceptual
space (any concept related to the one that received the randomization
is already resistant to slight fluctuations, because that's what
concepts are: a collection of vaguely similar things). This variation
will, then, not provoke something out of context, but will produce
some variation *within* that context.

The intelligent agent may, for instance, find a totally new way to
think about one specific concept, even one that he had long
ago settled.

This is the equivalent of occasionally rethinking things such as
gravitation. Who can say, today, that something *very* important
can't be discovered if we reject or reevaluate our most cherished
concepts about gravity? Maybe that's just what will happen when
some physicist discovers a way to unify quantum and gravitational
effects.

I say that most (if not all) creative acts in our lives depend
crucially on such a mechanism, and that most discoveries in
science, arts, technology, social sciences, relationships, etc.,
are driven by the exploration of unlikely things, things that
we would never think about if we were "deterministic".
The small influence of a random component seems to potentiate
the efforts we make during conceptualization and categorization.

Categorization, then, seems to act often as a *filter*, receiving
noise as input and producing concentrated, focused but slightly
random things as output. Often, valuable things.

>> Deepblue is not intelligent.
>
>Agreed but I am not sure this is only due to lack of
>intrinsic randomness and this is what this discussion
>is about.
>

I wouldn't say that randomness is the detail missing in Deepblue
to make it intelligent. What I say is that a random glitch in
Deepblue will probably make it go out of its mind, while a random
glitch in a brain will more often force it to reevaluate (and
learn something new in the process) something that it never
thought about before. The rest of the brain will discard the effect
of that randomness if its result is not good. But if it is good,
then the result is "Eureka!".

>> That 0.01% (if it can be thought to be so small) may be
>> responsible for a lot of changes, if those changes compound
>> cumulatively through time. That's the big problem!
>
>Sure, just as tiny changes in sensory inputs compound
>cumulatively through time.  I am not saying that stochastic
>processes have no roles in human brains, only that they
>contribute marginally in making the HLUT bigger.
>

Indeed, the D-HLUT is so large that it really does not
grow noticeably if we multiply it by 10 or 100 or 1000.

Your calculations (which I snipped) are reasonable, and I
guess that the way I was proposing to store the most
probable entries (as repeated entries, in a proportion
such as to give adequate accuracy for the probability) had
just that "minimal" effect of a few orders of magnitude
beyond the double of a D-HLUT. My intent, I hope it
is clear, was to preserve unaltered the spirit of table
retrieval, and not to add "calculation loops" for probability
to reduce the size of the table. We know that with quite
simple auxiliary techniques we can fit a ND-HLUT in 1/10
or less of the size of a D-HLUT.

That is largely why I said ND-HLUTs are "much larger" than
D-HLUTs, but granted, given the magnitude of these things, it
does not make much sense to compare sizes here. They are
all unimaginably large.

>
>> I only spent time writing this because I firmly believe in
>> the role of randomness in intelligent mechanisms.
>
>OK, randomness from internal sources must be accounted for in
>the design of intelligent machines but why is it necessary?
>

I can't tell you why, because I'm still trying to convince
myself. Suffice it to say that Darwinian evolution, without random
processes, would have a serious problem producing the variation
necessary for its development. I guess some evolutionists say
that without random mutations we probably wouldn't be here.

I see human thoughts (and creative thoughts, in particular) as
being dependent on such influences.

Most progress (if not all) in science is due to creative visions,
even those that use only mathematical derivations. The difference
between, say, Paul Dirac and other physicists of his time was, in my
opinion, not only one of brain power, but also of creative insight.
One insight in a well-prepared brain provokes miracles, even if
that brain is not so different from somebody else's.

Regards,
Sergio Navega.

From: Pierre-Normand Houle <houlepn@ibm.net>
Subject: Re: Why HLUTs are not intelligent
Date: 26 Mar 1999 00:00:00 GMT
Message-ID: <7dgthf$fk5$1@nnrp1.dejanews.com>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net> <7d6dd0$5d7@edrn.newsguy.com> <36f79fbe@news3.us.ibm.net> <7d9c3q$spl$1@nnrp1.dejanews.com> <36f9314b@news3.us.ibm.net> <7dek0i$fee$1@nnrp1.dejanews.com> <36fbc977@news3.us.ibm.net>
X-Http-Proxy: 1.0 x9.dejanews.com:80 (Squid/1.1.22) for client 207.96.163.34
Organization: Deja News - The Leader in Internet Discussion
X-Article-Creation-Date: Fri Mar 26 21:17:44 1999 GMT
Newsgroups: comp.ai.philosophy
X-Http-User-Agent: Mozilla/4.51 [en] (Win98; I)

"Sergio Navega" <snavega@ibm.net> wrote:

> >> Randomness inside one intelligent brain is very different from
> >> randomness in a dumb machine.
> >
> > I am not quite sure about that.
> >
>
> The main reason I hypothesize that is because of things like
> categorization. Suppose you have a table-driven system, like
> a computer running a table lookup program.
>
> Suppose you add a random noise source to that system like, for
> instance, the random alteration of one bit of the address lines.
> What that system will present as output is a random entry that
> probably bears no relation to the problem being solved by that
> computer.

OK, but adding random errors in the process of retrieving some line
in a HLUT does not prevent us from defining the HLUT of a FSA which
is itself fault tolerant.  HLUTs are not meant to be sensible
implementations of intelligent algorithms.

> Now do the same thing inside an intelligent brain.  What it will
> probably do is to provoke one alteration that will be confined in
> a certain part of one specific "conceptual space" of that brain.

This is because the brain has a fault tolerant architecture.  If
you approximate the brain with a FSA then the corresponding HLUT
will inherit the brain's 'software' fault tolerance to noisy inputs
but not its 'hardware' fault tolerance to 'mechanical' malfunctions.
The malfunctions of FSAs lead to different FSAs with different
HLUTs.  This in no way argues against the possibility of building
fault tolerant FSAs.  (And nobody wishes to 'build' HLUTs.)

[agreed stuff snipped]

> >> I only spent time writing this because I firmly believe in
> >> the role of randomness in intelligent mechanisms.
> >
> > OK, randomness from internal sources must be accounted for in
> > the design of intelligent machines but why is it necessary?
>
> I can't tell you why, because I'm still trying to convince
> myself. Suffice it to say that Darwinian evolution, without random
> processes, would have a serious problem producing the variation
> necessary for its development. I guess some evolutionists say
> that without random mutations we probably wouldn't be here.

OK, but in these cases pseudo-randomness seems just as good as
true randomness, so determinism or chaos do not seem to be at issue
and cannot lead to arguments against FSAs.

> I see human thoughts (and creative thoughts, in particular) as
> being dependent on such influences.
>
> Most progress (if not all) in science is due to creative visions,
> even those that use only mathematical derivations. The difference
> between, say, Paul Dirac and other physicists of his time was, in my
> opinion, not only one of brain power, but also of creative insight.
> One insight in a well-prepared brain provokes miracles, even if
> that brain is not so different from somebody else's.

I like Minsky's account of creativity as the result of activity
occurring beyond the reach of our introspective powers.  The key
might lie more in hidden "preparation", as you hint above, than
in amounts of random glitches.  This might be why drugs allow
an increase in 'creative' production that seldom results in
deeper insights.

Regards,
Pierre-Normand Houle

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own   

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Why HLUTs are not intelligent
Date: 26 Mar 1999 00:00:00 GMT
Message-ID: <36fc0a3c@news3.us.ibm.net>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net> <7d6dd0$5d7@edrn.newsguy.com> <36f79fbe@news3.us.ibm.net> <7d9c3q$spl$1@nnrp1.dejanews.com> <36f9314b@news3.us.ibm.net> <7dek0i$fee$1@nnrp1.dejanews.com> <36fbc977@news3.us.ibm.net> <7dgthf$fk5$1@nnrp1.dejanews.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 26 Mar 1999 22:29:16 GMT, 129.37.183.221
Organization: SilWis
Newsgroups: comp.ai.philosophy

Pierre-Normand Houle wrote in message <7dgthf$fk5$1@nnrp1.dejanews.com>...
>"Sergio Navega" <snavega@ibm.net> wrote:
>
>> >> Randomness inside one intelligent brain is very different from
>> >> randomness in a dumb machine.
>> >
>> > I am not quite sure about that.
>> >
>>
>> The main reason I hypothesize that is because of things like
>> categorization. Suppose you have a table-driven system, like
>> a computer running a table lookup program.
>>
>> Suppose you add a random noise source to that system like, for
>> instance, the random alteration of one bit of the address lines.
>> What that system will present as output is a random entry that
>> probably bears no relation to the problem being solved by that
>> computer.
>
>OK, but adding random errors in the process of retrieving some line
>in a HLUT does not prevent us from defining the HLUT of an FSA which
>is itself fault tolerant.  HLUTs are not meant to be sensible
>implementations of intelligent algorithms.
>

I agree. Surely the HLUT will inherit the fault tolerance of
that FSA, although the HLUT will be, because of its architecture,
subject to its own reliability problems (if it could be implemented
at all).

>> Now do the same thing inside an intelligent brain.  What it will
>> probably do is provoke an alteration that will be confined to a
>> certain part of one specific "conceptual space" of that brain.
>
>This is because the brain has a fault tolerant architecture.  If
>you approximate the brain with an FSA then the corresponding HLUT
>will inherit the brain's 'software' fault tolerance to noisy inputs
>but not its 'hardware' fault tolerance to 'mechanical' malfunctions.
>The malfunctions of FSAs lead to different FSAs with different
>HLUTs.  This in no way argues against the possibility of building
>fault tolerant FSAs.  (And nobody wishes to 'build' HLUTs.)
>

Brains are more than fault tolerant.
I recognize two situations in which noise may be involved here. Noise
tolerance is one of them: the system is able to recover after
suffering an attack from a small amount of noise (obviously
neglecting strong malfunctions such as epileptic seizures). I
confess that I didn't think much about this aspect in relation to
FSAs, in which you are certainly more versed than I am.
What I know is that noise on a HLUT's address lines produces garbage.
I can't, in principle, say the same thing about FSAs.
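
To make that concrete, here is a toy sketch in Python (the table
and the addresses are invented, just to show the effect of one
glitched address line):

  import random

  table = {addr: "response-%d" % addr for addr in range(256)}

  addr = 0b10110010              # the "correct" address (178)
  bit = random.randrange(8)      # one of eight address lines glitches
  noisy = addr ^ (1 << bit)      # that bit gets flipped

  print(table[addr])             # "response-178", the intended entry
  print(table[noisy])            # an entirely unrelated entry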

The second aspect of noise (it is the same noise as above, but its
effect is different) is the "good" use of noise: forcing the
brain's attention to an unusual conceptual area. An intelligent
mechanism may benefit from this, because the "deviation" from the
normal behavior of a certain part of the mechanism is restricted
to a local area (not physical, but conceptual). This is, for
instance, the equivalent of seeing a jar upside down and perceiving
that it can be used as a support to step on and reach something too
high. This can occur in the mind of a child and would be considered
intelligent (although prone to complaints from her father :-).

The kinds of problems that scientists frequently face involve this
sort of "unusual", creative way of seeing things. I find a place
for such a thing deep in the cognition of an intelligent agent.

>[agreed stuff snipped]
>
>> >> I only spent time writing this because I firmly believe in
>> >> the role of randomness in intelligent mechanisms.
>> >
>> > OK, randomness from internal sources must be accounted for in
>> > the design of intelligent machines but why is it necessary?
>>
>> I can't tell you why, because I'm still trying to convince
>> myself. Suffice it to say that Darwinian evolution, without random
>> processes, would have a serious problem producing the variation
>> necessary for its development. I guess some evolutionists say
>> that without random mutations we probably wouldn't be here.
>
>OK, but in these cases pseudo-randomness seems just as good as
>true randomness, so determinism or chaos does not seem to be at
>issue and cannot lead to arguments against FSAs.
>

I must say that, in principle, I don't have such arguments against
FSAs. I'd have to think more about it. The experience I have is with
address lines. When I built my first microcomputer, using the MC6800
8-bit microprocessor (a century ago!), I learned what happens when
one address line receives a glitch: the machine invariably hangs.

>> I see human thoughts (and creative thoughts, in particular) as
>> being dependent on such influences.
>>
>> Most progress (if not all) in science is due to creative visions,
>> even those that use only mathematical derivations. The difference
>> between, say, Paul Dirac and the other physicists of his time was,
>> in my opinion, not only one of brain power, but also one of
>> creative insight. One insight in a well-prepared brain provokes
>> miracles, even if this brain is not so different from somebody
>> else's.
>
>I like Minsky's account of creativity as a result of activity
>occurring beyond the reach of our introspective power.  The key
>might lie more in hidden "preparation", as you hint above, than
>in amounts of random glitches.  This might be why drugs allow
>an increase in 'creative' production that seldom results in
>deeper insights.
>

It is exactly the serious studies of that "introspective power"
and "preparation" that I find most informative. In particular, a
field of (very serious) psychology called "Implicit Learning" gives
a wealth of information about the "cognitive unconscious". With
solid empirical evidence, the researchers in this field are
revealing very interesting characteristics of the cognition "below
the rug". I support part of my arguments for noise with those
findings.

But another strong influence I carry is from Douglas Hofstadter's
Copycat system. Hofstadter makes strong points about the
importance of analogical reasoning in intelligent systems, and his
solution uses, among several other techniques, something called a
"parallel terraced scan", which can be seen as a random exploration
of unknown terrain with a gradual increase of attention to the
areas that are "giving the best prizes". However, one of the
important points he emphasizes is that, even after finding the
"good" areas, one can never abandon occasional random inspection
of the other areas. One may find a hidden treasure buried there.
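
For the flavor of it, a loose Python sketch (this is only the
spirit of such a scan, not Copycat's actual mechanism; the areas,
prizes and probabilities are all invented):

  import random

  areas = list(range(10))
  promise = dict((a, 1.0) for a in areas)  # running estimate per area

  def pick_area(eps=0.1):
      # occasional random inspection, no matter what the prizes say
      if random.random() < eps:
          return random.choice(areas)
      # otherwise, favor the areas that have paid off so far
      r = random.uniform(0, sum(promise.values()))
      for a in areas:
          r -= promise[a]
          if r <= 0:
              return a
      return areas[-1]

  for _ in range(1000):
      a = pick_area()
      prize = random.random() * (3 if a == 7 else 1)  # 7 hides treasure
      promise[a] += prize

  print(max(promise, key=promise.get))   # usually area 7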

Regards,
Sergio Navega.

From: Pierre-Normand Houle <houlepn@ibm.net>
Subject: Re: Why HLUTs are not intelligent
Date: 27 Mar 1999 00:00:00 GMT
Message-ID: <7djft6$hqh$1@nnrp1.dejanews.com>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net> <7d6dd0$5d7@edrn.newsguy.com> <36f79fbe@news3.us.ibm.net> <7d9c3q$spl$1@nnrp1.dejanews.com> <36f9314b@news3.us.ibm.net> <7dek0i$fee$1@nnrp1.dejanews.com> <36fbc977@news3.us.ibm.net> <7dgthf$fk5$1@nnrp1.dejanews.com> <36fc0a3c@news3.us.ibm.net>
X-Http-Proxy: 1.0 x7.dejanews.com:80 (Squid/1.1.22) for client 207.96.163.34
Organization: Deja News - The Leader in Internet Discussion
X-Article-Creation-Date: Sat Mar 27 20:43:20 1999 GMT
Newsgroups: comp.ai.philosophy
X-Http-User-Agent: Mozilla/4.51 [en] (Win98; I)

"Sergio Navega" <snavega@ibm.net> wrote:

> What I know is that noise on a HLUT's address lines produces
> garbage.  I can't, in principle, say the same thing about FSAs.

The HLUT is a representation of an FSA where all the state
information is put into memory space.  Tampering with the process
of line retrieval in a HLUT is the same as having an FSA produce an
incorrect output at some given time.  It has no consequence beyond
that.  Let's say you want to grasp the cup on your desk and you
have an involuntary spasm in the arm due to some crazy neuron
firing in your motor cortex.  An incorrect output retrieval in the
HLUT will have the same effect: coffee spilled on the floor.  Right
after that, Sergio will behave appropriately based on the perceived
results of this failed action, and so will the HLUT (assuming
Sergio's brain can be modeled with a fault-tolerant, flexible FSA).
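
To illustrate the equivalence, a minimal Python sketch with a
contrived FSA (the transition and output rules are made up, and a
real HLUT would of course be unbuildably large):

  from itertools import product

  def fsa_run(inputs):
      # run the contrived FSA and return its final output
      state = 0
      for i in inputs:
          state = (state + i) % 3      # made-up transition rule
      return state * 10                # made-up output rule

  # The "HLUT": one entry per possible input history (up to length 3)
  hlut = dict((hist, fsa_run(hist))
              for n in (1, 2, 3)
              for hist in product((0, 1), repeat=n))

  print(fsa_run((1, 0, 1)), hlut[(1, 0, 1)])   # same output either way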

> The second aspect of noise (it is the same noise as above, but its
> effect is different) is the "good" use of noise: forcing the
> brain's attention to an unusual conceptual area. An intelligent
> mechanism may benefit from this, because the "deviation" from the
> normal behavior of a certain part of the mechanism is restricted
> to a local area (not physical, but conceptual). This is, for
> instance, the equivalent of seeing a jar upside down and perceiving
> that it can be used as a support to step on and reach something too
> high. This can occur in the mind of a child and would be considered
> intelligent (although prone to complaints from her father :-).

I would conjecture that much of this useful stochasticity is
actually the result of the hidden deterministic dynamics of the
incredibly complex Society of Mind: lots of agents cooperating,
competing, receiving multiple excitatory and inhibitory cues
from the constant shift of attentional/volitional processes.
There might not be much need for amplification of true quantum
(or other low level origin) randomness.  The idea of "support
for climbing" was already there, struggling to catch the child's
attention (the child probably had some direct experience of
both climbing and turning objects upside down); it won because
it fit well with the child's current needs and perception
of the scene and because the more obvious but flawed solutions
had been discarded after some analysis.

> >> I can't tell you why, because I'm still trying to convince
> >> myself. Suffice it to say that Darwinian evolution, without random
> >> processes, would have a serious problem producing the variation
> >> necessary for its development. I guess some evolutionists say
> >> that without random mutations we probably wouldn't be here.
> >
> > OK, but in these cases pseudo-randomness seems just as good as
> > true randomness, so determinism or chaos does not seem to be at
> > issue and cannot lead to arguments against FSAs.
>
> I must say that, in principle, I don't have such arguments against
> FSAs. I'd have to think more about it. The experience I have is with
> address lines. When I built my first microcomputer, using the MC6800
> 8-bit microprocessor (a century ago!), I learned what happens when
> one address line receives a glitch: the machine invariably hangs.

Sure, if your computer had to simulate the operation of your brain,
changing the content of this address line might have amounted to
replacing simulated dopamine with simulated mustard, probably
enough to cause a simulated brain to hang.

> It is exactly the serious studies of that "introspective power"
> and "preparation" that I find most informative. In particular, a
> field of (very serious) psychology called "Implicit Learning" gives
> a wealth of information about the "cognitive unconscious". With
> solid empirical evidence, the researchers in this field are
> revealing very interesting characteristics of the cognition "below
> the rug". I support part of my arguments for noise with those
> findings.

I would very much like to learn about this evidence.  I am not
the least bit skeptical; I just find this very interesting.  Do
you know of any papers or reviews available on the net?

> But another strong influence I carry is from Douglas Hofstadter's
> Copycat system. Hofstadter makes strong points about the
> importance of analogical reasoning in intelligent systems, and his
> solution uses, among several other techniques, something called a
> "parallel terraced scan", which can be seen as a random exploration
> of unknown terrain with a gradual increase of attention to the
> areas that are "giving the best prizes". However, one of the
> important points he emphasizes is that, even after finding the
> "good" areas, one can never abandon occasional random inspection
> of the other areas. One may find a hidden treasure buried there.

Right.  I discussed this a few days ago with some friends.  It
seems that when one focuses on the resolution of a problem, much
cognitive power has to be devoted to a few tentative pieces of
the puzzle in order to be able to play with them and manage
their intrinsic complexities.  This might be related to the
figure/ground problem.  Occasionally the missing piece of the
puzzle is hidden in the background and can only emerge when
the person backs away from the problem for some time.  Many
times this hidden treasure was crying out to be seen but was
actively pushed into the background by another (apparently
contradictory) piece of the tentative solution.  I am still
not sure about the role of deep intrinsic stochasticity though.

Regards,
Pierre-Normand Houle


From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Why HLUTs are not intelligent
Date: 29 Mar 1999 00:00:00 GMT
Message-ID: <36ffb7d6@news3.us.ibm.net>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net> <7d6dd0$5d7@edrn.newsguy.com> <36f79fbe@news3.us.ibm.net> <7d9c3q$spl$1@nnrp1.dejanews.com> <36f9314b@news3.us.ibm.net> <7dek0i$fee$1@nnrp1.dejanews.com> <36fbc977@news3.us.ibm.net> <7dgthf$fk5$1@nnrp1.dejanews.com> <36fc0a3c@news3.us.ibm.net> <7djft6$hqh$1@nnrp1.dejanews.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 29 Mar 1999 17:26:46 GMT, 200.229.240.158
Organization: SilWis
Newsgroups: comp.ai.philosophy

Pierre-Normand Houle wrote in message <7djft6$hqh$1@nnrp1.dejanews.com>...
> "Sergio Navega" <snavega@ibm.net> wrote:
>
>> What I know is that noise on a HLUT's address lines produces
>> garbage.  I can't, in principle, say the same thing about FSAs.
>
>The HLUT is a representation of an FSA where all the state
>information is put into memory space.  Tampering with the process
>of line retrieval in a HLUT is the same as having an FSA produce an
>incorrect output at some given time.  It has no consequence beyond
>that.  Let's say you want to grasp the cup on your desk and you
>have an involuntary spasm in the arm due to some crazy neuron
>firing in your motor cortex.  An incorrect output retrieval in the
>HLUT will have the same effect: coffee spilled on the floor.  Right
>after that, Sergio will behave appropriately based on the perceived
>results of this failed action, and so will the HLUT (assuming
>Sergio's brain can be modeled with a fault-tolerant, flexible FSA).
>

I agree entirely that the effect of a random spike in a motor action
is much the same in HLUTs and brains. The big difference appears
when this random spike happens in nonmotor areas. In HLUTs, the
effect is the very same. In brains, the effect will be restricted
to a small, local area, because it will affect one concept or one
line of thought (association, spreading activation, whatever) and
cause few problems to succeeding stages. Succeeding stages are
noise tolerant, in the sense that they present the same answer to
similar inputs (an apple is an apple even if its color or shape is
very different from another exemplar's).
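
A toy Python sketch of that kind of tolerance (the features and
the prototypes are invented):

  import random

  # invented features: (roundness, elongation)
  prototypes = {"apple": (0.9, 0.2), "banana": (0.2, 0.9)}

  def categorize(f):
      # nearest prototype wins, so similar inputs get the same answer
      def dist(name):
          p = prototypes[name]
          return (f[0] - p[0]) ** 2 + (f[1] - p[1]) ** 2
      return min(prototypes, key=dist)

  odd_apple = (0.9 + random.gauss(0, 0.1), 0.2 + random.gauss(0, 0.1))
  print(categorize(odd_apple))   # almost always still "apple"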

>> The second aspect of noise (it is the same noise as above, but its
>> effect is different) is the "good" use of noise: forcing the
>> brain's attention to an unusual conceptual area. An intelligent
>> mechanism may benefit from this, because the "deviation" from the
>> normal behavior of a certain part of the mechanism is restricted
>> to a local area (not physical, but conceptual). This is, for
>> instance, the equivalent of seeing a jar upside down and perceiving
>> that it can be used as a support to step on and reach something too
>> high. This can occur in the mind of a child and would be considered
>> intelligent (although prone to complaints from her father :-).
>
>I would conjecture that much of this useful stochasticity is
>actually the result of the hidden deterministic dynamics of the
>incredibly complex Society of Mind: lots of agents cooperating,
>competing, receiving multiple excitatory and inhibitory cues
>from the constant shift of attentional/volitional processes.

I agree with you here, but this just happens to amplify the effect
of a small amount of noise. Dynamic systems composed of a small
number of elements interacting under complex rules are susceptible
to very small variations; the three-body problem of mechanics is
very sensitive to initial conditions. When such a system receives
a small amount of noise, it can produce totally unpredictable
behavior, just like our atmosphere.
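
A tiny Python illustration of that sensitivity, using the logistic
map (a standard chaotic system) instead of three bodies:

  def logistic(x):
      return 4.0 * x * (1.0 - x)   # chaotic for this parameter value

  a, b = 0.3, 0.3 + 1e-9           # two starts, one part in 10^9 apart
  for _ in range(60):
      a, b = logistic(a), logistic(b)

  print(abs(a - b))                # of order 1: the difference exploded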

>There might not be much need for amplification of true quantum
>(or other low level origin) randomness.  The idea of "support
>for climbing" was already there, struggling to catch the child's
>attention (the child probably had some direct experience of
>both climbing and turning objects upside down); it won because
>it fit well with the child's current needs and perception
>of the scene and because the more obvious but flawed solutions
>had been discarded after some analysis.
>

If the child had all the elements in her head (the need of a
support to climb, the wish to reach a higher place, etc.) except
the inverted jar, then what I propose is a situation where that
inversion could happen because of random noise. It is unlikely, of
course, but we know that good ideas are unlikely. It is a
"gestalt-like" effect which suddenly shows the child, in an
"eureka-like" event, what to do. The moment of discovery reminds
me of the sudden change that a dynamic system may present when it
starts oscillating around another attractor. I hypothesize that
this change may be driven by noise, and so is not predictable,
although we can devise systems which perform analogously.
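
A crude Python sketch of that noise-driven switch, using a particle
in a double-well potential as a stand-in for the two attractors
(all the numbers are invented):

  import random

  def slope(x):
      return 4.0 * x * (x * x - 1.0)   # gradient of V(x) = (x^2 - 1)^2

  x = -1.0                             # settled in the left well
  for _ in range(20000):
      x += -0.01 * slope(x) + random.gauss(0, 0.12)

  print(x)   # sometimes near -1, sometimes near +1: noise did the switch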

>> I must say that, in principle, I don't have such arguments against
>> FSAs. I'd have to think more about it. The experience I have is with
>> address lines. When I built my first microcomputer, using the MC6800
>> 8-bit microprocessor (a century ago!), I learned what happens when
>> one address line receives a glitch: the machine invariably hangs.
>
>Sure, if your computer had to simulate the operation of your brain,
>changing the content of this address line might have amounted to
>replacing simulated dopamine with simulated mustard, probably
>enough to cause a simulated brain to hang.
>

Well, I was trying to show that in a computer a glitch in a
restricted area (one address line) may cause all the damage, while
in a brain you must change a lot of places to make it fail.

>> It is exactly the serious studies of that "introspective power"
>> and "preparation" that I find most informative. In particular, a
>> field of (very serious) psychology called "Implicit Learning" gives
>> a wealth of information about the "cognitive unconscious". With
>> solid empirical evidence, the researchers in this field are
>> revealing very interesting characteristics of the cognition "below
>> the rug". I support part of my arguments for noise with those
>> findings.
>
>I would very much like to learn about this evidence.  I am not
>the least bit skeptical; I just find this very interesting.  Do
>you know of any papers or reviews available on the net?
>

Thanks for being interested. I hope you become as excited about
this subject as I am. All the references I mention here are listed
at the end of this post.

Implicit Learning is an expression coined by Arthur Reber in 1965,
in his MS dissertation. Then, around 1969, he published a paper that
became seminal to the field. I highly recommend reference [1]; it is
a readable (and pleasantly written) book about the subject by Reber
himself (I became a fan of Reber because of this book; his narrative
is very lucid and appealing).

Another good view of the field can be found in [2]. Berry and
Dienes are two well-known researchers in this field, and this book
is a wonderful summary of the kinds of tests applied and also of
some of the neuropsychological evidence supporting it.

For a fast introduction to the IL field, I recommend Axel
Cleeremans's paper [3], available online. Cleeremans is one of the
best names in IL today; I have more than 200 pages of his papers
printed.

For an online paper with a high-level approach to the subject, take
a look at [4].

Another very important book is [5]. There are a bunch of very
interesting papers in there, but [6] is what I think is the most
complete single-paper view of Implicit Learning I've ever read,
containing a description of the kinds of tests performed (artificial
grammars, dynamic control tasks, sequence learning, etc.). The
paper then describes some computer implementations of the models,
with symbolic and connectionist options, and then goes into a
thorough analysis of some neurocognitive explanations of how this
happens in our brain. This is by far the best paper one could read
to get a wide view of the subject.

Ref. [7] is one of the most interesting PhD theses I have
printed. Morten Christiansen's thesis addresses how infinitely
productive languages may fit into a finite mind, and for that he
tackles (among a bunch of other subjects) the problem of the
innateness of language and, on page 136, the problem of implicit
learning. Chapter 5 (The evolution and acquisition of language)
is, in my view, one of the central points of the thesis, dealing
with a lot of things that I find important to our field.

I have more references but I guess this is enough for a good start.

>> But another strong influence I carry is from Douglas Hofstadter's
>> Copycat system. Hofstadter makes strong points about the
>> importance of analogical reasoning in intelligent systems, and his
>> solution uses, among several other techniques, something called a
>> "parallel terraced scan", which can be seen as a random exploration
>> of unknown terrain with a gradual increase of attention to the
>> areas that are "giving the best prizes". However, one of the
>> important points he emphasizes is that, even after finding the
>> "good" areas, one can never abandon occasional random inspection
>> of the other areas. One may find a hidden treasure buried there.
>
>Right.  I discussed this a few days ago with some friends.  It
>seems that when one focuses on the resolution of a problem, much
>cognitive power has to be devoted to a few tentative pieces of
>the puzzle in order to be able to play with them and manage
>their intrinsic complexities.  This might be related to the
>figure/ground problem.  Occasionally the missing piece of the
>puzzle is hidden in the background and can only emerge when
>the person backs away from the problem for some time.  Many
>times this hidden treasure was crying out to be seen but was
>actively pushed into the background by another (apparently
>contradictory) piece of the tentative solution.  I am still
>not sure about the role of deep intrinsic stochasticity though.
>

I liked the way you put it. I agree that this is a good picture
of what should be happening in our "innards". It is time for
me to say that, although we're talking about stochastic
processes in the brain, I'm not proposing that they are the only
mechanism in action, nor that they are the most important thing.
I think they are fundamental, but there are, IMHO, more important
things. That figure/ground problem, so often used by Gestalt
psychologists, is something that should also be seen in terms of
Gibson's invariants in perception.

Have you seen that picture of a Dalmatian dog made of blobs of ink?
The "background" of that figure is also made of blobs. When somebody
sees it for the first time, it is very difficult to notice what
the figure is about. Then, when you show the person where the
dog is, he/she will "learn" it and will never forget it. But
what happens when we don't tell the person anything, only that
there is something significant in that picture? My hypothesis is
that a series of highly parallel processes start to compete in
the person's mind, using not only what the person knows about
shapes of things in general, but also "tentative" shapes. Instead
of using a first-in/first-out or round-robin process, the brain
seems to use a "random pick and test" process, in a huge number
of parallel instances (some searching for contradictory aspects).

When one of these processes recognizes something interesting (say,
a blob that resembles the leg of an animal), that information will
provoke the firing of a lot of other search instances, taking the
"leg" as an "axiom".

Suddenly, one of these processes may find the other leg, and then
another one may find what appears to be a belly. A process using
these starting points may easily postulate where one should find
the head of a quadruped, and then, "BANG!", the figure appears
like a thunderclap in the person's mind. Recognition, after all.

Randomness, in this process of search, is not the most important
part (I guess the parallel spreading is), but it is very important
for giving equal chances to the whole conceptual space, and this
may be done through randomness.
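
A loose Python sketch of that "random pick and test" with mutual
reinforcement (the scene, the relations and the probabilities are
all invented):

  import random

  scene = set(["leg1", "leg2", "belly", "head"])  # parts in the blobs
  related = {"leg1": ["leg2", "belly"], "leg2": ["leg1", "belly"],
             "belly": ["head"], "head": []}
  weight = dict((p, 1.0) for p in related)        # attention per guess
  found = set()

  while found != scene:
      candidates = [p for p in related if p not in found]
      r = random.uniform(0, sum(weight[p] for p in candidates))
      part = candidates[-1]
      for p in candidates:                   # weighted random pick...
          r -= weight[p]
          if r <= 0:
              part = p
              break
      if random.random() < 0.3:              # ...and a test that may succeed
          found.add(part)
          for other in related[part]:        # a hit boosts related guesses
              weight[other] *= 3.0

  print("recognized:", sorted(found))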

All this is mostly hypothetical, and my interest in Implicit
Learning and other related subjects is precisely to find the
empirical points that could eventually support (or falsify) such
a vision.

Regards,
Sergio Navega.

-----------------

Refs. on Implicit Learning

[1] Reber, Arthur (1993) Implicit Learning and Tacit Knowledge.
Oxford University Press.

[2] Berry, Dianne and Dienes, Zoltán (1993) Implicit Learning:
Theoretical and Empirical Issues. Lawrence Erlbaum Associates.

[3] Cleeremans, Axel (1997) Principles for Implicit Learning,
in D. Berry (ed.) How Implicit Is Implicit Learning?
Oxford University Press. Available online via the publications
link on his homepage:
http://164.15.20.1/axcWWW/axc.html

[4] Dienes, Zoltán (1999) A Theory of Implicit and Explicit
Knowledge. Behavioral and Brain Sciences. Available online at:
http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.dienes.html

[5] Lamberts, Koen and Shanks, David (eds.) (1997) Knowledge,
Concepts and Categories. MIT Press.

[6] Goschke, Thomas (1997) Implicit Learning and Unconscious
Knowledge: Mental Representation, Computational Mechanisms and
Brain Structures, in Lamberts, Koen and Shanks, David (eds.)
Knowledge, Concepts and Categories. MIT Press.

[7] Christiansen, Morten (1994) Infinite Languages, Finite Minds.
PhD thesis, University of Edinburgh.

From: Pierre-Normand Houle <houlepn@ibm.net>
Subject: Re: Why HLUTs are not intelligent
Date: 31 Mar 1999 00:00:00 GMT
Message-ID: <7dtqmg$9pk$1@nnrp1.dejanews.com>
References: <7crbe9$ikv@ux.cs.niu.edu> <7cscso$1kv@journal.concentric.net> <7d358g$pkb@ux.cs.niu.edu> <7d3lm3$3en@edrn.newsguy.com> <36f64e2b@news3.us.ibm.net> <7d5pds$ogb@edrn.newsguy.com> <36f6a273@news3.us.ibm.net> <7d6dd0$5d7@edrn.newsguy.com> <36f79fbe@news3.us.ibm.net> <7d9c3q$spl$1@nnrp1.dejanews.com> <36f9314b@news3.us.ibm.net> <7dek0i$fee$1@nnrp1.dejanews.com> <36fbc977@news3.us.ibm.net> <7dgthf$fk5$1@nnrp1.dejanews.com> <36fc0a3c@news3.us.ibm.net> <7djft6$hqh$1@nnrp1.dejanews.com> <36ffb7d6@news3.us.ibm.net>
X-Http-Proxy: 1.0 x8.dejanews.com:80 (Squid/1.1.22) for client 207.96.163.34
Organization: Deja News - The Leader in Internet Discussion
X-Article-Creation-Date: Wed Mar 31 18:48:52 1999 GMT
Newsgroups: comp.ai.philosophy
X-Http-User-Agent: Mozilla/4.51 [en] (Win98; I)

In article <36ffb7d6@news3.us.ibm.net>,
  "Sergio Navega" <snavega@ibm.net> wrote:

[snip]

Sergio, thank you for your answers and the references
you were kind enough to provide.

I apologize for not being able to reply to your last
messages.  My work will keep me very busy and traveling
for at least the next few days and maybe a couple of weeks.
I hope to be back later with some comments.

Regards,
Pierre-Normand Houle


From: houlepn@ibm.net
Subject: Re: Why HLUTs are not intelligent
Date: 14 Mar 1999 00:00:00 GMT
Message-ID: <7ch9rg$a2g$1@nnrp1.dejanews.com>
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com> <36e92939@news3.us.ibm.net> <7ccpk5$o25$1@nnrp1.dejanews.com> <36ea8c14@news3.us.ibm.net>
X-Http-Proxy: 1.0 x10.dejanews.com:80 (Squid/1.1.22) for client 207.96.163.34
Organization: Deja News - The Leader in Internet Discussion
X-Article-Creation-Date: Sun Mar 14 21:31:28 1999 GMT
Newsgroups: comp.ai.philosophy
X-Http-User-Agent: Mozilla/4.05 [en] (Win95; U)

"Sergio Navega" <snavega@ibm.net> wrote:

> This is a response to Pierre-Normand Houle's message
> <7ccpk5$o25$1@nnrp1.dejanews.com>
>
> Pierre, I must again thank you for your answers, they were

Thanks to you, Sergio!

> important enough to let me realize another collection of new
> points about this subject that I had to address. In fact, I
> started commenting each paragraph of yours when I suddenly
> realized what I think now is the "root" of the question
> (I already said that, but a tree have several roots!).

I think you are digging on the other side of the same root,
however ;)

> So instead of answering your well put comments, I decided
> to rewrite the argument under a new perspective.
>
> That perception, now, makes me agree with most of your comments
> about HLUTs and disagree with the "definition" of the problem
> we're into (it's funny as a lot of discussions boils down
> to disagreement in "definitions").
>
> I hope you'll have another dime of patience left to understand
> what I'll write and I'll be glad to know if you agree I
> touched on the important points.
>
> The fundamental point which I now agree is that a HLUT such
> as the one you're proposing *can produce intelligent behavior,
> in any circumstance*. I'm publically saying this now.

Ok.  It is enough that it produces intelligent behavior in any
circumstance where Sergio also produces intelligent behavior.
(I assume: same circumstance = same input history)

> ----
> *But it cannot be considered to be intelligent*.
> ----
>
> The subtle semantic difference between these phrases is the *root*
> of our dispute and I really hope to be clear on the next paragraphs.
>
> Present intelligent behavior, as Neil Rickert said recently, is an
> external aspect, that is very different from the output from the HLUT.
> Although this is the main point Neil uses to criticize the HLUT
> thought experiment, I'll do it differently.
>
> *To be intelligent* is not only to present intelligent behavior,
> but *also* to grow an intelligent representation of the world
> one is in. Is this difference such a big deal? Is this really a
> so different concept? Isn't this just a syntatically different
> way of saying the same thing? My answer is No.
>
> In my vision, that's the essential difference. But to fully
> understand what I'm proposing, I'll suggest a more intuitive
> way of seeing things.
>
> Can a HLUT capture a gaussian probabilistic distribution?
> It obviously can! And to any desired accuracy! You take
> the curve and insert it, point by point, in a table. What
> can we say this table captured? It captured the *essence* of
> a gaussian distribution, in such a way as to replicate
> the *behavior* of the phenomena represented by that curve
> given any incoming address.
>
> Now take all distributions possible for the way a human
> being moves its finger. It is a probabilistic distribution
> in a determinate shape (probably non-gaussian) that the HLUT
> will be able to simulate to *any desired degree of precision*.
>
> As the person gets older and the stiffness of its finger
> muscles alter (changing the probabilistic curve to be modeled),
> so do the response of the HLUT, because it uses as address
> not only the current sensory inputs, but also a history of
> them, which is enough to address a *different* area of the
> HLUT (which may have a different probabilistic distribution
> for its finger).
>
> This is what I have seen from the previous post of Pierre
> and this is what makes me agree that such a HLUT will behave
> intelligently.
>
> In that same way, any kind of output that a human may
> have can be conveniently reproduced by that HLUT, down
> to the utterances a man makes to his puppy dog that did
> "that" thing in the living room.
>
> This is all very hard to imagine, but once you get used
> to the way mathematicians make inductions, it is not
> difficult to accept the logical existence of that
> table.
>
> ----
> It is also not difficult to see how that HLUT can
> present intelligent behavior, comparable to the human being
> that it is simulating.
> ----
>
> So in this case I may seem to be agreeing with Pierre,
> Daryl, Balter and others. But...
>
> The question is whether this HLUT could be said **to be**
> intelligent, and my next words will try to refine
> what that means.
>
> Thought Experiment
> ------------------
>
> Suppose we take the HLUT of Sergio to a different universe
> (all you guys that are discussing HLUTs cannot say that
> this is an "unreasonable" thought experiment!).

That's fine.  Sergio's HLUT cannot object any more
vehemently than the real Sergio ;-)

> Suppose, also, that this universe works with different
> laws of physics than the one we're in. As an example,
> in this contrived universe if you drop a rock it will
> not fall on the ground, it will raise up to a height
> of 4 meters and stabilize there (don't ask me why,
> it's just the way that universe works!).

No problem.  As far as Sergio's HLUT is concerned this
will just change the input vector sequence.  I would
expect the HLUT to provide similar outputs to those
produced by the real Sergio suddenly immersed in a virtual
reality simulation of this weird universe.

> It is obvious (I hope you all agree!) that the HLUT
> of Sergio in the previous universe (ours) will be *useless*
> in this universe. All laws of physics are different,
> all "models" the HLUT has do not correspond to the
> circumstances of this new universe. So, that HLUT
> is useless in this universe.

At this point this HLUT's outputs should suggest just as much
puzzlement and confusion as those of the real Sergio subjected
to the same circumstances (input history).  If this is what
you mean by 'useless' then I agree.

> Ok, I hear you guys saying, no problem, we can find another
> HLUT that is the counterpart of Sergio in this new universe.

No.  There is no need for another HLUT.  The first one has
an output (or many possible outputs in the non-deterministic
case) for every possible input sequence.

> -----
> The question is that Sergio's brain need not be
> *altered* to work in this new universe! And that
> happens because it is *intelligent*!
> -----

Then, if you really believe what you said above, you
must conclude that the HLUT is intelligent too!

> The second HLUT will present the intelligent behaviors
> of Sergio in the second universe. But it will fail
> miserably if put in the first universe. Sergio's brain,
> however, will be intelligent in the first, second
> and *any other universe* that can be imagined (by the
> way, don't even think about joining both HLUTs in a
> single one; besides the fact that you could not
> differentiate which entry to use, there are *infinitely many*
> different universes, and HLUTs are said to be *finite*).
> We can think of Sergio's brain working on infinitely
> many universes, but not HLUTs.

I think you unwittingly reverted to your earlier
conception of an incomplete LUT.  These LUTs cannot
be used to argue against FSAs because they are incomplete
representations of FSAs.  They are flawed ex hypothesi
in a way the corresponding FSAs are not.  It must be
apparent that the incomplete LUT you proposed does not
even *behave* like Sergio in these circumstances.  Yet
you said above:

"The fundamental point which I now agree is that a HLUT such
as the one you're proposing *can produce intelligent behavior,
in any circumstance*. I'm publically saying this now."

Such a HLUT cannot be a flawed LUT.

> HLUTs may only provide *intelligent behaviors*, but they
> cannot *be intelligent*. Human brains not only provide
> intelligent behaviors, but they are *also* considered
> intelligent in any environment, because they are
> able to *come up with their own models of that universe*,

You have still not shown that FSAs cannot build internal
representations of the universe they are immersed in.

> extracted from the experiences and interactions with
> that universe. A HLUT can't do that! So a HLUT *is not*
> equivalent to a human brain!
>
> Executive Summary-----------
> Only brains are able to be finite in size and temporal
> existence and also to present intelligent behavior in an
> infinitely large number of universes.

I disagree.  Although much less intelligent and adaptable
than he is, Deep Blue can play good chess in just as many
universes as Sergio.

Regards,
Pierre-Normand Houle
