
From: "Sergio Navega" <snavega@ibm.net>
Subject: The Great Problem With HLUTs
Date: 11 Mar 1999 00:00:00 GMT
Message-ID: <36e80e7c@news3.us.ibm.net>
Organization: SilWis
Newsgroups: comp.ai.philosophy

In this post, I'll try to present a problem with the
HLUT thought experiment which may be enough to discard it
even as an armchair conversation. As this subject is still
settling in my mind, I would appreciate all
comments (pros and cons).

Definition
----------
A HLUT is an unreal, mathematical object in the form of a table.
To address this table, you use a vector composed of the current
sensory inputs plus a vector listing all previous inputs (the
history).
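
To make the definition concrete, here is a toy sketch in Python
(the names are mine, purely illustrative; nothing hinges on them):

    # Toy sketch of a HLUT: the key is the pair <I, H>, the value is
    # the output vector(s) stored for that address.  There is no
    # computation beyond the lookup itself.
    HLUT = {}   # in the thought experiment, filled with ALL cases

    def hlut_address(I, H):
        # I: current sensory input vector; H: tuple of all past inputs
        return HLUT[(I, tuple(H))]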

Claim (agreed by Daryl, Jim Balter, Pierre and now Sergio)
-----
That this HLUT is able to give, as output, enough entries to
reproduce all possible behaviors that a specific human being is
able to present, due to the finite nature of the inputs and
outputs that control physical behavior.

Other Claim (apparently Daryl, Balter, Pierre, but *not* me)
-----------
That this HLUT is able to perform, in behavioral terms,
just like Sergio, if given similar starting conditions.

The Faulty Conclusion
---------------------
The problem with the preceding claim surfaces when we note that,
given a vector I containing all input data (vision, audition,
touch, proprioception, etc.) and using a history H of
all past interactions as a huge address into the HLUT, you
can obtain a vector V which is a **unique** output to present
to the muscles, limbs, etc., in order to produce the desired
behavior.

The question is whether there is only ONE possible response
to a given <I, H>. To assume that there is only one is
to assume that the mind of the human being emulated will
*always* respond with the same output O given a fixed <I, H>.
Is that reasonable? Only for hard-core behaviorists, I presume.

This amounts to saying that this hypothesis demands that the
brain be totally deterministic in nature, which is in effect the
same as saying that the UNIVERSE IS DETERMINISTIC! I know
quite a few physicists who will laugh at this.

In my view, given the same conditions, any brain will have a
*countless* (although finite!) number of possible outputs.
A madman is one who chooses a strange response to a given
set of stimuli; this man could have produced a different response,
*even* under the exact same conditions, unless we assume his
brain is fixed and that quantum indeterminacy does not affect
spike pulses in his neurons (something contradicted by current
accounts of neuroscience, let alone physics).

It is obvious that to present the same behavior as someone else's,
one must drive one's muscles and limbs in the exact same manner. The
HLUT, in fact, *does contain* all possibilities, being able to
drive a countless (but finite) number of copies of the Sergio
being simulated, in such a way that *one* of those copies will
present the behavior that the actual Sergio did. BUT:

***********
What a HLUT *cannot* do is to select *which one* from those
countless behaviors will be the one presented by that instance
of Sergio.
***********

And that means the HLUT will have to stick with ONLY ONE possible
output O, selected from the huge amount of possible outcomes. One
can think of a method to randomly select one output, among the
possible outputs to a given <I,H>, but I think that does not
guarantee even *intelligent behavior*, let alone the one
behavior corresponding to the entity being emulated.
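
In the same toy Python notation, that random-selection "method"
is nothing more than this (again, an illustration of mine, not
anyone's proposal):

    import random

    def hlut_respond(I, H, hlut):
        # All outputs stored under this <I, H> look the same to the
        # table: it has no criterion to prefer one of them, so the
        # best it can do is pick uniformly at random.
        candidates = hlut[(I, tuple(H))]
        return random.choice(candidates)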

So, a super humongous HLUT, with all its size and implausibility,
is not guaranteed to produce intelligent behavior. At least not
in this nondeterministic universe we live in.

I hope this is as clear to everyone as it is to me.

Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Great Problem With HLUTs
Date: 11 Mar 1999 00:00:00 GMT
Message-ID: <36e847eb@news3.us.ibm.net>
References: <36e80e7c@news3.us.ibm.net>
Organization: SilWis
Newsgroups: comp.ai.philosophy

Just to complement the text I wrote, the fundamental point
that drives the HLUT far from the behavior of a human is,
at bottom, randomness (and this is the effect of
the stochastic brain we have, which is in turn a result of the
nondeterministic world we live in).

Open the palm of your hand in front of your eyes. What do you
see? Your fingers may present small involuntary movements
that are the result of random behavior of muscles and also
random spikes in the neurons of the motor cortex (Parkinson's
disease is just a huge amplification of this effect). This is
the effect that neuroscientists call synaptic
noise, present in 100% of brains.

It is easy to overlook this detail. After all, what benefit
could a little noise have for our intelligence? I keep
finding that this detail is one of the
*most fundamental* points behind our intelligence. But I'll
leave the exposition of my reasons (and references) for
another moment.

For now, it is enough to say that random spikes in our
neurons are able to provide behavioral outputs (vector V)
that are different from the expected output of the
HLUT (because it was fed only with vector I and history
H).
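
As a toy illustration (the gaussian noise model is my own
invention, only to make the point concrete):

    import random

    def brain_output(I, H, hlut, sigma=0.01):
        # The actual motor output: the table's 'expected' entry plus
        # random synaptic noise, so it need not match any single
        # pre-stored entry exactly.
        v = hlut[(I, tuple(H))]
        return [x + random.gauss(0.0, sigma) for x in v]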

But let's assume, for the sake of keeping the HLUT
omniscient, that it had, in this case, two entries as
the result of the single <I,H>, one with and the other without
the effect of noise, and let's assume it randomly used one
of them.

So that HLUT would not be able to emulate precisely *one*
instance of a human, although it would be able to
present general intelligent behavior. Problem solved? Hardly!
I think that this will destroy the "value" of the HLUT,
even for silly philosophical discussions.

The point is that it is reasonable to suppose that
random spikes also occur in the interneurons of
our brain, those responsible for *thought*. The outcome
of this is that the human brain assembles models
of the world THAT CANNOT BE ACCOUNTED FOR BY THE <I,H>
VECTOR ALONE! I call this *creative discovery*, those intuitive
flashes that we have from time to time.

A HLUT would, obviously, due to its omniscience, know *all*
possible behaviors, including those that originated from
random fluctuations in models. It is easy to understand,
however, that the HLUT would have to decide randomly among
a *huge* and *growing* number of alternatives, FOR EACH
VECTOR <I,H> that was used to address it. And that is the
problem. The only criterion the HLUT has to effect
its choice is a random, equally probable selection
of one of the possible output vectors V for a given <I,H>.
It can do no better than going by chance, because it
does not have any model to say which entry is best for
the circumstance (after all, we're talking here of HLUTs,
without any other accessory construct).

Doing that, the HLUT will have a *minimal* chance of doing
something appropriate, because the randomness it uses affects
behavior directly (and only behavior), while the randomness inside
the brain affects behaviors AND internal *models*, which are
used to categorize the world in a *noise resistant* manner,
thus guiding good future decisions. A cat without a tail
is still a cat.

Then, the human brain could use, to decide the best
course of action, a vector composed of <I, H, S>, where
S is the internal model, INACCESSIBLE TO THE HLUT!
This gives the brain a FAR GREATER CHANCE
of doing the right thing, far outperforming the HLUT
that, although capable of generating the same vector V,
will only do that with *minimal* probability of success.

The problem is that this internal model was the result of
random (and, in my words, CREATIVE) processing in
the brain, something that does not have a counterpart
in the HLUT.

Because of this, the HLUT will fail often, and will only
work intelligently as an exception (I would say once in
a human lifetime). That's almost as bad as Windows 95 :-)

Sergio Navega.

From: houlepn@my-dejanews.com
Subject: Re: The Great Problem With HLUTs
Date: 12 Mar 1999 00:00:00 GMT
Message-ID: <7c9qnu$5g3$1@nnrp1.dejanews.com>
References: <36e80e7c@news3.us.ibm.net>
Organization: Deja News - The Leader in Internet Discussion
Newsgroups: comp.ai.philosophy

"Sergio Navega" <snavega@ibm.net> wrote:

> Definition
> ----------
> A HLUT is an unreal, mathematical object in the form of a table.
> To address this table, you use a vector composed of the current
> sensory inputs plus a vector listing all previous inputs (the
> history).

Agreed.

> Claim (agreed by Daryl, Jim Balter, Pierre and now Sergio)
> -----
> That this HLUT is able to give, as output, enough entries to
> reproduce all possible behaviors that a specific human being is
> able to present, due to the finite nature of the inputs and
> outputs that control physical behavior.

Agreed.

> Other Claim (apparently Daryl, Balter, Pierre, but *not* me)
> -----------
> That this HLUT is able to perform, in behavioral terms,
> just like Sergio, if given similar starting conditions.
>
> The Faulty Conclusion
> ---------------------
> The problem with the preceding claim surfaces when we note that,
> given a vector I containing all input data (vision, audition,
> touch, proprioception, etc.) and using a history H of
> all past interactions as a huge address into the HLUT, you
> can obtain a vector V which is a **unique** output to present
> to the muscles, limbs, etc., in order to produce the desired
> behavior.

This is an unnecessary but unproblematic assumption.

> The question is whether there is only ONE possible response
> to a given <I, H>. To assume that there is only one is
> to assume that the mind of the human being emulated will
> *always* respond with the same output O given a fixed <I, H>.
> Is that reasonable? Only for hard-core behaviorists, I presume.
> This amounts to saying that this hypothesis demands that the
> brain be totally deterministic in nature, which is in effect the
> same as saying that the UNIVERSE IS DETERMINISTIC! I know
> quite a few physicists who will laugh at this.

We have been proposing many sorts of non deterministic HLUTs.
Let me give a more precise example:

DeepBlue is a chess computer with 256 nodes searching in
parallel for the best move as a function of the present board
position and some evaluation function.  Each node concentrates
on the exploration of a separate sub-tree.  At some time a
central unit requests from each node the best move found so far.
It outputs the best one among these 256 candidates.  Due to the
variable time delays and other imponderables in the network
implementation DeepBlue might not give the same answer on two
different runs.  These chaotic flukes are beyond DeepBlue's
sensory discriminative ability.  How would one define a HLUT
for DeepBlue?  The first step is to simulate the whole machine
on a single processor computer running a deterministic program.
Call this simulation SimDB.  SimDB is programmed to run its 256
simulated nodes for the same time T normally scheduled by DeepBlue
plus some finite uncertainty delta_t.  SimDB then considers each
move found by a node at time T + delta_t that is better than the
moves found by the 255 other nodes at time T - delta_t.  All these
moves are likely candidates for moves the true DeepBlue would have
made.  Now add to SimDB a pseudo random number generator and a
reasonable statistical model of DeepBlue's network behavior to
weight the probability of each candidate being the actual
output of SimDB.  Alternatively, replace the pseudo random number
generator by an additional input from a quantum random number
generator external to the simulation.  Consider the HLUT of SimDB:

1) It is finite.

2) The corresponding FSA consumes barely more MIPS than the
   original multiprocessor machine.

3) It models DeepBlue with great accuracy.

4) It is non deterministic (in the case of the quantum number
   generator).

5) I might have overlooked some imponderables affecting the real
   DeepBlue in my SimDB implementation.  You might think of
   these as holes in SimDB's HLUT.  That is not a problem as
   these holes only reflect the fact that SimDB is an
   'ideal' DeepBlue.  DeepBlue will also fail from other
   imponderables such as power interruptions.  Human beings
   also fail to behave intelligently in response to sensory
   inputs in many circumstances such as when they die, are
   inattentive or get trapped in pointless discussions about
   HLUTs :-)

6) SimDB gracefully recovers from those imponderables taken into
   account by the designer of the simulation just as human beings
   recover from many unpredictable events due to fault tolerant
   neural mechanisms and architectures produced by evolution.

7) Defining a similar HLUT from a 10^15 MIPS SimSergio is left as
   an exercise for the reader.
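
For concreteness, the weighted selection step described above
could be sketched in Python as follows (names are mine and purely
illustrative):

    import random

    def simdb_output(candidates, weights, rng=random):
        # candidates: moves surviving the T +/- delta_t filter
        # weights: probabilities assigned by the statistical model
        #          of DeepBlue's network timing (weighted, not flat)
        return rng.choices(candidates, weights=weights, k=1)[0]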

> In my view, given the same conditions, any brain will have a
> *countless* (although finite!) number of possible outputs.
> A madman is one who chooses a strange response to a given
> set of stimuli; this man could have produced a different response,
> *even* under the exact same conditions, unless we assume his
> brain is fixed and that quantum indeterminacy does not affect
> spike pulses in his neurons (something contradicted by current
> accounts of neuroscience, let alone physics).
>
> It is obvious that to present the same behavior as someone else's,
> one must drive one's muscles and limbs in the exact same manner. The
> HLUT, in fact, *does contain* all possibilities, being able to
> drive a countless (but finite) number of copies of the Sergio
> being simulated, in such a way that *one* of those copies will
> present the behavior that the actual Sergio did. BUT:
>
> ***********
> What a HLUT *cannot* do is to select *which one* from those
> countless behaviors will be the one presented by that instance
> of Sergio.
> ***********

Sure but the goal of AI is to build a machine as intelligent
as Sergio.  A machine possibly able to predict Sergio's probable
actions in some context.  It is not to predict Sergio's future
with 100% accuracy.

> And that means the HLUT will have to stick with ONLY ONE possible
> output O, selected from the huge amount of possible outcomes. One

Just as the real Sergio does.  That's good enough.  I cannot
imagine answering all possible objections the real Sergio
could come up with against HLUTs :-)  Note that the HLUT just has
to select from a small range of distinguishable and reasonable
outcomes.  There is no need to attribute the same probability
to all distinguishable and possible outputs.  A modeling
of the neural activity corresponding to Sergio's low level cognitive
processes and some reasonable hypotheses about the underlying
physics will ensure that the probabilities are computed right
and that the HLUT behaves reasonably.

Regards,
Pierre-Normand Houle


From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Great Problem With HLUTs
Date: 12 Mar 1999 00:00:00 GMT
Message-ID: <36e92939@news3.us.ibm.net>
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com>
Organization: SilWis
Newsgroups: comp.ai.philosophy

Pierre, thank you for your response; it seems that you and I are
the last ones with the patience to keep an eye on this subject. There
is, in my opinion, one aspect that the proponents of HLUTs have
not been able to understand, and that's what keeps me on this
subject. I guess my point is still not understood.

The greatest difference between nondeterministic behavior in HLUTs
and in brains is the fact that any random variation introduced in
HLUTs will produce garbage as output, while in brains it will
usually produce slight variations *in one aspect of one model*.
As an example, take the address lines of the memory chips of
your computer (a pretty good model of a HLUT) and vary a few bits
randomly: you'll obtain garbage. Now take any input to an
associative memory (content addressable memory) and vary
any bit randomly: you'll obtain things *in the neighborhood*
of your original entry.
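
Here is a toy Python contrast of the two kinds of lookup (my own
illustration, with made-up patterns):

    # Exact-address lookup (RAM/HLUT style) vs. associative lookup,
    # both under a one-bit error in the address.
    patterns = {0b1010: "cat", 0b0110: "dog"}

    def exact_lookup(addr):
        return patterns.get(addr, "<garbage>")

    def associative_lookup(addr):
        # return the stored key closest in Hamming distance
        return min(patterns, key=lambda p: bin(p ^ addr).count("1"))

    print(exact_lookup(0b1011))                  # '<garbage>'
    print(patterns[associative_lookup(0b1011)])  # 'cat'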

We can obviously duplicate that associative behavior in a HLUT,
provided that we know, beforehand, which bits are most likely
to be associated together. What I'm saying is that the intelligent
brain is able to come up with this association by itself, and
often it uses random aspects (impossible to account for in a HLUT)
to improve *its own* models (showing new ways of doing things,
helping in the discovery of new things, etc.). A HLUT will have
to stick with one fixed set of models forever. Yes, a HLUT does
contain all possible answers of that intelligent brain. It just
does not know how to select the most likely entries for the
circumstance.

The brain does not store entries in a sequential, orderly manner
as HLUTs do. The brain groups things, according to similarity
criteria, categorizing things. So when some randomness occurs
in the brain, it provokes a slight variation able to present
*new* and *potentially useful* aspects that will lead the
brain to *discover* things. That's the creativity that you and I
and all humans have, and that no HLUT, no matter how large,
will ever have.

*Intelligence does not exist without creativity*

But the problem is the *cumulative composition* of this effect,
and that happens because we don't have a single cycle of
this process. We have *millions of cycles* per second, on
a conservative estimate. A HLUT, subject to slight randomness,
will diverge very fast, while a brain will *converge* very fast.
And why's that? Because the HLUT *does not group things* according
to several similarity criteria (and randomness in the brain helps
in coming up with these similarity criteria), but the brain does!

Einstein's theory of relativity could, in principle, be the
result of a *random* firing of a single neuron that contributed
to an avalanche of cumulative neural effects that sparked in
Einstein's mind as that idea. Yes, a HLUT does have all possible
outcomes of Einstein, including all behaviors that he could
present as the result of discovering relativity. But a HLUT
**does not** have that theory as an **internal criterion** for
judging and selecting what are the most *probable* behaviors
to have in a specific circumstance, among the number of
possibilities, in a *limited* amount of time (Einstein's lifetime!).

Why is it so hard for you folks to acknowledge this?

houlepn@my-dejanews.com wrote in message
<7c9qnu$5g3$1@nnrp1.dejanews.com>...
>[snip]
>
>We have been proposing many sorts of non deterministic HLUTs.
>Let me give a more precise example:
>
>DeepBlue is a chess computer with 256 nodes searching in
>parallel for the best move as a function of the present board
>position and some evaluation function.  Each node concentrates
>on the exploration of a separate sub-tree.  At some time a
>central unit requests from each node the best move found so far.
>It outputs the best one among these 256 candidates.  Due to the
>variable time delays and other imponderables in the network
>implementation DeepBlue might not give the same answer on two
>different runs.  These chaotic flukes are beyond DeepBlue's
>sensory discriminative ability.  How would one define a HLUT
>for DeepBlue?  The first step is to simulate the whole machine
>on a single processor computer running a deterministic program.
>Call this simulation SimDB.  SimDB is programmed to run its 256
>simulated nodes for the same time T normally scheduled by DeepBlue
>plus some finite uncertainty delta_t.  SimDB then considers each
>move found by a node at time T + delta_t that is better than the
>moves found by the 255 other nodes at time T - delta_t.  All these
>moves are likely candidates for moves the true DeepBlue would have
>made.  Now add to SimDB a pseudo random number generator and a
>reasonable statistical model of DeepBlue's network behavior to
>weight the probability of each candidate being the actual
>output of SimDB.  Alternatively, replace the pseudo random number
>generator by an additional input from a quantum random number
>generator external to the simulation.  Consider the HLUT of SimDB:
>
>1) It is finite.
>
>2) The corresponding FSA consumes barely more MIPS than the
>   original multiprocessor machine.
>
>3) It models DeepBlue with great accuracy.
>
>4) It is non deterministic (in the case of the quantum number
>   generator).
>
>5) I might have overlooked some imponderables affecting the real
>   DeepBlue in my SimDB implementation.  You might think of
>   these as holes in SimDB's HLUT.  That is not a problem as
>   these holes only reflect the fact that SimDB is an
>   'ideal' DeepBlue.  DeepBlue will also fail from other
>   imponderables such as power interruptions.  Human beings
>   also fail to behave intelligently in response to sensory
>   inputs in many circumstances such as when they die, are
>   inattentive or get trapped in pointless discussions about
>   HLUTs :-)
>
>6) SimDB gracefully recovers from those imponderables taken into
>   account by the designer of the simulation just as human beings
>   recover from many unpredictable events due to fault tolerant
>   neural mechanisms and architectures produced by evolution.
>
>7) Defining a similar HLUT from a 10^15 MIPS SimSergio is left as
>   an exercise for the reader.
>

This will not work (although I *believe* that there exists one HLUT
able to act *just like* Deep Blue). Any random fluctuation in the
address furnished to a HLUT may oblige it to take an entry that may
belong to the space of *another* simulated processor. The result is
pure garbage, unless you modify the HLUT introducing contrived
auxiliary mechanisms. A pure HLUT (such as the one all of you
have been proposing from the beginning) cannot do that.

>>
>> It is obvious that to present the same behavior as someone else's,
>> one must drive one's muscles and limbs in the exact same manner. The
>> HLUT, in fact, *does contain* all possibilities, being able to
>> drive a countless (but finite) number of copies of the Sergio
>> being simulated, in such a way that *one* of those copies will
>> present the behavior that the actual Sergio did. BUT:
>>
>> ***********
>> What a HLUT *cannot* do is to select *which one* from those
>> countless behaviors will be the one presented by that instance
>> of Sergio.
>> ***********
>
>Sure but the goal of AI is to build a machine as intelligent
>as Sergio.  A machine possibly able to predict Sergio's probable
>actions in some context.  It is not to predict Sergio's future
>with 100% accuracy.
>

To predict the possible actions of Sergio in one context you
have to use a probabilistic model (like a gaussian curve)
centered on the average of the possible responses of
Sergio to that circumstance. The HLUT does not have such a
model; the probabilistic model a HLUT uses is a *straight
horizontal line* (all entries with equal probability; remember,
address lines!).
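
In toy Python form (the outputs and weights are invented, only to
show the contrast):

    import random

    outputs = ["typical", "plausible", "odd", "absurd"]

    # What a pure HLUT can do: a flat choice over its entries.
    flat_pick = random.choice(outputs)        # each with prob 1/4

    # What an internal model would do: weight by typicality.
    weights = [0.70, 0.25, 0.04, 0.01]
    model_pick = random.choices(outputs, weights=weights, k=1)[0]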

Denying this means that you're denying the original HLUT you
have proposed. Correcting this with another auxiliary mechanism
means that the original HLUT *is not* capable of doing it by
itself without the help of tricks.

>> And that means the HLUT will have to stick with ONLY ONE possible
>> output O, selected from the huge amount of possible outcomes. One
>
>Just as the real Sergio does.  That's good enough.

No, the real Sergio will give priority to some outputs, because
he built an internal model resulting from the *creative understanding*
of his universe. The model the HLUT will use is *flat*! The HLUT
CANNOT reconstruct the creative model that Sergio built (and that's
nothing special about Sergio; Pierre's HLUT would have the
same problem ;-)

>I cannot
>imagine answering all possible objections the real Sergio
>could come up with against HLUTs :-)

That's my *creative* aspect that cannot be accounted for
by any HLUT! :-)

>Note that the HLUT just has
>to select from a small range of distinguishable and reasonable
>outcomes.  There is no need to attribute the same probability
>to all distinguishable and possible outputs.

Tell me, Pierre, how will you do this *without* introducing additional
mechanisms to the HLUT? (I want to stick with the original HLUT that
you, Daryl and Balter proposed.) How will you add a judging factor
(you want **reasonable** outcomes, no?) without modifying the original
table-only aspect of the HLUT? If you're not attributing the same
probability to all possible and distinguishable outputs, tell me
what criterion you'll use to make that selection and how a
HLUT will know which one to use. Remember that you're trying to
make that HLUT behave similarly to Sergio (exactly, as we've seen,
is not guaranteed).

>A modeling
>of the neural activity corresponding to Sergio's low level cognitive
>processes and some reasonable hypotheses about the underlying
>physics will ensure that the probabilities are computed right
>and that the HLUT behaves reasonably.
>

So you're adding a special computing process to the HLUT. A model
that can correct the *flat* aspect of the original HLUT. I may
agree that this may eventually work. It is not, however, the
original HLUT as you, Daryl and Jim proposed; it is a *different*
mechanism, something that tries to *correct* failures of the
HLUT. This is pretty much what I want to focus on: what
aspects the HLUT does not address, and how to imagine a
mechanism able to present intelligent behavior without resorting
to omniscience.

Regards,
Sergio Navega.

From: houlepn@ibm.net
Subject: Re: The Great Problem With HLUTs
Date: 13 Mar 1999 00:00:00 GMT
Message-ID: <7ccpk5$o25$1@nnrp1.dejanews.com>
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com> <36e92939@news3.us.ibm.net>
Organization: Deja News - The Leader in Internet Discussion
Newsgroups: comp.ai.philosophy

"Sergio Navega" <snavega@ibm.net> wrote:

> Pierre, thank you for your response; it seems that you and I are
> the last ones with the patience to keep an eye on this subject. There
> is, in my opinion, one aspect that the proponents of HLUTs have
> not been able to understand, and that's what keeps me on this
> subject. I guess my point is still not understood.

Thank you for your patience Sergio.

> The greatest difference between nondeterministic behavior in HLUTs
> and in brains is the fact that any random variation introduced in
> HLUTs will produce garbage as output, while in brains it will
> usually produce slight variations *in one aspect of one model*.
> As an example, take the address lines of the memory chips of
> your computer (a pretty good model of a HLUT) and vary a few bits
> randomly: you'll obtain garbage. Now take any input to an
> associative memory (content addressable memory) and vary
> any bit randomly: you'll obtain things *in the neighborhood*
> of your original entry.

Yes, the HLUT behaves just as well as the FSA for which
it is an alternate representation.  Current computers are
not very fault tolerant and fault tolerance is indeed a
crucial aspect of human intelligence.  However, it does
not seem that guarding oneself from unanticipated flukes
is what takes the most computational power in the
implementation of either natural or artificial fault
tolerant brains.  For instance, a scratched music CD will play
well on most modern CD players.  These have huge HLUTs
recovering from most if not all input errors with the only
addition of some simple ECC code.  Humans do not instantly
recover from all errors either.  You can recognize the word
'bra*n' as 'brain' but misinterpret '*rain' as 'train'.  You
need some context to decide and you sometimes lack it.  An FSA
seeks context in its internal states and models built upon
its past history.  The corresponding HLUT has this context
encoded in its huge input vector instead.
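
As a toy illustration of that last point (the two-word lexicon is
mine, chosen only to mirror the example):

    import re

    LEXICON = ["brain", "train"]

    def recover(word):
        # '*' stands for one unknown character
        rx = re.compile("^" + word.replace("*", ".") + "$")
        return [w for w in LEXICON if rx.match(w)]

    print(recover("bra*n"))   # ['brain']          -- unambiguous
    print(recover("*rain"))   # ['brain', 'train'] -- context needed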

> We can obviously duplicate that associative behavior in a HLUT,
> provided that we know, beforehand, which bits are most likely
> to be associated together. What I'm saying is that the intelligent

There is no need for Sergio to know beforehand how he will react to
tomorrow's unanticipated events.  The state of his brain will
take care of that when the time comes.  Natural evolution or the
DNA machinery present in your first zygotic cell could dispense
with knowledge of the future while designing Sergio's brain; AI
designers can also dispense with it while designing intelligent
systems able to learn as FSAs.  These systems could in principle
be just as ready as Sergio to face a wide range of circumstances
(and so would their HLUTs, by definition).

> brain is able to come up with this association by itself, and
> often it uses random aspects (impossible to account for in a HLUT)
> to improve *its own* models (showing new ways of doing things,
> helping in the discovery of new things, etc.). A HLUT will have
> to stick with one fixed set of models forever.

The models you refer to occur at a higher level than individual
entries in the HLUT.  Your claim amounts to saying that humans will
have to stick with one fixed set of laws (physical, chemical,
neurophysiological...) governing their brain evolution forever,
or that DeepBlue will never find a new, original variation of a
chess opening because it is bound to follow a fixed algorithm.

> Yes, a HLUT does
> contain all possible answers of that intelligent brain. It just
> does not know how to select the most likely entries for the
> circumstance.

That is incorrect.  See below...

> The brain does not store entries in a sequential, orderly manner
> as HLUTs do. The brain groups things, according to similarity

Of course!  This red herring HLUT is just an alternate formal
representation of an FSA.  FSAs do not necessarily do the dumb
thing you describe above.  To a small extent they already do the
things you list below.

> But the problem is the *cumulative composition* of this effect,
> and that happens because we don't have a single cycle of
> this process. We have *millions of cycles* per second, on
> a conservative estimate. A HLUT, subject to slight randomness,
> will diverge very fast, while a brain will *converge* very fast.
> And why's that? Because the HLUT *does not group things* according
> to several similarity criteria (and randomness in the brain helps
> in coming up with these similarity criteria), but the brain does!

Divergence in itself is a non-issue.  Several clones of Sergio
will also diverge from each other.  I don't agree that several
HLUTs will diverge faster.  The issue is whether the HLUT behaves
realistically as a possible Sergio clone or whether it will become
erratic.  You argue for the second option from the dynamics of
non-linear systems.  This is a mistake.  Can't a simulation of
the earth's climate be implemented on an FSA?  Separate runs of the
same simulation (with slightly different inputs) will certainly
diverge from each other but they will behave statistically just
like the real climate (provided the model is good enough).  The
forecast will not be "Maybe snow, maybe rain, maybe sunny,
maybe cloudy..." but rather "Snow 10%, rain 85%, sunny 5%,
cloudy 95%...".
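
A toy Python version of that forecasting point (the probabilities
are invented for illustration):

    import random

    def one_run(rng):
        # one stochastic 'simulation run'; individual runs differ
        return {"rain": rng.random() < 0.85,
                "snow": rng.random() < 0.10}

    runs = [one_run(random.Random(seed)) for seed in range(10000)]
    for event in ("rain", "snow"):
        # the ensemble statistics are stable: ~0.85 and ~0.10
        print(event, sum(r[event] for r in runs) / len(runs))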

> Einstein's theory of relativity could, in principle, be the
> result of a *random* firing of a single neuron that contributed
> to an avalanche of cumulative neural effects that sparked in
> Einstein's mind as that idea. Yes, a HLUT does have all possible
> outcomes of Einstein, including all behaviors that he could
> present as the result of discovering relativity. But a HLUT
> **does not** have that theory as an **internal criterion** for
> judging and selecting what are the most *probable* behaviors
> to have in a specific circumstance, among the number of
> possibilities, in a *limited* amount of time (Einstein's
> lifetime!).

Time is a non-issue concerning HLUTs.  The HLUT is a mathematical
concept.  Einstein's predispositions toward discovering relativity
are encoded in his brain as a result of its past interactions with
the environment and are thus also encoded in the HLUT input vector.
The HLUT gives the most probable behavior at some given time just
as a climate simulator continually monitoring meteorological data
gives the most probable weather forecast.

Additionally I would like to remind you that my argument is not that
Einstein's HLUT suggests FSAs as good models for Einstein.  Rather,
the argument is that the extravagance of that particular HLUT does
not lead to ruling out FSAs as possible models for Einstein's
intelligent behavior.  A HLUT for the real Einstein is just as
crazy as a HLUT for the real climate.  This is because there are
so many things in the real Einstein irrelevant to the understanding
of his intelligence.

[snip]

> > [Pierre-Normand]
> > We have been proposing many sorts of non deterministic HLUTs.
> > Let me give a more precise example:
> >
> > DeepBlue is a chess computer with 256 nodes searching in
> > parallel for the best move as a function of the present board
> > position and some evaluation function.  Each node concentrates
> > on the exploration of a separate sub-tree.  At some time a
> > central unit requests from each node the best move found so far.
> > It outputs the best one among these 256 candidates.  Due to the
> > variable time delays and other imponderables in the network
> > implementation DeepBlue might not give the same answer on two
> > different runs.  These chaotic flukes are beyond DeepBlue's
> > sensory discriminative ability.  How would one define a HLUT
> > for DeepBlue?  The first step is to simulate the whole machine
> > on a single processor computer running a deterministic program.
> > Call this simulation SimDB.  SimDB is programmed to run its 256
> > simulated nodes for the same time T normally scheduled by DeepBlue
> > plus some finite uncertainty delta_t.  SimDB then considers each
> > move found by a node at time T + delta_t that is better than the
> > moves found by the 255 other nodes at time T - delta_t.  All these
> > moves are likely candidates for moves the true DeepBlue would have
> > made.  Now add to SimDB a pseudo random number generator and a
> > reasonable statistical model of DeepBlue's network behavior to
> > weight the probability of each candidate being the actual
> > output of SimDB.  Alternatively, replace the pseudo random number
> > generator by an additional input from a quantum random number
> > generator external to the simulation.  Consider the HLUT of SimDB:
> >
> > 1) It is finite.
> >
> > 2) The corresponding FSA consumes barely more MIPS than the
> >   original multiprocessor machine.
> >
> > 3) It models DeepBlue with great accuracy.
> >
> > 4) It is non deterministic (in the case of the quantum number
> >   generator).
> >
> > 5) I might have overlooked some imponderables affecting the real
> >   DeepBlue in my SimDB implementation.  You might think of
> >   these as holes in SimDB's HLUT.  That is not a problem as
> >   these holes only reflect the fact that SimDB is an
> >   'ideal' DeepBlue.  DeepBlue will also fail from other
> >   imponderables such as power interruptions.  Human beings
> >   also fail to behave intelligently in response to sensory
> >   inputs in many circumstances such as when they die, are
> >   inattentive or get trapped in pointless discussions about
> >   HLUTs :-)
> >
> > 6) SimDB gracefully recovers from those imponderables taken into
> >   account by the designer of the simulation just as human beings
> >   recover from many unpredictable events due to fault tolerant
> >   neural mechanisms and architectures produced by evolution.
> >
> > 7) Defining a similar HLUT from a 10^15 MIPS SimSergio is left as
> >   an exercise for the reader.
> >
>
> This will not work (although I *believe* that there exists one HLUT
> able to act *just like* Deep Blue). Any random fluctuation in the
> address furnished to a HLUT may oblige it to take an entry that may
> belong to the space of *another* simulated processor. The result is
> pure garbage,

What you propose is ruled out as this particular HLUT only takes
chess moves and random seeds as its inputs.  Maybe you want to
tamper with the HLUT's table entries themselves.  Then this HLUT
is no longer SimDB's HLUT.  This amounts to sticking a screwdriver
into DeepBlue's innards.  I do not recommend doing that with either
computer hardware or human wetware.  Maybe you are talking about
fault tolerance.  HLUTs are just as fault tolerant as the FSAs they
represent.  A machine's immunity to the erratic behavior of its
hardware components is reflected in the outputs of its HLUT.

> unless you modify the HLUT introducing contrived
> auxiliary mechanisms. A pure HLUT (such as the one all of you
> have been proposing from the beginning) cannot do that.

The HLUTs I have been proposing from the beginning are
fault tolerant (just as much as the real systems they model)
and non deterministic.  They never require adjustments at run
time to operate correctly.

> >> It is obvious that to present the same behavior as someone else's,
> >> one must drive one's muscles and limbs in the exact same manner. The
> >> HLUT, in fact, *does contain* all possibilities, being able to
> >> drive a countless (but finite) number of copies of the Sergio
> >> being simulated, in such a way that *one* of those copies will
> >> present the behavior that the actual Sergio did. BUT:
> >>
> >> ***********
> >> What a HLUT *cannot* do is to select *which one* from those
> >> countless behaviors will be the one presented by that instance
> >> of Sergio.
> >> ***********
> >
> > Sure but the goal of AI is to build a machine as intelligent
> > as Sergio.  A machine possibly able to predict Sergio's probable
> > actions in some context.  It is not to predict Sergio's future
> > with 100% accuracy.
> >
>
> To predict the possible actions of Sergio in one context you
> have to use a probabilistic model (like a gaussian curve)
> centered in the average aspect of the possible responses of
> Sergio to that circumstance. The HLUT does not have such a
> model, the probabilistic model a HLUT uses is a *straight
> horizontal line* (all entries with equal probability, remember,
> address lines!).

Wrong.  My first HLUT had an exact probabilistic model based
on the rules of quantum mechanics.  As for my SimDB's HLUT, does
this look like an equiprobabilistic model:

> > "Now add to SimDB a pseudo random number generator and a
> > reasonable statistical model of DeepBlue's network behavior to
> > weight the probability of each candidate being the actual
> > output of SimDB." ?

Note that this is not a post hoc modification of the HLUT.  Rather
it starts with a realistic interpretation of DeepBlue's low level
implementation, and proceeds with its modeling in an FSA closely
following DeepBlue's relevant organizational structure.  This
model is then reflected without any further assumption in this
FSA's HLUT.

> Denying this means that you're denying the original HLUT you
> have proposed. Correcting this with another auxiliary mechanism
> means that the original HLUT *is not* capable of doing it by
> itself without the help of tricks.

I have not been upgrading my original HLUT.  On the contrary,
I have been simplifying it so as to match a more reasonable FSA
emulation of Sergio.  Those tricks you refer to are not ad hoc
additions.  They are implicit in the neural implementation of
Sergio's mind.  I am suggesting that there is no reason for these
natural 'tricks' to be absent from the FSA emulation despite our
erroneous intuitions about HLUT limitations.

[snip]

> >I cannot
> >imagine answering all possible objections the real Sergio
> >could come up with against HLUTs :-)
>
> That's my *creative* aspect that cannot be accounted
> by any HLUT! :-)

This illusion comes from looking at the wrong level.  It is
also Searle's mistake.  The HLUT is like the Chinese Room.
Creativity, intelligence, thought... are to be found in the
Room/HLUT overall behavior and organization.

If creativity is the ability to behave non deterministically
then the HLUT can have it.  If it is the ability to behave
differently in new circumstances then the HLUT can have it.
If it is the ability to transcend one's own wet/hardware
specifications then Sergio cannot have it.  (Unless he is
a good brain surgeon and he knows how to use a sharp knife
and two mirrors.)

> > Note that the HLUT just has
> > to select from a small range of distinguishable and reasonable
> > outcomes.  There is no need to attribute the same probability
> > to all distinguishable and possible outputs.
>
> Tell me, Pierre, how will you do this *without* introducing additional
> mechanisms to the HLUT? (I want to stick with the original HLUT that
> you, Daryl and Balter proposed.) How will you add a judging factor

I don't want to stick with a red herring!  The HLUT I proposed
is reasonable and does not require any run time tampering.

> (you want **reasonable** outcomes, no?) without modifying the original
> table-only aspect of the HLUT? If you're not attributing the same
> probability to all possible and distinguishable outputs, tell me
> what criterion you'll use to make that selection and how a
> HLUT will know which one to use. Remember that you're trying to
> make that HLUT behave similarly to Sergio (exactly, as we've seen,
> is not guaranteed).

Sergio, please go back to my DeepBlue example above and read
carefully.  You have overlooked the essential point.

> > A modeling
> > of the neural activity corresponding to Sergio's low level cognitive
> > processes and some reasonable hypotheses about the underlying
> > physics will ensure that the probabilities are computed right
> > and that the HLUT behaves reasonably.
>
> So you're adding a special computing process to the HLUT. A model
> that can correct the *flat* aspect of the original HLUT. I may

I'm not adding anything that was not implicitly already there.
The *flat* aspect of the HLUT is just like the dull aspect of
Searle's Chinese room.  It needs no correction, just the
realization that one is looking at the wrong level.  The dull
aspect of HLUTs, Chinese rooms and bunches of neurons is a red
herring.  It does not allow us to conclude anything (well, almost)
about the high level performances of FSAs, computers or brains.

Regards,
Pierre-Normand Houle


