From: "Sergio Navega" <snavega@ibm.net>
Subject: On Rickert/Daryl's debate
Date: 29 Jan 1999 00:00:00 GMT
Message-ID: <36b1e548@news3.ibm.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 29 Jan 1999 16:43:52 GMT, 166.72.29.54
Organization: SilWis
Newsgroups: comp.ai.philosophy

In this post I'll try to address the main points, as I
understood them, of both Neil Rickert and Daryl McCullough
regarding lookup tables, external/internal behaviors and
learning matters.

I hope I can be convincing in showing that *both* are wrong.

Why Daryl is wrong:
------------------

My first reaction to Daryl's initial proposition was to accept
his arguments while recognizing that the idea would be utterly
impractical. I think I have since refined my arguments to show
that his idea is not just highly unlikely, it is simply impossible.

First, here is what I take Daryl's original argument to be: that a
conveniently assembled robot, externally indistinguishable from a
human and driven by a mechanism based on a *huge* lookup table,
would be able to present behavior also indistinguishable from that
of a normal human being.

I propose that, given everything we know about physics, this
is impossible. And I call upon Albert Einstein to help me
make my point.

a) The size of the problem.
As I said in another post (buried somewhere else), the lookup
table would have to be very, very large. I will spare myself
the trouble of arguing this again (a red spot on your favorite
beer cup is one entry in your table; tomorrow morning it will
be another, because its color will have changed slightly with
the light).
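
Just to make "very, very large" concrete, here is a rough and
purely illustrative calculation (all numbers are assumptions of
mine, nothing Daryl committed to): even a tiny grayscale "retina"
admits far more distinct snapshots than there are atoms in the
observable universe, and the table would have to key on whole
input *histories*, not single snapshots.

  # Illustrative back-of-the-envelope estimate; every figure below is
  # an assumption chosen only to show the order of magnitude involved.
  from math import log10

  PIXELS = 100 * 100          # a tiny 100x100 grayscale "retina"
  LEVELS = 16                 # 16 gray levels per pixel
  ATOMS_EXPONENT = 80         # ~10^80 atoms, a commonly cited rough figure

  exponent = PIXELS * log10(LEVELS)     # distinct snapshots = 10^exponent
  print(f"distinct snapshots ~ 10^{exponent:.0f}")   # ~ 10^12041
  print(f"atoms in universe  ~ 10^{ATOMS_EXPONENT}")

  # A behavioral lookup table would have to key on *sequences* of such
  # snapshots (plus sound, touch, ...), which is vastly larger still.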

b) Impossibility of duplicating reaction time.
If we want to duplicate the behavior of a human so as to be
indistinguishable from another exemplar, we have to duplicate
its reaction time to events and also act on the world with
equal timing. This is not possible. And without the proper
reaction time, one human could have said something (in a verbal
discussion, for example) while the equivalent robot would have
missed it (and probably the entire discussion).

c) Information in Andromeda.
To allow the assembly of that lookup table, we would have to
use most (if not all) of the atoms of the universe to
(quantum-mechanically) code each bit. A simple calculation
shows why this is necessary (the argument of size above). That
means that for this robot to, say, check whether a loan
application has been filled in correctly (age of applicant,
salary conditions, city of residence, time of employment, etc.,
etc.), the effort could take "a couple of years", even at the
speed of light (to access information deep in the table, one
would have to *transfer* information across vast distances;
Andromeda is 2 million light-years from Earth).
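
Here is a minimal arithmetic sketch of why distance alone kills
the scheme (the reaction time and distances are my illustrative
assumptions): answering within anything like a human reaction
time bounds how far away any consulted entry may physically sit.

  # How far away can a table entry be if the robot must answer within
  # a human-like reaction time?  (Numbers are illustrative assumptions.)
  C_KM_S = 299_792            # speed of light, km/s
  REACTION_TIME_S = 0.3       # rough human reaction time
  LIGHT_YEAR_KM = 9.461e12

  max_radius_km = C_KM_S * REACTION_TIME_S / 2     # request + reply
  print(f"max storage radius: {max_radius_km:,.0f} km")   # ~45,000 km

  andromeda_km = 2_000_000 * LIGHT_YEAR_KM
  print(f"Andromeda is ~{andromeda_km / max_radius_km:.1e} times too far")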

This is similar to the problem of discovering whether the
100,000th digit of this sequence:

2.383838383838383838383838383838383838383838383838........

is a '3' or an '8'. You have at least two ways to solve this:
you can internally "count" up to the 100,000th digit and check
what it is, or you may notice that there is a pattern and obtain
the desired answer through a simple calculation. Our problem is
more likely to be on the order of knowing whether the (googol
factorial)-th digit is a 3 or an 8. The difference in processing
time will not allow these two alternatives to look similar from
the outside.
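
As a toy illustration of the two strategies (the code and names
are mine, purely for illustration): one procedure enumerates the
digits one at a time, the way a lookup/enumeration must, while
the other exploits the repeating pattern and answers immediately,
no matter how deep the requested position is.

  # Two ways to find the n-th fractional digit of 2.383838...  (n >= 1)
  from itertools import cycle, islice

  def digit_by_counting(n: int) -> int:
      """Walk through the digits one at a time -- O(n) work."""
      stream = cycle("38")                  # the fractional digits, forever
      return int(next(islice(stream, n - 1, n)))

  def digit_by_pattern(n: int) -> int:
      """Exploit the repeating '38' pattern -- O(1) work."""
      return 3 if n % 2 == 1 else 8

  assert digit_by_counting(100_000) == digit_by_pattern(100_000) == 8
  # For a position as deep as (googol factorial), only the pattern-based
  # route is even conceivable; the counting route could never finish.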

But the problem I raised with Daryl's model is not what
Rickert wanted to emphasize.

Why Rickert is wrong:
---------------------

I agree with Neil in most of his reasoning, because of
the importance he gives to perceptual systems. I am
convinced that much of what we should pay attention to is
related to perception.

The main point we must focus on is the flexibility of the
perceptual system. Just to recall, the perceptual system here
is something that comprises not only the sensory transducers
but also "that part" of the brain used in the *recognition* of
objects and events and in the extraction of information from
raw data. This conceptualization is, perhaps, one of the points
in doubt.

The example of the party (I guess concocted by Mark Young)
was pertinent, because it revealed the root of the problem:
whatever the method, we are sensitive to special "patterns"
among others that can even be considered "noise". In a party,
if somebody utters our name, even using voice intonation
similar to the "background conversations", we will be able
to notice with (apparently) no effort at all. It is an
amazing ability we have of detecting significance among
"meaningless" jumbled chitchats.

Neil suggested that this is an ability obtained through the
refinement of a phase *prior* to digitization (apparently
everybody agreed that digitization occurs somewhere).

This is, maybe, the point that Daryl, Mark and others didn't
understand. To be honest, neither did I. I don't know of
any neurobiological evidence suggesting that this could
happen (well, except for a recent article in Nature, but I'll
leave that for another time).

But I do agree that something along these lines must happen
somewhere in the circuit, because without it we would not
be able to present the fast object recognition behavior that
we do.

Neil's argument is strong when he says that, without such an
adjustment, the organism would be seriously impaired in its
adaptive (and "recognitive") capabilities, which are necessary,
for example, for the perception of small details in the raw data
that the original (fixed) equipment would not be able to capture.
My claim here is that there is a problem with this line of
thought, because it "forgets" one important aspect.

I think the missing link here is to remember *another* way
that we humans have had to learn about our world:
evolutionary (natural) selection.

All our senses were "designed" by nature to have the
"right" sensitivity/accuracy/range. The evolutionary development
of eyes, ears, touch, taste buds, etc. can be seen as
a way of learning how to adapt sensory transduction to
the "big picture" presented by our world (average
light conditions, important colors to notice, the frequency
range and intensity of the sounds we have to care about, etc.).
This "learning" took eons to happen, but it certainly
influenced the *accuracy* and *range* of our sensory
equipment.

If we put humans on Mars and come back 500,000 years
later, I bet we will find some significant differences in
their sensory systems. Maybe they will be able to see better
in the dark, or to hear sounds above 20 kHz. It is a slow
adaptation to different environmental conditions, something
that affects the *physical* characteristics of the sensory
mechanisms. I know of no other way to explain physical
changes in the senses.

The sensitivity and accuracy of our senses today are, as far
as current neurobiological knowledge can tell, fixed over the
time range we're talking about (the life span of a single
human). So they could be conveniently substituted by equivalent
"solid state" machinery.

Thus, a robot that could duplicate the sensitivity, accuracy
and range of our current sensors *could really* be used to
develop a behaviorally identical human (given an adequately
built brain). In my opinion, we will be able to do this within
the next 50 years (Hans Moravec also seems to be similarly
optimistic).

I can accept that this robot might have some problems keeping
this "isomorphism" far into the future, because we will keep
evolving (eventually improving our senses) and the robot's
mechanisms, built with today's fixed technology and without
"upgrades", will not. Hence, in this regard, Rickert's
idea seems a little bit pointless.

But that does not invalidate the *goal* of Rickert's argument.
We do indeed perceive our name when uttered in the middle
of a party.

I suggest that this is the "gestalt" effect I mentioned
in another post: our brain really rewires itself to assemble
"feature detectors" specialized in the recognition of,
for example, the phonological sequence that makes up our
name. This is probably done through synaptic plasticity,
Hebbian learning, LTP, whatever, in a very "inner" part of the
mechanism: inside the brain itself. There is no need to change
the physical (or analog) part of the senses to get this
effect. If we are able to notice our name in a jumbled murmur,
so could a robot, if it had a *similar* feature detector.
Obviously we will never be able to notice our name *whispered*
in such circumstances, and the same goes for the robot, if its
ear were isomorphic with ours.
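
Just as a crude sketch of what I mean by such a detector (all of
it is a toy assumption of mine, not a model of real cortex):
repeated, "salient" presentations of one particular pattern
strengthen, Hebbian-style, exactly the weights that respond to
it, so the pattern later stands out against random chatter.

  import numpy as np

  rng = np.random.default_rng(0)

  # A toy "phonological" pattern for one's own name, coded as +/-1
  # features, and random +/-1 "chatter" patterns for everything else.
  name = np.array([1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

  def chatter():
      return rng.choice([-1.0, 1.0], size=name.size)

  w = np.zeros(name.size)      # detector weights, initially untuned
  eta = 0.1                    # learning rate

  for _ in range(300):
      x = name if rng.random() < 0.1 else chatter()
      salient = np.array_equal(x, name)   # crude stand-in for whatever
      if salient:                         # makes one's own name special
          w += eta * x                    # Hebbian-style reinforcement
          w *= 0.98                       # mild decay keeps weights bounded

  print("response to own name:", round(float(w @ name), 2))
  print("response to chatter: ", round(float(w @ chatter()), 2))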

There is, indeed, something important to notice here: the
plasticity in our brain is responsible for the continuous
development of better detectors, recognizers and classifiers
of information. This detection, once automated, eases our
high-level activities, such as driving a car, lecturing,
understanding written language, perceiving the emotional
state of somebody else, etc. A baby is not able to do all
this, but it will be, once it transforms *experiences*
into cognition.

My concern is with what happens in the brain of a baby
that allows it to solve the "lookup table" problem without
using one. That's where the secret of intelligence lies.

Final Points
------------

If I had to take sides here and pick not the winner, but the
most *useful* argument, I would no doubt pick Neil's. Daryl's
is a "mathematical" and pointless conjecture that leaves very
little room for more interesting discussions. Neil's, on the
other hand, is much more thought-provoking, besides representing
a significant and important departure from a series of claims
and assumptions made in the past by traditional AI researchers.

AI is stuck, no matter what we hear to the contrary.
Unless we break some preconceptions, we will keep insisting on
building lookup tables and their fancier (but equally impractical)
versions, such as first-order predicate logic, semantic networks,
description logics, expert systems, ontological knowledge bases,
etc., etc., etc.

Regards,
Sergio Navega.

From: houlepn@ibm.net
Subject: Re: On Rickert/Daryl's debate
Date: 29 Jan 1999 00:00:00 GMT
Message-ID: <78t9jf$95k$1@nnrp1.dejanews.com>
References: <36b1e548@news3.ibm.net>
X-Http-Proxy: 1.0 x3.dejanews.com:80 (Squid/1.1.22) for client 207.96.209.191
Organization: Deja News - The Leader in Internet Discussion
X-Article-Creation-Date: Fri Jan 29 21:35:43 1999 GMT
Newsgroups: comp.ai.philosophy
X-Http-User-Agent: Mozilla/4.05 [en] (Win95; U)

"Sergio Navega" <snavega@ibm.net> wrote:

> The example of the party (I guess concocted by Mark Young)
> was pertinent, because it revealed the root of the problem:
> whatever the method, we are sensitive to special "patterns"
> among others that can even be considered "noise". In a party,
> if somebody utters our name, even using voice intonation
> similar to the "background conversations", we will be able
> to notice with (apparently) no effort at all. It is an
> amazing ability we have of detecting significance among
> "meaningless" jumbled chitchats.
>
> Neil suggested that this is an ability obtained through the
> refinement of a phase *prior* to digitization (apparently
> everybody agreed that digitization occurs somewhere).

I would expect this to happen at an intermediate phase of
digitization.  At least one level of abstraction would need
to have taken place if we are to account for the ability to
recognize our name pronounced with any accent, pitch,
rhythm...  Could such versatility be attainable analogically?

Pierre-Normand Houle

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own   

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: On Rickert/Daryl's debate
Date: 30 Jan 1999 00:00:00 GMT
Message-ID: <36b31b28@news3.ibm.net>
References: <36b1e548@news3.ibm.net> <78t9jf$95k$1@nnrp1.dejanews.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 30 Jan 1999 14:46:00 GMT, 166.72.21.217
Organization: SilWis
Newsgroups: comp.ai.philosophy

houlepn@ibm.net wrote in message <78t9jf$95k$1@nnrp1.dejanews.com>...
>"Sergio Navega" <snavega@ibm.net> wrote:
>
>> The example of the party (I guess concocted by Mark Young)
>> was pertinent, because it revealed the root of the problem:
>> whatever the method, we are sensitive to special "patterns"
>> among others that can even be considered "noise". In a party,
>> if somebody utters our name, even using voice intonation
>> similar to the "background conversations", we will be able
>> to notice with (apparently) no effort at all. It is an
>> amazing ability we have of detecting significance among
>> "meaningless" jumbled chitchats.
>>
>> Neil suggested that this is an ability obtained through the
>> refinement of a phase *prior* to digitization (apparently
>> everybody agreed that digitization occurs somewhere).
>
>I would expect this to happen at an intermediate phase of
>digitization.  At least one level of abstraction would need
>to have taken place if we are to account for the ability to
>recognize our name pronounced with any accent, pitch,
>rhythm...  Could such versatility be attainable analogically?
>

I didn't understand what you called "intermediate phase". In
my vision, recognition of our name with different timbre or
with uncommon accents is done in a level somewhat far from
the processing of input signals. If you take an oscillogram
of two people pronouncing the same word, you'd see almost
nothing in common in the "fine details", although you may
find certain resemblance in high level structures (position
of fricative consonants, vowels, etc).
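
A quick synthetic illustration of that point (the signals below
are fabricated, not real speech): two "utterances" sharing only
their coarse amplitude envelope are nearly uncorrelated sample by
sample, yet their smoothed envelopes match almost perfectly.

  import numpy as np

  rng = np.random.default_rng(1)
  t = np.linspace(0.0, 1.0, 8000)

  # Shared coarse structure: the same two-"syllable" amplitude envelope.
  env = np.exp(-((t - 0.3) / 0.08) ** 2) + np.exp(-((t - 0.7) / 0.08) ** 2)

  # Two "speakers": same envelope, but different pitch, phase and noise.
  noise_a = 0.05 * rng.standard_normal(t.size)
  noise_b = 0.05 * rng.standard_normal(t.size)
  a = env * np.sin(2 * np.pi * 120 * t + 0.3) + noise_a
  b = env * np.sin(2 * np.pi * 190 * t + 1.7) + noise_b

  def corr(x, y):
      return float(np.corrcoef(x, y)[0, 1])

  def smoothed(x, win=400):                 # ~50 ms rectangular window
      return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

  print("raw sample correlation:      ", round(corr(a, b), 3))   # near 0
  print("smoothed envelope correlation:",
        round(corr(smoothed(a), smoothed(b)), 3))                # near 1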

Regards,
Sergio Navega.

From: houlepn@my-dejanews.com
Subject: Re: On Rickert/Daryl's debate
Date: 30 Jan 1999 00:00:00 GMT
Message-ID: <78vqmt$aiq$1@nnrp1.dejanews.com>
References: <36b1e548@news3.ibm.net> <78t9jf$95k$1@nnrp1.dejanews.com> <36b31b28@news3.ibm.net>
X-Http-Proxy: 1.0 x15.dejanews.com:80 (Squid/1.1.22) for client 207.96.209.191
Organization: Deja News - The Leader in Internet Discussion
X-Article-Creation-Date: Sat Jan 30 20:39:58 1999 GMT
Newsgroups: comp.ai.philosophy
X-Http-User-Agent: Mozilla/4.05 [en] (Win95; U)

"Sergio Navega" <snavega@ibm.net> wrote:

> houlepn@ibm.net wrote in message <78t9jf$95k$1@nnrp1.dejanews.com>...
> >"Sergio Navega" <snavega@ibm.net> wrote:
> >
> >> In a party,
> >> if somebody utters our name, even using voice intonation
> >> similar to the "background conversations", we will be able
> >> to notice with (apparently) no effort at all. It is an
> >> amazing ability we have of detecting significance among
> >> "meaningless" jumbled chitchats.
> >>
> >> Neil suggested that this is an ability obtained through the
> >> refinement of a phase *prior* to digitization (apparently
> >> everybody agreed that digitization occurs somewhere).
> >
> >I would expect this to happen at an intermediate phase of
> >digitization.  At least one level of abstraction would need
> >to have taken place if we are to account for the ability to
> >recognize our name pronounced with any accent, pitch,
> >rhythm...  Could such versatility be attainable analogically?
> >
>
> I didn't understand what you called "intermediate phase". In
> my vision, recognition of our name with different timbre or
> with uncommon accents is done in a level somewhat far from
> the processing of input signals. If you take an oscillogram
> of two people pronouncing the same word, you'd see almost
> nothing in common in the "fine details", although you may
> find certain resemblance in high level structures (position
> of fricative consonants, vowels, etc).

This is exactly what I meant.  "Intermediate" as opposed to "first"
level of processing.

Pierre-Normand Houle

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own   

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: On Rickert/Daryl's debate
Date: 01 Feb 1999 00:00:00 GMT
Message-ID: <36b59c5e@news3.ibm.net>
References: <36b1e548@news3.ibm.net> <78t9jf$95k$1@nnrp1.dejanews.com> <36b31b28@news3.ibm.net> <78vqmt$aiq$1@nnrp1.dejanews.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 1 Feb 1999 12:21:50 GMT, 166.72.21.59
Organization: SilWis
Newsgroups: comp.ai.philosophy

houlepn@my-dejanews.com wrote in message
<78vqmt$aiq$1@nnrp1.dejanews.com>...
>"Sergio Navega" <snavega@ibm.net> wrote:
>
>> houlepn@ibm.net wrote in message <78t9jf$95k$1@nnrp1.dejanews.com>...
>> >"Sergio Navega" <snavega@ibm.net> wrote:
>> >
>> >> In a party,
>> >> if somebody utters our name, even using voice intonation
>> >> similar to the "background conversations", we will be able
>> >> to notice with (apparently) no effort at all. It is an
>> >> amazing ability we have of detecting significance among
>> >> "meaningless" jumbled chitchats.
>> >>
>> >> Neil suggested that this is an ability obtained through the
>> >> refinement of a phase *prior* to digitization (apparently
>> >> everybody agreed that digitization occurs somewhere).
>> >
>> >I would expect this to happen at an intermediate phase of
>> >digitization.  At least one level of abstraction would need
>> >to have taken place if we are to account for the ability to
>> >recognize our name pronounced with any accent, pitch,
>> >rhythm...  Could such versatility be attainable analogically?
>> >
>>
>> I didn't understand what you called "intermediate phase". In
>> my vision, recognition of our name with different timbre or
>> with uncommon accents is done in a level somewhat far from
>> the processing of input signals. If you take an oscillogram
>> of two people pronouncing the same word, you'd see almost
>> nothing in common in the "fine details", although you may
>> find certain resemblance in high level structures (position
>> of fricative consonants, vowels, etc).
>
>This is exacly what I meant.  "Intermediate" as opposed to "first"
>level of processing.
>

Well, I may have misunderstood your sentence. I thought you
were saying "intermediate phase of digitization". That's the
very root of the problem: in my vision, digitization happens
only in the very initial phase. All other "phases" don't
use any analog components (although the coding strategy may
be different, when you go deep into that architecture).

Regards,
Sergio Navega.

From: houlepn@ibm.net
Subject: Re: On Rickert/Daryl's debate
Date: 03 Feb 1999 00:00:00 GMT
Message-ID: <798cjg$btc$1@nnrp1.dejanews.com>
References: <36b1e548@news3.ibm.net> <78t9jf$95k$1@nnrp1.dejanews.com> <36b31b28@news3.ibm.net> <78vqmt$aiq$1@nnrp1.dejanews.com> <36b59c5e@news3.ibm.net>
X-Http-Proxy: 1.0 x1.dejanews.com:80 (Squid/1.1.22) for client 207.96.209.191
Organization: Deja News - The Leader in Internet Discussion
X-Article-Creation-Date: Wed Feb 03 02:34:24 1999 GMT
Newsgroups: comp.ai.philosophy
X-Http-User-Agent: Mozilla/4.05 [en] (Win95; U)

In article <36b59c5e@news3.ibm.net>,
  "Sergio Navega" <snavega@ibm.net> wrote:
> houlepn@my-dejanews.com wrote in message
> <78vqmt$aiq$1@nnrp1.dejanews.com>...
> >"Sergio Navega" <snavega@ibm.net> wrote:
> >
> >> houlepn@ibm.net wrote in message <78t9jf$95k$1@nnrp1.dejanews.com>...
> >> >"Sergio Navega" <snavega@ibm.net> wrote:
> >> >
> >> >> In a party,
> >> >> if somebody utters our name, even using voice intonation
> >> >> similar to the "background conversations", we will be able
> >> >> to notice with (apparently) no effort at all. It is an
> >> >> amazing ability we have of detecting significance among
> >> >> "meaningless" jumbled chitchats.
> >> >>
> >> >> Neil suggested that this is an ability obtained through the
> >> >> refinement of a phase *prior* to digitization (apparently
> >> >> everybody agreed that digitization occurs somewhere).
> >> >
> >> >I would expect this to happen at an intermediate phase of
> >> >digitization.  At least one level of abstraction would need
> >> >to have taken place if we are to account for the ability to
> >> >recognize our name pronounced with any accent, pitch,
> >> >rhythm...  Could such versatility be attainable analogically?
> >> >
> >>
> >> I didn't understand what you called "intermediate phase". In
> >> my vision, recognition of our name with different timbre or
> >> with uncommon accents is done in a level somewhat far from
> >> the processing of input signals. If you take an oscillogram
> >> of two people pronouncing the same word, you'd see almost
> >> nothing in common in the "fine details", although you may
> >> find certain resemblance in high level structures (position
> >> of fricative consonants, vowels, etc).
> >
> >This is exactly what I meant.  "Intermediate" as opposed to "first"
> >level of processing.
> >
>
> Well, I may have misunderstood your sentence. I thought you
> were saying "intermediate phase of digitization". That's the
> very root of the problem: in my vision, digitization happens
> only in the very initial phase. All other "phases" don't
> use any analog components (although the coding strategy may
> be different, when you go deep into that architecture).

OK.  I was thinking of the action of a neuron as digitization at all
levels.  Let's say the output of a neuron is correlated with "I am
happy".  What are the meaning of the inputs to this neurons?  Relative
to the neurons from the previous layer, these digital signals could
correspond to:

(positive synaptic weights) "I am well fed",  "I am rested",
"I just won the lottery"...

(negative synaptic weights) "My back hurts", "My mother in law is going
to visit tomorrow"...

But relative to the single neuron downstream the meanings would just be:

"I am a little bit happy", "I am a tiny bit happy", "I am very happy"...

"I am somewhat unhappy", "I am very unhappy"...

So, the weighted summed input to the "happiness" neuron could be thought
of as a very nearly analogical "happiness" signal being converted into a
one bit "happiness" signal.

(I do not claim this particular example to be at all realistic.)
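
In code, the picture is something like this toy threshold unit
(the labels and numbers are invented purely for illustration):

  # Graded, weighted inputs summed into one near-analog quantity,
  # then collapsed to a single "happy / not happy" bit.
  inputs = {
      "well fed":            (0.8, +0.6),   # (activity, synaptic weight)
      "rested":              (0.5, +0.4),
      "won the lottery":     (0.0, +2.0),
      "back hurts":          (0.9, -0.7),
      "mother-in-law visit": (1.0, -0.5),
  }

  net = sum(activity * weight for activity, weight in inputs.values())
  happy_bit = int(net > 0.0)

  print(f"summed 'happiness' signal: {net:+.2f}")   # nearly analog
  print(f"output bit: {happy_bit}")                 # digital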

Pierre-Normand Houle

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own   

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: On Rickert/Daryl's debate
Date: 03 Feb 1999 00:00:00 GMT
Message-ID: <36b861c3@news3.ibm.net>
References: <36b1e548@news3.ibm.net> <78t9jf$95k$1@nnrp1.dejanews.com> <36b31b28@news3.ibm.net> <78vqmt$aiq$1@nnrp1.dejanews.com> <36b59c5e@news3.ibm.net> <798cjg$btc$1@nnrp1.dejanews.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 3 Feb 1999 14:48:35 GMT, 166.72.21.70
Organization: SilWis
Newsgroups: comp.ai.philosophy

houlepn@ibm.net wrote in message <798cjg$btc$1@nnrp1.dejanews.com>...
>In article <36b59c5e@news3.ibm.net>,
>  "Sergio Navega" <snavega@ibm.net> wrote:
>> houlepn@my-dejanews.com wrote in message
>> <78vqmt$aiq$1@nnrp1.dejanews.com>...
>> >"Sergio Navega" <snavega@ibm.net> wrote:
>> >
>> >> houlepn@ibm.net wrote in message <78t9jf$95k$1@nnrp1.dejanews.com>...
>> >> >"Sergio Navega" <snavega@ibm.net> wrote:
>> >> >
>> >> >> In a party,
>> >> >> if somebody utters our name, even using voice intonation
>> >> >> similar to the "background conversations", we will be able
>> >> >> to notice with (apparently) no effort at all. It is an
>> >> >> amazing ability we have of detecting significance among
>> >> >> "meaningless" jumbled chitchats.
>> >> >>
>> >> >> Neil suggested that this is an ability obtained through the
>> >> >> refinement of a phase *prior* to digitization (apparently
>> >> >> everybody agreed that digitization occurs somewhere).
>> >> >
>> >> >I would expect this to happen at an intermediate phase of
>> >> >digitization.  At least one level of abstraction would need
>> >> >to have taken place if we are to account for the ability to
>> >> >recognize our name pronounced with any accent, pitch,
>> >> >rhythm...  Could such versatility be attainable analogically?
>> >> >
>> >>
>> >> I didn't understand what you called "intermediate phase". In
>> >> my vision, recognition of our name with different timbre or
>> >> with uncommon accents is done in a level somewhat far from
>> >> the processing of input signals. If you take an oscillogram
>> >> of two people pronouncing the same word, you'd see almost
>> >> nothing in common in the "fine details", although you may
>> >> find certain resemblance in high level structures (position
>> >> of fricative consonants, vowels, etc).
>> >
>> >This is exactly what I meant.  "Intermediate" as opposed to "first"
>> >level of processing.
>> >
>>
>> Well, I may have misunderstood your sentence. I thought you
>> were saying "intermediate phase of digitization". That's the
>> very root of the problem: in my vision, digitization happens
>> only in the very initial phase. All other "phases" don't
>> use any analog components (although the coding strategy may
>> be different, when you go deep into that architecture).
>
>OK.  I was thinking of the action of a neuron as digitization at all
>levels.  Let's say the output of a neuron is correlated with "I am
>happy".  What are the meaning of the inputs to this neurons?  Relative
>to the neurons from the previous layer, these digital signals could
>correspond to:
>
>(positive synaptic weights) "I am well fed",  "I am rested",
>"I just won the lottery"...
>
>(negative synaptic weights) "My back hurts", "My mother in law is going
>to visit tomorrow"...
>
>But relative to the single neuron downstream the meanings would just be:
>
>"I am a little bit happy", "I am a tiny bit happy", "I am very happy"...
>
>"I am somewhat unhappy", "I am very unhappy"...
>
>So, the weighted summed input to the "happiness" neuron could be thought
>of as a very nearly analogical "happiness" signal being converted into a
>one bit "happiness" signal.
>
>(I do not claim this particular example to be at all realistic.)
>
>
>Pierre-Normand Houle
>

I think I got the gist of your example and I agree with it, provided
we keep the "traditional" view of neuron modeling. However, using
your metaphor, I think we might have something like this:

Inputs:
  "You're better off firing at once"
  "in my opinion you shouldn't fire now"
  "fire now or you're fired!"
  "I wouldn't do that right now"
  "how about waiting a little?"
  "fire or don't fire, I don't care"

Outputs:
  "I think I'll fire now, I've been doing this for a while"

That seems to me to be the result of a multi-participant wrestling
competition in which only one side survives (the "fire" guy or the
"don't fire" guy); hence the digital aspect.
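
In code, the kind of competition I have in mind might look like
this toy winner-take-all tally (my own illustrative sketch,
nothing more):

  # Each input throws its weight behind one camp; the camps "wrestle"
  # and only the winner is expressed -- an all-or-nothing outcome.
  votes = [
      ("fire", 1.0),   # "you're better off firing at once"
      ("wait", 0.6),   # "in my opinion you shouldn't fire now"
      ("fire", 1.4),   # "fire now or you're fired!"
      ("wait", 0.5),   # "I wouldn't do that right now"
      ("wait", 0.4),   # "how about waiting a little?"
      # "fire or don't fire, I don't care" adds nothing to either camp
  ]

  support = {"fire": 0.0, "wait": 0.0}
  for camp, strength in votes:
      support[camp] += strength

  decision = max(support, key=support.get)
  print(support)               # {'fire': 2.4, 'wait': 1.5}
  print("decision:", decision) # only the winning camp is expressed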

Regards,
Sergio Navega.

