Selected Newsgroup Message

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Neurons and consciousness
Date: 30 Jul 1999 00:00:00 GMT
Message-ID: <37a2186b@news3.us.ibm.net>
References: <7nb0q9$rkv@ux.cs.niu.edu> <7ng8ti$ata@dfw-ixnews4.ix.netcom.com> <7ni3e7$6ab@ux.cs.niu.edu> <379CB19E.5583@online.no> <7nsg5s$6k5$1@scotsman.ed.ac.uk> <37A1E9F7.5201@online.no>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 30 Jul 1999 21:26:03 GMT, 200.229.240.161
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy,sci.physics

Tore Lund wrote in message <37A1E9F7.5201@online.no>...
>Chris Malcolm wrote:
>>
>
>> To be less facetious, any robotic creature somewhere between the size
>> of a mouse and a car which interacts with the physical world on a
>> timescale close to ours (can usually avoid thrown stones) and has some
>> similar needs (has to find an electric socket at least once a day) is
>> likely to have enough basic shared physical world experience to make a
>> start at gestural communication possible.
>
>Yes, we have heard this from a number of people.  So why don't they go
>ahead and make a mobile robot?  It would not need the full range of
>human limbs.  After all, some humans are disabled from birth, and this
>does not stop them from being as intelligent as everyone else.  The main
>hardware could reside in a mainframe that communicated with the robot by
>infrared light or whatever.  In short, put a terminal with eyes and ears
>and rudimentary arms in a wheelchair, equip it with "needs" and "bring
>it up" as you would a disabled human child.  Piece of cake...

We have all the hardware to do this with incredible efficacy.
So what is the problem?

The problem is that nobody knows how this architecture will cross the
barrier between monkey-like behavior and human-like language and
cognition. We understand reasonably well how low-level sensory
processing appears to occur. We understand very well how high-level
aspects of cognition appear to occur. But what's in between???

The big problem of AI, in my opinion, is not the perceptual system.
We're very good at processing video and microphone signals. Nor is
the problem handling symbolic inference: computers are already
masters at this. The problem that should receive the full attention
of leading scientists is the way in which perceptual mechanisms
naturally link to (and support) these linguistic and cognitive
abilities. This is what we should focus on.

We have connectionism and symbolicism, each one with its share of
advantages and disadvantages. The answer appears to be in the
"middle level".

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Neurons and consciousness
Date: 31 Jul 1999 00:00:00 GMT
Message-ID: <37a30514@news3.us.ibm.net>
References: <37A1E9F7.5201@online.no> <37a2186b@news3.us.ibm.net> <7ntg4l$dg@ux.cs.niu.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 31 Jul 1999 14:15:48 GMT, 129.37.183.63
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy,sci.physics

Neil W Rickert wrote in message <7ntg4l$dg@ux.cs.niu.edu>...
>"Sergio Navega" <snavega@ibm.net> writes:
>>Tore Lund wrote in message <37A1E9F7.5201@online.no>...
>
>>>Yes, we have heard this from a number of people.  So why don't they go
>>>ahead and make a mobile robot?  It would not need the full range of
>>>human limbs.  After all, some humans are disabled from birth, and this
>>>does not stop them from being as intelligent as everyone else.  The main
>>>hardware could reside in a mainframe that communicated with the robot by
>>>infrared light or whatever.  In short, put a terminal with eyes and ears
>>>and rudimentary arms in a wheelchair, equip it with "needs" and "bring
>>>it up" as you would a disabled human child.  Piece of cake...
>
>>We have all the hardware to do this with incredible efficacy.
>
>I'm not so sure of that.
>

If we don't have the necessary hardware, we have the knowledge of how
to do it very quickly. The problem appears to be "what to do", in other
words, design decisions, not technical capability. I've heard of
artificial muscles made of electrically excited polymers. We have
plenty of materials to serve as metallic "bones". We have fast and
capable controller systems with powerful embedded electronics. The
problem appears to be that we don't yet have the insights needed to
put all that stuff to work meaningfully. We appear to have all the
materials, but we don't know what to do with them.

>>So what is the problem?
>
>>The problem is that nobody knows how this architecture will cross the
>>barrier between monkey-like behavior and human-similar language and
>>cognition. We understand reasonably well how low level sensory
>>processing appears to occur. We understand very well how high level
>>aspects of cognition appear to occur. But what's in between???
>
>I suggest that the big problem is far earlier.  The step from monkey
>to human is small biologically and evolutionary, and I believe it to
>also be small in terms of core cognitive functionality.  There is a
>far bigger cognitive gap between fish and primitive mammal than
>between the most primitive mammal and human.
>

Depending on what we're comparing, I may agree with you. Fish are
very primitive compared with monkeys: they don't have to coordinate
several limbs with precision, and they don't have to worry much
about standing and balancing their bodies. Monkeys are much more
sophisticated in this regard.

But there's a way in which the two look comparable, namely when we
put them alongside humans. In terms of creative and problem-solving
abilities, in the depth and sophistication of language, in abstract
thinking, in causally modeling the world, we're far, far superior.
I'm puzzled by this difference, given that the brains of monkeys and
humans are so similar. It seems to be a small difference that
means a lot. It also seems to involve strong external, cultural
feedback.

>>The big problem of AI, in my opinion, is not the perceptual system.
>>We're very good at the processing of video and microphone signals.
>
>Neither video systems nor microphone signals are perceptual systems.
>They are signal transducers, but the actual perception is left to the
>human users of these gadgets.
>

You're right, I had again misused that word. I meant to say sensory
systems. AI is very underdeveloped in terms of *perceptual*
systems.

>>Neither the problem is to handle symbolic inference. Computers are
>>already masters at this.
>
>On that one we agree.
>
>>                          The problem, that should be given full
>>attention from big scientists, is the way in which perceptual
>>mechanisms link (and support) naturally these linguistic and
>>cognitive abilities. This is what we should focus.
>
>I would say that we need to look earlier, into perceptual systems
>themselves.
>

Ok, we have two problems awaiting solution. One is, as you say,
the perceptual systems. The other is the way in which these
perceptual systems *support* the remainder of our cognition. The
solution to the former problem, although still in its infancy, is
not what worries me. Cognitive science is making some progress
in this area (although there aren't many practical implementations
of capable systems). The latter (what comes *after* the perceptual
systems) is where I believe we can commit serious errors. It is
the solution to this problem that, in my opinion, may illuminate
one of the darkest corners that we have in our minds: where
language comes from, how it supports (and/or is supported by)
our thoughts, and how abstractions are processed using perceptual
systems as a basis.

>>We have connectionism and symbolicism, each one with its share of
>>advantages and disadvantages. The answer appears to be in the
>>"middle level".
>
>Both connectionism and symbolicism are attempts to do the same
>thing -- to process the output of a perceptual system.  What is
>missing is the perceptual system itself.

Hmm, I have some doubts about this. I would say that connectionism
tries to do its work *before* the perceptual system, while symbolicism
attempts to simplify (or even disregard) perceptual systems, fusing
them together with sensory systems and pretending they are only
one "module". Both approaches appear to be heading toward dead ends.

>The only perceptual systems
>we have are biological ones.  And as long as we take perceptual
>systems for granted and assume that they are no more than signal
>transducers, we shall continue to make little progress.
>

I agree entirely. That's the reason why I can't see AI work being
done without a solid grounding in neuroscience and cognitive science.
I was criticized for such a statement in comp.ai some time ago,
but my critics were from the old school...

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Neurons and consciousness
Date: 02 Aug 1999 00:00:00 GMT
Message-ID: <37a59848@news3.us.ibm.net>
References: <7ntg4l$dg@ux.cs.niu.edu> <37a30514@news3.us.ibm.net> <7nv842$2i1@ux.cs.niu.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 2 Aug 1999 13:08:24 GMT, 200.229.243.58
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy,sci.physics

Neil W Rickert wrote in message <7nv842$2i1@ux.cs.niu.edu>...
>"Sergio Navega" <snavega@ibm.net> writes:
>>Neil W Rickert wrote in message <7ntg4l$dg@ux.cs.niu.edu>...
>>>"Sergio Navega" <snavega@ibm.net> writes:
>>>>Tore Lund wrote in message <37A1E9F7.5201@online.no>...
>
>>>>>Yes, we have heard this from a number of people.  So why don't they go
>>>>>ahead and make a mobile robot?  It would not need the full range of
>>>>>human limbs.  After all, some humans are disabled from birth, and this
>>>>>does not stop them from being as intelligent as everyone else.   The main
>>>>>hardware could reside in a mainframe that communicated with the robot by
>>>>>infrared light or whatever.  In short, put a terminal with eyes and ears
>>>>>and rudimentary arms in a wheelchair, equip it with "needs" and "bring
>>>>>it up" as you would a disabled human child.  Piece of cake...
>
>>>>We have all the hardware to do this with incredible efficacy.
>
>>>I'm not so sure of that.
>
>>If we don't have the necessary hardware, we have the knowledge of how
>>to do it very quickly.
>
>I'm not so sure of that either.
>
>Every so often I have to take my automobile in for a tuneup.  Even
>more often, I have to take it in for an oil change.  If we were using
>the hardware you would really need, the auto would be keeping itself
>in tune, and keeping its oil in a sufficiently clean state that such
>regular maintenance would not be required.
>

I know what you mean, but I see here another question: we didn't
design cars that way, probably because of Mobil Oil and other
economic constraints. It would be wiser to devise a self-maintaining
car, but it would not fit very well into our economic model.

>[I am implicitly arguing here that intelligence is not just abstract
>computation, but depends heavily on the relation with the external
>world.]
>
>Voice recognition systems still have a long way to go.  Optical
>character recognition still has a long way to go.  We had to invent
>bar codes for our groceries because we cannot make reliable and
>efficient scanners to read the labels intended for humans.
>

That's right, but I have the impression that it is software that
is holding us back.

>
>>Depending on what we're comparing, I may agree with you. Fishes are
>>very primitive when compared with monkeys, they don't have to
>>coordinate several limbs with precision, they don't have to worry
>>much about standing and balancing its body. Monkeys are much more
>>sophisticated in this regard.
>
>>But there's a way to see them compare favorably, when we put them
>>along with humans. In terms of creative and problem solving abilities,
>>in regard to the depth and sophistication of language, in abstract
>>thinking, in causally modeling the world, we're far, far superior.
>
>Sure.  But we get to set the standards as to what is superior.
>Perhaps the monkeys consider themselves superior, and wonder about
>many of the silly things that humans do.
>
>I'm not discounting the importance of language.  But language is
>built upon the kinds of cognitive competencies that we share with
>monkeys.  The mistake of AI is to think we could have artificial
>system with language competence, yet without first giving them these
>same cognitive competencies.
>

I fully agree.

>>>I would say that we need to look earlier, into perceptual systems
>>>themselves.
>
>>Ok, we have two problems awaiting solution. One is, as you say,
>>the perceptual systems. The other, is the way in which these
>>perceptual systems *support* the remainder of our cognition.
>
>But once we understand perceptual systems, this may become obvious.
>Indeed, I expect that it will.
>

I am definitely counting on this :-)

>>                                                              The
>>solution to the former problem, although still in its infancy, is
>>not what worries me. Cognitive science is making some progress
>>in this area (although there aren't lots of practical implementation
>>of capable systems).
>
>Sorry, but I don't see much evidence of this progress.
>
>It seems to me that we have an occasional psychologist with a
>forceful personality who makes some progress -- J.J. Gibson, Piaget,
>I. Rock and Kosslyn might be in this group.  But the majority of
>psychologists and almost all philosophers and AI folk either ignore
>their work or argue against it on a priori philosophical grounds.

Unfortunately, this is right. I'd like to see their ideas taken
more seriously. However, most of the behavioral studies done today
by cognitive scientists present a huge amount of evidence for us
to work with. What is missing, IMHO, are convincing models which
can fit that evidence.

>
>>                      The latter (what comes *after* the perceptual
>>systems) is where I believe we can commit serious errors. It is
>>the solution to this problem that, in my opinion, may illuminate
>>one of the darkest corners that we have in our minds: where
>>language come from, and how it supports (and/or is supported)
>>by our thoughts, how abstractions are processed using perceptual
>>systems as a basis.
>
>Good luck with the effort.  I don't expect it to succeed.
>

Yes, I know I'll have to be lucky. But I guess it is a reasonable
attempt.

>>>>We have connectionism and symbolicism, each one with its share of
>>>>advantages and disadvantages. The answer appears to be in the
>>>>"middle level".
>
>>>Both connectionism and symbolicism are attempts to do the same
>>>thing -- to process the output of a perceptual system.  What is
>>>missing is the perceptual system itself.
>
>>Hum, I have some doubts about this. I would say that connectionism
>>tries to do stuff *before* the perceptual system and symbolicism
>>attempts to simplify (or even disregard) perceptual systems and
>>fuse them together with sensory systems and pretend they are only
>>one "module". Both approaches appear to be going to dead ends.
>
>The majority of connectionist research is done inside a computer
>using data that was prepared by methods devised by perceiving
>humans.  It seems to me that this places it well after the perceptual
>systems.  A real perceptual system is faced with a cacaphony of
>apparently meaningless random signals, from which it has to derive
>useful information.  We already separate out what we think is useful
>information, and feed that to our artificial neural networks.  In my
>view, this means that we are doing the perceptual work for the ANNs
>and expecting them to carry on afterwards.
>

I guess this is where we differ most, and it is probably where we
both differ from Bill Modlin's ideas. I see, then, three different
ways of facing this question. In Modlin's vision, sensory systems are
directly connected to the network (after a simple digitization step).
His network is in charge of doing correlational analysis of the
low-level signals and propagating this correlation to higher levels,
in which higher-order statistics naturally emerge. What I question
here is how language and abstract reasoning appear, and the
complexity of the initial levels.
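
Just to make concrete what I read into "correlational analysis
propagating to higher levels", here is a toy sketch in Python. It is
entirely my own illustration (the layer rule, the threshold and all
the numbers are invented); it is not Modlin's design, only the flavor
of it as I understand him:

import numpy as np

def correlation_layer(signals, threshold=0.8):
    # One toy 'layer': find pairs of input channels whose activity is
    # strongly correlated and emit their combined activity as new,
    # higher-level features.
    n = signals.shape[0]
    corr = np.corrcoef(signals)
    features = []
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) >= threshold:
                features.append((signals[i] + signals[j]) / 2.0)
    return np.array(features) if features else signals

rng = np.random.default_rng(0)
raw = rng.normal(size=(8, 500))                # 8 channels, 500 samples
raw[1] = raw[0] + 0.1 * rng.normal(size=500)   # channel 1 echoes channel 0
raw[3] = raw[2] + 0.1 * rng.normal(size=500)   # channel 3 echoes channel 2

level1 = correlation_layer(raw)      # the redundancy becomes two features
level2 = correlation_layer(level1)   # higher-order statistics, if any
print(level1.shape, level2.shape)

The point of the toy is only that each level consumes nothing but the
statistics of the level below it; my doubt remains about how language
and abstract reasoning would ever pop out of such a stack.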

In your vision (as far as I understand it), there is a perceptual
layer just behind the sensory systems which adjusts the incoming
signals (perhaps by analog methods) using feedback routes from higher
levels in order to tune some parameters; only then is the signal
finally digitized and fed to the network (or whatever follows) for
the remainder of the task. What I question here is the real need for
adjustment at these initial levels.
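
My (possibly mistaken) picture of that arrangement, in toy form, is
something like an automatic gain control: an analog parameter tuned
by feedback from downstream, before anything is digitized. Again,
this is only my own sketch with invented numbers, not a description
of your proposal:

import numpy as np

def adaptive_front_end(samples, levels=16, target=0.5, rate=0.05):
    # Toy feedback calibration ahead of digitization: a gain parameter
    # is nudged by feedback from the already quantized output so that
    # the incoming signal ends up well matched to the quantizer range.
    gain = 1.0
    codes = []
    for x in samples:
        y = np.clip(gain * x, -1.0, 1.0)
        codes.append(int(round((y + 1.0) / 2.0 * (levels - 1))))
        gain += rate * (target - abs(y))   # the feedback route
    return codes, gain

rng = np.random.default_rng(1)
weak_signal = 0.05 * rng.normal(size=2000)   # badly scaled sensory input
codes, final_gain = adaptive_front_end(weak_signal)
print("final gain:", round(final_gain, 2))   # grows until the range is used

If such a loop is really needed, it has to sit before the innate
digitization I describe next, and that is exactly the point where I
still have doubts.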

In my vision, the first level is an "innate layer" responsible for
the digitization *and* the first statistical processing of the signals.
This layer is fixed in its design and working functionality, contains
hardwired "circuitry" and its main task is to *simplify* the signals,
cutting out most of the "unimportant" information and establishing
the "information representation" that will be used by deeper levels.

After this level, I find the place for a very, very thick perceptual
system (probably requiring the majority of the neurons in our brain),
acting on the already digitized signals and entirely driven by
experience (which means starting empty and successively refining as
new information is processed). Only after this level does conscious
thought finally "appear". At this last level, what is important is
not the specific coding strategy of individual neurons, but rather
their collective, ensemble behavior. The mechanics of this level
works through different principles, and its computational
implementation has been attempted in the past in unreasonable (read:
propositional or logical) form. I explain the failures of AI so far
by the fact that this level is what we "notice" by introspection. I
think that most systems developed using logic were the result of the
misleading vision that our cognition is formed only by this level.
But the greatest part of everything we have in our brain is, in my
opinion, perceptual.
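
To make the two lower levels a bit more concrete, here is a
deliberately crude sketch (every name, radius and threshold in it is
my own invention for the sake of illustration, not a claim about real
neural machinery): a fixed front end that digitizes on a hardwired
scale, followed by a perceptual stage that starts empty and grows
prototypes as experience arrives.

import numpy as np

def innate_layer(raw, levels=8, lo=-2.0, hi=8.0):
    # Fixed, hardwired stage: digitize on a fixed scale, discard detail.
    scaled = (np.clip(raw, lo, hi) - lo) / (hi - lo)
    return np.floor(scaled * (levels - 1)).astype(int)

class PerceptualLayer:
    # Experience-driven stage: starts empty and accumulates prototypes
    # (crude stand-ins for learned perceptual categories) as input comes.
    def __init__(self, radius=3.0):
        self.prototypes = []
        self.radius = radius

    def observe(self, pattern):
        for i, p in enumerate(self.prototypes):
            if np.linalg.norm(pattern - p) < self.radius:
                self.prototypes[i] = 0.9 * p + 0.1 * pattern  # refine
                return i
        self.prototypes.append(pattern.astype(float))         # new category
        return len(self.prototypes) - 1

rng = np.random.default_rng(2)
layer = PerceptualLayer()
for _ in range(200):
    raw = rng.normal(loc=rng.choice([0.0, 5.0]), scale=0.3, size=4)
    layer.observe(innate_layer(raw))
print("categories formed:", len(layer.prototypes))

The third, conscious level would see only the categories the middle
stage hands it, never the raw signals, which is why I think systems
built directly on logic start from the wrong place.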

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Neurons and consciousness
Date: 02 Aug 1999 00:00:00 GMT
Message-ID: <37a5fc07@news3.us.ibm.net>
References: <7nv842$2i1@ux.cs.niu.edu> <37a59848@news3.us.ibm.net> <7o4ind$9m0@ux.cs.niu.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 2 Aug 1999 20:13:59 GMT, 166.72.21.143
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy,sci.physics

Neil W Rickert wrote in message <7o4ind$9m0@ux.cs.niu.edu>...
>"Sergio Navega" <snavega@ibm.net> writes:
>>Neil W Rickert wrote in message <7nv842$2i1@ux.cs.niu.edu>...
>
>>>>>>We have all the hardware to do this with incredible efficacy.
>
>>>>>I'm not so sure of that.
>
>>>>If we don't have the necessary hardware, we have the knowledge of how
>>>>to do it very quickly.
>
>>>I'm not so sure of that either.
>
>>>Every so often I have to take my automobile in for a tuneup.  Even
>>>more often, I have to take it in for an oil change.  If we were using
>>>the hardware you would really need, the auto would be keeping itself
>>>in tune, and keeping its oil in a sufficiently clean state that such
>>>regular maintenance would not be required.
>
>>I know what you mean, but I see here another question: we didn't
>>designed cars that way, probably because of Mobil Oil and other
>>economic constraints. It would be wiser to devise a self-maintainable
>>car, but it would not fit very well in our economic model.
>
>I don't actually have any problem with the way we design
>automobiles.  But if we want a truly intelligent robot, we will need
>to do it differently.  A great deal of human learning arises from
>having to cope with contingencies.  If we design our robots such that
>whenever there is a problem, the engineers fix things, then our
>engineers will learn from the contingencies but the robots will not
>learn.

In the sense of self-repair, I strongly agree with your ideas.
Today's hardware is a joke compared to biological systems. But there
are limits even for us: a serious injury will put us in a similar
state of dependence on other humans to help with our healing. It is
true that the technology we have today for robots is much worse in
this regard, but that is not the central point we've been discussing.

>
>>>[I am implicitly arguing here that intelligence is not just abstract
>>>computation, but depends heavily on the relation with the external
>>>world.]
>
>>>Voice recognition systems still have a long way to go.  Optical
>>>character recognition still has a long way to go.  We had to invent
>>>bar codes for our groceries because we cannot make reliable and
>>>efficient scanners to read the labels intended for humans.
>
>>That's right, but I have that impression that it is software that
>>is refraining us.
>
>If you have fixed hardware, then the software is the variable, so you
>will conclude that it is a software problem.  But maybe the real
>difficulty with the software is that the hardware is not providing it
>the information required to do the job.
>

This makes sense, but I wonder whether we (humans) are subject to
strong 'hardware' limitations imposed by natural selection and
whether it is our 'software' that allows us to go beyond them. A pair
of eyeglasses is a typical hardware device designed by that creative
software to improve (or regularize) perception.

>
>>In my vision, the first level is an "innate layer" responsible for
>>the digitization *and* the first statistical processing of the signals.
>>This layer is fixed in its design and working functionality, contains
>>hardwired "circuitry" and its main task is to *simplify* the signals,
>>cutting out most of the "unimportant" information and establishing
>>the "information representation" that will be used by deeper levels.
>
>I have a portable radio.  It has digitized controls.  It looks for AM
>radio stations at 10KHz intervals.  This is fine in the USA.
>Apparently most of the world spaces its AM radio stations at 9 MHz
>intervals.  Fortunately my radio has an internal switch that can be
>used to change the scanning interval.
>
>My point is that if you digitize the world, you have to digitize in a
>manner that it well fitted to the world.  If you look at 10MHz
>intervals, but the stations are at 9MHz intervals, you will get poor
>results.  I see feedback systems which do things comparable to tuning
>a radio, tuning a piano, or tuning an automobile engine.  That is,
>these feedback systems are making fine adjustments so as to keep
>everything coordinated and efficient.  This is similar to finding
>local maxima and minima of functions, something based in continuous
>mathematics.

I understand what you say, and it is reasonable; my only doubt is
that perhaps there is a "hidden" calibration method here. For
instance, suppose that we have 1000 radios, devised with different
(but fixed) spacings between stations. Using your example, one radio
is able to scan in intervals of 6 kHz, another in intervals of 7 kHz,
then 9 kHz, 10, 11, 12, 13. These radios are spread all over the
world in "competition" for consumers. There's a price penalty for
somebody who chooses a 6 kHz radio (which appears to be technically
superior), while 13 kHz radios are cheap. By some kind of "natural
selection" (which involves price/performance ratios) things will,
after some time, cluster around the best local position (in the US,
10 kHz; in some African country, maybe 12; and so on). This could
provide a way for this calibration to occur, thanks to Darwinian
selection and to the fact that the world we're in is relatively
constant in terms of sensory demands. A fish of the very deep sea may
not evolve eyes, because they are unnecessary given the scarce
luminosity of that environment. An eagle developed better eyes than
ours, because they are important to its survival.
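
The argument can be run as a small numeric toy (all the figures below
are invented; this is not a serious model of markets, let alone of
evolution, just the selection mechanism I have in mind):

import random
from collections import Counter

LOCAL_SPACING = 10                      # kHz between stations here
INTERVALS = [6, 7, 9, 10, 11, 12, 13]   # the fixed designs on offer

def appeal(interval):
    # Price/performance score: a mistuned radio performs badly, and a
    # finer-stepped radio costs more; a higher score means more buyers.
    mistuning = abs(interval - LOCAL_SPACING)
    price = (14 - interval) * 0.3
    return 1.0 / (1.0 + mistuning + price)

random.seed(3)
population = [random.choice(INTERVALS) for _ in range(1000)]
for generation in range(30):
    weights = [appeal(r) for r in population]
    # each "generation", buyers copy the best price/performance designs
    population = random.choices(population, weights=weights, k=1000)

print(Counter(population).most_common(3))   # clusters around 10 kHz

Nothing in any individual radio is adjustable; the calibration lives
entirely in the selection process, and in the fact that the local
spacing stays put long enough for that selection to act.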

>>After this level, I find the place for a very, very thick perceptual
>>system (probably requesting the majority of the neurons of our brain),
>>acting on the already digitized signals and entirely driven by
>>experience (which means, starting empty and successively refining as
>>new information is processed). Only after this level conscious thought
>>finally "appears". At this last level, what is important is not the
>>specific coding strategy of individual neurons, but rather their
>>collective, ensemble behavior. The mechanics of this level works
>>throughout different principles, and its computational implementation
>>have been done in the past in unreasonable (read: propositional or logic)
>>form. I explain the failures of AI so far because this level is what
>>we "notice" by introspection. I think that most systems developed
>>using logic were the result of the misleading vision that our cognition
>>is formed only by this level. But the greatest part of everything we
>>have in our brain is, in my opinion, perceptual.
>
>The Modlin/ McCullough/ Balter view of things is that, because of the
>finiteness of resolution, you don't need any of the fine adjustments
>that I want to see.  From that perspective, it should be possible
>to construct pianos, violins, cellos, trombones, etc, to precise
>engineering specifications on the tension and structure of
>the strings and other components.  Then the players could just sit down
>and play, and they would never have to tune their
>instruments, because those precise engineering specifications
>would have guaranteed that everything would be in tune.
>
>I am arguing that this doesn't work.  That no matter how precise are
>the engineering specifications, the instruments won't all be in tune
>until the musicians themselves make the fine tuning adjustments.
>Moreover, it is precisely because of the "finiteness of resolution"
>problem that the engineers specifications are not sufficient to have
>the instruments in tune.  But once that fine tuning has been done by
>the musicians, in a preliminary feedback session, the orchestra as a
>whole can act with the necessary "collective, ensemble behavior"
>whose importance you correctly point out.
>

Your analogy was very well conceived, and I grant that this idea
is not entirely alien to me. Although I also see some kind of
calibration happening in the perceptual system, in my thesis this
perceptual system sits somewhat deep inside the architecture.
Perhaps this is the biggest difference between our visions of
this effect: you propose that it occurs in analog terms,
close to the sensory inputs, while I hypothesize that it happens
after all aspects of transduction, perhaps in some kind of
digital coding (although I have some doubts that this inner
level works exclusively in digital terms).

One way or another, this adaptation of the perceptual system is
probably the most important thing that happens in animal cognition
in general, and I miss a wider discussion of such ideas by other
scientists.

Regards,
Sergio Navega.

From: Neil W Rickert <rickert+nn@cs.niu.edu>
Subject: Re: Neurons and consciousness
Date: 02 Aug 1999 00:00:00 GMT
Message-ID: <7o51j5$avj@ux.cs.niu.edu>
References: <7o4ind$9m0@ux.cs.niu.edu> <37a5fc07@news3.us.ibm.net>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy,sci.physics

"Sergio Navega" <snavega@ibm.net> writes:
>Neil W Rickert wrote in message <7o4ind$9m0@ux.cs.niu.edu>...

>>If you have fixed hardware, then the software is the variable, so you
>>will conclude that it is a software problem.  But maybe the real
>>difficulty with the software is that the hardware is not providing it
>>the information required to do the job.

>This makes sense, but I wonder that we (humans) are subject to strong
>'hardware' limitations imposed by natural selection and that it is
>our 'software' that allows us to go beyond that. A pair of eyeglasses
>is a tipical hardware device designed by that creative software to
>improve (or regularize) perception.

We certainly have hardware limitations.  But those limitations are
not nearly as restrictive as the limitations of our mechanical
artifacts.

>>I have a portable radio.  It has digitized controls.  It looks for AM
>>radio stations at 10KHz intervals.  This is fine in the USA.
>>Apparently most of the world spaces its AM radio stations at 9 MHz
>>intervals.  Fortunately my radio has an internal switch that can be
>>used to change the scanning interval.

>>My point is that if you digitize the world, you have to digitize in a
>>manner that it well fitted to the world.  If you look at 10MHz
>>intervals, but the stations are at 9MHz intervals, you will get poor
>>results.  I see feedback systems which do things comparable to tuning
>>a radio, tuning a piano, or tuning an automobile engine.  That is,
>>these feedback systems are making fine adjustments so as to keep
>>everything coordinated and efficient.  This is similar to finding
>>local maxima and minima of functions, something based in continuous
>>mathematics.

>I understand what you say and this is reasonable, my only doubt is
>that perhaps there is a "hidden" calibration method here. For instance,
>suppose that we have 1000 radios, devised with different (but fixed)
>precision in the interval among stations. Using your example, one
>radio is able to scan in intervals of 6 Khz, other in 7 Khz, then
>9 Khz, 10, 11, 12, 13. These radios are spread all over the world in
>"competition" for consumers. There's a price penalty for somebody
>that chooses a 6 Khz radio (which appears to be technically superior)
>while radios with 13 Khz are cheap. By some kind of "natural selection"
>(which involves price/performance ratios) things will, after some time,
>cluster around the best local position (in US, 10 Khz, in some African
>coutry, maybe 12, and so on). This could provide a way for this
>calibration to occur because of darwinian selection and because the
>world in which we're in is relatively constant in terms of sensory
>demands. A fish of very deep waters may not evolve eyes, because
>they are unnecessary, given the scarce luminosity of that environment.
>An eagle developed better eyes than ours, because they are important
>to their survival.

I agree with much of this.  However, evolution is a slow process.  It
can make the adjustment/calibration to handle matters that are
relatively stable over long periods of time.  But perhaps many of our
perceptual functions require recalibration during the day, or in the
change from summer to winter.  Such changes are far too rapid for
evolution.  The best evolution could do would be to provide us with a
flexible system that can adjust its own calibration according to the
circumstances.  And, in my view, our intelligence arises from just
such a flexible system.

