
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Seth's conjecture
Date: 29 Apr 1999 00:00:00 GMT
Message-ID: <37285e79@news3.us.ibm.net>
References: <37261136.0@news.victoria.tc.ca> <3727354c@news3.us.ibm.net> <3727D51B.6F5DEE50@clickshop.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 29 Apr 1999 13:28:25 GMT, 166.72.29.178
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Seth Russell wrote in message <3727D51B.6F5DEE50@clickshop.com>...
>Sergio Navega wrote:
>
>> We are lucky for being alive in these times. We have a delightful
>> and tremendously complex problem to solve: that of discovering
>> how to embed some intelligence in our stupid machines. But this
>> problem is very, very complex. It cannot be solved by a single
>> man's wisdom. No, I'm not saying that it can't be solved by a
>> single man, I'm saying that it can't be solved only by the
>> *ideas* of a single man.
>
>And I am saying that it cannot be solved by a single man.  There is
>a vast scale to this thing and a single individual simply does not have
>the bandwidth to do it.  That is pretty much Bozo's conjecture ....
>woops excuse me Seth's conjecture.
>

I know what you mean here and I'm ready to agree if we change a bit
what I was posing as the "problem". I divide the AI problem into two
parts: a) discovering or inventing a mechanism (or mechanisms) that can
be considered intelligent, and b) the process of acquiring information,
through experience or manually, by that architecture.

Item a) above is what I think can be discovered by a single man,
using not only his ideas but also a lot of previous ideas from
other researchers.

Item b) is what demands a lot of people. The point is that item
b) is "easy", although time consuming. Item a), on the other hand,
is where things are cloudy. It is the real problem that AI should
be concerned with at this stage, IMHO.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Seth's conjecture
Date: 29 Apr 1999 00:00:00 GMT
Message-ID: <3728c84c@news3.us.ibm.net>
References: <37261136.0@news.victoria.tc.ca> <3727354c@news3.us.ibm.net> <3727D51B.6F5DEE50@clickshop.com> <37285e79@news3.us.ibm.net> <3728830B.318AD3A0@clickshop.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 29 Apr 1999 20:59:56 GMT, 129.37.182.62
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Seth Russell wrote in message <3728830B.318AD3A0@clickshop.com>...
>
>The conjecture is that the "mechanisms that can be considered intelligent"
>are of the "second order of complexity".  Sergio, can you help me here, I'm
>trying to remember something I read about a year ago about complexity
>theory.  My understanding is that there is one order of complexity that can
>be generated by simple rules; Conway's Life game is an example.  There is
>another order of complexity that cannot; a natural language is an example,
>or the human genome another.  If your "mechanisms" are of this second order
>of complexity, then the conjecture kicks in at step A.
>

I don't think that the mechanism I mentioned would suffer from this
problem. I guess you're talking about second-order cybernetics, where two
entities communicate with one another in the presence of an observer who
is also influencing the system. In this case, the environment in which
this takes place must also be taken into account, because it profoundly
affects the growth of each individual entity. Thus, the knowledge
acquired by one entity could not be explained only by the internal
characteristics of that entity, but by the whole "loop"
of actions/reactions in which it is immersed.

I don't dismiss this vision, but I'm concerned with the capacities
that the individual *must have* in order to perform adequately in
such a scheme.
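As an aside, the first order of complexity Seth cites, Conway's Life, can
be sketched in a few lines of Python; the grid representation and seed
pattern below are my own illustrative choices, not something from this
thread:

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "glider": five cells whose pattern travels diagonally forever,
# structure emerging from trivial local rules.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen = glider
for _ in range(4):
    gen = step(gen)
# After 4 generations the glider reappears shifted by (1, 1).
```

Three rules on a grid, yet the resulting patterns are rich enough that
Life is Turing-complete; the question in this thread is whether that kind
of rule-generated complexity is the same kind found in language or minds.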

>But let us assume for a moment that the conjecture at step A is false, and
>that there is a single set of first order complex mechanisms.
>
>> Item b) is what demands a lot of people. The question is that item
>> b) is "easy", although time consuming. Item a), on the other hand,
>> is where the things are cloudy. It is the real problem that AI should
>> concern in this phase, IMHO.
>
>So then we would get to step B, and would be able to implement a simple
>system that can grow to be intelligent.  Yet no one will be able to tell
>that it has that capability until step B is well underway.  So the biggest
>problem is to sustain the system until sufficient bandwidth can notice it.
>Now let us talk of stark political and social reality, and I ask what is
>the likelihood of that actually happening?
>

I think this depends on the breadth of what one understands by intelligence.
If we want human-like intelligence, then there's no question: only with
such an environment would we be able to have it. But if one is evaluating
minimally intelligent things, then we could live without the "full thing"
for a while. And why would anyone be satisfied with just that minimum of
intelligence? To understand better what it means to be intelligent.
To understand what the fundamental components of an intelligent system are.

>Take for example Mentifex.  Now I've spent some one-on-one time with Arthur
>Murray in Seattle, and I got close enough to his idea to understand that it
>might work.  I really don't know whether this is the kind of idea that you
>allude to in A or not, but Arthur certainly thinks it is, and I cannot
>prove it isn't.  But my real point is that every such idea that satisfies
>your step A will be in exactly that same predicament.  It doesn't matter
>whether it is discovered by a college professor with a reputation or one
>of us cooks on the outside looking in, the predicament will be the same.
>The person with the idea will need to convince enough bandwidth to
>participate in the "process of acquisition" on __FAITH__ in the idea
>alone.  The conjecture is that a sufficient number of humans cannot be
>convinced of someone else's idea before the project will die from lack
>of resources.
>

You may be right in your hypothesis, but the point is that I don't
think we have exhausted the possibilities of building that mechanism.
If you look at the history of AI, you can see fewer than a dozen proposed
architectures. All the others are simple variations on the same theme.
Few introduced really different conceptions. I think there's still room
for innovation here. Once a suitable architecture emerges, built on new
and promising principles, it may be possible to put this system in the
"world" and see it take part in the "loop" of social and cultural
interactions, boosting the system to the level of performance you mention.
What we can say today is that none of the proposed architectures would
survive if put in that environment.

Regards,
Sergio Navega.

