From: "Sergio Navega" <email@example.com>
Subject: Re: CYC
Date: 27 Mar 1999 00:00:00 GMT
Seth Russell wrote in message <36FABBA2.742995C7@clickshop.com>...
>I just found David Whitten's CYC Faq in the first edition
>of Alma ( http://www.diemme.it/~luigi/ ), please refer to
>http://www.diemme.it/~luigi/cyc.html . As this rendition is
>better formatted than my own I will change the pointers in
>the PDKB when I get a chance. Incidentally, Dave, when can
>we expect the new version of the FAQ?
>I have been meaning to put down my own perspective on CYC
>and although my mind is still in flux on the subject I feel
>compelled to summarize it here. If AI research journals and
>Usenet postings are any sample of the industry, then it
>appears that mainstream AI is moving away from Cyc's
>ontological approach with the lack of publicly visible
>progress not helping the project. But does this mean that
>there is no value in such ontologies for robust artificial
>intelligence? My answer to that is a resounding NO; if
>you're interested, read on.
Seth, I don't think that ontological approaches are dying.
I, for one, always keep an eye on large conferences about
this subject. By the way, I want to give my kudos to the
KAW people who made all the proceedings of their conferences
available online. I have all volumes of KAW printed.
There are a lot of things in there that can be useful in the
near future, even if what we're doing is not exactly that.
>There has been a tacit premise that the CYC ontology is to
>be read into the internal data structures of a functioning
>agent and that the agent should calculate its responses
>based solely on logical inferences on that ontology. This
>premise is what AI researchers are challenging and I believe
>rightly so. However, there is an entirely different way to
>use such ontologies. Robust agents could view such
>ontologies as their *environment* and as their culture and
>interact with them as we humans interact with our culture.
I agree with your proposition, to the extent that these
ontologies are not the *only* environment they have and to
the extent that their *initial*, internal ontology has
been developed by the agent itself, interacting with the
rest of its "environment".
In that case, those external ontologies may be considered as
"public libraries" for intelligent agents. The agent goes
there, reads a "book" (or a segment of one) and takes that
information home. Then "he" may be able to talk about
it with another agent, which will have the opportunity to
go to that same library and then discover that the first
agent got it *wrong*. The second agent will then start
a discussion with the first, trying to persuade it. That's
a nice idea. Soon we will have a specific newsgroup for such
discussions.
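The library-visit-and-persuasion loop above can be sketched as a toy
simulation (all class and fact names here are illustrative inventions,
not part of CYC or any real system):

```python
# Toy sketch of the "public library" metaphor for agents.
# A shared ontology is a read-only fact store; agents keep
# their own internal beliefs and reconcile them by discussion.

class Library:
    """A shared, public ontology: a read-only store of facts."""
    def __init__(self, facts):
        self._facts = dict(facts)

    def lookup(self, topic):
        return self._facts.get(topic)


class Agent:
    """An agent with its own internal beliefs, grown by interaction."""
    def __init__(self, name):
        self.name = name
        self.beliefs = {}

    def study(self, library, topic, misreading=None):
        """Read a 'book' and take the information home,
        possibly misremembering it."""
        fact = library.lookup(topic)
        self.beliefs[topic] = misreading if misreading is not None else fact

    def discuss(self, other, topic, library):
        """Compare beliefs; on disagreement, check the library
        and try to persuade the other agent."""
        if self.beliefs.get(topic) != other.beliefs.get(topic):
            truth = library.lookup(topic)
            self.beliefs[topic] = truth
            other.beliefs[topic] = truth  # persuasion succeeds here


library = Library({"birds": "most birds can fly"})
a, b = Agent("A"), Agent("B")
a.study(library, "birds", misreading="all birds can fly")  # A gets it *wrong*
b.study(library, "birds")
b.discuss(a, "birds", library)  # B checks the library and corrects A
```

Of course, in this sketch persuasion is trivial because the library is
an unquestioned authority; real agents would weigh the library's claim
against their own internally developed ontology.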
>This means that each agent is free to use whatever
>systemization it (or its designer) has developed for it to
>cope with the world. As human children learn in school and
>from books how to intelligently interact with their culture,
>so artificial agents can learn from the patterns of logic
>(CYC) and language (WordNet) how to intelligently interact
>with their culture.
That seems fine, provided, as I said, that the agent
got "off the ground" previously, through a careful nursing
period in which it was treated much like a child.