Selected Newsgroup Message

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: NN formats
Date: 14 May 1999 00:00:00 GMT
Message-ID: <373c8d4b@news3.us.ibm.net>
References: <3739A199.9E2B8209@erols.com> <3739B919.B4A6AF36@clickshop.com> <3739C62F.527BAF91@erols.com> <3739E816.4BF79A4B@clickshop.com> <373A143C.6A467BFF@erols.com> <373C1480.7482A5B5@tig.com.au> <373C58B7.E700B8F6@erols.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 14 May 1999 20:53:31 GMT, 129.37.183.159
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.nat-lang,comp.ai.neural-nets

S. Lorber wrote in message <373C58B7.E700B8F6@erols.com>...
>I think I'm trying to map my imaginary network into something that is not a NN.
>
>My idea of a NN was pathways and connections that are weighted and in context (close
>proximity) and traversed to interpret meaning from input data.  Maybe this would
>somehow apply to NLP but not NN.  I am only vaguely familiar with NLP (from what I'm
>reading on this newsgroup, though I have tried to go through some of the on-line
>material).
>
>Is a NN supposed to mimic the human brain and how sure are we of that?
>
>Sean
>

Hello, Sean,

Neural networks are sometimes applied to the task of natural language
processing, but there haven't been many attempts in this regard. NLP is
implemented today mostly through symbolic methods. This is the
first thing you've got to notice: we have symbolic methods, which
act on characters, words and sentences and rely on rules, and we have
subsymbolic methods, which are based on neural networks.

Some posts ago you mentioned "is-a" links. This reminds me of two
things. One of them is semantic networks, the other is frame-based
methods, both of which are knowledge representation formalisms.

To handle natural language in the traditional way you've got to
select a knowledge representation in which to represent the things
you've processed, but the act of processing is not directly linked
to this representation; it is associated with a task known as
*parsing*. There is an enormous number of parsing methods available,
but just in case you want a very, very simple introduction, I have
appended at the end of this message a text that I wrote some
time ago.

About NNs mimicking the brain, this is just a summer night's
dream. Biological neurons are much more complicated than the
methods used in artificial NNs. What's more, there
are a lot of unknown aspects about biological neurons and,
more importantly, about the behavior of *populations* of neurons.
So when a connectionist says that ANNs are closer to the
brain, you should take this as saying that his kite is closer
to the moon than you are.

Regards,
Sergio Navega.

----------------------------------
The Extremely-Reduced Introduction to Parsing

Conventions

S    Sentence
NP   Noun Phrase
VP   Verb Phrase
Adj  Adjective
Det  Determiner
N    Noun
V    Verb

First step, define a *simple* grammar that you want to parse.
Here is an example:

S -> NP VP
NP -> Det N
NP -> N
NP -> Det Adj N
VP -> V
VP -> V NP
Det -> "the", "a"
N -> "Sally", "man", "ball"
V -> "saw"
Adj -> "beautiful", "old"

From this basic grammar you could go on adding things, like transitive
verbs, intransitive verbs, adverbials, etc. There are several methods
of parsing, but I like one that's intuitive: you start with the phrase,
substituting each token with its part of speech (this is like
starting from the "bottom" of the grammar). Then you work your way
up the grammar, substituting sequences of terms with the nonterminals
that produce them. Here's one example:

Sally saw the old man.
N     V   Det Adj N
NP    V   NP
NP    VP
S

Of course, I'm not considering here words with an ambiguous part
of speech, nor conflicting rules (in that case, you would spawn another
parse tree), but you've got to agree this is a good try in such a
short space.
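
Just to make the procedure concrete, here is a toy sketch in Python of
that bottom-up substitution (my own illustration, not any standard
parsing library; it ignores ambiguity and backtracking, exactly as the
trace above does):

# Grammar rules, longer right-hand sides first so that "Det Adj N"
# is tried before the bare "N" rule, and "V NP" before "V".
GRAMMAR = [
    ("S",  ["NP", "VP"]),
    ("NP", ["Det", "Adj", "N"]),
    ("NP", ["Det", "N"]),
    ("NP", ["N"]),
    ("VP", ["V", "NP"]),
    ("VP", ["V"]),
]

LEXICON = {
    "the": "Det", "a": "Det",
    "Sally": "N", "man": "N", "ball": "N",
    "saw": "V",
    "beautiful": "Adj", "old": "Adj",
}

def parse(sentence):
    # Step 1: replace each token with its part of speech.
    symbols = [LEXICON[w] for w in sentence.split()]
    print(symbols)
    # Step 2: repeatedly replace a rule's right-hand side with its
    # left-hand side until nothing more can be reduced.
    changed = True
    while changed:
        changed = False
        for lhs, rhs in GRAMMAR:
            for i in range(len(symbols) - len(rhs) + 1):
                if symbols[i:i + len(rhs)] == rhs:
                    symbols[i:i + len(rhs)] = [lhs]
                    print(symbols)
                    changed = True
                    break
            if changed:
                break
    return symbols == ["S"]

print(parse("Sally saw the old man"))   # prints the reductions, then True

It does the same reductions as the trace above, just one at a time, and
answers True when it manages to reach S.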

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: NN formats
Date: 17 May 1999 00:00:00 GMT
Message-ID: <37403447@news3.us.ibm.net>
References: <3739A199.9E2B8209@erols.com> <3739B919.B4A6AF36@clickshop.com> <3739C62F.527BAF91@erols.com> <3739E816.4BF79A4B@clickshop.com> <373A143C.6A467BFF@erols.com> <373C1480.7482A5B5@tig.com.au> <373C58B7.E700B8F6@erols.com> <373c8d84@news3.us.ibm.net> <373CA1EE.E0606C88@erols.com> <926725987.14745.0.nnrp-04.c2de710a@news.demon.co.uk> <373EC5FA.4AB9F30C@erols.com> <926896986.2996.0.nnrp-12.c2de710a@news.demon.co.uk>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 17 May 1999 15:22:47 GMT, 129.37.183.189
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.nat-lang,comp.ai.neural-nets

Brian Smith wrote in message
<926896986.2996.0.nnrp-12.c2de710a@news.demon.co.uk>...
>
>>I don't think storing every possible phrase is a great way to do things.
>>
>
>I think it's natural to assume this is what the brain is doing. After all,
>it has 10 billion neurons to use, and each neuron potentially has the
>capability to represent a new word, phrase, story (based on linking to other
>neurons representing lower-level knowledge). Combine this with the fact that
>it only has to store on average 70 years worth of input (learning can slow
>over time too, since more and more situations will fit with existing stored
>knowledge) and learning every new phrase seems to be the way to go. Doing so
>provides context for future understanding and response to new situations.
>

Well, this line of reasoning may present some problems. In fact, we do
have about 100 billion neurons, and each one is connected to neighboring
neurons by something on the order of 100 to 100,000 different connections
(through synapses). It is a very extensive network that, apparently,
would support your assertion even more strongly. But neuroscience is
discovering that this is not the case.

What each neuron does is a very small part of a complex signal processing
task. There's no place in our brain for a "word" or an "image". Everything
is distributed and vague. We don't have exact memories of anything,
only generalized information captured from sensory inputs. We store
only those generalizations.

Besides, the main activity of our brain is not language. I'd say that
language takes less than 30% of the "processing time" of our brain. Most
of the time, our brain is occupied with sensory processing (vision,
audition, touch, proprioception, etc). Even thoughts that are
related to abstract linguistic matters often use areas of our
brain that are also used when we, for instance, move an arm.

What's interesting is that most of our intelligent responses also
appear to come from processing directly related to sensorimotor
areas.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: NN formats
Date: 17 May 1999 00:00:00 GMT
Message-ID: <37403445@news3.us.ibm.net>
References: <3739A199.9E2B8209@erols.com> <3739B919.B4A6AF36@clickshop.com> <3739C62F.527BAF91@erols.com> <3739E816.4BF79A4B@clickshop.com> <373A143C.6A467BFF@erols.com> <373C1480.7482A5B5@tig.com.au> <373C58B7.E700B8F6@erols.com> <373c8d84@news3.us.ibm.net> <373CA1EE.E0606C88@erols.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 17 May 1999 15:22:45 GMT, 129.37.183.189
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.nat-lang,comp.ai.neural-nets

Sorry, I didn't have enough time to follow this thread...

S. Lorber wrote in message <373CA1EE.E0606C88@erols.com>...
>[snip]
>
>If there was a sentence spoken such as "All magazines in the library go on the
>magazine rack", the computer would, say, have trouble with the last word "rack".
>It doesn't know if it's "rack" or "crack".
>
>Suppose there is a semantic net in the form mentioned above: "is a", "goes on",
>etc.  At this point the computer looks for context within this sentence. It
>would go to "library" and look for either of the words in close proximity to
>library.  Perhaps Library -has-> books, videos, magazines, software;
>magazines -are stored-> periodical section, racks.  Because of the close
>proximity it would decide that it was "racks" and not "cracks".
>

That will work fine for some of the cases. The big problem is that
there is a significant number of situations in which this will not
work well, mostly related to specific contexts. For instance, let
your system receive this set of sentences:

a) "That magazines have just arrived"
b) "All that magazines should go to the magazine rack"

This is a coherent pair of sentences, and it is also coherent with the
meaning of sentence b) taken in isolation. But now see:

c) "Those magazines are useless, old material, ripped pages, dirty"
d) "All those magazines should go to the magazine crack"

Now sentence d), even though very similar to b), carries an entirely
different meaning. Deciding whether "crack" should be taken as "rack"
cannot be resolved just by consulting a semantic network. So what
is important here is not the relations as specified in a semantic
net, but the pattern of usage (old, dirty, useless magazines are
more likely to be disposed of in an automatic "crack" machine than new
ones).
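
Just to make this concrete, here is a toy sketch in Python of the
proximity lookup you describe (the relation table is invented by me,
only for illustration):

# A tiny hand-made "semantic net": each word points to the words it is
# directly related to.
RELATIONS = {
    "library":   {"books", "videos", "magazines", "software"},
    "magazines": {"library", "rack", "periodicals"},
    "rack":      {"magazines", "periodicals"},
    "crack":     set(),   # nothing relates "crack" to libraries here
}

def disambiguate(candidates, context_words):
    context = set(context_words)
    def score(word):
        links = RELATIONS.get(word, set())
        direct = len(links & context)                      # direct links
        one_hop = sum(1 for c in context                   # links through
                      if links & RELATIONS.get(c, set()))  # one other node
        return direct + one_hop
    return max(candidates, key=score)

sentence = "all magazines in the library go on the magazine".split()
print(disambiguate(["rack", "crack"], sentence))   # -> rack

This picks "rack" for your library sentence, as you wanted. But notice
that "crack" scores zero no matter what the rest of the sentence says,
so this lookup can never prefer it for sentence d); the evidence that
would favor "crack" there (old, dirty, useless, disposal) is a pattern
of usage, not a stored relation.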

You may then say that the semantic net should contain these relations
too (dirty, old -> disposed). And the problem compounds, because:

e) "The department for classification is in the back of the building"
f) "We store new magazines in the department for classification"
g) "All that magazines should go to the back"

Magazines go to the back? The back of the building. Why? Because that's
where the classification department is located. Even without saying it
explicitly, "all those magazines" refers to the "new" magazines.

There's a huge context to be stored here, a context that *does not*
appear in a semantic network, but is part of a specific situation.
So unless the mechanism is able to learn (by itself) from patterns
of phrases and store all this information associatively, it will
be useless.

This is one of the reasons why I criticize semantic networks, or any
other knowledge representation that does not probabilistically
accumulate patterns of usage.
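
A very rough sketch of what I mean (again in Python, with data invented
by me just to illustrate): instead of fixed links, accumulate counts of
which words co-occur with each candidate, and let those counts decide:

from collections import Counter

usage = {"rack": Counter(), "crack": Counter()}

# "Experience": sentences the system has already heard, together with
# the word that turned out to be correct in each one.
history = [
    ("rack",  "new magazines just arrived go on the magazine rack"),
    ("rack",  "store the new periodicals on the rack in the library"),
    ("crack", "old ripped dirty magazines go to the crack machine for disposal"),
    ("crack", "useless old material is disposed of in the crack"),
]
for word, sentence in history:
    usage[word].update(sentence.split())

def prefer(candidates, context):
    # Score each candidate by how often it has co-occurred with the
    # words of the current context.
    return max(candidates, key=lambda w: sum(usage[w][c] for c in context))

print(prefer(["rack", "crack"], "all those new magazines".split()))            # rack
print(prefer(["rack", "crack"], "those old dirty useless magazines".split()))  # crack

Nothing deep, but it shows the difference: the preference for "crack"
in sentence d) falls out of accumulated usage, not out of a fixed set
of relations.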

Regards,
Sergio Navega.

