Computational Linguistics
CSC 485 Summer 2020

4. Word sense disambiguation

Reading: Jurafsky & Martin: 20.1–5.

Gerald Penn
Department of Computer Science, University of Toronto
Based on slides by Lu Wang

Copyright © 2017 Graeme Hirst and Gerald Penn. All rights reserved.
Word sense disambiguation
• Word sense disambiguation (WSD), lexical disambiguation, resolving lexical ambiguity, lexical ambiguity resolution.
How big is the problem?
• Most words of English have only one sense. (62% in Longman’s Dictionary of Contemporary English; 79% in WordNet.)
• But the others tend to have several senses. (Avg 3.83 in LDOCE; 2.96 in WordNet.)
• Ambiguous words are used more frequently. (In the British National Corpus, 84% of word instances have more than one sense in WordNet.)
• Some senses are more frequent than others.
[Figure: words occurring in the British National Corpus are plotted on the horizontal axis in rank order by frequency in the corpus; the number of WordNet senses per word is plotted on the vertical axis. Each point represents a bin of 100 words and the average number of senses of words in the bin. From Edmonds, Philip. “Disambiguation, Lexical.” Encyclopedia of Language and Linguistics (second edition), Elsevier, 2006, pp. 607–623.]

[Figure: proportion of occurrences of each sense. In each column, the senses are ordered by frequency, normalized per word, and averaged over all words with that number of senses. From Edmonds 2006, op. cit.]
Sense inventory of a word
• Dictionaries and WordNet list the senses of a word.
• Often, no agreement on the proper sense-division of words.
• Don’t want sense-divisions to be too coarse-grained or too fine-grained. (Excessive fine-grainedness is a frequent criticism of WordNet.)
[Sample dictionary entries from The American Heritage Dictionary of the English Language (3rd edition; AHDEL) and the Oxford Advanced Learner’s Dictionary (encyclopedic edition; OALD).]
What counts as the right answer?
• Often, no agreement on which sense a given word-token has.
• Some tokens seem to have two or more senses at the same time.
Which senses are these? 1
• image
1. a picture formed in the mind;
2. a picture formed of an object in front of a mirror or lens;
3. the general opinion about a person, organization, etc., formed or intentionally created in people’s minds;
[and three other senses]
“… of the Garonne, which becomes an unforgettable image. This is a very individual film, mannered, …”
Example from: Kilgarriff, Adam. “Dictionary word sense distinctions: An enquiry into their nature.” Computers and the Humanities, 26: 365–387, 1993. Definitions from Longman Dictionary of Contemporary English, 2nd edition, 1987.
Which senses are these? 2
• distinction
1. the fact of being different;
2. the quality of being unusually good; excellence.
“… before the war, shares with Rilke and Kafka the distinction of having origins which seem to escape …”
Example from: Kilgarriff, Adam. “Dictionary word sense distinctions: An enquiry into their nature.” Computers and the Humanities, 26: 365–387, 1993. Definitions from Longman Dictionary of Contemporary English, 2nd edition, 1987.
What counts as the right answer?
• Therefore, hard to get a definitive sense-tagged corpus.
• And hard to get a human baseline for performance.
• Human annotators agree about 70–95% of the time.
[Depending on word, sense inventory, context size, discussions, etc.]
Baseline algorithms 1
• Assume that the input is PoS-tagged. Why?
• Obvious baseline algorithm:
Pick the most likely sense (or pick one at random).
• Accuracy: 39–62%
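A minimal sketch of this baseline, assuming NLTK’s WordNet interface as the sense inventory (WordNet lists a word’s synsets roughly in order of frequency in sense-tagged text such as SemCor):

```python
from nltk.corpus import wordnet as wn

def most_frequent_sense(word, pos=None):
    """Pick the first-listed WordNet synset; WordNet's sense ordering
    roughly reflects corpus frequency."""
    synsets = wn.synsets(word, pos=pos)
    return synsets[0] if synsets else None

# The PoS tag narrows the candidate senses -- hence the PoS-tagged input.
print(most_frequent_sense("bass", pos=wn.NOUN))
```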
Baseline algorithms 2
• Simple tricks (1):
Notice when an ambiguous word is part of an unambiguous fixed phrase.
• private school, private eye. (But maybe not right in all right.)
Baseline algorithms 3
• Simple tricks (2):
“One sense per discourse”:
A homonymous word is rarely used in more than one sense in the same text.
• If word occurs multiple times, …
• Not true for polysemy.
• Simple tricks (3):
Lesk’s algorithm (see below).
“Context” 1
• The meaning of a word in use depends on (is determined by) its context.
• Circumstantial context.
• Textual context.
• Complete text.
• Sentence, paragraph.
• Window of n words.
“Context” 2
• The words of the context are themselves ambiguous; mutual constraints are needed, but often ignored in practice.
• “One sense per collocation”.
• Collocation: words that tend to co-occur.
Selectional preferences
• Constraints imposed by one word meaning on another—especially verbs on nouns.
Eagle Airways which has applied to serve New York …
Plain old bean soup, served daily since the turn of the century …
I don’t mind washing dishes now and then.
Sprouted grains and seeds are used in preparing salads and dishes such as chop suey.
It was the most popular dish served in the Ladies’ Grill.
• Some words select more strongly than others:
see (weak) — drink (moderate) — elapse (strong)
Examples from the Brown University Standard Corpus of Present-Day American English.
Limitations of selectional preferences
• Negation:
• You can’t eat good intentions.
• It’s nonsense to say that a book elapsed.
• I am not a crook. (Richard Nixon, 17 Nov 1973)
• Odd events:
• Los Angeles secretary Jannene Swift married a 50-pound pet rock in a formal ceremony in Lafayette Park. (Newspaper report)
Limitations of selectional preferences
• Metaphor:
The issue was acute because the exiled Polish Government in London, supported in the main by Britain, was still competing with the new Lublin Government formed behind the Red Army. More time was spent in trying to marry these incompatibles than over any subject discussed at Yalta. … The application of these formulae could not please both sides, for they really attempted to marry the impossible to the inevitable.
Text from the Brown Corpus
Limitations of selectional preferences
• In practice, attempts to induce selectional preferences or to use them have not been very successful.
• Apply in only about 20% of cases, achieve about 50% accuracy. (Mihalcea 2006, McCarthy & Carroll 2003)
• At best, they are a coarse filter for other methods.
Lesk’s algorithm 1
• Sense s_i of ambiguous word w is likely to be the intended sense if many of the words used in the dictionary definition of s_i are also used in the definitions of words in the context window.
• For each sense s_i of w, let D_i be the bag of words in its dictionary definition.
• Bag of words: unordered set of words in a string, excepting those that are very frequent (stop list).
• Let B be the bag of words of the dictionary definitions of all senses of all words v ≠ w in the context window of w. (Might also (or instead) include all v in B.)
• Choose the sense s_i that maximizes overlap(D_i, B).
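A minimal sketch of simplified Lesk under these definitions, assuming NLTK’s WordNet glosses as the dictionary and its stop-word list for the bag-of-words step:

```python
from nltk.corpus import stopwords, wordnet as wn

STOP = set(stopwords.words("english"))

def bag(text):
    # Bag of words: tokens of the string, minus stop-list words.
    return {t for t in text.lower().split() if t.isalpha() and t not in STOP}

def lesk(w, context_words):
    # B: bag of words of the definitions of all senses of the context words.
    B = set()
    for v in context_words:
        for s in wn.synsets(v):
            B |= bag(s.definition())
    # Choose the sense s_i of w maximizing overlap(D_i, B).
    return max(wn.synsets(w), key=lambda s: len(bag(s.definition()) & B))

print(lesk("terminal", ["keyboard"]))  # should prefer the computer sense
```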
Lesk’s algorithm: Example
• … the keyboard of the terminal was …
terminal
1. a point on an electrical device at which electric current enters or leaves.
2. where transport vehicles load or unload passengers or goods.
3. an input-output device providing access to a computer.
keyboard
1. set of keys on a piano or organ or typewriter or typesetting machine or computer or the like.
2. an arrangement of hooks on which keys or locks are hung.
Lesk’s algorithm 2
• Many variants of overlap score, but most common are based on cosine similarity of vectors that count occurrences of each word.
• Results: Simple versions of Lesk achieve accuracy around 50–60%; Lesk plus simple smarts gets to nearly 70%.
• Many variants possible on what is included in D_i and B.
• E.g., include the examples in dictionary definitions.
• E.g., include other manually tagged example texts.
• PoS tags on definitions.
• Give extra weight to infrequent words occurring in the vectors.
Cosine Similarity Score
• For count vectors $\vec{d_i}$ (sense definition) and $\vec{b}$ (context bag):
$\cos(\vec{d_i}, \vec{b}) = \dfrac{\vec{d_i} \cdot \vec{b}}{\|\vec{d_i}\|\,\|\vec{b}\|}$
Maths revision: Bayes’s rule
• $P(A \mid B) = \dfrac{P(B \mid A)\,P(A)}{P(B)}$
• Typical problem: We have B, and want to know which A is now most likely.
Supervised Bayesian methods 1
• Classify contexts according to which sense of each ambiguous word they tend to be associated with.
• Bayes decision rule: Pick the sense s_j that is most probable in the given context: $j = \arg\max_i P(s_i \mid C)$.
• Bag-of-words model of context.
• For each sense s_k of w in the given context C, we know the prior probability P(s_k) of the sense, but require its posterior probability P(s_k | C).
Supervised Bayesian methods 2
• Want sense s′ of word w in context C such that P(s′ | C) > P(s_k | C) for all s_k ≠ s′:
$s' = \arg\max_k P(s_k \mid C) = \arg\max_k \dfrac{P(C \mid s_k)\,P(s_k)}{P(C)} = \arg\max_k P(C \mid s_k)\,P(s_k)$
where P(C) can be dropped because it is the same for every sense.
Supervised Bayesian methods 3
• Naïve Bayes assumption: Attributes v_j of context C of sense s_k of w are conditionally independent of one another. Hence
$P(C \mid s_k) = \prod_j P(v_j \mid s_k)$
Supervised Bayesian methods 4
• Estimate the probabilities from a sense-tagged corpus by maximum likelihood:
$P(v_j \mid s_k) = \dfrac{c(v_j, s_k)}{c(s_k)}, \qquad P(s_k) = \dfrac{c(s_k)}{c(w)}$
where c(s_k) is the number of occurrences of sense s_k, c(w) the number of occurrences of w, and c(v_j, s_k) is the number of times v_j occurs in the context window of s_k.
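A minimal sketch of training and applying this classifier, with add-one smoothing added so that unseen words do not zero out the product (the pure maximum-likelihood estimate above would); `training_data` is a hypothetical list of (context words, sense) pairs for one ambiguous word:

```python
import math
from collections import Counter, defaultdict

def train_nb(training_data):
    sense_count = Counter()            # c(s_k)
    word_count = defaultdict(Counter)  # c(v_j, s_k)
    vocab = set()
    for context, sense in training_data:
        sense_count[sense] += 1
        for v in context:
            word_count[sense][v] += 1
            vocab.add(v)
    return sense_count, word_count, vocab

def classify(context, sense_count, word_count, vocab):
    total = sum(sense_count.values())
    best_sense, best_logp = None, float("-inf")
    for s, n in sense_count.items():
        logp = math.log(n / total)     # log prior, log P(s_k)
        denom = sum(word_count[s].values()) + len(vocab)
        for v in context:              # independence: add log P(v_j | s_k)
            logp += math.log((word_count[s][v] + 1) / denom)
        if logp > best_logp:
            best_sense, best_logp = s, logp
    return best_sense
```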
Training corpora for supervised WSD
• Problem: Need large training corpus with each ambiguous word tagged with its sense.
• Expensive, time-consuming human work.
• “Large” for a human is small for WSD training.
• Some sense-tagged corpora:
• SemCor: 700K PoS-tagged tokens (200K WordNet-sense-tagged) of Brown corpus and a short novel.
• Singapore DSO corpus: About 200 “interesting” word-types tagged in about 2M tokens of Brown corpus and Wall Street Journal.
Evaluation
• Systems based on naïve Bayes methods have achieved 62–72% accuracy for selected words with adequate training data. (Màrquez et al. 2006; Edmonds 2006)
Recent Neural Approaches
• There haven’t been many successful ones, but the currently most influential (Luo et al. 2018) is just an extension of Lesk’s algorithm:
• Instead of word counts, use lexical-semantic vector-space embeddings.
• Use an attention mechanism to distort context-word vectors and dictionary-definition vectors with respect to each other.
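A toy sketch of the idea, not Luo et al.’s exact architecture: score each candidate sense by dot-product attention between context-word embeddings and the embeddings of its dictionary-definition words (all embedding matrices are assumed to be given):

```python
import numpy as np

def sense_scores(context_vecs, gloss_vecs_per_sense):
    """context_vecs: (n_ctx, d) context-word embeddings;
    gloss_vecs_per_sense: one (n_gloss, d) array per candidate sense."""
    scores = []
    for G in gloss_vecs_per_sense:
        sims = context_vecs @ G.T                    # all pairwise dot products
        e = np.exp(sims - sims.max(axis=1, keepdims=True))
        attn = e / e.sum(axis=1, keepdims=True)      # attention over gloss words
        scores.append(float((attn * sims).sum()))    # attention-weighted similarity
    return scores

# Toy usage with random stand-ins for real embeddings; pick the argmax sense.
rng = np.random.default_rng(0)
context = rng.normal(size=(5, 50))
glosses = [rng.normal(size=(8, 50)), rng.normal(size=(6, 50))]
print(sense_scores(context, glosses))
```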
[Figure: these dot products measure the similarity between dictionary senses and the context.]
Evaluation
• Lesk’s algorithm: 50–60%
• Naïve Bayes: 62–72%
• Neural Lesk: 65–72%
• Neural Lesk with “sentence” instead of word embeddings: 69–72%
• Neural hierarchical model with both: 68–73%
• Verbs are still particularly tough: 56–58%
Yarowsky 1995: Unsupervised decision-list learning
• Decision list: ordered list of strong, specific clues to senses of homonym.*
*Yarowsky calls them “polysemous words”.
Decision list for bass:

LogL   Context             Sense
10.98  fish in ±k words    FISH
10.92  striped bass        FISH
 9.70  guitar in ±k words  MUSIC
 9.20  bass player         MUSIC
 9.10  piano in ±k words   MUSIC
 8.87  sea bass            FISH
 8.49  play bass           MUSIC
 8.31  river in ±k words   FISH
 7.71  on bass             MUSIC
 5.32  bass are            FISH
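A minimal sketch of applying such a decision list: rules are tried in descending score order, and the first match decides. The `Context` representation and the rules here are illustrative, mirroring the table above; real rules come from training:

```python
from collections import namedtuple

# Hypothetical context representation: a window of nearby words plus the
# bigram immediately containing the target word.
Context = namedtuple("Context", ["window", "bigram"])

rules = [  # (log-likelihood score, test predicate, sense)
    (10.98, lambda c: "fish" in c.window, "FISH"),
    (10.92, lambda c: c.bigram == "striped bass", "FISH"),
    (9.70,  lambda c: "guitar" in c.window, "MUSIC"),
    (9.20,  lambda c: c.bigram == "bass player", "MUSIC"),
]

def classify(ctx, rules, default="FISH"):
    # Try rules in descending score order; the first match decides.
    for _, test, sense in sorted(rules, key=lambda r: -r[0]):
        if test(ctx):
            return sense
    return default  # e.g., fall back to the most frequent sense

print(classify(Context(window={"caught", "fish", "lake"}, bigram="bass are"), rules))
```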
Yarowsky 1995: Basic ideas
• Separate decision list learned for each homonym.
• Bootstrapped from seeds, very large corpus, heuristics.
• One sense per discourse.
• One sense per collocation.
• Uses a supervised classification algorithm to build the decision list.
• Training corpus: 460M words, mixed texts.
Yarowsky 1995: Method 1
• 1–2. Get data (instances of target word); choose seed rules; apply them.
Sample concordance lines for plant, which has a living-organism sense and a factory sense; the seed collocations plant life and manufacturing plant (following Yarowsky 1995) tag some lines, and the rest start out untagged:

used to strain microscopic plant life from the
zonal distribution of plant life
close-up studies of plant life and natural
too rapid growth of aquatic plant life in water
the proliferation of plant and animal life
establishment phase of the plant virus life cycle
that divide life into plant and animal kingdom
many dangers to plant and animal life
mammals . Animal and plant life are delicately

automated manufacturing plant in Fremont
vast manufacturing plant and distribution
chemical manufacturing plant , producing viscose
keep a manufacturing plant profitable without
computer manufacturing plant and adjacent
discovered at a St. Louis plant manufacturing
copper manufacturing plant found that they
copper wire manufacturing plant , for example
s cement manufacturing plant in Alpena

vinyl chloride monomer plant , which is
molecules found in plant and animal tissue
Nissan car and truck plant in Japan is
and Golgi apparatus of plant and animal cells
union responses to plant closures .
cell types found in the plant kingdom are
company said the plant is still operating
Although thousands of plant and animal species
animal rather than plant tissues can be
[Figure from Yarowsky 1995: initial state after use of seed rules.]
Yarowsky 1995: Method 2
• 3. Iterate:
• 3a. Create a new decision-list classifier:
supervised training with the data tagged so far.
Looks for collocations as features for classification.
• 3b. Apply new classifier to whole data set, tag some new instances.
• 3c. Optional: Apply one-sense-per-discourse rule wherever one sense now dominates a text.
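A high-level sketch of the loop; `apply_seeds`, `train_decision_list`, `apply_with_threshold`, and `one_sense_per_discourse` are hypothetical helpers standing in for the seed rules, the supervised decision-list learner (step 3a), confident re-tagging (step 3b), and the optional discourse heuristic (step 3c):

```python
def yarowsky(instances, seed_rules, max_iters=20):
    labels = apply_seeds(seed_rules, instances)                       # steps 1-2
    for _ in range(max_iters):
        dlist = train_decision_list(instances, labels)                # 3a
        new_labels = apply_with_threshold(dlist, instances)           # 3b
        new_labels = one_sense_per_discourse(new_labels, instances)   # 3c (optional)
        if new_labels == labels:                                      # step 4: converged
            break
        labels = new_labels
    return train_decision_list(instances, labels)                     # step 5: final classifier
```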
[Figure from Yarowsky 1995: intermediate state.]
[Figure from Yarowsky 1995: final state.]
Yarowsky 1995: Method 3
• 4. Stop when converged. (Optional: Apply one-sense-per-discourse constraint.)
• 5. Use final decision list for WSD.
Yarowsky 1995: Evaluation
• Experiments: 12 homonymous words.
• 400–12,000 hand-tagged instances of each.
• Baseline (most frequent sense) = 63.9%.
• Best results, avg 96.5% accuracy.
• Base seed on dictionary definition; use one-sense-per-discourse heuristic.
• As good as or better than supervised algorithm used directly on fully labelled data.
Yarowsky 1995: Discussion 1
• Strength of method:
• The one-sense heuristics.
• Use of precise lexical and positional information.
• Huge training corpus.
• Bootstrapping: Unsupervised use of supervised algorithm.
• Disadvantages:
• Train each word separately.
• Homonyms only. Why?
Yarowsky 1995: Discussion 2
• Not limited to regular words; e.g., in speech synthesis system:
• / as fraction or date:
3/4 → “three-quarters” or “third of April”.
• Roman number as cardinal or ordinal: chapter VII → “chapter seven”; Henry VII → “Henry the seventh”.
Yarowsky, David. “Homograph disambiguation in speech synthesis.” In Jan van Santen, Richard Sproat, Joseph Olive and Julia Hirschberg (eds.), Progress in Speech Synthesis. Springer-Verlag, pp. 159–175, 1996.