
Information Extraction

Vector Semantics

Dan Jurafsky

Why vector models of meaning?
computing the similarity between words

“fast” is similar to “rapid”

“tall” is similar to “height”

Question answering:

Q: “How tall is Mt. Everest?”
Candidate A: “The official height of Mount Everest is 29029 feet”


Word similarity for plagiarism detection


Word similarity for historical linguistics:
semantic change over time


Kulkarni, Al-Rfou, Perozzi, Skiena 2015; Sagi, Kaufmann, Clark 2013

[Figure: semantic broadening of "dog", "deer", and "hound" (y-axis: Semantic Broadening, 0–45) across three periods — Middle (<1250), 1350–1500, and Modern (1500–1710)]

Distributional models of meaning
= vector-space models of meaning
= vector semantics

Intuitions: Zellig Harris (1954):
• "oculist and eye-doctor … occur in almost the same environments"
• "If A and B have almost identical environments we say that they are synonyms."

Firth (1957):
• "You shall know a word by the company it keeps!"

Intuition of distributional word similarity

• Nida example:
  A bottle of tesgüino is on the table
  Everybody likes tesgüino
  Tesgüino makes you drunk
  We make tesgüino out of corn.
• From the context words, humans can guess that tesgüino means an alcoholic beverage like beer
• Intuition for the algorithm:
  • Two words are similar if they have similar word contexts.

Four kinds of vector models

Sparse vector representations:
1. Mutual-information-weighted word co-occurrence matrices

Dense vector representations:
2. Singular value decomposition (and Latent Semantic Analysis)
3. Neural-network-inspired models (skip-grams, CBOW)
4. Brown clusters

Shared intuition

• Model the meaning of a word by "embedding" it in a vector space.
• The meaning of a word is a vector of numbers.
• Vector models are also called "embeddings".
• Contrast: in many computational linguistic applications, word meaning is represented by a vocabulary index ("word number 545").
• Old philosophy joke:
  Q: What's the meaning of life?
  A: LIFE′

Term-document matrix

• Each cell: the count of term t in document d: tf_t,d
• Each document is a count vector in ℕ^|V|: a column below
• Each word is a count vector in ℕ^D: a row below

             As You Like It   Twelfth Night   Julius Caesar   Henry V
battle              1               1               8            15
soldier             2               2              12            36
fool               37              58               1             5
clown               6             117               0             0

• Two documents are similar if their vectors are similar
• Two words are similar if their vectors are similar

Term-context matrix for word similarity

• Two words are similar in meaning if their context vectors are similar

             aardvark   computer   data   pinch   result   sugar   …
apricot          0          0        0      1       0        1
pineapple        0          0        0      1       0        1
digital          0          2        1      0       1        0
information      0          1        6      0       4        0

The word-word or word-context matrix

• Instead of entire documents, use smaller contexts
  • Paragraph
  • Window of ± 4 words
• A word is now defined by a vector over counts of context words
• Instead of each vector being of length D, each vector is now of length |V|
• The word-word matrix is |V| × |V|
• Sample contexts of ± 7 words give the counts above
• We showed only a 4 × 6 excerpt, but the real matrix is 50,000 × 50,000
  • So it's very sparse: most values are 0.
  • That's OK, since there are lots of efficient algorithms for sparse matrices.
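As a concrete sketch of the windowed counting described above, a word-word count table can be accumulated as follows. The function, the window size, and the toy sentence are illustrative assumptions, not code from the lecture:

```python
from collections import Counter

def cooccurrence_counts(tokens, window=4):
    """Count how often each context word appears within +/- `window`
    positions of each target word."""
    counts = Counter()
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[(target, tokens[j])] += 1
    return counts

# Toy example: each (target, context) pair within a +/- 2 window is counted.
tokens = "we make tesguino out of corn".split()
counts = cooccurrence_counts(tokens, window=2)
print(counts[("tesguino", "make")])  # 1
```

Collecting such pairs over a large corpus, row by row, is exactly what fills the |V| × |V| matrix; a sparse data structure like this `Counter` avoids storing the zeros.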
• The size of the window depends on your goals
  • The shorter the window, the more syntactic the representation (± 1–3: very syntacticy)
  • The longer the window, the more semantic the representation (± 4–10: more semanticy)

2 kinds of co-occurrence between 2 words   (Schütze and Pedersen, 1993)

• First-order co-occurrence (syntagmatic association):
  • They are typically nearby each other.
  • wrote is a first-order associate of book or poem.
• Second-order co-occurrence (paradigmatic association):
  • They have similar neighbors.
  • wrote is a second-order associate of words like said or remarked.

Vector Semantics

Positive Pointwise Mutual Information (PPMI)

Problem with raw counts

• Raw word frequency is not a great measure of association between words
• It's very skewed: "the" and "of" are very frequent, but maybe not the most discriminative
• We'd rather have a measure that asks whether a context word is particularly informative about the target word
• Positive Pointwise Mutual Information (PPMI)

Pointwise Mutual Information

Pointwise mutual information: do events x and y co-occur more often than if they were independent?

  PMI(x, y) = log2 [ P(x, y) / ( P(x) · P(y) ) ]

PMI between two words (Church & Hanks 1989): do words x and y co-occur more often than if they were independent?
  PMI(word1, word2) = log2 [ P(word1, word2) / ( P(word1) · P(word2) ) ]

Positive Pointwise Mutual Information

• PMI ranges from −∞ to +∞
• But the negative values are problematic
  • Things are co-occurring less than we expect by chance
  • Unreliable without enormous corpora
    • Imagine w1 and w2 whose probability is each 10⁻⁶
    • Hard to be sure p(w1, w2) is significantly different from 10⁻¹²
  • Plus it's not clear people are good at "unrelatedness"
• So we just replace negative PMI values by 0
• Positive PMI (PPMI) between word1 and word2:

  PPMI(word1, word2) = max( log2 [ P(word1, word2) / ( P(word1) · P(word2) ) ], 0 )

Computing PPMI on a term-context matrix

• Matrix F with W rows (words) and C columns (contexts)
• f_ij is the number of times w_i occurs in context c_j

  p_ij = f_ij / Σ_{i=1}^{W} Σ_{j=1}^{C} f_ij
  p_i* = Σ_{j=1}^{C} f_ij / N
  p_*j = Σ_{i=1}^{W} f_ij / N

  pmi_ij = log2 [ p_ij / ( p_i* · p_*j ) ]

  ppmi_ij = pmi_ij  if pmi_ij > 0,  else 0

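The PPMI formulas translate directly into a few lines of matrix code. This is a minimal sketch assuming a NumPy count matrix, not code from the lecture:

```python
import numpy as np

def ppmi(F):
    """PPMI from a word-by-context count matrix F."""
    F = np.asarray(F, dtype=float)
    p = F / F.sum()                      # p_ij
    pw = p.sum(axis=1, keepdims=True)    # p_i* (row marginals)
    pc = p.sum(axis=0, keepdims=True)    # p_*j (column marginals)
    with np.errstate(divide="ignore"):   # log2(0) = -inf is clipped below
        pmi = np.log2(p / (pw * pc))
    return np.maximum(pmi, 0)            # replace negative PMI with 0

# The apricot/pineapple/digital/information counts over the contexts
# computer, data, pinch, result, sugar:
F = [[0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1],
     [2, 1, 0, 1, 0],
     [1, 6, 0, 4, 0]]
P = ppmi(F)
print(round(P[3, 1], 2))  # ppmi(information, data) = 0.57
```

Running it on the example matrix reproduces the PPMI table computed in the slides that follow (0.57 for (information, data), 1.66 for (digital, computer)).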
Computing the probabilities:

  p_ij = f_ij / Σ_{i=1}^{W} Σ_{j=1}^{C} f_ij
  p(w_i) = Σ_{j=1}^{C} f_ij / N
  p(c_j) = Σ_{i=1}^{W} f_ij / N

  p(w=information, c=data) = 6/19 = .32
  p(w=information) = 11/19 = .58
  p(c=data) = 7/19 = .37

p(w,context)                                            p(w)
             computer   data   pinch   result   sugar
apricot        0.00     0.00    0.05    0.00    0.05    0.11
pineapple      0.00     0.00    0.05    0.00    0.05    0.11
digital        0.11     0.05    0.00    0.05    0.00    0.21
information    0.05     0.32    0.00    0.21    0.00    0.58
p(context)     0.16     0.37    0.11    0.26    0.11

  pmi_ij = log2 [ p_ij / ( p_i* · p_*j ) ]

• pmi(information, data) = log2 ( .32 / (.37 × .58) ) = .58  (.57 using full precision)

p(w,context)                                            p(w)
             computer   data   pinch   result   sugar
apricot        0.00     0.00    0.05    0.00    0.05    0.11
pineapple      0.00     0.00    0.05    0.00    0.05    0.11
digital        0.11     0.05    0.00    0.05    0.00    0.21
information    0.05     0.32    0.00    0.21    0.00    0.58
p(context)     0.16     0.37    0.11    0.26    0.11

PPMI(w,context)
             computer   data   pinch   result   sugar
apricot         –        –     2.25     –      2.25
pineapple       –        –     2.25     –      2.25
digital        1.66     0.00    –      0.00     –
information    0.00     0.57    –      0.47     –


Weighting PMI

• PMI is biased toward infrequent events

• Very rare words have very high PMI values

• Two solutions:

• Give rare words slightly higher probabilities

• Use add-one smoothing (which has a similar effect)


Weighting PMI: Giving rare context words
slightly higher probability

• Raise the context probabilities to the power 𝛼 = 0.75:

  P_α(c) = count(c)^0.75 / Σ_c′ count(c′)^0.75

• This helps because P_α(c) > P(c) for rare c
• Consider two events, P(a) = .99 and P(b) = .01:

  P_α(a) = .99^.75 / (.99^.75 + .01^.75) = .97
  P_α(b) = .01^.75 / (.99^.75 + .01^.75) = .03
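The α-weighting can be sketched in a few lines; the function name and the use of a dict are illustrative assumptions:

```python
def smoothed_context_probs(counts, alpha=0.75):
    """P_alpha(c) = count(c)**alpha / sum over c' of count(c')**alpha.
    `counts` maps each context to its count; raw probabilities work
    too, since the normalization cancels the common denominator."""
    weighted = {c: n ** alpha for c, n in counts.items()}
    total = sum(weighted.values())
    return {c: w / total for c, w in weighted.items()}

# Reproducing the P(a)=.99, P(b)=.01 example: mass shifts toward the rare event.
p = smoothed_context_probs({"a": 0.99, "b": 0.01})
print(round(p["a"], 2), round(p["b"], 2))  # 0.97 0.03
```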


Use Laplace (add-1) smoothing


Add-2 Smoothed Count(w,context)

             computer   data   pinch   result   sugar
apricot          2        2       3       2       3
pineapple        2        2       3       2       3
digital          4        3       2       3       2
information      3        8       2       6       2

p(w,context) [add-2]                                    p(w)
             computer   data   pinch   result   sugar
apricot        0.03     0.03    0.05    0.03    0.05    0.20
pineapple      0.03     0.03    0.05    0.03    0.05    0.20
digital        0.07     0.05    0.03    0.05    0.03    0.24
information    0.05     0.14    0.03    0.10    0.03    0.36
p(context)     0.19     0.25    0.17    0.22    0.17
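The smoothed tables can be reproduced by adding k to every count before the PPMI computation. A sketch assuming a NumPy count matrix (not the lecture's own code):

```python
import numpy as np

def add_k_ppmi(F, k=2):
    """PPMI computed over add-k smoothed counts."""
    F = np.asarray(F, dtype=float) + k            # add-k smoothing
    p = F / F.sum()                               # joint probabilities
    pw = p.sum(axis=1, keepdims=True)             # p(w)
    pc = p.sum(axis=0, keepdims=True)             # p(context)
    return np.maximum(np.log2(p / (pw * pc)), 0)  # clip negative PMI to 0

# Raw counts for apricot, pineapple, digital, information
# over the contexts computer, data, pinch, result, sugar:
F = [[0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1],
     [2, 1, 0, 1, 0],
     [1, 6, 0, 4, 0]]
P2 = add_k_ppmi(F, k=2)
print(round(P2[3, 1], 2))  # PPMI[add-2](information, data) = 0.58
```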


PPMI versus add-2 smoothed PPMI


PPMI(w,context) [add-2]
             computer   data   pinch   result   sugar
apricot        0.00     0.00    0.56    0.00    0.56
pineapple      0.00     0.00    0.56    0.00    0.56
digital        0.62     0.00    0.00    0.00    0.00
information    0.00     0.58    0.00    0.37    0.00

PPMI(w,context)
             computer   data   pinch   result   sugar
apricot         –        –     2.25     –      2.25
pineapple       –        –     2.25     –      2.25
digital        1.66     0.00    –      0.00     –
information    0.00     0.57    –      0.47     –

Vector Semantics

Measuring similarity: the
cosine


Measuring similarity

• Given 2 target words v and w

• We’ll need a way to measure their similarity.

• Most measures of vector similarity are based on the:

• Dot product or inner product from linear algebra

• High when two vectors have large values in same dimensions.

• Low (in fact 0) for orthogonal vectors with zeros in complementary distribution


Problem with the dot product

• The dot product is larger if a vector is longer. Vector length:

  |v| = √( Σ_{i=1}^{N} v_i² )

• Vectors are longer if they have higher values in each dimension
• That means more frequent words will have higher dot products
• That's bad: we don't want a similarity metric to be sensitive to word frequency


Solution: cosine

• Just divide the dot product by the length of the two vectors!

• This turns out to be the cosine of the angle between them!


Cosine for computing similarity

  cos(v, w) = (v · w) / ( |v| |w| )
            = (v / |v|) · (w / |w|)
            = Σ_{i=1}^{N} v_i w_i / ( √(Σ_{i=1}^{N} v_i²) · √(Σ_{i=1}^{N} w_i²) )

(the dot product, normalized: equivalently, the dot product of the two unit vectors)

v_i is the PPMI value for word v in context i
w_i is the PPMI value for word w in context i
cos(v, w) is the cosine similarity of v and w

Sec. 6.3
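The cosine formula is a one-liner per term; a minimal sketch over plain Python lists (the vectors here preview the worked example on the next slide):

```python
import math

def cosine(v, w):
    """Cosine similarity: the dot product divided by the two vector lengths."""
    dot = sum(vi * wi for vi, wi in zip(v, w))
    len_v = math.sqrt(sum(vi * vi for vi in v))
    len_w = math.sqrt(sum(wi * wi for wi in w))
    return dot / (len_v * len_w)

# Count vectors over the contexts ('large', 'data', 'computer'):
apricot = [2, 0, 0]
digital = [0, 1, 2]
information = [1, 6, 1]
print(round(cosine(digital, information), 2))  # 0.58
print(round(cosine(apricot, digital), 2))      # 0.0
```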


Cosine as a similarity metric

• −1: vectors point in opposite directions
• +1: vectors point in the same direction
• 0: vectors are orthogonal

• Raw frequency and PPMI are non-negative, so the cosine ranges from 0 to 1


Which pair of words is more similar?

             large   data   computer
apricot        2       0       0
digital        0       1       2
information    1       6       1

  cos(v, w) = Σ_{i=1}^{N} v_i w_i / ( √(Σ_{i=1}^{N} v_i²) · √(Σ_{i=1}^{N} w_i²) )

  cosine(apricot, information) = (2·1 + 0·6 + 0·1) / ( √(4+0+0) · √(1+36+1) ) = 2 / (2√38) = .16

  cosine(digital, information) = (0·1 + 1·6 + 2·1) / ( √(0+1+4) · √(1+36+1) ) = 8 / (√5 · √38) = .58

  cosine(apricot, digital) = (2·0 + 0·1 + 0·2) / ( √(4+0+0) · √(0+1+4) ) = 0


Visualizing vectors and angles

             large   data
apricot        2       0
digital        0       1
information    1       6

[Figure: the three word vectors plotted with Dimension 1: 'large' on one axis and Dimension 2: 'data' on the other; 'apricot' lies along the 'large' axis, 'digital' along the 'data' axis, and 'information' between them]

Clustering vectors to visualize similarity in co-occurrence matrices

Rohde, Gonnerman, Plaut, "Modeling Word Meaning Using Lexical Co-Occurrence"

[Figure 8: Multidimensional scaling for three noun classes — body parts (HEAD, HAND, ARM, FINGER, …), animals (DOG, CAT, PUPPY, KITTEN, …), and places (AMERICA, EUROPE, CHICAGO, TOKYO, …) fall into three separate regions]

[Figure 9: Hierarchical clustering for three noun classes using distances based on vector correlations — the same nouns cluster cleanly into body parts, animals, and place names]

Rohde et al. (2006)


Other possible similarity measures



Using syntax to define a word’s context

• Zellig Harris (1968)

“The meaning of entities, and the meaning of grammatical
relations among them, is related to the restriction of
combinations of these entities relative to other entities”

• Two words are similar if they have similar syntactic contexts

Duty and responsibility have similar syntactic distribution:

Modified by adjectives:   additional, administrative, assumed, collective, congressional, constitutional …
Objects of verbs:         assert, assign, assume, attend to, avoid, become, breach …


Co-occurrence vectors based on syntactic dependencies

• Each dimension: a context word in one of R grammatical relations
• e.g., subject-of "absorb"

• Instead of a vector of |V| features, a vector of R|V|

• Example: counts for the word cell:

Dekang Lin, 1998 “Automatic Retrieval and Clustering of Similar Words”


Syntactic dependencies for dimensions

• Alternative (Padó and Lapata 2007):
• Instead of having a |V| x R|V| matrix

• Have a |V| x |V| matrix

• But the co-occurrence counts aren't counts of words in a window

• Instead, they are counts of words that occur in one of R dependencies (subject, object,
etc.)

• So M(“cell”,”absorb”) = count(subj(cell,absorb)) + count(obj(cell,absorb))
+ count(pobj(cell,absorb)), etc.
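The collapse from a |V| × R|V| matrix to a |V| × |V| matrix can be sketched by summing counts over relations; the dependency triples below are invented for illustration, not from Lin's or Padó and Lapata's data:

```python
from collections import Counter

# (word, relation, context) dependency triples, invented for illustration.
triples = [
    ("cell", "subj", "absorb"),
    ("cell", "obj", "absorb"),
    ("cell", "subj", "divide"),
]

# M("cell", "absorb") sums counts over all R relations, as described above.
M = Counter()
for word, relation, context in triples:
    M[(word, context)] += 1  # subj, obj, pobj, ... all feed the same cell

print(M[("cell", "absorb")])  # 2
```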


PMI applied to dependency relations

• “Drink it” more common than “drink wine”

• But “wine” is a better “drinkable” thing than “it”

Object of "drink"   Count   PMI
it                    3     1.3
anything              3     5.2
wine                  2     9.3
tea                   2    11.8
liquid                2    10.5

Hindle, Don. 1990. Noun Classification from Predicate-Argument Structure. ACL

Sorted by PMI:

Object of "drink"   Count   PMI
tea                   2    11.8
liquid                2    10.5
wine                  2     9.3
anything              3     5.2
it                    3     1.3


Alternative to PPMI for measuring
association

• tf-idf (that’s a hyphen not a minus sign)

• The combination of two factors
• Term frequency (Luhn 1957): frequency of the word (can be logged)

• Inverse document frequency (IDF) (Sparck Jones 1972)

• N is the total number of documents

• dfi = “document frequency of word i”

• = number of documents containing word i

• w_ij = tf-idf weight of word i in document j:

  w_ij = tf_ij × idf_i

  idf_i = log ( N / df_i )

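The two factors compose into a single weight; a minimal sketch, assuming base-10 logs since the formula above does not fix the base:

```python
import math

def tf_idf(tf, df, n_docs):
    """w_ij = tf_ij * idf_i, with idf_i = log(N / df_i).
    Base 10 is an assumption; the formula leaves the base unspecified."""
    return tf * math.log10(n_docs / df)

# A word that occurs in every document gets weight 0, however frequent;
# a word confined to one document out of 100 gets a large boost.
print(tf_idf(tf=100, df=37, n_docs=37))  # 0.0
print(tf_idf(tf=1, df=1, n_docs=100))    # 2.0
```

This is why "the" and "of" are down-weighted: their document frequency approaches N, driving the idf factor toward zero.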

tf-idf not generally used for word-word
similarity

• But it is by far the most common weighting when considering the relationship
of words to documents


Vector Semantics

Evaluating similarity


Evaluating similarity

• Extrinsic (task-based, end-to-end) Evaluation:
• Question Answering

• Spell Checking

• Essay grading

• Intrinsic Evaluation:
• Correlation between algorithm and human word similarity ratings

• Wordsim353: 353 noun pairs rated 0-10. sim(plane,car)=5.77

• Taking TOEFL multiple-choice vocabulary tests

• Levied is closest in meaning to:

imposed, believed, requested, correlated
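A TOEFL item reduces to a nearest-neighbor query: pick the candidate whose vector has the highest cosine with the target word. The 3-dimensional vectors below are hypothetical, invented purely for illustration; a real system would use vectors learned from a corpus:

```python
import math

def cosine(v, w):
    dot = sum(a * b for a, b in zip(v, w))
    return dot / (math.sqrt(sum(a * a for a in v)) *
                  math.sqrt(sum(b * b for b in w)))

# Hypothetical context vectors, invented for illustration only.
vectors = {
    "levied":     [9, 1, 1],
    "imposed":    [8, 2, 1],
    "believed":   [1, 9, 2],
    "requested":  [2, 2, 9],
    "correlated": [1, 8, 8],
}
choices = ["imposed", "believed", "requested", "correlated"]
answer = max(choices, key=lambda c: cosine(vectors["levied"], vectors[c]))
print(answer)  # imposed
```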


Summary

• Distributional (vector) models of meaning

• Sparse (PPMI-weighted word-word co-occurrence matrices)

• Dense:

• Word-word SVD 50-2000 dimensions

• Skip-grams and CBOW

• Brown clusters 5-20 binary dimensions.
