COMP6714: Information Retrieval & Web Search
Introduction to Information Retrieval
Lecture 1: Boolean retrieval
Unstructured data in 1680
Which plays of Shakespeare contain the words Brutus
AND Caesar but NOT Calpurnia?
One could grep all of Shakespeare’s plays for Brutus
and Caesar, then strip out lines containing Calpurnia?
Why is that not the answer?
Slow (for large corpora)
NOT Calpurnia is non‐trivial
Other operations (e.g., find the word Romans near
countrymen) not feasible
Ranked retrieval (best documents to return)
Later lectures
Sec. 1.1
Term‐document incidence
(Matrix figure: terms × plays; an entry is 1 if the play contains the word, 0 otherwise.)
Query: Brutus AND Caesar BUT NOT Calpurnia
Sec. 1.1
Incidence vectors
So we have a 0/1 vector for each term.
To answer the query: take the vectors for Brutus, Caesar,
and Calpurnia (complemented), and AND them bitwise.
110100 AND 110111 AND 101111 = 100100.
Sec. 1.1
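A minimal Python sketch of this query, assuming the six plays are ordered as in the example (Calpurnia's vector must be 010000, since its complement is 101111):

    # Incidence vectors as 6-bit integers, one bit per play.
    brutus    = 0b110100
    caesar    = 0b110111
    calpurnia = 0b010000          # complement is 101111

    mask = 0b111111               # restrict NOT to the 6 plays
    answer = brutus & caesar & (~calpurnia & mask)
    print(format(answer, '06b'))  # -> 100100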
Answers to query
Antony and Cleopatra, Act III, Scene ii
Agrippa [Aside to DOMITIUS ENOBARBUS]: Why, Enobarbus,
When Antony found Julius Caesar dead,
He cried almost to roaring; and he wept
When at Philippi he found Brutus slain.
Hamlet, Act III, Scene ii
Lord Polonius: I did enact Julius Caesar I was killed i’ the
Capitol; Brutus killed me.
Sec. 1.1
Basic assumptions of Information Retrieval
Collection: Fixed set of documents
Goal: Retrieve documents with information that is
relevant to the user's information need and helps
the user complete a task
Sec. 1.1
The classic search model
(Diagram) A TASK gives rise to an Info Need; the need is put into Verbal form and then formulated as a Query sent to the SEARCH ENGINE, which runs it over the Corpus and returns Results; unsatisfactory results trigger Query Refinement.
Example: the task "Info about removing mice without killing them" becomes the query "mouse trap".
Errors can creep in at each step: Misconception? (task → info need), Mistranslation? (info need → verbal form), Misformulation? (verbal form → query).
How good are the retrieved docs?
Precision: Fraction of retrieved docs that are
relevant to user's information need
Recall: Fraction of relevant docs in collection that
are retrieved
More precise definitions and measurements to
follow in later lectures
Sec. 1.1
Bigger collections
Consider N = 1 million documents, each with about
1000 words.
Avg 6 bytes/word including spaces/punctuation
6GB of data in the documents.
Say there are M = 500K distinct terms among these.
Sec. 1.1
Can't build the matrix
500K x 1M matrix has half-a-trillion 0's and 1's.
But it has no more than one billion 1's. (Why?)
So the matrix is extremely sparse.
What's a better representation?
We only record the 1 positions.
Sec. 1.1
Inverted index
For each term t, we must store a list of all documents
that contain t.
Identify each doc by a docID, a document serial number
Can we use fixed-size arrays for this?
Brutus → 1 2 4 11 31 45 173 174
Caesar → 1 2 4 5 6 16 57 132
Calpurnia → 2 31 54 101
What happens if the word Caesar
is added to document 14?
Sec. 1.2
Inverted index
We need variable-size postings lists
On disk, a continuous run of postings is normal and best
In memory, can use linked lists or variable length arrays
Some tradeoffs in size/ease of insertion
Dictionary → Postings (each entry in a postings list is a posting)
Brutus → 1 2 4 11 31 45 173 174
Caesar → 1 2 4 5 6 16 57 132
Calpurnia → 2 31 54 101
Sorted by docID (more later on why).
Sec. 1.2
Inverted index construction
Documents to be indexed: Friends, Romans, countrymen.
→ Tokenizer → Token stream: Friends Romans Countrymen
→ Linguistic modules (more on these later) → Modified tokens: friend roman countryman
→ Indexer → Inverted index:
friend → 2 4
roman → 1 2
countryman → 13 16
Sec. 1.2
Indexer steps: Token sequence
Sequence of (Modified token, Document ID) pairs.
Doc 1: I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.
Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious
Sec. 1.2
Indexer steps: Sort
Sort by terms
And then docID
Core indexing step
Sec. 1.2
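As a toy illustration of this core step (a sketch, not the course's reference implementation), sort-based construction on the two example documents in Python:

    from itertools import groupby

    docs = {
        1: "i did enact julius caesar i was killed i' the capitol brutus killed me",
        2: "so let it be with caesar the noble brutus hath told you caesar was ambitious",
    }

    # The core indexing step: sort (term, docID) pairs by term, then docID.
    pairs = sorted((term, doc_id)
                   for doc_id, text in docs.items()
                   for term in text.split())

    # Group duplicates into postings lists (one entry per document).
    index = {term: sorted({d for _, d in grp})
             for term, grp in groupby(pairs, key=lambda p: p[0])}

    print(index['brutus'])  # [1, 2]
    print(index['caesar'])  # [1, 2]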
Indexer steps: Dictionary & Postings
Multiple term entries in a single document are merged.
Split into Dictionary and Postings
Doc. frequency information is added.
Why frequency? Will discuss later.
Sec. 1.2
Where do we pay in storage?
Terms and counts (the dictionary)
Pointers (from dictionary entries to postings)
Lists of docIDs (the postings)
Later in the course:
• How do we index efficiently?
• How much storage do we need?
Sec. 1.2
The index we just built
How do we process a query? ← today's focus
Later: what kinds of queries can we process?
Sec. 1.3
Query processing: AND
Consider processing the query:
Brutus AND Caesar
Locate Brutus in the Dictionary; retrieve its postings.
Locate Caesar in the Dictionary; retrieve its postings.
"Merge" the two postings:
Brutus → 2 4 8 16 32 64 128
Caesar → 1 2 3 5 8 13 21 34
Sec. 1.3
The merge
Walk through the two postings simultaneously, in
time linear in the total number of postings entries:
Brutus → 2 4 8 16 32 64 128
Caesar → 1 2 3 5 8 13 21 34
Intersection → 2 8
If the list lengths are x and y, the merge takes O(x+y)
operations.
Crucial: postings sorted by docID.
Sec. 1.3
Intersecting two postings lists (a "merge" algorithm)
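The merge from the slide's figure, rendered as a Python sketch of the standard intersection algorithm:

    def intersect(p1, p2):
        """Intersect two docID-sorted postings lists in O(x+y) time."""
        answer = []
        i = j = 0
        while i < len(p1) and j < len(p2):
            if p1[i] == p2[j]:
                answer.append(p1[i])   # docID in both lists
                i += 1
                j += 1
            elif p1[i] < p2[j]:
                i += 1                 # advance the lagging pointer
            else:
                j += 1
        return answer

    print(intersect([2, 4, 8, 16, 32, 64, 128],
                    [1, 2, 3, 5, 8, 13, 21, 34]))  # [2, 8]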
Boolean queries: Exact match
The Boolean retrieval model lets us pose any query that
is a Boolean expression:
Boolean queries use AND, OR and NOT to
join query terms
Views each document as a set of words
Is precise: a document matches the condition or it does not.
Perhaps the simplest model to build an IR system on
Primary commercial retrieval tool for 3 decades.
Many search systems you still use are Boolean:
Email, library catalog, Mac OS X Spotlight
Sec. 1.3
Example: WestLaw http://www.westlaw.com/
Largest commercial (paying subscribers) legal
search service (started 1975; ranking added
1992)
Tens of terabytes of data; 700,000 users
Majority of users still use Boolean queries
Example query:
What is the statute of limitations in cases involving
the federal tort claims act?
LIMIT! /3 STATUTE ACTION /S FEDERAL /2
TORT /3 CLAIM
foo! = foo*, /3 = within 3 words, /S = in same sentence
Sec. 1.4
Example: WestLaw http://www.westlaw.com/
Another example query:
Requirements for disabled people to be able to access a
workplace
disabl! /p access! /s work-site work-place (employment /3
place)
Note that SPACE is disjunction, not conjunction!
Long, precise queries; proximity operators;
incrementally developed; not like web search
Many professional searchers still like Boolean search
You know exactly what you are getting
But that doesn't mean it actually works better….
Sec. 1.4
Boolean queries:
More general merges
Exercise: Adapt the merge for the queries:
Brutus AND NOT Caesar
Brutus OR NOT Caesar
Can we still run through the merge in time O(x+y)?
What can we achieve?
Sec. 1.3
Merging
What about an arbitrary Boolean formula?
(Brutus OR Caesar) AND NOT
(Antony OR Cleopatra)
Can we always merge in "linear" time?
Linear in what?
Can we do better?
Sec. 1.3
Query optimization
What is the best order for query processing?
Consider a query that is an AND of n terms.
For each of the n terms, get its postings, then
AND them together.
Brutus → 1 2 3 5 8 16 21 34
Caesar → 2 4 8 16 32 64 128
Calpurnia → 13 16
Query: Brutus AND Calpurnia AND Caesar
Sec. 1.3
Query optimization example
Process in order of increasing freq:
start with the smallest set, then keep cutting further.
Brutus → 1 2 3 5 8 16 21 34
Caesar → 2 4 8 16 32 64 128
Calpurnia → 13 16
This is why we kept document freq. in dictionary
Execute the query as (Calpurnia AND Brutus) AND Caesar.
Sec. 1.3
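A sketch of this heuristic, reusing the intersect function from the merge slide (the index literal simply mirrors the postings shown above):

    def intersect_query(terms, index):
        """AND query: process postings in increasing document-frequency
        order, so intermediate results stay as small as possible."""
        postings = sorted((index[t] for t in terms), key=len)
        result = postings[0]
        for p in postings[1:]:
            result = intersect(result, p)
            if not result:          # empty intermediate result: stop early
                break
        return result

    index = {
        'Brutus':    [1, 2, 3, 5, 8, 16, 21, 34],
        'Caesar':    [2, 4, 8, 16, 32, 64, 128],
        'Calpurnia': [13, 16],
    }
    print(intersect_query(['Brutus', 'Calpurnia', 'Caesar'], index))  # [16]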
More general optimization
e.g., (madding OR crowd) AND (ignoble OR
strife) AND (light OR lord)
Get doc. freq.'s for all terms.
Estimate the size of each OR by the sum of its
doc. freq.'s (conservative).
Process in increasing order of OR sizes.
Sec. 1.3
Exercise
Recommend a query processing order for:
(tangerine OR trees) AND
(marmalade OR skies) AND
(kaleidoscope OR eyes)
Q: Any more accurate way to estimate the cardinality of intermediate results?
Q: Can we merge multiple lists (>2) simultaneously?
Problematic Cases
Query processing exercises
Exercise: If the query is friends AND romans AND
(NOT countrymen), how could we use the freq of
countrymen?
Exercise: Extend the merge to an arbitrary Boolean
query. Can we always guarantee execution in time
linear in the total postings size?
Hint: Begin with the case of a Boolean formula
query: in this, each query term appears only once in
the query.
Exercise
Try the search feature at
http://www.rhymezone.com/shakespeare/
Write down five search features you think it could do
better
FASTER POSTINGS MERGES:
SKIP POINTERS/SKIP LISTS
Recall basic merge
Walk through the two postings simultaneously, in
time linear in the total number of postings entries:
Brutus → 2 4 8 41 48 64 128
Caesar → 1 2 3 8 11 17 21 31
Intersection → 2 8
If the list lengths are m and n, the merge takes O(m+n)
operations.
Can we do better?
Yes (if index isn’t changing too fast).
Sec. 2.3
Augment postings with skip pointers (at indexing time)
Why?
To skip postings that will not figure in the search
results.
How?
Where do we place skip pointers?
Brutus → 2 4 8 41 48 64 128, with skip pointers 2 ⇒ 41 ⇒ 128
Caesar → 1 2 3 8 11 17 21 31, with skip pointers 1 ⇒ 11 ⇒ 31
Sec. 2.3
Query processing with skip pointers
Brutus → 2 4 8 41 48 64 128, with skip pointers 2 ⇒ 41 ⇒ 128
Caesar → 1 2 3 8 11 17 21 31, with skip pointers 1 ⇒ 11 ⇒ 31
Suppose we’ve stepped through the lists until we
process 8 on each list. We match it and advance.
We then have 41 on the upper list and 11 on the lower; 11 is smaller.
But the skip successor of 11 on the lower list is 31, so
we can skip ahead past the intervening postings.
Sec. 2.3
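A Python sketch of the skip-aware merge. Skips here are stored as an auxiliary array of target indexes, placed at sqrt(L) even spacing per the heuristic on the next slide; this is one possible in-memory representation, not the on-disk (d, p) layout described later.

    import math

    def add_skips(postings):
        """skips[i] = index we may jump to from position i, or None."""
        L = len(postings)
        step = max(1, int(math.sqrt(L)))
        return [i + step if i % step == 0 and i + step < L else None
                for i in range(L)]

    def intersect_with_skips(p1, p2):
        s1, s2 = add_skips(p1), add_skips(p2)
        answer, i, j = [], 0, 0
        while i < len(p1) and j < len(p2):
            if p1[i] == p2[j]:
                answer.append(p1[i]); i += 1; j += 1
            elif p1[i] < p2[j]:
                # follow skips on the lagging list while they don't overshoot
                if s1[i] is not None and p1[s1[i]] <= p2[j]:
                    while s1[i] is not None and p1[s1[i]] <= p2[j]:
                        i = s1[i]
                else:
                    i += 1
            else:
                if s2[j] is not None and p2[s2[j]] <= p1[i]:
                    while s2[j] is not None and p2[s2[j]] <= p1[i]:
                        j = s2[j]
                else:
                    j += 1
        return answer

    print(intersect_with_skips([2, 4, 8, 41, 48, 64, 128],
                               [1, 2, 3, 8, 11, 17, 21, 31]))  # [2, 8]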
Where do we place skips?
Tradeoff:
More skips → shorter skip spans ⇒ more likely to skip.
But lots of comparisons to skip pointers.
Fewer skips → fewer pointer comparisons, but then long skip
spans ⇒ few successful skips.
Sec. 2.3
Can we skip w/o skip pointers?
Placing skips
Simple heuristic: for postings of length L, use √L
evenly-spaced skip pointers.
This ignores the distribution of query terms.
Easy if the index is relatively static; harder if L keeps
changing because of updates.
This definitely used to help; with modern hardware it
may not (Bahle et al. 2002) unless you're memory-based
The I/O cost of loading a bigger postings list can outweigh
the gains from quicker in-memory merging!
Sec. 2.3
Skip Pointers
A skip pointer (d, p) contains a document number d
and a byte (or bit) position p
Means there is an inverted list posting that starts at
position p, and the posting before it was for document d
(Figure: an inverted list with its skip pointers stored alongside.)
CMS09::Chap5
Skip Pointers: Example
(Figure: an inverted list encoded as d-gaps, together with its skip pointers.)
PHRASE QUERIES AND POSITIONAL
INDEXES
Phrase queries
Want to be able to answer queries such as "stanford
university" – as a phrase
Thus the sentence "I went to university at Stanford"
is not a match.
The concept of phrase queries has proven easily
understood by users; one of the few "advanced search"
ideas that works
Many more queries are implicit phrase queries
For this, it no longer suffices to store only
⟨term : docs⟩ entries
Sec. 2.4
Solution 1: Biword indexes
Index every consecutive pair of terms in the text as a
phrase
For example the text "Friends, Romans,
Countrymen" would generate the biwords
friends romans
romans countrymen
Each of these biwords is now a dictionary term
Two-word phrase query-processing is now
immediate.
Sec. 2.4.1
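A minimal sketch of biword generation (the tokenization here is deliberately naive):

    def biwords(text):
        """Every consecutive pair of terms becomes a dictionary entry."""
        tokens = text.lower().replace(',', ' ').replace('.', ' ').split()
        return [f"{a} {b}" for a, b in zip(tokens, tokens[1:])]

    print(biwords("Friends, Romans, Countrymen."))
    # ['friends romans', 'romans countrymen']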
Longer phrase queries
Longer phrases are processed as we did with wildcards:
stanford university palo alto can be broken into the
Boolean query on biwords:
stanford university AND university palo AND palo alto
Without the docs, we cannot verify that the docs
matching the above Boolean query do contain the
phrase.
Can have false positives!
Sec. 2.4.1
Extended biwords
Parse the indexed text and perform part-of-speech-tagging
(POST).
Bucket the terms into (say) Nouns (N) and articles/
prepositions (X).
Call any string of terms of the form NX*N an extended
biword.
Each such extended biword is now made a term in the
dictionary.
Example: catcher in the rye
         N      X  X   N
Query processing: parse it into N's and X's
Segment query into enhanced biwords
Look up in index: catcher rye
Sec. 2.4.1
Issues for biword indexes
False positives, as noted before
Index blowup due to bigger dictionary
Infeasible for more than biwords, big even for them
Biword indexes are not the standard solution (for all
biwords) but can be part of a compound strategy
Sec. 2.4.1
Solution 2: Positional indexes
In the postings, store, for each term, the position(s) in
which tokens of it appear:
Sec. 2.4.2
Positional index example
For phrase queries, we use a merge algorithm
recursively at the document level
But we now need to deal with more than just
equality
Which of docs 1, 2, 4, 5 could contain "to be
or not to be"?
Sec. 2.4.2
Processing a phrase query
Extract inverted index entries for each distinct term:
to, be, or, not.
Merge their doc:position lists to enumerate all
positions with "to be or not to be".
to:
2:1,17,74,222,551; 4:8,16,190,429,433; 7:13,23,191; …
be:
1:17,19; 4:17,191,291,430,434; 5:14,19,101; …
Same general method for proximity searches
Sec. 2.4.2
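A sketch of the document-then-position merge for a two-word phrase, using the postings fragments above (a dict of lists is just a convenient in-memory stand-in for positional postings):

    def phrase_intersect(p1, p2):
        """Docs where word1 at position pos is followed by word2 at pos+1.
        p1, p2 map docID -> sorted list of positions."""
        answer = {}
        for doc in sorted(p1.keys() & p2.keys()):   # document-level merge
            positions2 = set(p2[doc])
            matches = [pos for pos in p1[doc] if pos + 1 in positions2]
            if matches:
                answer[doc] = matches
        return answer

    to = {2: [1, 17, 74, 222, 551], 4: [8, 16, 190, 429, 433], 7: [13, 23, 191]}
    be = {1: [17, 19], 4: [17, 191, 291, 430, 434], 5: [14, 19, 101]}
    print(phrase_intersect(to, be))  # {4: [16, 190, 429, 433]}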
Proximity queries
LIMIT! /3 STATUTE /3 FEDERAL /2 TORT
Again, here, /k means "within k words of".
Clearly, positional indexes can be used for such
queries; biword indexes cannot.
Exercise: Adapt the linear merge of postings to
handle proximity queries. Can you make it work for
any value of k?
This is a little tricky to do correctly and efficiently
See Figure 2.12 of IIR (Page 39)
There's likely to be a problem on it!
Sec. 2.4.2
Positional index size
You can compress position values/offsets: we'll talk
about that in lecture 5
Nevertheless, a positional index expands postings
storage substantially
Nevertheless, a positional index is now standardly
used because of the power and usefulness of phrase
and proximity queries … whether used explicitly or
implicitly in a ranking retrieval system.
Sec. 2.4.2
Positional index size
Need an entry for each occurrence, not just once per
document
Index size depends on average document size (Why?)
Average web page has <1000 terms
SEC filings, books, even some epic poems … easily 100,000
terms
Consider a term with frequency 0.1%:

Document size | Postings | Positional postings
1000          | 1        | 1
100,000       | 1        | 100
Sec. 2.4.2
Rules of thumb
A positional index is 2–4 times as large as a non-positional
index
Positional index size is 35–50% of the volume of the original
text
Caveat: all of this holds for "English-like" languages
Sec. 2.4.2
Combination schemes
These two approaches can be profitably combined
For particular phrases ("Michael Jackson", "Britney
Spears") it is inefficient to keep on merging positional
postings lists
Even more so for phrases like "The Who"
Williams et al. (2004) evaluate a more
sophisticated mixed indexing scheme
A typical web query mixture was executed in ¼ of the
time of using just a positional index
It required 26% more space than having a positional
index alone
Sec. 2.4.3
Solution 3: Suffix Tree/Array
BANANA$
All suffixes:        Sorted on the strings:
BANANA$ pos:0        A$      pos:5
ANANA$  pos:1        ANA$    pos:3
NANA$   pos:2        ANANA$  pos:1
ANA$    pos:3        BANANA$ pos:0
NA$     pos:4        NA$     pos:4
A$      pos:5        NANA$   pos:2
Sec. 2.4.3
Suffix Array
BANANA$
All suffixes:        Sorted on the strings:
BANANA$ pos:0        A$      pos:5
ANANA$  pos:1        ANA$    pos:3
NANA$   pos:2        ANANA$  pos:1
ANA$    pos:3        BANANA$ pos:0
NA$     pos:4        NA$     pos:4
A$      pos:5        NANA$   pos:2
If the original string is available, each suffix can be
completely specified by the index of its first character:
B A N A N A $
4 3 6 2 5 1 7
Sec. 2.4.3
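A naive construction in Python (O(n² log n) because of the slicing). Note it relies on '$' comparing smaller than the letters, so the lone "$" suffix, which the slide's listing omits, sorts first:

    def suffix_array(s):
        """Sorted starting positions of all suffixes of s."""
        return sorted(range(len(s)), key=lambda i: s[i:])

    s = "BANANA$"
    sa = suffix_array(s)
    print(sa)                   # [6, 5, 3, 1, 0, 4, 2]
    print([s[i:] for i in sa])
    # ['$', 'A$', 'ANA$', 'ANANA$', 'BANANA$', 'NA$', 'NANA$']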
Resources for today's lecture
Introduction to Information Retrieval, chapter 1
Shakespeare:
http://www.rhymezone.com/shakespeare/
Try the neat browse by keyword sequence feature!
Managing Gigabytes, chapter 3.2
Modern Information Retrieval, chapter 8.2
Resources for today's lecture
Skip Lists theory: Pugh (1990)
Multilevel skip lists give same O(log n) efficiency as trees
H.E. Williams, J. Zobel, and D. Bahle. 2004. "Fast Phrase
Querying with Combined Indexes", ACM Transactions on
Information Systems.
http://www.seg.rmit.edu.au/research/research.php?author=4
D. Bahle, H. Williams, and J. Zobel. Efficient phrase querying with an
auxiliary index. SIGIR 2002, pp. 215-221.
Estimating Result Set Size
How many pages contain all of the query terms?
For the query "a b c":
f_abc = N · (f_a/N) · (f_b/N) · (f_c/N) = (f_a · f_b · f_c) / N²
Assuming that terms occur independently
f_abc is the estimated size of the result set
f_a, f_b, f_c are the number of documents that terms a, b, and c occur in
N is the number of documents in the collection
CMS09::Chap4
GOV2 Example
(Figure: example term and term-pair document frequencies from the GOV2 collection.)
Collection size (N) is 25,205,179
Inconsistent Estimate by Google circa 2007
(Figure: Google's reported result-set counts, which are mutually inconsistent.)
Related: iterative proportional scaling
Result Set Size Estimation
Poor estimates because words are not independent
Better estimates possible if co-occurrence
information available
P(a ∩ b ∩ c) = P(a ∩ c) · P(b | a ∩ c)
             ≈ P(a ∩ c) · P(b | c)
             = P(a ∩ c) · P(b ∩ c) / P(c)
f_tropical∩fish∩aquarium = f_tropical∩aquarium · f_fish∩aquarium / f_aquarium
                         = 1921 · 9722 / 26480 = 705    (vs. 1529 actual)
f_tropical∩fish∩breeding = f_tropical∩breeding · f_fish∩breeding / f_breeding
                         = 5510 · 36427 / 81885 = 2451  (vs. 3629 actual)
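Checking the slide's arithmetic with a small helper (integer division matches the rounded-down figures above):

    def cooccur_estimate(f_ac, f_bc, f_c):
        """f_abc ≈ f_ac * f_bc / f_c"""
        return f_ac * f_bc // f_c

    print(cooccur_estimate(1921, 9722, 26480))   # 705   (actual: 1529)
    print(cooccur_estimate(5510, 36427, 81885))  # 2451  (actual: 3629)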
Result Set Estimation
Even better estimates using the initial result set
Estimate is simply C/s
where s is the proportion of the total documents that have been
ranked, and C is the number of documents found that contain all
the query words
E.g., "tropical fish aquarium" in GOV2
after processing 3,000 out of the 26,480 documents that contain
"aquarium", C = 258
f_tropical∩fish∩aquarium = 258 / (3000 ÷ 26480) = 2,277
After processing 20% of the documents,
f_tropical∩fish∩aquarium = 1,778 (1,529 is the real value)
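The same C/s computation as a one-line sketch:

    def inflight_estimate(C, processed, total):
        """Scale matches found so far by the fraction of the list processed."""
        return round(C * total / processed)

    print(inflight_estimate(258, 3000, 26480))  # 2277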
Motivation for a Better Estimator
Example
1. N = 100, fA = 10, fB = 20
2. 1 sec into the intersection query processing, the current
cursors point to docID = 20 and 30 on A's and B's inverted
lists, respectively; and there is 1 document in the
intersection so far. (Assuming docIDs are randomly assigned)
Estimation
1. Based on independence: fAB = (10/100) · (20/100) · 100 = 2
2. Based on "sampling": fAB = 1 · (100 / min(20, 30)) = 5
Can we combine the "strength" of both estimators?
Conditional random sampling [Li & Church, A Sketch Algorithm for
Estimating Two-Way and Multi-Way Associations]