4/1/2021
CSE 473/573
Introduction to Computer Vision and Image Processing
Spend 30 minutes writing up ideas about how the following may be solved. Think about how we moved from pixels to features to ????
• What other tools do we have in our tool bag that can now be applied to “objects”?
• What can we do to deal with layout for recognition?
• What can we do about the brute-force matching problem you may have seen in the project?
But what about layout?
All of these images have the same color histogram
Spatial pyramid
Compute histogram in each spatial bin
Spatial pyramid representation
• Extension of a bag of features
• Locally orderless representation at several levels of resolution
(Figure: pyramid levels 0, 1, and 2.)
Lazebnik, Schmid & Ponce (CVPR 2006)
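To make the construction concrete, here is a minimal sketch (my own illustration, not the authors' code) that assumes features have already been quantized into visual-word labels with known (x, y) locations; the function name `spatial_pyramid_histogram` and the toy data are hypothetical:

```python
import numpy as np

def spatial_pyramid_histogram(xy, words, vocab_size, levels=3):
    """Concatenate visual-word histograms over 1x1, 2x2, 4x4, ... grids.

    xy    : (N, 2) array of feature locations, normalized to [0, 1)
    words : (N,) array of visual-word indices in [0, vocab_size)
    Note: Lazebnik et al. also weight each level; that is omitted here.
    """
    parts = []
    for level in range(levels):
        cells = 2 ** level  # cells per side at this pyramid level
        cx = np.minimum((xy[:, 0] * cells).astype(int), cells - 1)
        cy = np.minimum((xy[:, 1] * cells).astype(int), cells - 1)
        for i in range(cells):
            for j in range(cells):
                in_cell = words[(cx == i) & (cy == j)]
                parts.append(np.bincount(in_cell, minlength=vocab_size))
    return np.concatenate(parts)

# Toy usage: 100 random features in a unit square, 50-word vocabulary.
rng = np.random.default_rng(0)
h = spatial_pyramid_histogram(rng.random((100, 2)), rng.integers(0, 50, 100), 50)
print(h.shape)  # (1 + 4 + 16) * 50 = (1050,)
```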
Scene category dataset
Multi-class classification results (100 training images per class)
Caltech101 dataset
Multi-class classification results (30 training images per class)
Bags of features for action recognition
Space-time interest points
Juan Carlos Niebles, Hongcheng Wang and Li Fei-Fei, Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words, IJCV 2008.
History of ideas in recognition
• 1960s – early 1990s: the geometric era
• 1990s: appearance-based models
• Mid-1990s: sliding window approaches
• Late 1990s: local features
• Early 2000s: parts-and-shape models
• Mid-2000s: bags of features
• Present trends: combination of local and global methods, context, deep learning
Svetlana Lazebnik
LARGE SCALE INSTANCE RETRIEVAL
Multi-view matching
Matching two given views for depth
vs.
searching for a matching view for recognition
Kristen Grauman
Efficient Retrieval?
How to quickly find images in a large database that match a given image region?
Local Retrieval
1. Collect all words within query region
2. Inverted file index to find relevant frames
3. Compare word counts
4. Spatial verification
Sivic & Zisserman
(Figure: a query region and the retrieved frames.)
Kristen Grauman
Application: Large-Scale Retrieval
Query results from 5k Flickr images (demo available for 100k set) [Philbin CVPR’07]
Exercise (5-10 minutes)
What is Indexing?
• Provide a concise definition of an “index” or “indexing”.
• What is a reason for an index?
• List 4 or 5 places you have seen an index.
• Have you ever thought about how an index works?
• Take the time to stop and consider these questions.
Indexing local features
• Each patch / region has a descriptor, which is a point in some high-dimensional feature space (e.g., SIFT)
(Figure: points in the descriptor’s feature space.)
Kristen Grauman
Indexing local features
• Points that are close in feature space have similar descriptors, indicating similar local image content.
(Figure: the descriptor’s feature space, with features from a query image and database images.)
We can easily have millions of features to search!
Kristen Grauman
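To make that scale concern concrete, a small sketch (my illustration, with random stand-in descriptors) contrasting brute-force matching with an indexed search using SciPy's cKDTree; exact tree search itself degrades in high dimensions, which is one motivation for the visual-word indexing that follows:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
db = rng.random((10_000, 128)).astype(np.float32)   # stand-in database descriptors
queries = rng.random((10, 128)).astype(np.float32)  # stand-in query descriptors

# Brute force: compare every query against every database descriptor.
d2 = ((queries[:, None, :] - db[None, :, :]) ** 2).sum(-1)
nn_brute = d2.argmin(axis=1)

# Indexed: build the structure once, then answer many queries.
tree = cKDTree(db)
_, nn_tree = tree.query(queries, k=1)
assert (nn_brute == nn_tree).all()
```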
Indexing local features: inverted file index
• For text documents, an efficient way to find all pages on which a word occurs is to use an index…
• We want to find all images in which a feature occurs.
• To use this idea, we’ll need to map our features to “visual words”.
Visual words
• Map high-dimensional descriptors to tokens/words by quantizing the feature space
• Quantize via clustering; let cluster centers be the prototype “words”
• Determine which word to assign to each new image region by finding the closest cluster center (see the sketch below)
(Figure: descriptor’s feature space, with clusters labeled as words, e.g. “Word #2”.)
Kristen Grauman
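A minimal sketch of these steps (my illustration, clustering random stand-in descriptors with SciPy's kmeans2; `assign_words` is a hypothetical helper):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
descriptors = rng.random((5000, 128))  # stand-in training descriptors (e.g., SIFT)

# Cluster the feature space; the k centroids become the visual "words".
k = 200
centroids, _ = kmeans2(descriptors, k, minit='++')

def assign_words(desc, centroids):
    """Assign each descriptor to the nearest cluster center (visual word)."""
    d2 = ((desc[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

print(assign_words(descriptors[:5], centroids))  # word index per region
```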
Visual words
• Example: each group of patches belongs to the same visual word
Figure from Sivic & Zisserman, ICCV 2003. Kristen Grauman
Visual vocabulary formation
Issues:
• Vocabulary size, number of words
• Sampling strategy: where to extract features?
• Clustering / quantization algorithm
• Unsupervised vs. supervised
• What corpus provides features (universal vocabulary?)
Kristen Grauman
Sampling strategies
(Figure: sparse, at interest points; multiple interest operators; dense, uniformly; randomly. Image credits: F-F. Li, E. Nowak, J. Sivic)
• To find specific, textured objects, sparse sampling from interest points is often more reliable.
• Multiple complementary interest operators offer more image coverage.
• For object categorization, dense sampling offers better coverage.
[See Nowak, Jurie & Triggs, ECCV 2006]
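For concreteness, a hedged sketch of the two main strategies (my illustration; assumes opencv-python with cv2.SIFT_create, available in OpenCV >= 4.4, and uses a random stand-in image):

```python
import numpy as np
import cv2  # assumes opencv-python >= 4.4 for cv2.SIFT_create

img = np.random.randint(0, 256, (240, 320), dtype=np.uint8)  # stand-in image

# Sparse: detect interest points, then describe them.
sift = cv2.SIFT_create()
kps_sparse = sift.detect(img, None)

# Dense: place keypoints on a uniform grid and describe at a fixed scale.
step = 8
kps_dense = [cv2.KeyPoint(float(x), float(y), float(step))
             for y in range(step // 2, img.shape[0], step)
             for x in range(step // 2, img.shape[1], step)]
_, desc_dense = sift.compute(img, kps_dense)
print(len(kps_sparse), desc_dense.shape)  # sparse count varies with image content
```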
Inverted file index
• Database images are loaded into the index mapping words to image numbers
Kristen Grauman
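A minimal sketch of such an index (my illustration; `db_words` is hypothetical toy data standing in for quantized database images):

```python
from collections import defaultdict

# Toy database: image id -> set of visual words present in that image.
db_words = {0: {3, 17, 41}, 1: {3, 8}, 2: {8, 17, 99}}

# Inverted file: word id -> list of images containing that word.
inverted = defaultdict(list)
for img_id, words in db_words.items():
    for w in words:
        inverted[w].append(img_id)

print(inverted[17])  # [0, 2]
```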
Inverted file index
• New query image is mapped to indices of database images that share a word.
Kristen Grauman
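Continuing the toy index above, the query side might look like this (my illustration; `candidate_images` is a hypothetical helper, and ranking by shared-word count is only a cheap pre-filter before full comparison and spatial verification):

```python
from collections import Counter

# Toy inverted file from the previous sketch: word id -> images containing it.
inverted = {3: [0, 1], 8: [1, 2], 17: [0, 2], 41: [0], 99: [2]}

def candidate_images(query_words, inverted):
    """Rank database images by how many words they share with the query."""
    votes = Counter()
    for w in query_words:
        for img_id in inverted.get(w, []):
            votes[img_id] += 1
    return votes.most_common()

print(candidate_images([3, 17], inverted))  # [(0, 2), (1, 1), (2, 1)]
```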
Inverted file index
• Key requirement for inverted file index to be efficient: sparsity
• If most pages/images contain most words, then you’re no better off than exhaustive search.
• Exhaustive search would mean comparing the word distribution of a query versus every page.
Instance recognition: remaining issues
• How to summarize the content of an entire image? And gauge overall similarity?
• Is having the same set of visual words enough to identify the object/scene? How to verify spatial agreement?
• How large should the vocabulary be? How to perform quantization efficiently?
• How to score the retrieval results?
Kristen Grauman
Analogy to documents
Two example documents and the key words extracted from each:

“Of all the sensory impressions proceeding to the brain, the visual experiences are the dominant ones. Our perception of the world around us is based essentially on the messages that reach the brain from our eyes. For a long time it was thought that the retinal image was transmitted point by point to visual centers in the brain; the cerebral cortex was a movie screen, so to speak, upon which the image in the eye was projected. Through the discoveries of Hubel and Wiesel we now know that behind the origin of the visual perception in the brain there is a considerably more complicated course of events. By following the visual impulses along their path to the various cell layers of the optical cortex, Hubel and Wiesel have been able to demonstrate that the message about the image falling on the retina undergoes a step-wise analysis in a system of nerve cells stored in columns. In this system each cell has its specific function and is responsible for a specific detail in the pattern of the retinal image.”
→ sensory, brain, visual, perception, retinal, cerebral cortex, eye, cell, optical nerve, image, Hubel, Wiesel

“China is forecasting a trade surplus of $90bn (£51bn) to $100bn this year, a threefold increase on 2004’s $32bn. The Commerce Ministry said the surplus would be created by a predicted 30% jump in exports to $750bn, compared with a 18% rise in imports to $660bn. The figures are likely to further annoy the US, which has long argued that China’s exports are unfairly helped by a deliberately undervalued yuan. Beijing agrees the surplus is too high, but says the yuan is only one factor. Bank of China governor Zhou Xiaochuan said the country also needed to do more to boost domestic demand so more goods stayed within the country. China increased the value of the yuan against the dollar by 2.1% in July and permitted it to trade within a narrow band, but the US wants the yuan to be allowed to trade freely. However, Beijing has made it clear that it will take its time and tread carefully before allowing the yuan to rise further in value.”
→ China, trade, surplus, commerce, exports, imports, US, yuan, bank, domestic, foreign, increase

ICCV 2005 short course, L. Fei-Fei
Bags of visual words
• Summarize entire image based on its distribution (histogram) of word occurrences.
• Analogous to bag of words representation commonly used for documents.
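As a one-function sketch (my illustration), the whole image becomes a fixed-length count vector over the vocabulary:

```python
import numpy as np

def bag_of_words(word_ids, vocab_size):
    """Histogram of visual-word occurrences for one image."""
    return np.bincount(word_ids, minlength=vocab_size)

h = bag_of_words(np.array([3, 17, 3, 41, 8]), vocab_size=100)
print(h[3], h.sum())  # 2 5
```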
Comparing bags of words
• Rank frames by the normalized scalar product between their (possibly weighted) occurrence counts: a nearest-neighbor search for similar images.
For a vocabulary of $V$ words, with word-count vectors $d_j$ (database image) and $q$ (query), e.g. $d_j = [1\ 8\ 1\ 4]$ and $q = [5\ 1\ 1\ 0]$:

$$\mathrm{sim}(d_j, q) = \frac{\langle d_j, q \rangle}{\|d_j\|\,\|q\|} = \frac{\sum_{i=1}^{V} d_j(i)\, q(i)}{\sqrt{\sum_{i=1}^{V} d_j(i)^2}\,\sqrt{\sum_{i=1}^{V} q(i)^2}}$$
Kristen Grauman
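The same similarity in code (a minimal sketch using the example counts above; tf-idf or other weighting, if used, would be applied to the vectors before this step):

```python
import numpy as np

def sim(d, q):
    """Normalized scalar product (cosine similarity) of word-count vectors."""
    return d @ q / (np.linalg.norm(d) * np.linalg.norm(q))

d_j = np.array([1, 8, 1, 4], dtype=float)  # database image counts (from the slide)
q = np.array([5, 1, 1, 0], dtype=float)    # query counts (from the slide)
print(sim(d_j, q))

# Rank a toy database of histograms against the query, best match first.
db = np.stack([d_j, np.array([5.0, 1, 1, 1]), np.array([0.0, 8, 2, 4])])
ranking = np.argsort([-sim(d, q) for d in db])
print(ranking)
```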
Instance recognition: remaining issues
• How to summarize the content of an entire image? And gauge overall similarity?
• Is having the same set of visual words enough to identify the object/scene? How to verify spatial agreement?
• How large should the vocabulary be? How to perform quantization efficiently?
• How to score the retrieval results?
Kristen Grauman
Vocabulary size
Branching factors
Influence on performance, sparsity
Results for recognition task with 6347 images
Nister & Stewenius, CVPR 2006. Kristen Grauman
Recognition with K-tree
Following slides by David Nister (CVPR 2006)
[Slides 35–57: image-only walkthrough of vocabulary tree construction and querying, from David Nister’s CVPR 2006 talk.]
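Those slides animate building the tree by hierarchical k-means (cluster the descriptors into b groups, then recursively cluster each group, giving b^L leaf words) and descending it greedily at query time. A minimal sketch under that description (my illustration, not Nister & Stewenius's code; `build_vocab_tree` and `lookup` are hypothetical names):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def build_vocab_tree(desc, branch=4, depth=3):
    """Hierarchical k-means: a tree of cluster centers with branch**depth leaves."""
    if depth == 0 or len(desc) < branch:
        return None
    centers, labels = kmeans2(desc, branch, minit='++')
    children = [build_vocab_tree(desc[labels == i], branch, depth - 1)
                for i in range(branch)]
    return {'centers': centers, 'children': children}

def lookup(tree, d):
    """Descend greedily; the path of branch choices encodes the leaf word."""
    word = 0
    while tree is not None:
        i = ((tree['centers'] - d) ** 2).sum(axis=1).argmin()
        word = word * len(tree['centers']) + i
        tree = tree['children'][i]
    return word

rng = np.random.default_rng(0)
tree = build_vocab_tree(rng.random((20000, 128)), branch=4, depth=3)
print(lookup(tree, rng.random(128)))  # leaf word index in [0, 4**3)
```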
Vocabulary Tree: Performance
Evaluated on large databases
• Indexing with up to 1M images
Online recognition for database of 50,000 CD covers
• Retrieval in ~1s
They find experimentally that large vocabularies can be beneficial for recognition.
[Nister & Stewenius, CVPR’06]
K. Grauman, B. Leibe
Slide credit: J. Sivic
Visual words/bags of words
+ flexible to geometry / deformations / viewpoint
+ compact summary of image content
+ provides fixed dimensional vector representation for sets
+ very good results in practice
– background and foreground mixed when bag covers whole image
– optimal vocabulary formation remains unclear
– basic model ignores geometry – must verify afterwards, or encode via features
Kristen Grauman
Instance recognition: remaining issues
• How to summarize the content of an entire image? And gauge overall similarity?
• Is having the same set of visual words enough to identify the object/scene? How to verify spatial agreement?
• How large should the vocabulary be? How to perform quantization efficiently?
• How to score the retrieval results?
Kristen Grauman
Can we be more accurate?
So far, we treat each image as containing a “bag of words”, with no spatial information
Which matches better?
Real objects have consistent geometry
Spatial Verification
(Figure: a query and a DB image with high BoW similarity, two pairs.)
Both image pairs have many visual words in common.
Slide credit: Ondrej Chum
Spatial Verification
(Figure: a query and a DB image with high BoW similarity.)
Only some of the matches are mutually consistent.
Slide credit: Ondrej Chum
Spatial Verification: three basic strategies
• RANSAC
• Typically sort by BoW similarity as initial filter
• Verify by checking support (inliers) for possible transformations
‐ e.g., “success” if we find a transformation with > N inlier correspondences (see the sketch after this list)
• Generalized Hough Transform
• Let each matched feature cast a vote on the location, scale, and orientation of the model object
• Verify parameters with enough votes
• Triplet Verification
Kristen Grauman
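A hedged sketch of the RANSAC strategy above (my illustration; it fits a similarity transform from two sampled matches and counts inliers; `ransac_inliers` is a hypothetical helper):

```python
import numpy as np

def ransac_inliers(src, dst, iters=500, tol=5.0, seed=0):
    """Max inlier support for a similarity transform between matched points.

    src, dst : (N, 2) putative correspondences (query -> database).
    'Success' if the returned count exceeds some threshold N.
    """
    rng = np.random.default_rng(seed)
    best, n = 0, len(src)
    for _ in range(iters):
        i, j = rng.choice(n, 2, replace=False)  # 2 matches fix a similarity
        # Solve x' = a*x - b*y + tx, y' = b*x + a*y + ty for (a, b, tx, ty).
        A, y = [], []
        for k in (i, j):
            x0, y0 = src[k]
            A += [[x0, -y0, 1, 0], [y0, x0, 0, 1]]
            y += [dst[k][0], dst[k][1]]
        try:
            a, b, tx, ty = np.linalg.solve(np.array(A, float), np.array(y, float))
        except np.linalg.LinAlgError:
            continue  # degenerate sample
        pred = np.stack([a * src[:, 0] - b * src[:, 1] + tx,
                         b * src[:, 0] + a * src[:, 1] + ty], axis=1)
        best = max(best, int((np.linalg.norm(pred - dst, axis=1) < tol).sum()))
    return best

# Toy check: a rotated+shifted copy of 50 points, with 10 matches corrupted.
rng = np.random.default_rng(1)
src = rng.random((50, 2)) * 100
dst = src @ np.array([[0.8, 0.6], [-0.6, 0.8]]) + np.array([10.0, 5.0])
dst[40:] = rng.random((10, 2)) * 100
print(ransac_inliers(src, dst))  # ~40 of 50 matches are consistent
```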
[Two image-only slides: RANSAC verification examples.]
Next Time: Finish Local Indexing
• Needed for Project Number 3
• Look for special lecture this week on Project Number 3