
2/23/2021
CSE 473/573
Introduction to Computer Vision and Image Processing
1
FEATURE
DETECTION
AND MATCHING
PART II
Questions from Previous Lecture
2
Feature Vectors – Image Templates
• How do we do image template matching?
‐ Given an image and a template, pass the template over the image
‐ At each location, compute how many pixels are the same
‐ Find the max over the image
• What are the challenges for character classification using image templates as defined?
‐ Does not generalize – an exact match (or a threshold) is required
‐ Scale dependent
‐ Must match at all locations in the image
‐ Must match each template separately
3
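The brute-force scheme above can be sketched in a few lines. The helper name `match_template` and the toy binary image are illustrative assumptions, not from the lecture:

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image; score = # of equal pixels."""
    H, W = image.shape
    h, w = template.shape
    scores = np.zeros((H - h + 1, W - w + 1), dtype=int)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            scores[r, c] = np.sum(image[r:r + h, c:c + w] == template)
    best = np.unravel_index(np.argmax(scores), scores.shape)  # max over image
    return best, scores

# Toy binary image with the template embedded at (3, 4)
template = np.array([[1, 1], [1, 0]])
image = np.zeros((8, 8), dtype=int)
image[3:5, 4:6] = template
(best_r, best_c), scores = match_template(image, template)
```

Note how the nested loop makes the scale and location dependence explicit: the template is compared at every offset, at one fixed size.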
Image Templates – Improvements*
• Scale Dependency?
‐ Rescale the images and templates
• Matching all locations?
‐ Perform Connected Component analysis to identify candidates
‐ Change the stride of matching
• Match templates separately?
• Other Efficiency Issues
‐ Extract and match other features (smaller feature vector)
‐ Efficient image computation (later)
* But there are always tradeoffs
4
Examples of other Features
• Zoning
• Crossings
• Stroke Extraction
5
Local features: main components
1. Detection: Identify the interest points
2. Description: Extract a vector feature descriptor surrounding each interest point
3. Matching: Determine correspondence between descriptors in two views
6
Main questions
• How to describe a local region?
• How to establish correspondences, i.e., compute matches?
• Where will the interest points come from?
• What are salient features that we’ll detect in multiple views?
7
8
Many Existing Detectors Available
Hessian & Harris [Beaudet ‘78], [Harris ‘88]
Laplacian, DoG [Lindeberg ‘98], [Lowe 1999]
Harris-/Hessian-Laplace [Mikolajczyk & Schmid ‘01]
Harris-/Hessian-Affine [Mikolajczyk & Schmid ‘04]
EBR and IBR [Tuytelaars & Van Gool ‘04]
MSER [Matas ‘02]
Salient Regions [Kadir & Brady ‘01]
Others…
K. Grauman, B. Leibe
9
Finding Corners
• Key property: in the region around a corner, the image gradient has two or more dominant directions
• Corners are repeatable and distinctive
C. Harris and M. Stephens. “A Combined Corner and Edge Detector.” Proceedings of the 4th Alvey Vision Conference, pages 147–151, 1988.
Source: Lana Lazebnik
10
Corner Detection: Basic Idea
• We should easily recognize the point by looking through a small window
• Shifting a window in any direction should give a large change in intensity
“flat” region: no change in all directions
“edge”: no change along the edge direction
“corner”: significant change in all directions
Source: A. Efros
11
Harris Detector formulation
Change of intensity for the shift [u,v]:

E(u,v) = Σ_{x,y} w(x,y) [ I(x+u, y+v) − I(x,y) ]²

where w(x,y) is the window function (either 1 inside the window and 0 outside, or a Gaussian), I(x+u, y+v) is the shifted intensity, and I(x,y) is the original intensity.

Source: R. Szeliski
12
Harris Detector formulation
This measure of change can be approximated by:

E(u,v) ≈ [u v] M [u v]ᵀ

where M is a 2×2 matrix computed from image derivatives:

M = Σ_{x,y} w(x,y) [ Ix²   IxIy ]
                   [ IxIy  Iy²  ]

The sum is over the image region – the area we are checking for a corner. The off-diagonal entry IxIy is the gradient with respect to x times the gradient with respect to y.

13
What does this matrix reveal?
First, consider an axis-aligned corner:
15
What does this matrix reveal?
First, consider an axis-aligned corner:

M = [ Σ Ix²   Σ IxIy ] = [ λ1  0  ]
    [ Σ IxIy  Σ Iy²  ]   [ 0   λ2 ]

This means the dominant gradient directions align with the x or y axis.
If either λ is close to 0, then this is not a corner, so look for locations where both are large.
What if we have a corner that is not aligned with the image axes?

16
Slide credit: David Jacobs
General Case
Since M is symmetric, we have

M = R⁻¹ [ λ1  0  ] R
        [ 0   λ2 ]

We can visualize M as an ellipse, with axis lengths determined by the eigenvalues and orientation determined by R:
‐ direction of the fastest change: axis length (λmax)^(−1/2)
‐ direction of the slowest change: axis length (λmin)^(−1/2)

Slide adapted from Darya Frolova, Denis Simakov.
17
Interpreting the eigenvalues
Classification of 2 image points using eigenvalues of M:
1 and 2 are small;
E is almost constant in all directions
“Edge”
2 >> 1
“Flat” region
“Corner”
1 and 2 are large, ‘-~  ;
12
E increases in all directions
“Edge”
1 >> 2
1 18
Corner response function
Rdet(M)trace(M)2  (  )2 1212
α: constant (0.04 to 0.06)
“Edge”
R<0 ‘- “Corner” R>0
|R| small
“Flat” region
“Edge”
R<0 19 Harris corner detector • Algorithm steps: • Compute M matrix within all image windows to get their R scores • Find points with large corner response (R > threshold)
• Take the points of local maxima of R
C. Harris and M. Stephens. “A Combined Corner and Edge Detector.” Proceedings of the 4th Alvey Vision Conference, pages 147–151, 1988.
20
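The algorithm steps above can be sketched with NumPy. The box window, α = 0.05, and the toy square image are illustrative choices, not specified by the lecture:

```python
import numpy as np

def box_sum(a, k=1):
    """Sum of a over a (2k+1)x(2k+1) window centred on each pixel."""
    p = np.pad(a, k)
    out = np.zeros(a.shape)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def harris_response(img, alpha=0.05, k=1):
    Iy, Ix = np.gradient(img.astype(float))      # image derivatives
    Sxx = box_sum(Ix * Ix, k)                    # window-summed entries of M
    Syy = box_sum(Iy * Iy, k)
    Sxy = box_sum(Ix * Iy, k)
    det = Sxx * Syy - Sxy ** 2                   # λ1·λ2
    trace = Sxx + Syy                            # λ1 + λ2
    return det - alpha * trace ** 2              # R = det(M) − α·trace(M)²

# Toy image: a bright square; its corners give the largest response
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
y, x = np.unravel_index(np.argmax(R), R.shape)   # lands on a square corner
```

A full detector would also threshold R and apply non-maximum suppression, as the steps above state.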
Harris Detector: Workflow
Slide adapted from Darya Frolova, Denis Simakov, Weizmann Institute.
21
Compute corner response R
22
Find points with large corner response: R>threshold
23
Harris Detector: Workflow
Take only the points of local maxima of R
24
25
Harris Detector: Properties
• Rotation invariance
Ellipse rotates but its shape (i.e. eigenvalues) remains the same
Corner response R is invariant to image rotation
26
Harris Detector: Properties
• Not invariant to image scale
All points will be classified as edges
Corner!
27
• How can we detect scale invariant interest points?
28
How to cope with transformations?
• Exhaustive search
• Invariance
• Robustness
29
Exhaustive search
• Multi-scale approach
Slide from T. Tuytelaars ECCV 2006 tutorial
30
Exhaustive search
• Multi-scale approach
31
Exhaustive search
• Multi-scale approach
32
Exhaustive search
• Multi-scale approach
33
Invariance
• Extract patch from each image individually
34
Invariance and covariance
• We want corner locations to be invariant to photometric transformations and covariant to geometric transformations
• Invariance: image is transformed and corner locations do not change
• Covariance: if we have two transformed versions of the same image, features should be detected in corresponding locations
35
Main questions
• Where will the interest points come from?
• What are salient features that we’ll detect in multiple views?
• How to describe a local region?
• How to establish correspondences, i.e., compute matches?
36
Local descriptors
• We know how to detect points
• Next question: how to describe them for matching?
Point descriptor should be:
1. Invariant
2. Distinctive
37
Local descriptors
• Simplest descriptor: list of intensities within a patch
• What is this going to be invariant to?
38
Feature descriptors
• Disadvantage of patches as descriptors:
‐ Small shifts can affect the matching score a lot
• Solution: histograms
39
Source: Lana Lazebnik
Feature descriptors: SIFT
• Scale Invariant Feature Transform
• Descriptor computation:
• Divide patch into 4×4 sub-patches: 16 cells
• Compute histogram of gradient orientations (8 reference angles)
for all pixels inside each sub-patch
• Resulting descriptor: 4x4x8 = 128 dimensions
David G. Lowe. “Distinctive image features from scale-invariant keypoints.” IJCV 60 (2), pp. 91–110, 2004.
40
Source: Lana Lazebnik
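The descriptor layout above can be sketched as follows. This is a simplified illustration: real SIFT also applies Gaussian weighting, trilinear interpolation, and normalization, all omitted here, and the 16×16 patch size is an assumption:

```python
import numpy as np

def sift_like_descriptor(patch):
    """16x16 patch → 4x4 cells × 8 orientation bins = 128 values."""
    assert patch.shape == (16, 16)
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.arctan2(gy, gx) % (2 * np.pi)         # orientation in [0, 2π)
    bins = (ang // (2 * np.pi / 8)).astype(int) % 8
    desc = np.zeros((4, 4, 8))
    for i in range(16):
        for j in range(16):
            # each pixel votes, weighted by magnitude, in its cell's histogram
            desc[i // 4, j // 4, bins[i, j]] += mag[i, j]
    return desc.ravel()                            # 128-dimensional

patch = np.random.default_rng(0).random((16, 16))
d = sift_like_descriptor(patch)                    # d.shape == (128,)
```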
Rotation Invariant Descriptors
Dominant direction of gradient for the image patch
• Find the local orientation
• Rotate the patch according to this angle
This puts the patches into a canonical orientation.
41
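Finding the dominant gradient direction can be sketched as a magnitude-weighted orientation histogram. The 36-bin histogram and the ramp test patch are illustrative assumptions:

```python
import numpy as np

def dominant_orientation(patch, nbins=36):
    """Peak of a magnitude-weighted histogram of gradient orientations."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    hist, edges = np.histogram(ang, bins=nbins, range=(0, 2 * np.pi),
                               weights=mag)
    k = int(np.argmax(hist))
    return (edges[k] + edges[k + 1]) / 2   # centre of the peak bin

# A horizontal ramp: the gradient points along +x, so the angle ≈ 0
patch = np.tile(np.arange(8.0), (8, 1))
theta = dominant_orientation(patch)
```

Rotating the patch by −theta before computing the descriptor is what makes the descriptor rotation invariant.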
Rotation Invariant Descriptors
CSE 576: Computer Vision
Image from Matthew Brown
42
Feature descriptors: SIFT
• Extraordinarily robust matching technique
• Can handle changes in viewpoint
‐ Up to about 60 degrees of out-of-plane rotation
• Can handle significant changes in illumination
‐ Sometimes even day vs. night (below)
• Fast and efficient—can run in real time
• Lots of code available
‐ http://people.csail.mit.edu/albert/ladypack/wiki/index.php/Known_implementations_of_SIFT
Steve Seitz
43
Working with SIFT descriptors
• One image yields:
• n 128-dimensional descriptors: each one is a histogram of the gradient orientations within a patch
‐ [n x 128 matrix]
• n scale parameters specifying the size of each patch
‐ [n x 1 vector]
• n orientation parameters specifying the angle of the patch
‐ [n x 1 vector]
• n 2d points giving positions of the patches
‐ [n x 2 matrix]
44
Main questions
• Where will the interest points come from?
• What are salient features that we’ll detect in multiple views?
• How to describe a local region?
• How to establish correspondences, i.e., compute matches?
45
Feature descriptors
We know how to detect and describe good points.
Next question: how to match them?
46
Feature matching
Given a feature in I1, how to find the best match in I2?
1. Define a distance function that compares two descriptors
2. Test all the features in I2, find the one with the minimum distance
47
Feature distance
How to define the difference between two features f1, f2?
A simple approach is SSD(f1, f2):
‐ the sum of squared differences between the entries of the two descriptors
‐ can give good scores to very ambiguous (bad) matches
48
Feature distance
How to define the difference between two features f1, f2?
A better approach is the ratio distance = SSD(f1, f2) / SSD(f1, f2′)
‐ f2 is the best SSD match to f1 in I2
‐ f2′ is the 2nd best SSD match to f1 in I2
‐ gives large values for ambiguous matches
49
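The ratio test above can be sketched as follows; a match is kept only when the ratio is small. The 0.8 threshold and the toy descriptors are illustrative assumptions:

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.8):
    """Keep (i, j) only when best SSD / second-best SSD < ratio."""
    matches = []
    for i, f1 in enumerate(desc1):
        ssd = np.sum((desc2 - f1) ** 2, axis=1)   # SSD to every f2
        order = np.argsort(ssd)
        best, second = ssd[order[0]], ssd[order[1]]
        if best / max(second, 1e-12) < ratio:     # unambiguous match
            matches.append((i, int(order[0])))
    return matches

desc1 = np.array([[1.0, 0.0], [0.5, 0.5]])
desc2 = np.array([[1.0, 0.1], [0.0, 1.0], [0.52, 0.5]])
matches = ratio_test_matches(desc1, desc2)
```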
Evaluating the results
How can we measure the performance of a feature matcher?
[Figure: example matches with feature distances 50, 75, and 200]
50
True/false positives
[Figure: the match at distance 50 is a true match; the match at distance 200 is a false match]
The distance threshold affects performance
• True positives = # of detected matches that are correct
‐ Suppose we want to maximize these—how to choose the threshold?
• False positives = # of detected matches that are incorrect
‐ Suppose we want to minimize these—how to choose the threshold?
51
Precision, Recall, F1
52
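These metrics follow directly from their standard definitions in terms of true positives (TP), false positives (FP), and false negatives (FN); the example counts below are toy numbers:

```python
def precision_recall_f1(tp, fp, fn):
    """precision = TP/(TP+FP), recall = TP/(TP+FN), F1 = harmonic mean."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g. 8 correct matches, 2 wrong matches, 4 missed matches
p, r, f = precision_recall_f1(8, 2, 4)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.8 0.667 0.727
```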
Evaluating the results
How can we measure the performance of a feature matcher?
true positive rate = # true positives / # matching features (positives)
false positive rate = # false positives / # unmatched features (negatives)
[Figure: plot showing a true positive rate of 0.7 at a false positive rate of 0.1]
53
Evaluating the results
ROC curve (“Receiver Operating Characteristic”)
true positive rate = # true positives / # matching features (positives)
false positive rate = # false positives / # unmatched features (negatives)
[Figure: ROC curve plotting true positive rate vs. false positive rate]
ROC Curves
• Generated by counting # correct/incorrect matches for different thresholds
• Want to maximize area under the curve (AUC)
• Useful for comparing different feature matching methods
• For more info: http://en.wikipedia.org/wiki/Receiver_operating_characteristic
54
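Building an ROC curve as described can be sketched by sweeping the distance threshold and computing the rates at each setting; the AUC is then approximated with the trapezoid rule. The distances and ground-truth labels below are toy data:

```python
import numpy as np

distances = np.array([50.0, 75.0, 120.0, 200.0, 300.0])
is_true_match = np.array([True, True, False, True, False])

P = is_true_match.sum()          # total true matches (positives)
N = (~is_true_match).sum()       # total false matches (negatives)
tpr, fpr = [], []
for t in np.sort(distances):     # accept every match with distance <= t
    accepted = distances <= t
    tpr.append((accepted & is_true_match).sum() / P)
    fpr.append((accepted & ~is_true_match).sum() / N)

# Trapezoid-rule area under the (fpr, tpr) curve
auc = sum((fpr[i + 1] - fpr[i]) * (tpr[i] + tpr[i + 1]) / 2
          for i in range(len(fpr) - 1))
```

A better matcher pushes the curve toward the top-left corner, increasing the AUC toward 1.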
Advanced local features topics
• Self-Similarity
• Space-Time
But we will save that for next time
55