
3/2/2021
CSE 473/573
Introduction to Computer Vision and Image Processing
FEATURE DESCRIPTORS
Questions from Last Lecture?
Homework 2 Due Thursday (3/4)
Quiz Next Tuesday (3/9)
Updated Schedule – P2 Assigned Next Tuesday (3/9)

REVIEW: Feature descriptors
• Disadvantage of patches as descriptors: small shifts can affect the matching score a lot
• Solution: histograms
Source: Lana Lazebnik
REVIEW: Feature descriptors: SIFT
• Scale Invariant Feature Transform
• Descriptor computation:
• Divide patch into 4×4 sub-patches: 16 cells
• Compute a histogram of gradient orientations (8 reference angles) for all pixels inside each sub-patch
• Resulting descriptor: 4×4×8 = 128 dimensions (sketched below)
David G. Lowe. “Distinctive image features from scale-invariant keypoints.” IJCV 60(2), pp. 91-110, 2004.
Source: Lana Lazebnik
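To make the 4×4×8 layout concrete, here is a minimal sketch (not Lowe's implementation) that assumes a 16×16, scale- and rotation-normalized grayscale patch; real SIFT additionally applies Gaussian weighting, trilinear interpolation between bins, and normalization with clipping at 0.2.

```python
import numpy as np

def sift_like_descriptor(patch):
    """patch: 16x16 grayscale array, already scale/rotation normalized (assumption)."""
    gy, gx = np.gradient(patch.astype(float))        # image gradients
    mag = np.hypot(gx, gy)                           # gradient magnitude
    ori = np.arctan2(gy, gx) % (2 * np.pi)           # orientation in [0, 2*pi)
    bins = np.clip((ori / (2 * np.pi / 8)).astype(int), 0, 7)   # 8 reference angles

    desc = np.zeros((4, 4, 8))
    for i in range(4):                               # 4x4 grid of 4x4-pixel cells
        for j in range(4):
            cell = (slice(4 * i, 4 * i + 4), slice(4 * j, 4 * j + 4))
            for b in range(8):
                desc[i, j, b] = mag[cell][bins[cell] == b].sum()

    desc = desc.ravel()                              # 4x4x8 = 128 dimensions
    return desc / (np.linalg.norm(desc) + 1e-8)      # L2 normalization
```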

Rotation Invariant Descriptors
Dominant direction of gradient for the image patch:
• Find the local orientation
• Rotate the patch according to this angle (sketched below)
This puts the patches into a canonical orientation.
CSE 576: Computer Vision
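A hedged sketch of this orientation normalization (not the lecture's code): estimate the dominant gradient direction from a magnitude-weighted orientation histogram, then rotate the patch by minus that angle. SIFT's Gaussian weighting, peak interpolation, and multiple-peak handling are omitted.

```python
import numpy as np
from scipy.ndimage import rotate

def dominant_orientation(patch, n_bins=36):
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx) % (2 * np.pi)
    hist, edges = np.histogram(ori, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    peak = int(np.argmax(hist))
    return (edges[peak] + edges[peak + 1]) / 2        # angle of the strongest bin

def canonical_patch(patch):
    angle = dominant_orientation(patch)
    # rotate by -angle (in degrees) so the dominant direction maps to angle 0
    return rotate(patch, -np.degrees(angle), reshape=False, mode='nearest')
```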
Feature descriptors: SIFT
• Extraordinarily robust matching technique
• Can handle changes in viewpoint
‐ Up to about 60 degrees of out-of-plane rotation
• Can handle significant changes in illumination
‐ Sometimes even day vs. night (below)
• Fast and efficient; can run in real time
• Lots of code available
‐ http://people.csail.mit.edu/albert/ladypack/wiki/index.php/Known_implementations_of_SIFT
Steve Seitz

Observation:
• Many of the filters we have defined are made of rectangles or combinations of rectangles!
[Figure: a filter built from two adjacent rectangles, a block of +1 weights next to a block of -1 weights, on a zero background]
Integral Image
• Transforms an image in a single pass to an image where every pixel is the sum of all of the pixels above and to the left of it.
S(x, y) = i(x, y) + S(x-1, y) + S(x, y-1) - S(x-1, y-1), where i(x, y) is the input pixel value
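A minimal sketch of that single pass, using the recurrence above (numpy's two cumulative sums compute the same table):

```python
import numpy as np

def integral_image(img):
    """S[y, x] = sum of img[0:y+1, 0:x+1]; array indexing is [row=y, col=x]."""
    img = img.astype(np.int64)
    S = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            S[y, x] = (img[y, x]
                       + (S[y, x - 1] if x > 0 else 0)          # S(x-1, y)
                       + (S[y - 1, x] if y > 0 else 0)          # S(x, y-1)
                       - (S[y - 1, x - 1] if x > 0 and y > 0 else 0))
    return S

# equivalent one-liner: S = img.astype(np.int64).cumsum(axis=0).cumsum(axis=1)
```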

Integral image
• A quick and effective way of calculating the sum of values (pixel values) in a rectangular subset of an image
[Figure: computing the integral image at (x, y) from its already-computed neighbors at (x-1, y) and (x, y-1)]
Integral images
[Figure: input image and its integral image, with a rectangle whose corners are labeled A, B, C, D]
• What's the sum of pixels in the blue rectangle?

Why rectangles?
[Figure: integral image; the value at (x, y) is the sum of all pixels above and to the left of (x, y)]
• Answer: very, very fast to compute
• Trick: integral images (a.k.a. summed-area tables)
Integral images
[Figure: integral image with a query rectangle whose corners are labeled A, B, C, D]

Integral images
[Figures: the integral image values at the corners A, B, C, and D each equal the sum of all pixels above and to the left of that corner]
Integral images
• What’s the sum of pixels in the rectangle?
Computing sum within a rectangle
• Let A,B,C,D be the values of the integral image at the corners of a rectangle
• Then the sum of original image values within the rectangle can be computed as:
sum = D – B – C + A
• Only 3 additions are required for any size of rectangle!
[Figure: rectangle with integral-image corner values A (top-left), B (top-right), C (bottom-left), D (bottom-right)]
Lana Lazebnik
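A minimal sketch of the constant-time box sum using the integral image from the earlier sketch; the four corner reads and sum = D – B – C + A are the entire cost, independent of rectangle size.

```python
import numpy as np

def box_sum(S, top, left, bottom, right):
    """Sum of image pixels in rows [top, bottom] and columns [left, right],
    given the integral image S (same size as the image)."""
    D = S[bottom, right]
    B = S[top - 1, right] if top > 0 else 0
    C = S[bottom, left - 1] if left > 0 else 0
    A = S[top - 1, left - 1] if top > 0 and left > 0 else 0
    return D - B - C + A
```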
Local Descriptors: SURF
• Fast approximation of SIFT idea
‐ Efficient computation by 2D box filters & integral images
‐ 6 times faster than SIFT
‐ Equivalent quality for object identification
• GPU implementation available
‐ Feature extraction @ 200 Hz (detector + descriptor, 640×480 image)
‐ http://www.vision.ee.ethz.ch/~surf
[Bay, ECCV’06], [Cornelis, CVGPU’08]
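If you only need descriptors in practice, OpenCV ships several of these detectors. A minimal usage sketch with SIFT is below; OpenCV's SURF lives in the opencv-contrib package and may be disabled in prebuilt wheels, but follows the same pattern. The image path is a placeholder.

```python
import cv2

img = cv2.imread('scene.jpg', cv2.IMREAD_GRAYSCALE)   # hypothetical file name
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
print(len(keypoints), descriptors.shape)               # N keypoints, each 128-D
```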

MSER Operator:
Maximally Stable Extremal Regions
• MSER regions are connected areas characterized by almost uniform intensity, surrounded by contrasting background.
• They are constructed through a process of trying multiple thresholds.
• The selected regions are those that maintain unchanged shapes over a large set of thresholds.
Matas J, Chum O, Urban M, Pajdla T. Robust wide-baseline stereo from maximally stable extremal regions. Image and Vision Computing. 2004;22(10):761-767.
Examples of MSER Regions
[Figure: example MSER regions detected in sample images]

Another Example
[Figure: another example of detected MSER regions]
MSER Construction
[Figure: intensity image shown as a surface function]
Watershed segmentation algorithms come from the concept of filling a basin with water to different levels.
MSER Computation
• For each threshold, compute the connected binary regions.
• Compute a function, area A(i), at each threshold value i.
• Analyze this function for each potential region to determine those that persist with similar function value over multiple thresholds.
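In practice this threshold sweep is available off the shelf; below is a minimal sketch using OpenCV's MSER implementation (the image path is a placeholder). Internally, a region is kept when its area A(i) changes little over a range of thresholds, roughly |A(i+d) - A(i-d)| / A(i) staying below a limit for threshold step d.

```python
import cv2

gray = cv2.imread('building.jpg', cv2.IMREAD_GRAYSCALE)   # hypothetical image path
mser = cv2.MSER_create()                                   # default delta / area limits
regions, bboxes = mser.detectRegions(gray)                 # pixel lists and bounding boxes
print(len(regions), 'stable regions detected')
```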

Analysis of Area Function
[Figure: area A(i) of a candidate region plotted as a function of the threshold i]
Regions detected at different thresholds have different areas.
Normalization
[Figure: MSER regions → ellipse fitting → ellipse dilation]
Affine transformation from ellipses to circular regions plus intensity normalization
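A hedged sketch of the geometric half of this step (intensity normalization omitted): fit an ellipse to a region via its second-moment matrix, then map the region to a canonical circular frame with the inverse square root of that matrix.

```python
import numpy as np

def normalize_region(points):
    """points: (N, 2) array of (x, y) pixel coordinates of one MSER region."""
    mean = points.mean(axis=0)
    cov = np.cov((points - mean).T)                   # 2x2 ellipse shape matrix
    vals, vecs = np.linalg.eigh(cov)
    inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return (points - mean) @ inv_sqrt.T               # roughly a unit circle now
```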
Maximally Stable Extremal Regions
• Based on the watershed segmentation algorithm
• Select regions that stay stable over a large parameter range
[Matas ‘02]
Comparison
[Figure: comparison of Harris and MSER detections on the same scene]

Local Descriptors: Shape Context
Count the number of points inside each bin, e.g.:
Count = 4   Count = 10
Log-polar binning: more precision for nearby points, more flexibility for farther points.
Belongie & Malik, ICCV 2001
K. Grauman, B. Leibe
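A minimal sketch of that binning for a single reference point, with 5 radial × 12 angular log-polar bins as in Belongie & Malik; distances are normalized by their mean, which is a common convention.

```python
import numpy as np

def shape_context(ref, points, n_r=5, n_theta=12, r_inner=0.125, r_outer=2.0):
    """Log-polar histogram of the positions of `points` relative to `ref`."""
    d = points - ref
    r = np.hypot(d[:, 0], d[:, 1])
    keep = r > 0                                      # drop the reference point itself
    r = r[keep] / (r[keep].mean() + 1e-12)            # scale-normalize distances
    theta = np.arctan2(d[keep, 1], d[keep, 0]) % (2 * np.pi)

    r_edges = np.logspace(np.log10(r_inner), np.log10(r_outer), n_r + 1)
    r_bin = np.clip(np.searchsorted(r_edges, r) - 1, 0, n_r - 1)
    t_bin = np.clip((theta / (2 * np.pi / n_theta)).astype(int), 0, n_theta - 1)

    hist = np.zeros((n_r, n_theta), dtype=int)
    inside = r < r_outer                              # ignore points beyond the outer radius
    np.add.at(hist, (r_bin[inside], t_bin[inside]), 1)
    return hist
```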
Shape Context Descriptor

Self-similarity Descriptor
Matching Local Self-Similarities across Images and Videos, Shechtman and Irani, 2007

Choosing a detector
• What do you want it for?
– Precise localization in x-y: Harris
– Good localization in scale: Difference of Gaussian
– Flexible region shape: MSER
• Best choice often application dependent
– Harris-/Hessian-Laplace/DoG work well for many natural categories
– MSER works well for buildings and printed things
• Why choose?
– Get more points with more detectors
• There have been extensive evaluations/comparisons
– [Mikolajczyk et al., IJCV’05, PAMI’05]
– All detectors/descriptors shown here work well

Comparison of Key Feature Detectors
[Figure: comparison table of key feature detectors, from Tuytelaars & Mikolajczyk 2008]
Things to remember
• Keypoint detection: repeatable and distinctive
• Corners, blobs, stable regions
• Harris, DoG
• Descriptors: robust and selective
• Spatial histograms of orientation
• SIFT

Main questions
• Where will the interest points come from?
• What are salient features that we’ll detect in multiple views?
• How to describe a local region?
• How to establish correspondences, i.e., compute matches?
Feature descriptors
We know how to detect and describe good points.
Next question: how do we match them?

Matching features (cont’d)
• Given a feature in I1, how to find the best match in I2?
1. Define distance function that compares two descriptors.
2. Test all the features in I2, find the one with min distance.
[Figure: feature f1 in image I1 and its candidate match f2 in image I2]
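A minimal sketch of exactly that loop: compute the SSD from f1 to every descriptor in I2 and keep the minimum.

```python
import numpy as np

def best_match(f1, desc2):
    """f1: (D,) descriptor from I1; desc2: (N2, D) descriptors from I2."""
    d = np.sum((desc2 - f1) ** 2, axis=1)   # SSD to every feature in I2
    j = int(np.argmin(d))                   # index of the closest descriptor
    return j, d[j]
```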
Common Distance Measures
• Sum of squared differences (SSD)
• Mutual information (MI)
• Normalized mutual information (NMI)
• Cross-correlation (CC)
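For a pair of descriptor vectors, two of these measures are a one-liner each; SSD is smaller for more similar descriptors, normalized cross-correlation is closer to 1.

```python
import numpy as np

def ssd(f1, f2):
    return np.sum((f1 - f2) ** 2)            # sum of squared differences

def ncc(f1, f2):
    a, b = f1 - f1.mean(), f2 - f2.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```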

Matching features
• Accept a match if SSD(f1, f2) < t
• How do we choose t?

Matching features
• A better distance measure is the following: SSD(f1, f2) / SSD(f1, f2’)
‐ f2 is the best SSD match to f1 in I2
‐ f2’ is the 2nd-best SSD match to f1 in I2
[Figure: f1 in I1 with its best match f2 and second-best match f2’ in I2]

Matching features
• Accept a match if SSD(f1, f2) / SSD(f1, f2’) < t
• t = 0.8 has given good results in object recognition:
‐ 90% of false matches were eliminated
‐ Less than 5% of correct matches were discarded

Feature Matching Summary
Given a feature in I1, how to find the best match in I2?
1. Define a distance function that compares two descriptors
2. Test all the features in I2, find the one with min distance
• Simple approach is SSD(f1, f2)
‐ sum of squared differences between entries of the two descriptors
‐ can give good scores to very ambiguous (bad) matches
• Better approach: ratio distance = SSD(f1, f2) / SSD(f1, f2’)
‐ f2 is the best SSD match to f1 in I2
‐ f2’ is the 2nd-best SSD match to f1 in I2
‐ gives small values for ambiguous matches (see the code sketch at the end of this section)

Object Detection

What are Fitting and Alignment?
Fitting: find the parameters of a model that best fit the data
Alignment: find the parameters of the transformation that best align matched points
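The code sketch referred to above: SSD nearest neighbor plus the ratio test SSD(f1, f2) / SSD(f1, f2’) < 0.8 to reject ambiguous matches, following the summary slide.

```python
import numpy as np

def match_features(desc1, desc2, ratio=0.8):
    """desc1: (N1, D), desc2: (N2, D). Returns accepted (i, j) index pairs."""
    matches = []
    for i, f1 in enumerate(desc1):
        d = np.sum((desc2 - f1) ** 2, axis=1)   # SSD to every descriptor in I2
        j, j2 = np.argsort(d)[:2]               # best and second-best match
        if d[j] / (d[j2] + 1e-12) < ratio:      # ratio test
            matches.append((i, int(j)))
    return matches
```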