
2812ICT Perceptual Computing
Local Image Features

Outline
• Corner features
• Blob features

Local Feature: Corner Features
• Feature distinctiveness

Image matching
Hard
Harder

Harder still?
NASA Mars Rover images
Slide credit: T. Darrell

NASA Mars Rover images with SIFT feature matches
Figure by Noah Snavely. Slide credit: T. Darrell

Motivation for using local features
• Global representations have major limitations
• sensitive to occlusion, lighting, parallax effects
• Instead, describe and match only local regions
• Increased robustness to:
• Occlusions
• Articulation
• Intra-category variations
Slide credit: Fei-Fei Li

General Approach
Slide credit: Bastian Leibe

Local features and alignment
• Detect feature points in both images
[Darya Frolova and Denis Simakov]

Local features and alignment
• Detect feature points in both images
• Find corresponding pairs
[Darya Frolova and Denis Simakov]

Local features and alignment
• Detect feature points in both images
• Find corresponding pairs
• Use these pairs to align images
[Darya Frolova and Denis Simakov]

Local features and alignment
• Problem 1:
• Detect the same point independently in both images
no chance to match!
We need a repeatable detector
[Darya Frolova and Denis Simakov]

Local features and alignment
• Problem 2:
• For each point correctly recognize the corresponding one
We need a reliable and distinctive descriptor
[Darya Frolova and Denis Simakov]

Geometric Transformations
Slide credit: T. Darrell

Photometric transformations
Figure from T. Tuytelaars ECCV 2006 tutorial
Slide credit: T. Darrell

And other nuisances…
• Noise
• Blur
• Compression artifacts
• …
Slide credit: T. Darrell

Invariant local features
A subset of local feature types designed to be invariant to common geometric and photometric transformations.
Basic steps:
1) Detect distinctive interest points
2) Extract invariant descriptors
Figure: David Lowe

Main questions
• Where will the interest points come from?
• What are salient features that we’ll detect in multiple views?
• How to describe a local region?
• How to establish correspondences, i.e., compute matches?

Requirements
• Repeatable and accurate: Features should be
• Invariant to geometric transformations (e.g. affine transformations)
• Robust to lighting variations, noise, blur, quantization
• Locality: Features are local, therefore robust to occlusion and clutter.
• Quantity: We need a sufficient number of regions to cover the object.
• Distinctiveness: The regions should contain “interesting” structure.
• Efficiency: Close to real-time performance.

Slide credit: T. Darrell

Finding Corners
• Key property: in the region around a corner, image gradient has two or more dominant directions
• Corners are repeatable and distinctive
C. Harris and M. Stephens. “A Combined Corner and Edge Detector.” Proceedings of the 4th Alvey Vision Conference, 1988, pp. 147–151.
Source: Lana Lazebnik

Corners as distinctive interest points
• We should easily recognize the point by looking through a small window
• Shifting a window in any direction should give a large change in intensity
• “Flat” region: no change in any direction
• “Edge”: no change along the edge direction
• “Corner”: significant change in all directions
Source: A. Efros

Harris Detector Formulation
• Change of intensity for the shift [u,v]:
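In standard Harris–Stephens notation (a LaTeX transcription of the usual measure; w(x,y) is the window function discussed later in these slides):

```latex
E(u,v) = \sum_{x,y} w(x,y)\,\bigl[ I(x+u,\, y+v) - I(x,y) \bigr]^{2}
```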
Slide credit: Rick Szeliski

Simplifying the measure
• Local first-order Taylor series expansion of I(x,y):
• Local quadratic approximation of E(u,v):

Simplifying the measure
• Local quadratic approximation of the surface E(u,v):
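Written out (the standard result; Ix and Iy are the image derivatives from the Taylor expansion on the previous slide):

```latex
E(u,v) \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix},
\qquad
M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^{2} & I_x I_y \\ I_x I_y & I_y^{2} \end{bmatrix}
```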
The sum is over the image region (window) we are checking for a corner

Interpreting the second moment matrix
Horizontal edges: Ix= 0
Vertical edges: Iy= 0

What Does This Matrix Reveal?
• First, let’s consider an axis-aligned corner:
If either λ is close to 0, then this is not a corner, so look for locations where both are large.
What if we have a corner that is not aligned with the image axes?
Slide credit: David Jacobs

General Case
• Since M is symmetric, it can be diagonalized as shown below, where R is an orthogonal matrix built from the mutually orthogonal eigenvectors of M
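In LaTeX form (the standard eigendecomposition of a symmetric 2×2 matrix; since R is orthogonal, R^{-1} = R^T):

```latex
M = R^{-1} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} R
```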
• We can visualize M as an ellipse with axis lengths determined by the eigenvalues and orientation determined by R

Interpreting the eigenvalues
• Both λ1 and λ2 small: “flat” region
• One eigenvalue much larger than the other: “edge”
• Both λ1 and λ2 large: “corner”
Slide credit: Kristen Grauman

Corner Response Function
• Fast approximation
• Avoid computing the eigenvalues
• α: constant (0.04 to 0.06)
(Figure: axis-aligned case, θ ≈ 0)
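Written out, the fast approximation is the usual Harris measure (equivalent to the eigenvalue form on the right):

```latex
R = \det(M) - \alpha \,\operatorname{trace}(M)^{2} = \lambda_1 \lambda_2 - \alpha\, (\lambda_1 + \lambda_2)^{2}
```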

Window Function w(x,y)
• Option 1: uniform window
• Sum over a square window
• Problem: not rotation invariant
• Option 2: smooth with a Gaussian
• Gaussian performs a weighted sum
• Result is rotation invariant
Slide credit: B. Leibe

Summary: Harris Detector
1) Compute image gradients Ix and Iy
2) Form the second moment matrix M in a window w(x,y) around each pixel
3) Compute the corner response R
4) Threshold R
5) Perform non-maximum suppression (see the sketch below)
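A minimal sketch of these steps in Python with NumPy/SciPy; the function name harris_corners and the parameter defaults (sigma, alpha, threshold, nms_size) are illustrative choices, not part of the slides.

```python
import numpy as np
from scipy import ndimage

def harris_corners(image, sigma=1.5, alpha=0.05, threshold=0.01, nms_size=9):
    """Return (row, col) coordinates of Harris corners in a grayscale image."""
    img = image.astype(np.float64)

    # 1) Image gradients Ix, Iy (here via Sobel filters).
    Ix = ndimage.sobel(img, axis=1)
    Iy = ndimage.sobel(img, axis=0)

    # 2) Entries of the second moment matrix M, smoothed with a Gaussian window w(x,y).
    Ixx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Iyy = ndimage.gaussian_filter(Iy * Iy, sigma)
    Ixy = ndimage.gaussian_filter(Ix * Iy, sigma)

    # 3) Corner response R = det(M) - alpha * trace(M)^2, evaluated per pixel.
    det_M = Ixx * Iyy - Ixy ** 2
    trace_M = Ixx + Iyy
    R = det_M - alpha * trace_M ** 2

    # 4) + 5) Threshold R and keep only local maxima (non-maximum suppression).
    local_max = ndimage.maximum_filter(R, size=nms_size)
    corners = (R == local_max) & (R > threshold * R.max())
    return np.argwhere(corners)

if __name__ == "__main__":
    # Toy example: a white square on a black background has four corners.
    img = np.zeros((100, 100))
    img[30:70, 30:70] = 1.0
    print(harris_corners(img))
```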

Harris Detector – Responses
Slide credit: Krystian Mikolajczyk

Harris Detector – Responses
Slide credit: Kristen Grauman

Harris Detector: Properties
• Translation invariance
• Rotation invariance?

Harris Detector: Properties
• Translation invariance
• Rotation invariance
• Scale invariance?

Harris Detector: Scale robust corner detection
• Find the scale that gives a local maximum of the corner score

Automatic scale selection

Blob Features
• Stable in space and scale

Edges and Blobs

Edges and Blobs
• Edge: Ripple
• Blob: Superposition of two ripples
• The magnitude of the Laplacian response is maximum at the centre of the blob provided the scale of the Laplacian matches the scale of the blob

Laplacian of Gaussian
• Normalize to make the response independent of the scale σ of the Gaussian g
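The scale-normalized operator in LaTeX form (standard definition):

```latex
\nabla^{2}_{\text{norm}}\, g = \sigma^{2} \left( \frac{\partial^{2} g}{\partial x^{2}} + \frac{\partial^{2} g}{\partial y^{2}} \right)
```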

Characteristic scale
• We define the characteristic scale as the scale that produces the peak of the Laplacian response

Scale selection
• To get maximum response, the zeros of the Laplacian have to be aligned with the circle
• The Laplacian is given by (up to scale):
• Therefore, the maximum response occurs at
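Assuming the usual setup of a binary circle of radius r, the standard expressions are the 2D Laplacian of Gaussian (up to scale), whose zero crossing lies on the circle x² + y² = 2σ², and the scale of maximum response obtained by aligning that zero crossing with the circle:

```latex
\nabla^{2} g \propto \left( x^{2} + y^{2} - 2\sigma^{2} \right) e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}},
\qquad
\sigma_{\max} = \frac{r}{\sqrt{2}}
```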

Laplacian of Gaussian
• Circularly symmetric operator for blob detection in 2D
• Find maxima and minima of LoG operator in space and scale

LoG blob detector
• Convolve the image with scale-normalized LoG at several scales
• Find maxima of squared LoG response in scale-space
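A minimal sketch of this procedure in Python (NumPy/SciPy); the helper name log_blobs, the sigma list, and the threshold are illustrative assumptions, and skimage.feature.blob_log provides an off-the-shelf equivalent.

```python
import numpy as np
from scipy import ndimage

def log_blobs(image, sigmas, threshold=0.1):
    """Return (row, col, sigma) for maxima of the squared, scale-normalized
    LoG response over space and scale."""
    img = image.astype(np.float64)

    # 1) Scale-normalized LoG response at each scale: sigma^2 * LoG(img, sigma).
    #    Squaring makes both dark and bright blobs give positive peaks.
    responses = [(s ** 2 * ndimage.gaussian_laplace(img, s)) ** 2 for s in sigmas]
    cube = np.stack(responses, axis=-1)

    # 2) Local maxima in the (x, y, scale) volume, above a global threshold.
    local_max = ndimage.maximum_filter(cube, size=(3, 3, 3))
    peaks = (cube == local_max) & (cube > threshold * cube.max())

    rows, cols, scale_idx = np.nonzero(peaks)
    return [(r, c, sigmas[k]) for r, c, k in zip(rows, cols, scale_idx)]

if __name__ == "__main__":
    # Toy example: a bright disc of radius 10 should peak near sigma = 10 / sqrt(2).
    yy, xx = np.mgrid[0:128, 0:128]
    img = ((yy - 64) ** 2 + (xx - 64) ** 2 < 10 ** 2).astype(float)
    sigmas = [2 * 1.25 ** i for i in range(10)]   # roughly the scales in the example below
    print(log_blobs(img, sigmas))
```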

LoG blob detector
• Example

LoG blob detector
• Example: Squared LoG response at increasing scales
• Sigma = 2, 2.50, 3.13, 3.91, 4.90, 6.13, 7.66, 9.59, 11.99, 15 (consecutive scales differ by a factor of about 1.25)

LoG blob detector
• Example: Detected blobs

Efficient implementation
• Approximating the Laplacian with a difference of Gaussians (DoG)
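The relation behind the approximation, which follows from the heat-equation property ∂G/∂σ = σ∇²G (as derived in Lowe’s paper cited on the next slide):

```latex
G(x, y, k\sigma) - G(x, y, \sigma) \approx (k - 1)\,\sigma^{2} \nabla^{2} G
```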

Efficient implementation
David G. Lowe. “Distinctive image features from scale-invariant keypoints” IJCV 60 (2), pp. 91-110, 2004.

Difference of Gaussians (DoG)
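A short sketch of how the approximation is typically used to build a stack of DoG images, with the same NumPy/SciPy setup as the earlier snippets; sigma0 = 1.6 and k = 2^(1/3) are illustrative values in the spirit of Lowe’s paper, not prescribed by this slide.

```python
import numpy as np
from scipy import ndimage

def dog_stack(image, sigma0=1.6, k=2 ** (1 / 3), n_levels=5):
    """Return a list of DoG images G(x, y, k^(i+1)*sigma0) - G(x, y, k^i*sigma0)."""
    img = image.astype(np.float64)
    blurred = [ndimage.gaussian_filter(img, sigma0 * k ** i) for i in range(n_levels + 1)]
    return [blurred[i + 1] - blurred[i] for i in range(n_levels)]

if __name__ == "__main__":
    img = np.random.rand(64, 64)              # any grayscale image works here
    print([d.shape for d in dog_stack(img)])  # n_levels DoG images, same size as the input
```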

We have covered
• Corner detectors
• Stable in space
• Harris
• Blob detectors
• Stable in scale and space
• LoG, DoG
• Further reading
• David G. Lowe, “Distinctive image features from scale-invariant keypoints”
• T. Lindeberg, “Feature detection with automatic scale selection”

Next week:
• Feature descriptor: SIFT
• Feature matching