
Changjae Oh


Computer Vision
– Features: edges, corners –

Semester 1, 22/23

Derivatives and Edges

• Edges: intensity discontinuities

̶ Ramp edge

̶ Step edge


Derivatives and Edges

Effects of Noise in Edge Detection

Intensity profile

Source: D. Hoiem

Effects of Noise in Edge Detection

Source: D. Hoiem

With a little Gaussian noise

Effects of Noise in Edge Detection

• Consider a single row or column of the image

̶ Plotting intensity as a function of position

Where is the edge?

Source: S. Seitz

Effects of Noise in Edge Detection

• Difference filters respond strongly to noise

̶ Image noise results in pixels that look very different from their neighbors

̶ Generally, the larger the noise the stronger the response

• What can we do about it?

Source: D. Forsyth

Solution: Smooth First

To find edges, look for peaks in d/d𝑥 (𝑓 ∗ 𝑔)

Source: S. Seitz

Derivative Theorem of Convolution

• Differentiation is convolution, and convolution is associative:

d/d𝑥 (𝑓 ∗ 𝑔) = 𝑓 ∗ (d/d𝑥 𝑔)

• This saves us one operation:

Source: S. Seitz
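The identity above is easy to check numerically. A minimal NumPy sketch (not from the slides; signal length, kernel width, and noise level are assumed) showing that smoothing-then-differentiating equals a single convolution with a pre-differentiated kernel:

```python
import numpy as np

# Gaussian smoothing kernel g
x = np.linspace(-3, 3, 61)
g = np.exp(-x**2 / (2 * 0.5**2))
g /= g.sum()

# Noisy step edge f
rng = np.random.default_rng(0)
f = np.zeros(200)
f[100:] = 1.0
f += rng.normal(0, 0.05, f.size)

# Option 1: smooth, then differentiate (two convolutions)
two_step = np.convolve(np.convolve(f, g, mode='same'), [1, -1], mode='same')

# Option 2: differentiate the kernel once, then convolve once
dg = np.convolve(g, [1, -1], mode='same')   # derivative-of-Gaussian kernel
one_step = np.convolve(f, dg, mode='same')

# Interior samples agree (boundaries differ slightly due to truncation)
print(np.allclose(two_step[5:-5], one_step[5:-5], atol=1e-6))
```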

• Smoothed derivative removes noise, but blurs edge.

(Smoothing scales shown: 1 pixel, 3 pixels, 7 pixels)

Trade-off Between Smoothing and Localization

Source: D. Forsyth

Edge Detection: Derivative of Gaussian Filter

(Gaussian filter) ∗ [1 −1] = (derivative-of-Gaussian filter)

Note) Sobel operators approximate the derivative-of-Gaussian filter in a simple form.

Edge Detection: Sobel Filter

• Sobel filter

𝛻𝐼 = (𝐼𝑥, 𝐼𝑦) = (𝑆𝑥 ∗ 𝐼, 𝑆𝑦 ∗ 𝐼)

𝑀(𝑥, 𝑦) = √(𝐼𝑥² + 𝐼𝑦²) or |𝐼𝑥| + |𝐼𝑦|

For a color image,

𝑀(𝑥, 𝑦) = (𝑀𝑅(𝑥, 𝑦) + 𝑀𝐺(𝑥, 𝑦) + 𝑀𝐵(𝑥, 𝑦)) / 3

Gaussian filtering is often applied as pre-processing.

1) Apply Gaussian filter
2) Apply Sobel filter
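A minimal OpenCV sketch of this two-step recipe (the file name, kernel sizes, and sigma are assumed values, not from the slides):

```python
import cv2
import numpy as np

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file name

blur = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)     # 1) Gaussian filter

Ix = cv2.Sobel(blur, cv2.CV_64F, 1, 0, ksize=3)      # 2) Sobel: S_x * I
Iy = cv2.Sobel(blur, cv2.CV_64F, 0, 1, ksize=3)      #           S_y * I

M = np.sqrt(Ix**2 + Iy**2)         # magnitude sqrt(Ix^2 + Iy^2)
M_cheap = np.abs(Ix) + np.abs(Iy)  # or the cheaper |Ix| + |Iy|
```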

Edge Detection: Laplacian Filter

• Laplacian filter

 0  1  0
 1 −4  1
 0  1  0

More general form

For a color image,

𝑂(𝑥, 𝑦) = (𝑂𝑅(𝑥, 𝑦) + 𝑂𝐺(𝑥, 𝑦) + 𝑂𝐵(𝑥, 𝑦)) / 3

Edge Detection: Laplacian of Gaussian (LoG)

• Marr-Hildreth operator (Laplacian of Gaussian: LoG)

1. Apply the Gaussian filter to remove noise.

2. Apply the Laplacian filter.

Without Gaussian

With Gaussian smoothing
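A minimal OpenCV sketch of the LoG recipe above, contrasting the two cases (file name and parameter values are assumed):

```python
import cv2

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# Without Gaussian: the bare Laplacian amplifies noise
lap = cv2.Laplacian(img, cv2.CV_64F, ksize=3)

# With Gaussian smoothing (LoG): 1) Gaussian filter, 2) Laplacian filter
smooth = cv2.GaussianBlur(img, (7, 7), sigmaX=1.5)
log = cv2.Laplacian(smooth, cv2.CV_64F, ksize=3)
# Edges correspond to zero crossings of `log`
```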

Canny Edge Detector

• The most popular method for edge detection

• Three criteria for edge detection
̶ Low error rate of detection: finding all edges

̶ Localization of edges: computing precise locations of edges

̶ Single response: returning a single pixel for a single edge

Pipeline: Image Filtering (Low-pass & High-pass filters) → Non-maximum Suppression → Double Thresholding

Canny Edge Detector

Step 1) Image Gradient using Low-pass & High-pass filters
1. Apply low-pass and high-pass filters

– Gaussian filter → Sobel filter (default implementation in OpenCV)
– 1D derivative of Gaussian filter
– Difference of Gaussian filter

2. Compute the magnitude and angle of the edge

𝛻𝐺 = (𝐺𝑥, 𝐺𝑦)

𝑀(𝑥, 𝑦) = √(𝐺𝑥² + 𝐺𝑦²)

𝐴(𝑥, 𝑦) = tan⁻¹(𝐺𝑦/𝐺𝑥)
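A minimal OpenCV sketch of Step 1 (file name and threshold values are assumed). Steps 2 and 3 are described next; note that cv2.Canny runs the whole pipeline internally:

```python
import cv2
import numpy as np

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file name

blur = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)     # low-pass
Gx = cv2.Sobel(blur, cv2.CV_64F, 1, 0, ksize=3)      # high-pass
Gy = cv2.Sobel(blur, cv2.CV_64F, 0, 1, ksize=3)

M = np.hypot(Gx, Gy)                # M(x, y): edge magnitude
A = np.rad2deg(np.arctan2(Gy, Gx))  # A(x, y): edge angle

# Steps 2-3 (non-maximum suppression, double thresholding) are described next;
# cv2.Canny runs the whole pipeline with hysteresis thresholds (T_L, T_H)
edges = cv2.Canny(img, threshold1=100, threshold2=200)
```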

Canny Edge Detector

Step 2) Non-maximum suppression
• Key idea: keep only pixels whose edge magnitude 𝑀(𝑥, 𝑦) is the largest within a small window along the gradient direction

• Procedure

1. Within a small window (e.g. a 3 × 3 window) centered at (𝑥, 𝑦),
find the two neighbor pixels along the gradient direction 𝐴(𝑥, 𝑦)

2. Compare the edge magnitude 𝑀(𝑥, 𝑦) with those of these two neighbor pixels

𝑀(𝑥, 𝑦) = √(𝐺𝑥² + 𝐺𝑦²),  𝐴(𝑥, 𝑦) = tan⁻¹(𝐺𝑦/𝐺𝑥)

Canny Edge Detector

• But, the image is a discrete signal.
̶ So, quantize the gradient direction

Gradient direction, quantized:

𝐴(𝑥, 𝑦) = tan⁻¹(𝐺𝑦/𝐺𝑥) → nearest of 0°, 45°, 90°, 135°

Canny Edge Detector

Step 3) Double thresholding

̶ Hysteresis thresholding (𝑇𝐿 , 𝑇𝐻)

̶ If 𝑀(𝑥, 𝑦) > 𝑇𝐻, then (𝑥, 𝑦) is an edge

̶ If 𝑀(𝑥, 𝑦) < 𝑇𝐿, then (𝑥, 𝑦) is NOT an edge

̶ If 𝑇𝐿 ≤ 𝑀(𝑥, 𝑦) ≤ 𝑇𝐻,
• If a neighboring pixel of (𝑥, 𝑦) is an edge, then (𝑥, 𝑦) is an edge.
• Otherwise, (𝑥, 𝑦) is NOT an edge.

Canny Edge Detector Example

Original image (Lena) → compute gradients (X-derivative of Gaussian, Y-derivative of Gaussian, gradient magnitude) → before/after non-max suppression → final Canny edges

• After applying hysteresis thresholding (double thresholding)

Interest points: Corners

(Example image: 9300 Pkwy, Charlotte, NC)

Corners: Applications

• Corners are used for:

̶ Image alignment/3D reconstruction
• Find correspondences across different views

̶ Motion tracking/robot navigation
• Which points are good to track?

̶ Object recognition/Indexing and database retrieval
• Find patches likely to tell us something about the object category

Corner Detection: Basic Idea

• We should easily recognize the point by looking through a small window

• Shifting the window in any direction should give a large change in intensity

̶ “flat” region: no change in any direction
̶ “edge”: no change along the edge direction
̶ “corner”: significant change in all directions

Source: A. Efros

Corner Detection: Mathematics

Change in appearance of window 𝑤(𝑥, 𝑦) for the shift [𝑢, 𝑣]:

𝐸(𝑢, 𝑣) = Σ(𝑥,𝑦) 𝑤(𝑥, 𝑦) [𝐼(𝑥 + 𝑢, 𝑦 + 𝑣) − 𝐼(𝑥, 𝑦)]²

Corner Detection: Mathematics

The quadratic approximation simplifies to

𝐸(𝑢, 𝑣) ≈ [𝑢 𝑣] 𝑀 [𝑢 𝑣]ᵀ

where 𝑀 is a second moment matrix computed from image derivatives: a 2 × 2 matrix of image derivatives (averaged in the neighborhood of a point).

Corners as Distinctive Interest Points

Corner Response Function

𝑅 = det(𝑀) − 𝛼 trace(𝑀)² = 𝜆₁𝜆₂ − 𝛼(𝜆₁ + 𝜆₂)²

𝛼: constant (0.04 to 0.06)

Note) The eigenvalues are not needed. Instead, we can use the determinant and trace of the second moment matrix.

Harris Detector: Procedure

Step 1) Compute the 𝑀 matrix for each window to get its cornerness score.

Step 2) Find points whose surrounding window gave a large corner response (𝑅 > threshold)

Step 3) Take the points of local maxima,

i.e., perform non-maximum suppression
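A minimal OpenCV sketch of the three steps above (parameter and threshold values are assumed; cv2.cornerHarris computes 𝑀 and 𝑅 internally):

```python
import cv2
import numpy as np

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file name
gray = np.float32(img)

# Step 1: cornerness score R from the second moment matrix M
# (blockSize = window size, ksize = Sobel aperture, k = alpha in R)
R = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Step 2: threshold the response
corners = R > 0.01 * R.max()

# Step 3: non-maximum suppression - keep only local maxima of R
local_max = (R == cv2.dilate(R, np.ones((3, 3), np.uint8)))
corners = corners & local_max
```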

Harris Detector: Input

Perform corner detection on each image independently

Harris Detector: Step 1)

Compute corner response R

Harris Detector: Step 2)

Find points with large corner response: 𝑅 > threshold

Harris Detector: Step 3)

Take only the points of local maxima of R

Harris Detector: Results

Advantages? Drawbacks?

Affine intensity change

• Only derivatives are used
=> invariance to intensity shift: 𝐼 → 𝐼 + 𝑏

• Intensity scaling: 𝐼 → 𝑎𝐼


• Partially invariant to affine intensity change

𝐼 → 𝑎𝐼 + 𝑏

Slide from J. Hays

Image translation

• Derivatives and window function are shift-invariant

Corner location is covariant w.r.t. translation

Slide from J. Hays

Image rotation

Second moment ellipse rotates, but its shape (i.e., eigenvalues) remains the same

Corner location is covariant w.r.t. rotation

Slide from J. Hays

Image scaling

All points will be classified as edges

Corner location is not covariant w.r.t. scaling!
Slide from J. Hays

Not always the best

https://medium.com/pixel-wise/detect-those-corners-aba0f034078b

Corner detection: pros & cons

• Corner detection can localize in x-y, but not scale

Slide from J. Hays

Summary

• Edge detection

̶ Identify sudden changes (discontinuities) in an image

̶ Derivative of Gaussian, Sobel, Laplacian, Laplacian of Gaussian, Canny detector

• Canny edge detector

̶ The most popular edge detector
1. Low- and high-pass filtering

2. Non-maximum suppression

3. Double thresholding

• Harris corner detector

1. Compute the second moment matrix

2. Non-maximum suppression

Changjae Oh

Computer Vision
– Features: SIFT, SURF, and matching –

Semester 1, 22/23

• Feature Descriptor

̶ Scale Invariant Feature Transform (SIFT) descriptor

̶ Speeded Up Robust Features (SURF) descriptor

• Feature Matching

̶ Nearest Neighbor (NN) Matching


Harris corner detector – Pros & Cons

̶ Robust against shift/rotation transformations and brightness changes

̶ Not robust against scale changes

̶ Solution: scale-invariant blob detection, as used in SIFT

Scale Change

Scale-Controllable Kernel
→ LoG (DoG)

Blob detector

[Ref: Distinctive Image Features from Scale-Invariant Keypoints, IJCV 2004]

Keypoint detection with scale selection

• We want to extract keypoints with characteristic scales that are covariant
with respect to the image transformation

Automatic scale selection

Design a function 𝑓 whose response is equal for corresponding regions, even at different scales:

𝑓(𝐼(𝑥, 𝜎)) = 𝑓(𝐼(𝑥′, 𝜎′))

How to find corresponding patch sizes?
A: Blob detector

Basic idea

• Convolve the image with a “blob filter” at multiple scales and look for extrema of the filter response in the resulting scale space

T. Lindeberg, Feature detection with automatic scale selection,
IJCV 30(2), pp 77-116, 1998

http://www.nada.kth.se/cvap/abstracts/cvap198.html

Blob detection

• Find maxima and minima of blob filter response in space and scale

Blob filter

• Laplacian of Gaussian

̶ Circularly symmetric operator for blob detection in 2D

Laplacian of Gaussian (LoG)

(𝛻²𝐺) ∗ 𝐼 = 𝛻²(𝐺 ∗ 𝐼) = LoG ∗ 𝐼

𝜎: LoG parameter

LoG function
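A minimal SciPy sketch of this idea (the synthetic image and sigma values are assumed): filter with the LoG at several scales and inspect the response stack. The σ² factor anticipates the scale normalization discussed below:

```python
import numpy as np
from scipy import ndimage

# Synthetic image: one bright disk of radius 8 on a dark background
img = np.zeros((128, 128))
yy, xx = np.mgrid[:128, :128]
img[(yy - 64) ** 2 + (xx - 64) ** 2 < 8 ** 2] = 1.0

sigmas = [2, 4, 8, 16]
# sigma^2 * LoG = scale-normalized response (see the normalization slides)
stack = np.stack([(s ** 2) * ndimage.gaussian_laplace(img, s) for s in sigmas])

# A blob is an extremum in space AND scale; for a disk of radius r the
# normalized response peaks near sigma = r / sqrt(2)
s_idx, y, x = np.unravel_index(np.abs(stack).argmax(), stack.shape)
print(f'strongest response at sigma={sigmas[s_idx]}, location=({x}, {y})')
```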

Recall: Edge detection- Laplacian

Second derivative of Gaussian (Laplacian)

Edge = zero crossing of the second derivative

Source: S. Seitz

Laplacian of Gaussian (LoG)

• LoG applied on Edge

• LoG applied on Pulse (Blob):
̶ Superposition of two ripples, generating an extremal value at the center of the pulse (blob)

Convolving with a pulse signal, the 1-D LoG generates an extremal response when its sigma matches the width of the pulse.

Slides from Won

Scale-normalized LoG

Original signal → unnormalized Laplacian response → scale-normalized Laplacian response
(multiply the LoG by 𝜎² for the normalization)

Slides from Won

Blob Detection with 2-D LoG

LoG filtered scale-space

Source: http://www.cs.utah.edu/~jfishbau/advimproc/project1/

DoG (Difference of Gaussian)

( ) ( )( ) ,,,,2 yxGyxGLoG yyxxnorm += ( ) ( ) ,,,, yxGkyxGDoG −=

Normalized LoG ≈(k-1) x Difference of Gaussian (DoG)

• Use DoG instead of LoG
1) Gaussian Filtering : G(x,y,σ), G(x,y,kσ)
2) Subtraction

𝐷(𝑥, 𝑦, 𝜎) = (𝐼 ∗ 𝐺(𝑥, 𝑦, 𝑘𝜎)) − (𝐼 ∗ 𝐺(𝑥, 𝑦, 𝜎))
= 𝐼 ∗ (𝐺(𝑥, 𝑦, 𝑘𝜎) − 𝐺(𝑥, 𝑦, 𝜎))
= 𝐼 ∗ DoG(𝑥, 𝑦, 𝜎, 𝑘)

DoG(𝑥, 𝑦, 𝜎) ≈ (𝑘 − 1) LoG_norm(𝑥, 𝑦, 𝜎)

Scale-space blob detector: Example


SIFT descriptors

• Inspiration: complex neurons in the primary visual cortex

D. Lowe, Distinctive image features from scale-invariant keypoints,
IJCV 60 (2), pp. 91-110, 2004

http://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf

SIFT: Step 1) Scale Space Extrema Detection

Pipeline: Scale-Space Extrema Detection → Keypoint Localization → Orientation Assignment → Descriptor Construction

Scale-Space Extrema Detection
Detect candidate interest points, which are extrema in the scale-space domain.

Source: D. Lowe

SIFT: Step 1) Scale Space Extrema Detection

Ex) 2 levels per octave (S = 2)
– Need to generate S + 3 = 2 + 3 = 5 blurred images per octave
– 𝑘 = 2^(1/𝑆) = 2^(1/2) ≈ 1.414

First octave: 𝜎₀ = 0.707, then 𝑘𝜎₀, 𝑘²𝜎₀, 𝑘³𝜎₀, 𝑘⁴𝜎₀ = 1.0, 1.414, 2.0, 2.828
Second octave: starts at 1.414 and ends at 5.656 (the base 𝜎 doubles every octave)

Starting image: FirstOctave = −1 (the input is upsampled before the first octave)

Slides from Won

SIFT: Step 1) Scale Space Extrema Detection
Scale-Space Extrema Detection
– Compare a pixel with 26 pixels in the current and adjacent scales (green circles)
– Select a pixel as an extremum if it is larger/smaller than all 26 neighboring pixels
– Needs further localization and sampling

Source: D. Lowe
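A minimal NumPy sketch of this 26-neighbour test (the DoG stack here is a random stand-in; in a real pipeline it comes from the octave construction above):

```python
import numpy as np

def is_extremum(dog, s, y, x):
    """True if dog[s, y, x] is >= or <= all 26 neighbours (ties count, rare for floats)."""
    cube = dog[s-1:s+2, y-1:y+2, x-1:x+2]   # 3 x 3 x 3 block incl. the centre
    v = dog[s, y, x]
    return v == cube.max() or v == cube.min()

# Usage on a stand-in DoG octave: 5 scales of 64 x 64
dog = np.random.rand(5, 64, 64)
candidates = [(s, y, x)
              for s in range(1, dog.shape[0] - 1)
              for y in range(1, dog.shape[1] - 1)
              for x in range(1, dog.shape[2] - 1)
              if is_extremum(dog, s, y, x)]
print(len(candidates), 'extrema candidates')
```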

SIFT: Step 2) Key Point Localization
(1) Sub-pixel localization and removal of extrema with low contrast:

Use the Taylor series expansion of the scale-space function 𝐷 to find the maximum of the surface:

𝐷(𝐗) = 𝐷 + (∂𝐷/∂𝐗)ᵀ𝐗 + ½ 𝐗ᵀ(∂²𝐷/∂𝐗²)𝐗

𝐷 and its derivatives are evaluated at the sample point.

Sub-pixel location of the extremum: 𝐗̂ = −(∂²𝐷/∂𝐗²)⁻¹(∂𝐷/∂𝐗)

Function value at the extremum: 𝐷(𝐗̂) = 𝐷 + ½(∂𝐷/∂𝐗)ᵀ𝐗̂

If |𝐷(𝐗̂)| < Th, then discard the extremum.


SIFT: Step 2) Key Point Localization

(2) Delete edge-like features by checking the curvature via the Hessian matrix 𝐻
(the ratio 𝑟 of principal curvatures is computed from trace(𝐻)² / det(𝐻))

If 𝑟 > Th, then delete the extremum.


SIFT: Step 3) Orientation Assignment

1. Take 16×16 square window

2. Compute edge orientation for each 2×2 block in 16×16 square

3. Throw out weak edges (threshold gradient magnitude)

4. Create a histogram by accumulating the Gaussian-weighted edge magnitudes


𝐿(𝑥, 𝑦) = 𝐺(𝑥, 𝑦, 𝜎) ∗ 𝐼(𝑥, 𝑦)

𝑚(𝑥, 𝑦) = √( (𝐿(𝑥+1, 𝑦) − 𝐿(𝑥−1, 𝑦))² + (𝐿(𝑥, 𝑦+1) − 𝐿(𝑥, 𝑦−1))² )

𝜃(𝑥, 𝑦) = tan⁻¹( (𝐿(𝑥, 𝑦+1) − 𝐿(𝑥, 𝑦−1)) / (𝐿(𝑥+1, 𝑦) − 𝐿(𝑥−1, 𝑦)) )

angle histogram

SIFT: Step 4) Descriptor Construction

1. Normalize the window to 16×16 using the keypoint orientation and scale.

2. For each 4×4 block, compute a gradient histogram over 8 directions.
Gradient magnitudes are weighted by a Gaussian of variance half the window (for smooth fall-off).

SIFT descriptor: 128-D vector


SIFT: Step 4) Descriptor Construction

3. Concatenate the 8-D vectors of the 4×4 array into a 128-D vector and normalize its magnitude to unit length.

4. Threshold gradient magnitudes to avoid excessive influence of high gradients:
1. After normalization, clamp gradient values > 0.2
2. Renormalize the vector

SIFT Descriptor: Binning of Spatial Coordinates and
Gradient Orientations

Properties of SIFT

• Extraordinarily robust detection and description technique

̶ Can handle changes in viewpoint
• Up to about 60 degree out-of-plane rotation

̶ Can handle significant changes in illumination
• Sometimes even day vs. night

̶ Fast and efficient; can run in real time

Source: N. Snavely
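The full SIFT pipeline (all four steps above) is available in OpenCV. A minimal usage sketch, assuming an input file name:

```python
import cv2

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file name

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Each keypoint carries location, scale, and orientation; each descriptor is 128-D
print(len(keypoints), descriptors.shape)  # e.g. (N, 128)
vis = cv2.drawKeypoints(img, keypoints, None,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
```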

SURF (Speeded Up Robust Features)

• SIFT is one of the best descriptors, but it is slow

• Using integral images for an efficient implementation

• Detect and describe SIFT like features

• SURF describes an image about three times faster than SIFT

• SURF is not as good as SIFT in invariance to illumination and viewpoint changes

• Keypoint detection based on Hessian matrix: blob-like features

Bay, H., Tuytelaars, T., Gool, L.V., “SURF: Speeded Up Robust Features”, ECCV 2006.
Bay, H. et al., “Speeded-Up Robust Features (SURF)”, Computer Vision and Image Understanding, 2008

Keypoint detection uses the determinant of the Hessian:

det 𝐻(𝐩, 𝜎) = 𝐿𝑥𝑥(𝐩, 𝜎)𝐿𝑦𝑦(𝐩, 𝜎) − 𝐿𝑥𝑦(𝐩, 𝜎)²

where 𝐿𝑥𝑥(𝐩, 𝜎) is the convolution of the second-order Gaussian derivative ∂²𝐺(𝜎)/∂𝑥² with the image 𝐼 at 𝐩, and similarly for 𝐿𝑥𝑦(𝐩, 𝜎) and 𝐿𝑦𝑦(𝐩, 𝜎).

These are approximated with box filters evaluated on integral images (accumulated sums of grayscale pixel values): the sum over any rectangle needs only four lookups,

𝑆 = 𝐴 − 𝐵 − 𝐶 + 𝐷
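A minimal NumPy sketch of the integral image and the four-lookup box sum (the corner labeling here is one common convention and may differ from the slide's figure):

```python
import numpy as np

img = np.random.randint(0, 256, (100, 100)).astype(np.int64)

# Integral image: ii[y, x] = sum of img over the rectangle from (0, 0) to (y, x)
ii = img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1+1, x0:x1+1] from four integral-image lookups."""
    A = ii[y1, x1]
    B = ii[y0 - 1, x1] if y0 > 0 else 0
    C = ii[y1, x0 - 1] if x0 > 0 else 0
    D = ii[y0 - 1, x0 - 1] if y0 > 0 and x0 > 0 else 0
    return A - B - C + D

assert box_sum(ii, 10, 10, 20, 20) == img[10:21, 10:21].sum()
```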

• SURF is more than three times faster than SIFT

• SURF is inferior to SIFT for luminance and viewpoint changes.

• SURF integrates the gradient information within a subpatch, whereas SIFT depends on
the orientations of the individual gradients. This makes SURF less sensitive to noise.

• Feature Descriptor

̶ Scale Invariant Feature Transform (SIFT) descriptor

̶ Speeded Up Robust Features (SURF) descriptor

• Feature Matching

̶ Nearest Neighbor (NN) Matching

How do we decide which features match?

Overview of Feature Matching

1. Find a set of distinctive keypoints

2. Define a region around each keypoint

3. Extract and normalize the region content

4. Compute a local descriptor from the normalized region

5. Match local descriptors

Slide from K. Grauman, B. Leibe

Feature Matching

• Nearest neighbor matching

̶ One feature matches to another if those features are nearest neighbors and their
distance is below some threshold.

• Problems

̶ Threshold 𝑇 is difficult to set

̶ Non-distinctive features could have lots of close matches, only one of which is correct

Notation: {𝑓𝑖 | 𝑖 = 1, …, 𝑁} for 𝐼₁ and {𝑔𝑗 | 𝑗 = 1, …, 𝑀} for 𝐼₂

𝑔𝑘 = argmin over 𝑗 of 𝑑𝑖𝑠𝑡(𝑓𝑖, 𝑔𝑗); if 𝑑𝑖𝑠𝑡(𝑓𝑖, 𝑔𝑘) < 𝑇, then 𝑁𝑁(𝑓𝑖) = 𝑔𝑘

Feature Matching

• Simple solution: cross-checking technique

min over 𝑗 of 𝑑𝑖𝑠𝑡(𝑓𝑖, 𝑔𝑗) → 𝑁𝑁(𝑓𝑖) = 𝑔𝑘
min over 𝑙 of 𝑑𝑖𝑠𝑡(𝑓𝑙, 𝑔𝑘) → 𝑁𝑁(𝑔𝑘) = 𝑓𝑙

If 𝑖 = 𝑙, the match is assumed to be reliable. Otherwise, the match is unreliable.

Feature Matching

• Simple solution
̶ Refine matched points by thresholding the ratio of the nearest to the 2nd-nearest descriptor distance

𝑔𝑘₁ = nearest, 𝑔𝑘₂ = second-nearest neighbor of 𝑓𝑖 under 𝑑𝑖𝑠𝑡(𝑓𝑖, 𝑔𝑗)

𝑑𝑖𝑠𝑡(𝑓𝑖, 𝑔𝑘₁) / 𝑑𝑖𝑠𝑡(𝑓𝑖, 𝑔𝑘₂) < 𝑇𝑟 → 𝑁𝑁(𝑓𝑖) = 𝑔𝑘₁

This gets rid of 90% of false matches and loses 5% of true matches in Lowe's study.

Feature Matching and Fitting

• Feature matching → image stitching using geometric transform estimation
• Feature matching → fundamental matrix estimation

Next topic

• We've learned how to process pixels and detect features, e.g. edges, corners, blobs.
• Next: a higher-level, more compact representation of the features in an image, obtained by grouping multiple features according to a simple model
̶ Prerequisite
• Review EBU6230 Image/Video Processing – Week 3: Interest points
• Review EBU7240 Computer Vision – Week 1: Features (what you learned today)
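Wrapping up the matching discussion above, a minimal OpenCV sketch combining SIFT features with cross-checking and the ratio test (file names and the 0.75 ratio threshold are assumed values):

```python
import cv2

img1 = cv2.imread('img1.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file names
img2 = cv2.imread('img2.png', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Cross-checking: keep (f_i, g_k) only if each is the other's nearest neighbour
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
cross_checked = bf.match(des1, des2)

# Ratio test: nearest / second-nearest distance below T_r (0.75 assumed)
bf2 = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in bf2.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
```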