
Fundamentals of Computer Vision
Lecture

Overview of today’s lecture
• Back to warping: image homographies.
• Computing with homographies.
• RANSAC: Random Sample Consensus

Slide credits
Most of these slides were adapted from:
• Kris Kitani (15-463, Fall 2016), Ioannis Gkioulekas (16-385, Spring 2019), Robert Collins (454, Fall 2019).
Some slides were inspired or taken from:
• Fredo Durand (MIT).
• James Hays (Georgia Tech).

Determining the Homography Matrix
Re-write x′ ≡ Hx in matrix form as a linear system in the entries of H.
How many equations from one point correspondence? (Two: one for x′ and one for y′.)

Determining the Homography Matrix
Stack together the constraints from multiple point correspondences:
• Homogeneous linear least squares problem
• Solve with SVD
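For a single correspondence (x, y) ↔ (x′, y′), one standard way to write the two constraint rows (sign conventions vary by textbook; h is the 9-vector of the entries of H in row-major order):

[ x  y  1  0  0  0  −x′x  −x′y  −x′ ] · h = 0
[ 0  0  0  x  y  1  −y′x  −y′y  −y′ ] · h = 0

Stacking n ≥ 4 correspondences gives a 2n × 9 matrix A with Ah = 0.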

Homogeneous Linear Least Squares
We will frequently encounter problems of the form
Ah = 0
This is known as the Homogeneous Linear Least Squares problem. It is similar in appearance to the inhomogeneous linear least squares problem:
Ah = b
In that case we solve for h using the pseudoinverse or inverse of A. This won't work for Ah = 0: the trivial answer h = 0 always satisfies it, so we need the unit-norm constraint ‖h‖ = 1.
Instead we solve it using Singular Value Decomposition (SVD). We first compute the SVD of A:
A = UΣVᵀ
where U and V are ortho-normal and Σ is diagonal. Equivalently,
A = Σᵢ σᵢ uᵢ vᵢᵀ
and we solve under the unit-norm constraint ‖h‖ = 1.
David Kriegman


Homogeneous Linear Least Squares
The singular values σᵢ will be sorted in descending order, so σ₉ will be the smallest. There are three cases for the value of σ₉:
If the homography is exactly determined, then σ₉ = 0, and there exists a homography that fits the points exactly.
If the homography is overdetermined, then σ₉ ≥ 0. Here σ₉ represents a "residual" or goodness of fit.
We will not handle the case of the homography being underdetermined.
From the SVD we take the "right singular vector" (a column from V) which corresponds to the smallest singular value, σ₉.
This is the solution, h, which contains the coefficients of the homography matrix that best fits the points.
We reshape h into the 3×3 matrix H, giving the mapping P′ = H · P.
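A minimal NumPy sketch of this DLT-plus-SVD recipe (the function name and row convention are my own; it assumes at least four correspondences):

```python
import numpy as np

def fit_homography(pts, pts_prime):
    """Estimate H such that pts_prime ~ H @ pts (DLT + SVD).

    pts, pts_prime: (n, 2) arrays of matching (x, y) points, n >= 4.
    """
    A = []
    for (x, y), (xp, yp) in zip(pts, pts_prime):
        # Two constraint rows per point correspondence.
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    A = np.asarray(A)

    # The right singular vector of the smallest singular value
    # (last row of Vt) is the unit-norm minimizer of ||Ah||.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)
```

To apply the result, map p′ = H @ [x, y, 1] and divide by the third coordinate.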

Homogeneous Linear Least Squares: Alternate Derivation
Starting again with the equation Ah = 0, the sum-squared error can be posed as:
minimize ‖Ah‖² subject to ‖h‖² = 1
Taking the derivative of f(h) = hᵀAᵀAh with respect to h and setting the result to zero, we get AᵀAh = λh.
Looking at the eigen-decomposition of AᵀA, we see that h should equal the eigenvector of AᵀA that has an eigenvalue of zero (or, in the presence of noise, the eigenvalue closest to zero).
Given a matrix A with SVD A = UΣVᵀ, the columns of V correspond to the eigenvectors of AᵀA. This result is identical to the result obtained using SVD.
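A quick NumPy check of this equivalence (the random A here is illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 9))  # e.g., 6 correspondences, 2 rows each

# SVD route: right singular vector of the smallest singular value.
_, _, Vt = np.linalg.svd(A)
h_svd = Vt[-1]

# Eigen route: eigenvector of A^T A with the smallest eigenvalue.
w, V = np.linalg.eigh(A.T @ A)  # eigh sorts eigenvalues ascending
h_eig = V[:, 0]

# Identical up to sign (both are unit-norm minimizers of ||Ah||).
assert np.allclose(np.abs(h_svd), np.abs(h_eig))
```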

Linear least squares estimation only works when the transform function is linear! (duh)
Also doesn’t deal well with outliers.

How to stitch together a panorama?
• Basic procedure
  ✓ Take a sequence of images from the same position (rotate the camera about its optical center)
  ✓ Compute the homography (transformation) between the second image and the first
  • Transform the second image to overlap with the first
  • Blend the two together to create a mosaic
  • (If there are more images, repeat)
Modified from Steve Seitz
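A compact OpenCV sketch of this pipeline (file names are placeholders, and the naive overwrite stands in for real blending; assumes opencv-python with SIFT available):

```python
import cv2
import numpy as np

img1 = cv2.imread("left.jpg")   # placeholder file names
img2 = cv2.imread("right.jpg")

# 1. Detect features and match them between the two images.
sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d2, d1, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test

# 2. Estimate the homography mapping image 2 into image 1 (RANSAC).
src = np.float32([k2[m.queryIdx].pt for m in good])
dst = np.float32([k1[m.trainIdx].pt for m in good])
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 3. Warp image 2 onto a canvas containing image 1, then "blend".
h, w = img1.shape[:2]
canvas = cv2.warpPerspective(img2, H, (2 * w, h))
canvas[:h, :w] = img1  # naive overwrite in the overlap region
cv2.imwrite("mosaic.jpg", canvas)
```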

Forward Warping
[Figure: pixel f(x, y) in image 1 is mapped by H(x, y) to g(x′, y′) on the image 2 canvas.]
Forward warping:
• Form enough pixel-to-pixel correspondences between two images
• Solve for linear transform parameters as before
• Send each pixel f(x, y) to its corresponding location (x′, y′) = H(x, y) in the right image
What is the problem with this?
Modified from Alyosha Efros

Forward Warping
[Figure: f(x, y) mapped by H(x, y) to g(x′, y′).]
Q: What if a pixel lands "between" two pixels?
A: Distribute its color among the neighboring pixels (x′, y′) ("splatting").
Q: What if a pixel (x′, y′) receives intensity from more than one pixel (x, y)?
A: We average their intensity contributions.
Alyosha Efros
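A minimal NumPy sketch of forward warping with nearest-pixel splatting and averaging (grayscale; the explicit loops are for clarity, not speed):

```python
import numpy as np

def forward_warp(f, H, out_shape):
    """Forward-warp grayscale image f by a 3x3 homography H.

    Splats each source pixel onto its nearest target pixel and
    averages when several source pixels land on the same target.
    """
    acc = np.zeros(out_shape)  # accumulated intensity per target pixel
    cnt = np.zeros(out_shape)  # number of contributions per target pixel
    h, w = f.shape
    for y in range(h):
        for x in range(w):
            p = H @ np.array([x, y, 1.0])
            xp, yp = p[0] / p[2], p[1] / p[2]        # dehomogenize
            xi, yi = int(round(xp)), int(round(yp))  # nearest target pixel
            if 0 <= yi < out_shape[0] and 0 <= xi < out_shape[1]:
                acc[yi, xi] += f[y, x]
                cnt[yi, xi] += 1
    # Average contributions; pixels with cnt == 0 remain holes (value 0).
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
```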

Inverse Warping
[Figure: each target pixel g(x′, y′) on the image 2 canvas pulls its value from (x, y) = H⁻¹(x′, y′) in image 1.]
Inverse warping:
• Form enough pixel-to-pixel correspondences between two images
• Solve for linear transform parameters as before, then compute the inverse
• Get each pixel g(x′, y′) from its corresponding location (x, y) = H⁻¹(x′, y′) in the left image
What is the problem with this?
Modified from Alyosha Efros

Inverse Warping
[Figure: g(x′, y′) pulls its value from (x, y) = H⁻¹(x′, y′).]
Inverse warping:
• Form enough pixel-to-pixel correspondences between two images
• Solve for linear transform parameters as before, then compute the inverse
• Get each pixel g(x′, y′) from its corresponding location (x, y) = H⁻¹(x′, y′) in the left image
Q: What if the pixel comes from "between" two pixels?
A: Interpolate its color value from the neighbors.
Alyosha Efros

Bilinear interpolation
Grayscale example:
1. Interpolate to find R₂
2. Interpolate to find R₁
3. Interpolate to find P
In matrix form (with adjusted coordinates).
In Matlab: >> help interp2
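In Python, a hand-rolled sketch (equivalent in spirit to Matlab's interp2) combining inverse warping with bilinear interpolation; the function names are my own:

```python
import numpy as np

def bilinear(f, x, y):
    """Bilinearly interpolate grayscale image f at real-valued (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    a, b = x - x0, y - y0
    # Weighted average of the four surrounding pixels.
    return ((1 - a) * (1 - b) * f[y0, x0]
            + a * (1 - b) * f[y0, x0 + 1]
            + (1 - a) * b * f[y0 + 1, x0]
            + a * b * f[y0 + 1, x0 + 1])

def inverse_warp(f, H, out_shape):
    """For each target pixel, pull a bilinearly interpolated value
    from (x, y) = H^-1 (x', y') in the source image f."""
    Hinv = np.linalg.inv(H)
    g = np.zeros(out_shape)
    for yp in range(out_shape[0]):
        for xp in range(out_shape[1]):
            p = Hinv @ np.array([xp, yp, 1.0])
            x, y = p[0] / p[2], p[1] / p[2]
            if 0 <= x < f.shape[1] - 1 and 0 <= y < f.shape[0] - 1:
                g[yp, xp] = bilinear(f, x, y)
    return g  # no holes: every target pixel is visited exactly once
```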

Forward vs inverse warping
Suppose we have two images.
• How do we compute the transform that takes one to the other?
[Figure: f(x, y) maps to g(x′, y′) via H(x, y); H⁻¹(x′, y′) maps back.]
• Inverse warping eliminates holes in target image
• Forward warping does not require existence of inverse transform

Recap: robust feature-based alignment
• Extract features
• Compute putative matches
• Loop:
  • Hypothesize transformation T (from a small group of putative matches related by T)
  • Verify transformation (search for other matches consistent with T)
• Blend the two together to create a mosaic
Source: L. Lazebnik

Random Sample Consensus (RANSAC)

Bad Data => Outliers
Loosely speaking, outliers are points that don't "fit" the model; points that do fit are called "inliers".
[Figure: inliers lie near the fitted line; two outliers lie far from it.]
Robert Collins

Fitting lines (with outliers)
Algorithm:
1. Sample (randomly) the number of points required to fit the model
2. Solve for model parameters using the samples
3. Score by the fraction of inliers within a preset threshold of the model
Repeat 1-3 until the best model is found with high confidence.
[Figure sequence: successive random samples score Count = 4, Count = 6, Count = 13, and Count = 19 inliers; the sample scoring Count = 19 gives the best line.]
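A minimal NumPy sketch of this loop for line fitting (the iteration count and threshold values are illustrative):

```python
import numpy as np

def ransac_line(pts, n_iters=100, thresh=1.0, seed=0):
    """Fit a line to 2-D points pts (n, 2) with RANSAC.

    Returns the boolean inlier mask of the best model found.
    """
    rng = np.random.default_rng(seed)
    best_mask, best_count = None, -1
    for _ in range(n_iters):
        # 1. Sample the minimum number of points for a line: two.
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        if np.allclose(p, q):
            continue  # degenerate sample
        # 2. Solve for the model: unit normal of the line through p and q.
        d = q - p
        n = np.array([-d[1], d[0]])
        n = n / np.linalg.norm(n)
        # 3. Score: inliers are points within thresh of the line.
        dist = np.abs((pts - p) @ n)
        mask = dist < thresh
        if mask.sum() > best_count:
            best_count, best_mask = int(mask.sum()), mask
    return best_mask  # refit the line to these inliers with least squares
```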

How to choose parameters?
• Initial number of sampled points s
  – Minimum number needed to fit the model
• Number of samples (iterations) N
  – Choose N so that, with probability p, at least one random sample is free from outliers (e.g., p = 0.99), where the outlier ratio is e = (# of outliers) / (# of data points)
• Distance threshold δ
  – Choose δ so that a good point with noise is likely (e.g., with probability 0.95) to fall within the threshold
  – For zero-mean Gaussian noise with standard deviation σ: δ² = 3.84σ²

How to Calculate N?
• s: the number of sampled points (minimum number needed to fit the model)
• p: desired probability of success
• e: proportion of outliers, so the proportion of inliers is 1 − e
• Probability that a sample set contains all inliers: (1 − e)ˢ
• Probability that a sample set contains one or more outliers: 1 − (1 − e)ˢ
• Probability that all N sample sets contain at least one outlier: (1 − (1 − e)ˢ)ᴺ
• We want this failure probability to equal 1 − p
• Then (1 − (1 − e)ˢ)ᴺ = 1 − p, so N = log(1 − p) / log(1 − (1 − e)ˢ)
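Solving for N directly (a small Python helper; rounding up to an integer is my addition):

```python
import math

def ransac_num_samples(s, e, p=0.99):
    """Number of RANSAC iterations N so that, with probability p,
    at least one sample of size s is outlier-free, given outlier ratio e."""
    return math.ceil(math.log(1 - p) / math.log(1 - (1 - e) ** s))

# Line fitting (s = 2) with 20% outliers: N = 5, matching the
# worked example and the table that follow.
print(ransac_num_samples(s=2, e=0.20))  # -> 5
```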

Example: N for the line-fitting problem
• Total data points = 12
• Minimal sample size s = 2
• 2 outliers: e = 2/12 ≈ 17%, so use the 20% column
• So N = 5 gives us a 99% chance of getting a pure-inlier sample
  – Compared to N = 66 by trying every pair of points
from Hartley & Zisserman
Robert Collins

Acceptable consensus set?
• We don't have to exhaustively sample subsets of points; we just need to randomly sample N subsets.
• We don't even have to sample all N sets!
• Early termination: stop once the inlier ratio of the best model reaches the expected ratio of inliers.
Robert Collins

How to Calculate N?
We must choose N so that there is a p = 0.99 chance that at least one sample is all inliers.
Samples N required, by sample size s and proportion of outliers e:

  s \ e    5%   10%   20%   25%   30%   40%   50%
  2         2     3     5     6     7    11    17
  3         3     4     7     9    11    19    35
  4         3     5     9    13    17    34    72
  5         4     6    12    17    26    57   146
  6         4     7    16    24    37    97   293
  7         4     8    20    33    54   163   588
  8         5     9    26    44    78   272  1177

For example, a homography (s = 4) with 20% outliers needs only N = 9 samples.

Given two images…
find matching features (e.g., SIFT) and a translation transform

Matched points will usually contain bad correspondences.
[Figure: matched features between the two images; one match is labeled a good correspondence.]
How should we estimate the transform?

LLS (linear least squares) will find the 'average' transform.
[Figure: the 'average' transform fit across all matches.]
The solution is corrupted by bad correspondences.

Use RANSAC
How many correspondences are needed to compute a translation transform?

We need only one correspondence to fit a translation model.

Pick one correspondence, count inliers
[Figure sequence: one sampled correspondence gives 2 inliers; another gives 5 inliers.]
Pick the model with the highest number of inliers!

Estimating homography using RANSAC
• RANSAC loop:
  1. Get four point correspondences (randomly)
  2. Compute H using DLT
  3. Count inliers
  4. Keep H if it has the largest number of inliers
• Recompute H using all inliers
Feature matching and homography estimation: do both simultaneously using RANSAC.
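A NumPy sketch of this loop (reusing the compact DLT fit from earlier; the threshold and iteration count are illustrative, and degenerate 4-point samples are not guarded against):

```python
import numpy as np

def fit_homography(src, dst):
    """DLT: H mapping src -> dst, each an (n, 2) array, n >= 4."""
    A = []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    return np.linalg.svd(np.asarray(A))[2][-1].reshape(3, 3)

def ransac_homography(src, dst, n_iters=500, thresh=3.0, seed=0):
    """RANSAC loop: sample 4 correspondences, fit H by DLT, count inliers."""
    rng = np.random.default_rng(seed)
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous points
    best_mask, best_count = None, -1
    for _ in range(n_iters):
        idx = rng.choice(len(src), size=4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        proj = src_h @ H.T                 # project all source points
        proj = proj[:, :2] / proj[:, 2:3]  # dehomogenize
        err = np.linalg.norm(proj - dst, axis=1)
        mask = err < thresh                # reprojection-error inliers
        if mask.sum() > best_count:
            best_count, best_mask = int(mask.sum()), mask
    # Recompute H using all inliers of the best model.
    return fit_homography(src[best_mask], dst[best_mask])
```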

RANSAC pros and cons
• Pros
• Simple and general
• Applicable to many different problems
• Often works well in practice
• Cons
• Lots of parameters to tune
• Doesn’t work well for low inlier ratios (too many iterations, or can fail completely)
• Can’t always get a good initialization of the model based on the minimum number of samples

References
Basic reading:
• Szeliski textbook, Section 6.1.
Additional reading:
• Hartley and Zisserman, "Multiple View Geometry," Cambridge University Press, 2003. As usual when it comes to geometry and vision, this book is the best reference; Sections 2 and 4 in particular discuss everything about homography estimation.