
Chapter 4
Estimating Point Correspondence
Multiple View Geometry
Summer 2021

Prof. Daniel Cremers
Chair for Computer Vision and Artificial Intelligence

Departments of Informatics & Mathematics
Technical University of Munich


Overview

1 From Photometry to Geometry

2 Small Deformation & Optical Flow

3 The Lucas-Kanade Method

4 Feature Point Extraction

5 Wide Baseline Matching


From Photometry to Geometry

In the last sections, we discussed how points and lines are
transformed from 3D world coordinates to 2D image and pixel
coordinates.

In practice, we do not actually observe points or lines, but rather brightness or color values at the individual pixels. In order to go from this photometric representation to a geometric representation of the scene, one can identify points with characteristic image features and try to associate these points with corresponding points in the other frames.

The matching of corresponding points will allow us to infer 3D
structure. Nevertheless, one should keep in mind that this
approach is suboptimal: By selecting a small number of feature
points from each image, we throw away a large amount of
potentially useful information contained in each image. Yet,
retaining all image information is computationally challenging.
The selection and matching of a small number of feature
points, on the other hand, allows tracking of 3D objects from a
moving camera in real time – even with limited processing
power.


Example of Tracking

(Figures: input frame 1, input frame 2, and the wire-frame reconstruction with texture map.)


Example of Tracking

(Figures: tracked input sequence and textured reconstruction.)


Identifying Corresponding Points

(Figures: input frame 1 and input frame 2.)

Identifying corresponding points in two or more images is one of the biggest challenges in computer vision. Which of the points identified in the left image corresponds to which point in the right one?


Non-rigid Deformation

In what follows we will assume that objects move rigidly.

However, in general, objects may also deform non-rigidly.
Moreover, there may be partial occlusions:

(Figures: image 1, image 2, and the resulting registration.)

Cremers, Guetter, Xu, CVPR ’06


Small Deformation versus Wide Baseline

In point matching one distinguishes two cases:

• Small deformation: The deformation from one frame to the other is assumed to be (infinitesimally) small. In this case the displacement from one frame to the other can be estimated by classical optic flow estimation, for example using the methods of Lucas/Kanade or Horn/Schunck. In particular, these methods allow one to model dense deformation fields (giving a displacement for every pixel in the image). But one can also track the displacement of a few feature points only, which is typically faster.

• Wide baseline stereo: In this case the displacement is assumed to be large. A dense matching of all points to all points is in general computationally infeasible. Therefore, one typically selects a small number of feature points in each of the images and develops efficient methods to find an appropriate pairing of points.


Small Deformation
The transformation of all points of a rigidly moving object is
given by:

$$x_2 = h(x_1) = \frac{1}{\lambda_2(X)}\,\bigl(R\,\lambda_1(X)\,x_1 + T\bigr).$$

Locally this motion can be approximated in several ways.

• Translational model:
$$h(x) = x + b.$$

• Affine model:
$$h(x) = Ax + b.$$

The 2D affine model can also be written as
$$h(x) = x + u(x)$$
with
$$u(x) = S(x)\,p = \begin{pmatrix} x & y & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & x & y & 1 \end{pmatrix} (p_1,\,p_2,\,p_3,\,p_4,\,p_5,\,p_6)^\top.$$
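To make the affine model concrete, here is a minimal sketch in Python/NumPy (not part of the original slides; the function name and the example parameter vector are illustrative assumptions) that evaluates the displacement u(x) = S(x)p at a single pixel:

import numpy as np

def affine_displacement(x, y, p):
    """Evaluate u(x) = S(x) p for the 2D affine motion model h(x) = x + u(x)."""
    S = np.array([[x, y, 1.0, 0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0, x, y, 1.0]])
    return S @ np.asarray(p, dtype=float)

# Example: evaluate an (arbitrary, illustrative) affine field at pixel (10, 5)
p = np.array([0.0, -0.01, 0.5, 0.01, 0.0, -0.2])
print(affine_displacement(10.0, 5.0, p))   # 2D displacement (u1, u2) at that pixel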


Optic Flow Estimation

The optic flow refers to the apparent 2D motion field observable between consecutive images of a video. It is generally different from the motion of objects in the scene: in the extreme case of motion along the camera axis, for example, there is no optic flow, whereas camera rotation generates an optic flow field even for an entirely static scene.

In 1981, two seminal works on optic flow estimation were
published, namely the works of Lucas & Kanade, and of Horn
& Schunck. Both methods have become very influential with
thousands of citations. They are complementary in the sense
that the Lucas-Kanade method generates sparse flow vectors
under the assumption of constant motion in a local
neighborhood, whereas the Horn-Schunck method generates a
dense flow field under the assumption of spatially smooth
motion. Despite more than 30 years of research, the estimation
of optic flow fields is still a highly active research direction.

Due to its simplicity, we will review the Lucas-Kanade method.


The Lucas-Kanade Method

• Brightness Constancy Assumption: Let x(t) denote a moving point at time t, and I(x, t) a video sequence; then

$$I(x(t), t) = \text{const.} \quad \forall t,$$

i.e. the brightness of the point x(t) is constant. Therefore the total time derivative must be zero:

$$\frac{d}{dt}\, I(x(t), t) = \nabla I^\top \frac{dx}{dt} + \frac{\partial I}{\partial t} = 0.$$

This constraint is often called the (differential) optical flow constraint. The desired local flow vector (velocity) is given by $v = \frac{dx}{dt}$. (A small numerical sketch of this constraint is given after the list.)

• Constant motion in a neighborhood: Since the above equation cannot be solved for v, one assumes that v is constant over a neighborhood W(x) of the point x:

$$\nabla I(x', t)^\top v + \frac{\partial I}{\partial t}(x', t) = 0 \quad \forall x' \in W(x).$$
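As a small numerical illustration (a sketch, not part of the slides; frame differencing and np.gradient are just one simple choice of derivative approximation), the optical flow constraint can be evaluated pointwise once the derivatives are approximated by finite differences. Note that at a single pixel it is one equation in the two unknown components of v, which is why the neighborhood assumption above is needed:

import numpy as np

def flow_constraint_residual(I1, I2, v):
    """Pointwise residual of the optical flow constraint: grad(I)^T v + I_t.

    I1, I2 are consecutive grayscale frames (float arrays); v = (vx, vy) is a
    candidate flow, here assumed constant over the whole image. Residuals
    close to zero indicate that v is consistent with brightness constancy.
    """
    Iy, Ix = np.gradient(I1)   # np.gradient returns derivatives along axis 0 (y) and axis 1 (x)
    It = I2 - I1               # temporal derivative by frame differencing
    return Ix * v[0] + Iy * v[1] + It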


The Lucas-Kanade Method
The brightness is typically not exactly constant and the velocity
is typically not exactly the same for the local neighborhood.

Lucas and Kanade (1981) therefore compute the best velocity
vector v for the point x by minimizing the least squares error

$$E(v) = \int_{W(x)} \bigl|\nabla I(x', t)^\top v + I_t(x', t)\bigr|^2 \, dx'.$$

Expanding the terms and setting the derivative to zero one obtains:

$$\frac{dE}{dv} = 2Mv + 2q = 0,$$

with

$$M = \int_{W(x)} \nabla I \, \nabla I^\top \, dx' \quad \text{and} \quad q = \int_{W(x)} I_t \, \nabla I \, dx'.$$

If M is invertible, i.e. det(M) ≠ 0, then the solution is

$$v = -M^{-1} q.$$
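A minimal sketch of the resulting estimator in Python/NumPy (not part of the original slides; window size, derivative approximation, and the singularity threshold are illustrative assumptions):

import numpy as np

def lucas_kanade_flow(I1, I2, x, y, win=7):
    """Estimate the local flow v = -M^{-1} q in a window around pixel (x, y)."""
    Iy_all, Ix_all = np.gradient(I1)    # spatial derivatives of the first frame
    It_all = I2 - I1                    # temporal derivative (frame difference)

    r = win // 2
    Ix = Ix_all[y - r:y + r + 1, x - r:x + r + 1].ravel()
    Iy = Iy_all[y - r:y + r + 1, x - r:x + r + 1].ravel()
    It = It_all[y - r:y + r + 1, x - r:x + r + 1].ravel()

    # Structure tensor M and vector q accumulated over the window W(x)
    M = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    q = np.array([np.sum(Ix * It), np.sum(Iy * It)])

    if abs(np.linalg.det(M)) < 1e-6:    # M (nearly) singular: no unique solution
        return None
    return -np.linalg.solve(M, q)       # v = -M^{-1} q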


Estimating Local Displacements

• Translational motion (Lucas & Kanade '81):

$$E(b) = \int_{W(x)} \bigl|\nabla I^\top b + I_t\bigr|^2 \, dx' \;\to\; \min, \qquad \frac{dE}{db} = 0 \;\Rightarrow\; b = \cdots$$

• Affine motion (the resulting normal equations are sketched below):

$$E(p) = \int_{W(x)} \bigl|\nabla I(x')^\top S(x')\, p + I_t(x')\bigr|^2 \, dx', \qquad \frac{dE}{dp} = 0 \;\Rightarrow\; p = \cdots$$
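The slides leave the closed-form solutions as an exercise. Purely as a hedged sketch, setting dE/dp = 0 for the affine case yields the normal equations $\bigl(\int S^\top \nabla I\, \nabla I^\top S\, dx'\bigr)\, p = -\int S^\top \nabla I\, I_t\, dx'$, which can be solved as follows (window size and derivative approximations are illustrative assumptions):

import numpy as np

def affine_lucas_kanade(I1, I2, x, y, win=15):
    """Least-squares estimate of the affine parameters p for the window around (x, y)."""
    Iy_all, Ix_all = np.gradient(I1)
    It_all = I2 - I1

    r = win // 2
    A = np.zeros((6, 6))   # accumulates sum of S^T grad(I) grad(I)^T S
    b = np.zeros(6)        # accumulates sum of S^T grad(I) I_t
    for yy in range(y - r, y + r + 1):
        for xx in range(x - r, x + r + 1):
            g = np.array([Ix_all[yy, xx], Iy_all[yy, xx]])     # grad I at x'
            S = np.array([[xx, yy, 1, 0, 0, 0],
                          [0, 0, 0, xx, yy, 1]], dtype=float)  # S(x')
            Sg = S.T @ g
            A += np.outer(Sg, Sg)
            b += Sg * It_all[yy, xx]

    return -np.linalg.solve(A, b)   # p, assuming A is well conditioned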


When can Small Motion be Estimated?

In the formalism of Lucas and Kanade, one cannot always estimate a translational motion. This problem is often referred to as the aperture problem. It arises, for example, if the region in the window W(x) around the point x has entirely constant intensity (for example a white wall), because then ∇I(x) = 0 and I_t(x) = 0 for all points in the window.

In order for the solution for b to be unique, the structure tensor

$$M(x) = \int_{W(x)} \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix} dx'$$

needs to be invertible, i.e. we must have det M ≠ 0.

If the structure tensor is not invertible but not zero, then one can still estimate the normal motion, i.e. the motion in the direction of the image gradient.

For those points with det M(x) ≠ 0, we can compute a motion vector b(x). This leads to the following simple feature tracker.
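In practice the distinction between "no structure", "aperture problem" and "full flow" is often made via the eigenvalues of the structure tensor rather than the determinant alone. A small sketch (the threshold is an illustrative assumption):

import numpy as np

def classify_window(M, eps=1e-6):
    """Classify a window by the eigenvalues of its 2x2 symmetric structure tensor M."""
    lam_small, lam_large = np.linalg.eigvalsh(M)   # eigenvalues in ascending order
    if lam_large < eps:
        return "no structure (constant intensity): no motion estimate"
    if lam_small < eps:
        return "aperture problem: only normal motion along the gradient"
    return "full 2D motion can be estimated"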


A Simple Feature Tracking Algorithm

Feature tracking over a sequence of images can now be done as follows (a minimal implementation sketch is given after the list):

• For a given time instant t, compute at each point x ∈ Ω the structure tensor

$$M(x) = \int_{W(x)} \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix} dx'.$$

• Mark all points x ∈ Ω for which the determinant of M is larger than a threshold θ > 0:

$$\det M(x) \geq \theta.$$

• For all these points the local velocity is given by:

$$b(x, t) = -M(x)^{-1} \begin{pmatrix} \int_{W(x)} I_x I_t \, dx' \\ \int_{W(x)} I_y I_t \, dx' \end{pmatrix}.$$

• Repeat the above steps for the points x + b at time t + 1.
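A minimal sketch of one tracking step in Python/NumPy with SciPy (not part of the original slides; the threshold value and window size are illustrative assumptions, and the window integrals are approximated by box filters):

import numpy as np
from scipy.ndimage import uniform_filter

def track_step(I1, I2, win=7, theta=1e4):
    """Select points with det M >= theta and return their displacements b = -M^{-1} q."""
    Iy, Ix = np.gradient(I1)
    It = I2 - I1

    # Window sums via box filtering (uniform_filter averages, so rescale by the window area)
    area = win * win
    Mxx = uniform_filter(Ix * Ix, win) * area
    Mxy = uniform_filter(Ix * Iy, win) * area
    Myy = uniform_filter(Iy * Iy, win) * area
    qx = uniform_filter(Ix * It, win) * area
    qy = uniform_filter(Iy * It, win) * area

    detM = Mxx * Myy - Mxy ** 2
    ys, xs = np.nonzero(detM >= theta)       # selected feature points

    # Closed-form 2x2 inverse applied at each selected point: b = -M^{-1} q
    bx = -(Myy[ys, xs] * qx[ys, xs] - Mxy[ys, xs] * qy[ys, xs]) / detM[ys, xs]
    by = -(-Mxy[ys, xs] * qx[ys, xs] + Mxx[ys, xs] * qy[ys, xs]) / detM[ys, xs]
    return xs, ys, bx, by

# For the next time step, the windows would be re-centered at the shifted points (x + bx, y + by).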


Robust Feature Point Extraction

Even det M(x) ≠ 0 does not guarantee robust estimates of velocity: the inverse of M(x) may not be very stable if, for example, the determinant of M is very small. Thus locations with det M ≠ 0 are not always reliable features for tracking.

One of the classical feature detectors was developed by Moravec '80, Förstner '84, '87 and Harris & Stephens '88.

It is based on the structure tensor

$$M(x) \equiv G_\sigma * \nabla I \nabla I^\top = \int G_\sigma(x - x') \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix}(x') \, dx',$$

where rather than simply summing over the window W(x) we perform a summation weighted by a Gaussian G_σ of width σ.

Harris and Stephens propose the following expression:

$$C(x) = \det(M) - \kappa \, \operatorname{trace}^2(M),$$

and select points for which C(x) > θ with a threshold θ > 0.
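A compact sketch of this detector in Python/NumPy with SciPy (the value κ ≈ 0.04 is a commonly used choice, not fixed by the slides):

import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(I, sigma=1.5, kappa=0.04):
    """Corner response C = det(M) - kappa * trace(M)^2 with a Gaussian-weighted structure tensor."""
    Iy, Ix = np.gradient(I.astype(float))

    # Gaussian-weighted structure tensor entries (weighting instead of a plain box window)
    Mxx = gaussian_filter(Ix * Ix, sigma)
    Mxy = gaussian_filter(Ix * Iy, sigma)
    Myy = gaussian_filter(Iy * Iy, sigma)

    det_M = Mxx * Myy - Mxy ** 2
    trace_M = Mxx + Myy
    return det_M - kappa * trace_M ** 2

# Feature points are then the locations where the response exceeds a threshold theta:
#   ys, xs = np.nonzero(harris_response(I) > theta)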


Response of Förstner Detector


Wide Baseline Matching

Corresponding points and regions may look very different in
different views. Determining correspondence is a challenge.

In the case of wide baseline matching, large parts of the image
plane will not match at all because they are not visible in the
other image. In other words, while a given point may have
many potential matches, quite possibly it does not have a
corresponding point in the other image.


Extensions to Larger Baseline

One of the limitations of tracking features frame by frame is
that small errors in the motion accumulate over time and the
window gradually moves away from the point that was originally
tracked. This is known as drift.
A remedy is to match a given point back to the first frame. This
generally implies larger displacements between frames.

Two aspects matter when extending the above simple feature
tracking method to somewhat larger displacements:

• Since the motion of the window between frames is (in general) no longer translational, one needs to generalize the motion model for the window W(x), for example by using an affine motion model.

• Since the illumination will change over time (especially when comparing more distant frames), one can replace the sum-of-squared-differences by the normalized cross correlation, which is more robust to illumination changes.


Normalized Cross Correlation
The normalized cross correlation is defined as:

$$\mathrm{NCC}(h) = \frac{\int_{W(x)} \bigl(I_1(x') - \bar{I}_1\bigr)\bigl(I_2(h(x')) - \bar{I}_2\bigr)\, dx'}{\sqrt{\int_{W(x)} \bigl(I_1(x') - \bar{I}_1\bigr)^2 dx' \;\int_{W(x)} \bigl(I_2(h(x')) - \bar{I}_2\bigr)^2 dx'}},$$

where $\bar{I}_1$ and $\bar{I}_2$ are the average intensities over the window W(x) (note that $\bar{I}_2$ depends on h). By subtracting this average intensity, the measure becomes invariant to additive intensity changes I → I + γ.

Dividing by the standard deviation of the intensities in each window makes the measure invariant to multiplicative changes I → γI.

If we stack the normalized intensity values of the respective windows into vectors, $v_i \equiv \mathrm{vec}(I_i - \bar{I}_i)$, then the normalized cross correlation is the cosine of the angle between them:

$$\mathrm{NCC}(h) = \cos \angle(v_1, v_2).$$
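For two discrete patches, this reduces to a normalized dot product of the mean-subtracted intensity vectors; a minimal sketch (not part of the original slides):

import numpy as np

def ncc(patch1, patch2):
    """Normalized cross correlation of two equally sized patches."""
    v1 = patch1.astype(float).ravel()
    v2 = patch2.astype(float).ravel()
    v1 -= v1.mean()                    # invariance to additive changes I -> I + gamma
    v2 -= v2.mean()
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    if denom == 0.0:
        return 0.0                     # constant patch: correlation undefined, return 0 here
    return float(np.dot(v1, v2) / denom)   # cosine of the angle between v1 and v2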


Special Case: Optimal Affine Transformation

The normalized cross correlation can be used to determine the
optimal affine transformation between two given patches.
Since the affine transformation is given by:

$$h(x) = A x + d,$$

we need to maximize the cross correlation with respect to the 2×2 matrix A and the displacement d:

$$\hat{A}, \hat{d} = \arg\max_{A,d} \; \mathrm{NCC}(A, d),$$

where

$$\mathrm{NCC}(A,d) = \frac{\int_{W(x)} \bigl(I_1(x') - \bar{I}_1\bigr)\bigl(I_2(Ax' + d) - \bar{I}_2\bigr)\, dx'}{\sqrt{\int_{W(x)} \bigl(I_1(x') - \bar{I}_1\bigr)^2 dx' \;\int_{W(x)} \bigl(I_2(Ax' + d) - \bar{I}_2\bigr)^2 dx'}}.$$

Efficiently finding appropriate optima, however, is a challenge.
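As one illustrative possibility (a sketch, not the method prescribed by the slides), the six affine parameters can be found with a generic local optimizer started at the identity warp, bilinearly resampling the second image at the warped coordinates. It reuses the ncc() helper from the previous sketch, and the window size and optimizer choice are assumptions:

import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

def warp_patch(I2, A, d, xs, ys):
    """Sample I2 at the affinely warped coordinates A x' + d, with x' = (x, y)."""
    pts = np.stack([xs.ravel().astype(float), ys.ravel().astype(float)])  # shape (2, N)
    warped = A @ pts + d[:, None]
    # map_coordinates expects (row, col) = (y, x) ordering
    return map_coordinates(I2, [warped[1], warped[0]], order=1).reshape(xs.shape)

def fit_affine_by_ncc(I1, I2, x, y, win=15):
    """Maximize NCC(A, d) over the affine parameters, starting from the identity warp."""
    r = win // 2
    ys_, xs_ = np.mgrid[y - r:y + r + 1, x - r:x + r + 1]
    patch1 = I1[y - r:y + r + 1, x - r:x + r + 1].astype(float)

    def neg_ncc(theta):
        A = theta[:4].reshape(2, 2)
        d = theta[4:]
        patch2 = warp_patch(I2.astype(float), A, d, xs_, ys_)
        return -ncc(patch1, patch2)            # ncc() as defined in the previous sketch

    theta0 = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0])   # identity A, zero displacement
    res = minimize(neg_ncc, theta0, method="Nelder-Mead")
    return res.x[:4].reshape(2, 2), res.x[4:]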


Optical Flow Estimation with Deep Neural Networks

There exist numerous algorithms to estimate correspondence across images. In recent years, neural networks have become a popular tool for estimating correspondence.

Dosovitskiy, Fischer, Ilg, Haeusser, Hazirbas, Golkov, van der Smagt, Cremers and Brox, "FlowNet: Learning Optical Flow with Convolutional Networks", ICCV 2015.

