Motion and Optical Flow
We live in a moving world
• Perceiving, understanding and predicting motion is an important part of our daily lives
Motion and perceptual organization
• Even “impoverished” motion data can evoke a strong percept
G. Johansson, “Visual Perception of Biological Motion and a Model For Its Analysis”, Perception and Psychophysics 14, 201-211, 1973.
Seeing motion from a static picture?
http://www.ritsumei.ac.jp/~akitaoka/index-e.html
More examples
How is this possible?
• The true mechanism has yet to be fully explained
• fMRI data suggest that the illusion is related to some component of eye movements
• We don’t expect computer vision to “see” motion from these patterns
The cause of motion
• Three factors in the imaging process:
– Light
– Object
– Camera
• Varying any of them causes motion
– Static camera, moving objects (surveillance)
– Moving camera, static scene (3D capture)
– Moving camera, moving scene (sports, movie)
– Static camera, moving objects, moving light (time lapse)
Recovering motion
• Feature-tracking
– Extract visual features (corners, textured areas) and “track” them over multiple frames
• Optical flow
– Recover image motion at each pixel from spatio-temporal image brightness variations (optical flow)
Two problems, one registration method
B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In Proceedings of the International Joint Conference on Artificial Intelligence, pp. 674–679, 1981.
Feature tracking
• Challenges (see the sketch after this list)
– Figure out which features can be tracked
– Efficiently track across frames
– Some points may change appearance over time (e.g., due to rotation, moving into shadows, etc.)
– Drift: small errors can accumulate as the appearance model is updated
– Points may appear or disappear: need to be able to add/delete tracked points
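A minimal, hedged sketch of such a tracking loop, using OpenCV's Shi-Tomasi corner detector and pyramidal Lucas-Kanade tracker; the frame file names, window size, and corner parameters are illustrative assumptions, not part of the lecture material.

# KLT-style feature-tracking sketch (assumes OpenCV is available and that
# frames are stored as "frame_000.png", "frame_001.png", ...; adjust paths).
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
# 1. Extract trackable features (corners / textured areas).
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)

for i in range(1, 10):
    curr = cv2.imread(f"frame_{i:03d}.png", cv2.IMREAD_GRAYSCALE)
    # 2. Track the features into the next frame with pyramidal Lucas-Kanade.
    nxt, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None,
                                                winSize=(21, 21), maxLevel=3)
    # 3. Drop points that were lost (handles disappearing points).
    good = status.ravel() == 1
    pts, prev = nxt[good].reshape(-1, 1, 2), curr
    # 4. Periodically re-detect features to replace lost points (not shown here).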
What is Optical Flow?
Definitions
• "Optical flow is a velocity field in the image which transforms one image into the next image in a sequence."
• Flow field
Ambiguity of optical flow
[Figure: flow (1): true motion]
What is Optical Flow?
[Figure: points in I(t) with velocity vectors v1 … v4 pointing to their positions in I(t+1); optical flow is this field of velocity vectors]
Optical flow is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and a scene. (Wikipedia)
Optical Flow Research: Timeline
[Timeline: seminal papers by Horn & Schunck and Lucas & Kanade, followed by many more methods; benchmarks by Barron et al. and Galvin et al.]
Optical Flow
I(x,y,t) I(x,y,t+1)
• Given two subsequent frames, estimate the point translation
• Key assumptions of Lucas-Kanade Tracker
• Brightness constancy: projection of the same point looks the same in every frame
• Small motion: points do not move very far
• Spatial coherence: points move like their neighbors
The brightness constancy constraint
• Brightness Constancy Equation:
I(x, y, t) = I(x + u, y + v, t + 1)
Take the Taylor expansion of I(x+u, y+v, t+1) at (x, y, t) to linearize the right side (Ix is the image derivative along x, Iy along y, and It the difference over frames):
I(x+u, y+v, t+1) ≈ I(x, y, t) + Ix·u + Iy·v + It
so that
I(x+u, y+v, t+1) − I(x, y, t) = Ix·u + Iy·v + It
Brightness constancy then gives
Ix·u + Iy·v + It ≈ 0   →   ∇I · [u v]ᵀ + It = 0
The brightness constancy constraint
Can we use this equation to recover image motion (u, v) at each pixel?
∇I · [u v]ᵀ + It = 0
• How many equations and unknowns per pixel?
• One equation, two unknowns (u, v)
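To make the constraint concrete, here is a small NumPy sketch (an illustrative assumption, not the lecture's code) that computes Ix, Iy, and It for two grayscale frames; for any candidate flow (u, v) at a pixel, Ix·u + Iy·v + It should be close to zero.

import numpy as np

def brightness_constancy_terms(I0, I1):
    """Return Ix, Iy, It for two grayscale frames of equal shape."""
    I0 = I0.astype(float); I1 = I1.astype(float)
    Ix = np.gradient(I0, axis=1)   # image derivative along x (columns)
    Iy = np.gradient(I0, axis=0)   # image derivative along y (rows)
    It = I1 - I0                   # difference over frames
    return Ix, Iy, It

# Brightness constancy at pixel (y, x) for a candidate flow (u, v):
#   residual = Ix[y, x] * u + Iy[y, x] * v + It[y, x]   (should be ~0)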
Optical Flow
How does this show up visually? Known as the “Aperture Problem”
Aperture Problem Exposed
Motion along just an edge is ambiguous
The aperture problem
Actual motion
The aperture problem
Perceived motion
Lucas and Kanade
B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In
Proceedings of the International Joint Conference on Artificial Intelligence, pp. 674–679, 1981.
• How to get more equations for a pixel?
• Spatial coherence constraint
• Assume the pixel’s neighbors have the same (u,v)
– If we use a 5×5 window, that gives us 25 equations per pixel
Least squares problem
• Matching patches across images gives an overconstrained linear system A d = b, where each row of A is [Ix(pi) Iy(pi)], d = [u v]ᵀ, and each entry of b is −It(pi)
• Least squares solution for d given by (AᵀA) d = Aᵀb:
[ Σ IxIx   Σ IxIy ] [u]     [ Σ IxIt ]
[ Σ IxIy   Σ IyIy ] [v] = − [ Σ IyIt ]
• The summations are over all pixels in the K x K window
• When is this solvable? AᵀA should be invertible and well conditioned, the same criteria as for the Harris corner detector; edges suffer from the aperture problem, and flat regions give no constraint
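As a hedged illustration of this least-squares step, a sketch that solves (AᵀA) d = Aᵀb for one K x K window; the window extraction and the conditioning threshold are assumptions for the sketch, and the Ix, Iy, It arrays are reused from the earlier brightness-constancy snippet.

import numpy as np

def lk_window_flow(Ix, Iy, It, y, x, K=5):
    """Solve for (u, v) in one K x K window centred at (y, x)."""
    r = K // 2
    ix = Ix[y - r:y + r + 1, x - r:x + r + 1].ravel()
    iy = Iy[y - r:y + r + 1, x - r:x + r + 1].ravel()
    it = It[y - r:y + r + 1, x - r:x + r + 1].ravel()

    # A is 25 x 2 for a 5 x 5 window, b is 25 x 1: A d = b, d = [u, v]^T.
    A = np.stack([ix, iy], axis=1)
    b = -it
    AtA = A.T @ A                      # 2 x 2 structure tensor (Harris-like)
    # Solvable only if AtA is well conditioned; the threshold is illustrative.
    if np.linalg.cond(AtA) > 1e4:
        return None
    u, v = np.linalg.solve(AtA, A.T @ b)
    return u, v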
Errors in Lucas-Kanade
• What are the potential causes of errors in this procedure?
– Suppose AᵀA is easily invertible
– Suppose there is not much noise in the image
• Errors arise when our assumptions are violated:
• Brightness constancy is not satisfied
• The motion is not small
• A point does not move like its neighbors – window size is too large
– what is the ideal window size?
Improving accuracy
• Recall our small motion assumption:
I(x+u, y+v, t+1) ≈ I(x, y, t) + Ix·u + Iy·v + It
• This is not exact
– To do better, we need to add higher-order terms back in:
I(x+u, y+v, t+1) = I(x, y, t) + Ix·u + Iy·v + It + higher-order terms
– The Lucas-Kanade method does one iteration of Newton's method
• Better results are obtained via more iterations
Iterative Refinement
• Iterative Lucas-Kanade Algorithm
1. Estimate velocity at each pixel by solving Lucas- Kanade equations
2. Warp I(t-1) towards I(t) using the estimated flow field
– use image warping techniques
3. Repeat until convergence
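A hedged sketch of this iterative refinement, assuming SciPy is available; the box-filter window, bilinear warping, and fixed iteration count are illustrative choices rather than the lecture's reference implementation. Here I0 plays the role of I(t-1) and I1 the role of I(t).

import numpy as np
from scipy.ndimage import map_coordinates, uniform_filter

def dense_lk(I0, I1, K=5, eps=1e-6):
    """One Lucas-Kanade solve per pixel using K x K box-filtered sums."""
    I0 = I0.astype(float); I1 = I1.astype(float)
    Ix = np.gradient(I0, axis=1); Iy = np.gradient(I0, axis=0); It = I1 - I0
    Sxx = uniform_filter(Ix * Ix, K); Sxy = uniform_filter(Ix * Iy, K)
    Syy = uniform_filter(Iy * Iy, K)
    Sxt = uniform_filter(Ix * It, K); Syt = uniform_filter(Iy * It, K)
    det = Sxx * Syy - Sxy ** 2 + eps           # eps guards flat regions
    u = (-Syy * Sxt + Sxy * Syt) / det         # closed-form 2 x 2 solve
    v = ( Sxy * Sxt - Sxx * Syt) / det
    return u, v

def iterative_lk(I0, I1, iters=5):
    """Repeatedly warp I0 towards I1 with the current flow and re-estimate."""
    I0 = I0.astype(float); I1 = I1.astype(float)
    h, w = I0.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    u = np.zeros((h, w)); v = np.zeros((h, w))
    for _ in range(iters):
        # Warp I0 by the current flow estimate (bilinear interpolation).
        warped = map_coordinates(I0, [ys + v, xs + u], order=1, mode='nearest')
        du, dv = dense_lk(warped, I1)
        u, v = u + du, v + dv                  # accumulate the residual flow
    return u, v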
Tracking in the 1D case:
I(x,t) I(x,t+1)
Tracking in the 1D case:
• Temporal derivative: It = I(x, t+1) − I(x, t), evaluated at x = p
• Spatial derivative: Ix = ∂I/∂x, evaluated at x = p
• v ≈ −It / Ix
• Assumptions: brightness constancy, small motion
Tracking in the 1D case:
• Iterating helps refine the velocity vector
• Temporal derivative at the 2nd iteration
• Can keep the same estimate for the spatial derivative
• Update: v ← v_previous − It / Ix
• Converges in about 5 iterations
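A tiny NumPy sketch of this 1D update loop; the linear interpolation used to re-sample I(x, t+1) at the shifted position is an illustrative assumption.

import numpy as np

def track_1d(I0, I1, p, iters=5):
    """Estimate the 1D displacement v of signal I0 near sample index p."""
    x = np.arange(len(I0), dtype=float)
    Ix = np.gradient(I0.astype(float))[p]   # spatial derivative, kept fixed (assumed nonzero)
    v = 0.0
    for _ in range(iters):
        # Temporal derivative re-evaluated at the current estimate p + v.
        It = np.interp(p + v, x, I1) - I0[p]
        v = v - It / Ix                      # v <- v_previous - It / Ix
    return v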
Optical Flow: Aliasing
Temporal aliasing causes ambiguities in optical flow because images can have many pixels with the same intensity.
I.e., how do we know which ‘correspondence’ is correct?
[Figure: actual shift vs. estimated shift; nearest match is correct (no aliasing) vs. nearest match is incorrect (aliasing)]
To overcome aliasing: coarse-to-fine estimation.
Revisiting the small motion assumption
• Is this motion small enough?
– Probably not—it’s much larger than one pixel
– How might we solve this problem?
Reduce the resolution!
Coarse-to-fine optical flow estimation
[Figure: Gaussian pyramid of image 1 (t) and Gaussian pyramid of image 2 (t+1); run iterative L-K at the coarsest level, then repeatedly warp & upsample and run iterative L-K at the next finer level]
A Few Details
• Top Level
– Apply L-K to get a flow field representing the flow from the first frame to the second frame.
– Apply this flow field to warp the first frame toward the second frame.
– Rerun L-K on the new warped image to get a flow field from it to the second frame.
– Repeat till convergence.
• Next Level
– Upsample the flow field to the next level as the first guess of the flow at that level.
– Apply this flow field to warp the first frame toward the second frame.
– Rerun L-K and warping till convergence as above.
• Etc. (see the pyramid sketch below)
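A hedged coarse-to-fine sketch following these steps, reusing the iterative_lk helper from the earlier snippet and adding a small bilinear-warp helper; cv2.pyrDown / cv2.resize for the pyramids and the factor-of-2 flow scaling are illustrative choices.

import numpy as np
import cv2
from scipy.ndimage import map_coordinates

def warp(I, u, v):
    """Warp image I by the flow field (u, v) with bilinear interpolation."""
    h, w = I.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    return map_coordinates(I, [ys + v, xs + u], order=1, mode='nearest')

def coarse_to_fine_lk(I0, I1, levels=4, iters=5):
    """Pyramidal Lucas-Kanade built on the iterative_lk sketch above."""
    P0, P1 = [I0.astype(np.float32)], [I1.astype(np.float32)]
    for _ in range(levels - 1):                       # build Gaussian pyramids
        P0.append(cv2.pyrDown(P0[-1])); P1.append(cv2.pyrDown(P1[-1]))

    u = v = None
    for J0, J1 in zip(reversed(P0), reversed(P1)):    # coarsest level first
        h, w = J0.shape
        if u is None:
            u = np.zeros((h, w), np.float32); v = np.zeros((h, w), np.float32)
        else:
            # Upsample the coarser flow as the first guess and double it,
            # since pixel displacements double at the finer level.
            u = 2.0 * cv2.resize(u, (w, h)); v = 2.0 * cv2.resize(v, (w, h))
        # Warp the first frame with the current guess, rerun iterative L-K,
        # and accumulate the residual flow.
        du, dv = iterative_lk(warp(J0, u, v), J1, iters)
        u, v = (u + du).astype(np.float32), (v + dv).astype(np.float32)
    return u, v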
Coarse-to-fine optical flow estimation
[Figure: Gaussian pyramids of image 1 and image 2; a motion of u = 10 pixels at full resolution becomes u = 5, 2.5, and 1.25 pixels at successively coarser levels]
Optical flow result
The Flower Garden Video
What should the optical flow be?
Optical Flow Results
* From Khurram Hassan-Shafique CAP5415 Computer Vision 2003
Optical Flow Results
* From Khurram Hassan-Shafique CAP5415 Computer Vision 2003
• Major contributions from Lucas, Tomasi, and Kanade
– Tracking feature points
– Optical flow
• Key ideas
– By assuming brightness constancy, truncated Taylor expansion leads to simple and fast patch matching across frames
– Coarse-to-fine registration