
[06-30213][06-30241][06-25024]
Computer Vision and Imaging &
Robot Vision
Dr Hyung Jin Chang
h.j.chang@bham.ac.uk
School of Computer Science

Today’s agenda
• Part 1
– Topic overview
– Introductions to computer vision
• Part 2
– Module overview:
• Logistics and requirements
– Camera and Image Formation

Module Overview

Introduction
• Teaching team:
– Dr Hyung Jin Chang [Module Lead] (h.j.chang@bham.ac.uk)
– Dr Yixing Gao (y.gao.8@bham.ac.uk)
– Dr Mohan Sridharan (m.sridharan@bham.ac.uk)
– Dr Masoumeh Mansouri (M.Mansouri@bham.ac.uk)
– Guest Lecturers:
– Hector Basevi (H.R.A.Basevi@cs.bham.ac.uk)
– Hamid Dehghani (h.dehghani@cs.bham.ac.uk)

Introduction
• Teaching Assistants
– Nora Horanyi (NXH840@student.bham.ac.uk)
– Esha Dasgupta (EXD949@student.bham.ac.uk)
– Guy Perkins (GXP575@student.bham.ac.uk)
– Ben Mellors (BXM678@student.bham.ac.uk)

This Module
• Shared Module (2020/21) (20 credits)
– Computer Vision & Imaging [06-30213]
– Computer Vision & Imaging (Ext.) [06-30241]
– Robot Vision [06-25024]
• Lecture (online): 2 hours/week
– Recorded lecture video for each week will be uploaded on Monday (by 10 am) on Canvas
• Online session (via Zoom) (will not be recorded except the Matlab tutorial)
– Session A: Mon 12:00-13:00
• Matlab tutorial / Matlab exercise / Formative assessment solution etc.
– Session B: Thu 13:00-14:00
• Lecture related Q&A

This Module
• Office Hour (via Zoom)
– Dr Hyung Jin Chang
• Monday 14:00-16:00
– Dr Yixing Gao
• Friday 11:00-13:00
– TA Office Hour
• TBA

This Module
• Canvas Module Page
– Announcement
– Assignment
– Quiz
– Discussion
• Microsoft Teams
– Online discussion
– Q&A
– Teams link: https://teams.microsoft.com/l/team/19%3a4e9a696b1d1d451cb58138a0409fbd13%40thread.tacv2/conversations?groupId=affb6a0d-b824-4726-a3f8-64fa7a6be956&tenantId=b024cacf-dede-4241-a15c-3c97d553e9f3

Aims of This Module
• Give an appreciation of the issues that arise when designing computational models that convert visual signals to structural and symbolic descriptions
• Provide an understanding of the state-of-the-art methods and techniques for processing visual information
• Give hands-on experience of designing, implementing and testing computer vision algorithms in realistic scenarios
• Encourage independent thought on deep scientific issues related to visual cognition

Learning Outcomes of This Module
• On successful completion of this module, the student should be able to:
1. design, implement and test simple computer vision algorithms
2. write a detailed report on a computer vision project
3. survey and critically discuss the research literature in one subfield of computer vision
4. demonstrate an understanding of the main computer vision methods and computational models

Prerequisites
• Proficiency in linear algebra, calculus (differentiation, integration), probability, and statistics.
• Upper-division undergrad course
• Data structures, algorithms
• Programming experience
• Experience with Matlab
• Please take the quiz on the Canvas page!

Topics to be covered
• Introduction to computer vision & human vision
• Image formation and colour
• Camera parameters
• Camera calibration
• Image noise
• Image filtering
• Image gradient
• Edge detection
• Matching
• Binary image analysis
• Image texture
• Image segmentation
• Fitting: voting and the Hough transform
• Image warping
• Image Stitching
• Local features: image matching and detection
• Multiple-view Geometry
• Active Imaging (3D imaging technologies)
• Indexing instances
• Image categorisation
• Face recognition
• Object recognition
• Motion
• Deep learning for computer vision I
• Deep learning for computer vision II

Topic Overview & Schedule

Literature
• R. Szeliski, Computer Vision – Algorithms and Applications, Springer 2010
– http://szeliski.org/Book/drafts/SzeliskiBook_20100903_draft.pdf
• D. Forsyth and J. Ponce, Computer Vision – A Modern Approach, Prentice Hall, 2002
– http://cmuems.com/excap/readings/forsyth-ponce-computer-vision-a-modern-approach.pdf
• S. Dickinson, A. Leonardis, B. Schiele, and M. J. Tarr (Editors),
Object Categorization: Computer and Human Vision Perspectives, Cambridge University Press, 2009
• K. Grauman and B. Leibe, Visual object recognition, Synthesis Lectures on Computer Vision #1
– https://pdfs.semanticscholar.org/5255/490925aa1e01ac0b9a55e93ec8c82efc07b7.pdf

Assessment
– Continuous Assessment (50%)
• Follow submission instructions given in assignment
• Submit to Canvas; no hard copy submissions
• Deadlines are firm. We’ll use Canvas timestamp. Even 1 minute late is late.
• Formative Assessment (0%)
– There will be two formative assessments. Correct solutions will be given to students one week after the assignment is closed in the online session A.
• Summative Assessment (50%)
– There will be two summative assessments (25% each) in the assessment weeks.
– Exam (online) (50%)

Penalties for Late Submission of Work
• (a) If work is submitted late and no extension has been granted, then a penalty of 5% should be imposed for each day that the assignment is late until 0 is reached, for example, a mark of 67% would become 62% on day one, 57% on day two, and so on. The days counted should not include weekends, public and University closed days.
UNIVERSITY OF BIRMINGHAM
CODE OF PRACTICE ON TAUGHT PROGRAMME AND MODULE ASSESSMENT AND FEEDBACK

Miscellaneous
• Check Canvas page and MS Teams regularly for assignment files, notes, announcements, etc.
• No attendance check.
• Online sessions are NOT going to be recorded (except Matlab tutorials). Please attend the sessions!

This is a 20-credit module!
• What are the credits?
– How much a particular module is ‘worth’ to you within your degree.
– How much time you should spend studying for it.
• 1 credit = 10 hours of study
– I expect you to spend at least 200 hours on this module in order to succeed.
• STUDY HARD!

Acknowledgement
This course is inherited from Prof. Ales Leonardis’ Robot Vision module.
Partially inspired by the Computer Vision courses by
• Kristen Grauman @ Univ. of Texas at Austin
• Devi Parikh @ Georgia Tech
• Yong Jae Lee @ UC Davis
• Michael S. Ryoo @ Indiana Univ.
• Hyun-Soo Park @ Univ. of Minnesota
• Fei-Fei Li @ Stanford Univ.

Camera and Image Formation

Human Eye


Renaissance: Science into Art

Renaissance: Perspective Drawing
Raphael, “School of Athens”, 1511

Camera Obscura: Da Vinci
… all objects illuminated by the sun will send their images through this aperture and will appear, upside down, on the wall facing the hole.
– Codex Atlanticus

Camera Obscura: Frisius

Nowadays

Capturing Moment: Drawing

Capturing Moment: Photographic Film

Capturing Moment: Digital Sensor
A digital camera replaces film with a sensor array
• Each cell in the array is a light-sensitive diode that converts photons to electrons
• http://electronics.howstuffworks.com/digital-camera.htm

Digital Images
Slide credit: Derek Hoiem
CMOS sensor

Digital Images
• Sample the 2D space on a regular grid
• Quantize each sample (round to nearest integer)
• Image thus represented as a matrix of integer values
Slide credit: Kristen Grauman, Adapted from Steve Seitz
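Not part of the original slides: a minimal MATLAB sketch of sampling and quantization, just to make the “matrix of integer values” idea concrete. The file name example.jpg and the choice of 16 grey levels are placeholders, and rgb2gray assumes the Image Processing Toolbox.

  % Read an image, keep one intensity channel, and quantize it to a few levels.
  I = imread('example.jpg');                 % placeholder file name
  if size(I, 3) == 3
      I = rgb2gray(I);                       % single intensity channel (IPT)
  end
  levels = 16;                               % number of grey levels to keep
  Iq = uint8(round(double(I) / 255 * (levels - 1)) * (255 / (levels - 1)));
  disp(Iq(1:5, 1:5));                        % the image is just a matrix of integers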

Digital Color Images: Primary color

Digital Color Images
Slide credit: Kristen Grauman

Digital Color Images
http://en.wikipedia.org/wiki/Bayer_filter
Slide by Steve Seitz
Estimate RGB at each cell from neighboring values

Digital Color Images
• Why are there twice as many green pixels as red and blue pixels in the Bayer filter?
– To mimic the higher sensitivity of the human eye towards green light.
– The green photosensors act as luminance-sensitive elements and the red and blue ones as chrominance-sensitive elements. The luminance perception of the human retina uses the M and L cone cells combined during daylight vision, and these are most sensitive to green light.
[Figure: original scene vs. what the camera sees through a Bayer array]
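As an aside (not on the original slide): MATLAB’s Image Processing Toolbox provides a demosaic function that performs exactly this neighbour-based estimation. The sketch below is a rough illustration only; the file name scene.png and the 'rggb' alignment are assumptions.

  % Simulate a Bayer mosaic from an RGB image, then demosaic it.
  RGB = imread('scene.png');                        % placeholder file name
  [h, w, ~] = size(RGB);
  raw = zeros(h, w, 'like', RGB);                   % single-channel Bayer mosaic
  raw(1:2:end, 1:2:end) = RGB(1:2:end, 1:2:end, 1); % R samples
  raw(1:2:end, 2:2:end) = RGB(1:2:end, 2:2:end, 2); % G samples
  raw(2:2:end, 1:2:end) = RGB(2:2:end, 1:2:end, 2); % G samples
  raw(2:2:end, 2:2:end) = RGB(2:2:end, 2:2:end, 3); % B samples
  est = demosaic(raw, 'rggb');                      % estimate RGB at every cell from neighbours
  imshowpair(RGB, est, 'montage');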

Digital Color Images
Color images / RGB color space


Camera Model

Pinhole cameras
• Abstract camera model – a box with a small hole in it
• Pinhole cameras work in practice
– Inverted image
– Exactly one ray passes through each point in the plane of the plate, the pinhole and some scene point.
– Pinhole perspective (also called central perspective) model
– It is convenient to consider a virtual image associated with a plane lying in front of the pinhole at the same distance from it as the actual image plane.

Distant objects are smaller
– Obvious effect of perspective projection: the apparent size of objects depends on their distance.
– Images B’ and C’ of the posts B and C have the same height, but A and C are really half the size of B.
Computer Vision – A Modern Approach Set: Introduction to Vision
Slides by D.A. Forsyth

Parallel lines meet
The projections of two parallel lines lying in the same plane Π appear to converge on a horizon line H formed by the intersection of the image plane with the plane parallel to Π and passing through the pinhole.
Common to draw image plane in front of the focal point. Moving the image plane merely scales the image.
Computer Vision – A Modern Approach Set: Introduction to Vision
Slides by D.A. Forsyth

Minneapolis–St. Paul International Airport
Indoor point at infinity
Parallel lines in 3D converge to a point in the image.

3D Parallel Line Projection
[Figure sequence: parallel lines on the ground plane projected onto the camera plane, converging to a vanishing point]

Vanishing Point
1. Parallel lines in 3D meet at the same vanishing point in the image.
2. The 3D ray passing through the camera centre and the vanishing point is parallel to the lines.
3. Multiple vanishing points exist.
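A small numerical sketch of statements 1 and 2 (my own illustration, assuming an ideal pinhole camera at the origin with focal length f = 1 looking along the z axis): points marched along two parallel 3D lines project closer and closer to the same image point, which is the projection of the lines’ direction.

  % Projections of points on two parallel 3D lines approach a common vanishing point.
  f  = 1;                            % assumed focal length
  d  = [1; 0; 2];                    % common direction of the parallel lines
  A0 = [0; -1; 4];  B0 = [2; 1; 6];  % one point on each line
  for t = [0 10 100 1000]
      PA = A0 + t*d;  PB = B0 + t*d;
      pA = f * PA(1:2) / PA(3);      % pinhole projection x' = f*x/z, y' = f*y/z
      pB = f * PB(1:2) / PB(3);
      fprintf('t=%5g  line A -> (%.3f, %.3f)   line B -> (%.3f, %.3f)\n', t, pA, pB);
  end
  vp = f * d(1:2) / d(3);            % vanishing point = projection of the direction
  fprintf('vanishing point: (%.3f, %.3f)\n', vp);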

[Figure: an image with multiple vanishing points]

[Figure: the vanishing points of lines in the ground plane lie on a common vanishing line, the horizon]

The equation of projection
A coordinate system (O, i, j, k) is attached to the pinhole camera
– O coincides with the pinhole
– Vectors i and j form a basis for a vector plane parallel to the image plane Π’, which is located at the positive distance f’ from the pinhole along the vector k.
– The line perpendicular to Π’ and passing through the pinhole is called the optical axis, and the point C’ is called the image center (can be used as the origin of an image plane coordinate frame, and it plays an important role in camera calibration procedures).
Computer Vision – A Modern Approach Set: Introduction to Vision
Slides by D.A. Forsyth

The equation of projection
• Cartesian coordinates:
– Since P’ lies in the image plane, we have z’ = f’
– Since the three points P, O, and P’ are collinear, we have OP’ = λOP for some number λ
– We have, by similar triangles, that x’ = f’ x/z, y’ = f’ y/z, z’ = f’
– Ignore the third coordinate
(please try to derive this yourself!)
Computer Vision – A Modern Approach Set: Introduction to Vision
Slides by D.A. Forsyth
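Not from the slides: a minimal MATLAB sketch of this projection equation. The distance f’ and the sample points are made-up numbers.

  % Pinhole (perspective) projection of points given in the camera frame.
  fp = 0.05;                          % assumed f' (distance to image plane), metres
  P  = [0.2  -0.1  0.3;               % each row is a 3D point (x, y, z)
        1.0   0.5  2.0];
  p  = fp * P(:, 1:2) ./ P(:, 3);     % x' = f'*x/z, y' = f'*y/z (implicit expansion)
  disp(p);                            % image-plane coordinates, third coordinate dropped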

Weak perspective (scaled orthography)
• Issues
– Perspective effects, but not over the scale of individual objects: x’ = f’ x/z0, y’ = f’ y/z0, z’ = f’
– Weak perspective projection
• All line segments in the plane Π0 are projected with the same magnification m
• x’ = -m x; y’ = -m y; m = -f’/z0
• Collect points into a group at about the same depth, then divide each point by the depth of its group
– Advantages: easy
– Disadvantages: wrong
Can be used when the scene depth is small relative to the average distance from the camera: the magnification m can be taken to be a constant.
Computer Vision – A Modern Approach Set: Introduction to Vision
Slides by D.A. Forsyth
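A small sketch contrasting full perspective with weak perspective (my own illustration using the slide’s magnification m = -f’/z0; all numbers are made up): for a shallow object every point is divided by the common reference depth z0 instead of its own depth, and the two projections nearly coincide.

  % Weak-perspective vs. full perspective for points at nearly the same depth.
  fp = 0.05;                              % assumed f'
  P  = [0.20 -0.10 2.00;                  % object points, all at depth ~2 m
        0.25  0.05 2.05;
        0.15  0.00 1.95];
  persp = fp * P(:, 1:2) ./ P(:, 3);      % full perspective: divide by each depth
  z0    = mean(P(:, 3));                  % reference depth of the group
  m     = -fp / z0;                       % magnification from the slide, m = -f'/z0
  weak  = -m * P(:, 1:2);                 % x' = -m*x, y' = -m*y
  disp([persp weak]);                     % columns 1-2 vs. 3-4 are almost identical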

Orthographic Projection
When it is known that the camera always remains at a roughly constant distance from the scene, we can normalize the image coordinates so that m = -1: x’ = x, y’ = y.
With all rays parallel to the k axis and orthogonal to the image plane Π’.
Weak-perspective projection: acceptable model for many conditions.
Orthographic projection: usually unrealistic.
Computer Vision – A Modern Approach Set: Introduction to Vision
Slides by D.A. Forsyth

Size of the Pinhole
Pinhole too big >> many directions are averaged, blurring the image
Pinhole proper size >> dark but sharp image
Pinhole too small >> diffraction effects blur the image
Generally, pinhole cameras are dark, because a very small set of rays from a particular point hits the screen.

Computer Vision – A Modern Approach Set: Introduction to Vision
Slides by D.A. Forsyth


Cameras with Lenses
– Two reasons for using lenses
1. To gather light, since under an ideal pinhole projection only a single ray of light would reach each point in the image plane, and real pinholes have a finite size
2. To keep the picture in sharp focus while gathering light from a large area
Computer Vision – A Modern Approach Set: Introduction to Vision
Slides by D.A. Forsyth

The Law of Geometric Optics
Behavior of lenses is dictated by the laws of geometric optics:
(1) Light travels in straight lines (light rays) in homogeneous media
(2) When a ray is reflected from a surface, this ray, its reflection, and the surface normal are coplanar, and the angle of reflection equals the angle of incidence
(3) When a ray passes from one medium to another, it is refracted (its direction changes)
Computer Vision – A Modern Approach Set: Introduction to Vision
Slides by D.A. Forsyth

Snell’s Law
• r1 is the ray incident on the interface between two transparent materials with indexes of refraction n1 and n2
• r2 is the refracted ray
• r1, r2, and the normal to the interface are coplanar
• The angles α1 and α2 between the normal and the two rays are related by Snell’s law: n1 sin α1 = n2 sin α2
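A quick numerical illustration (my own example): solving Snell’s law n1 sin α1 = n2 sin α2 for the refracted angle of a ray passing from air into glass.

  % Snell's law, air-to-glass example with assumed indices of refraction.
  n1 = 1.0;  n2 = 1.5;                  % air, glass (assumed values)
  a1 = deg2rad(30);                     % angle of incidence
  a2 = asin(n1 * sin(a1) / n2);         % refracted angle from n1*sin(a1) = n2*sin(a2)
  fprintf('refracted angle: %.2f degrees\n', rad2deg(a2));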

Paraxial Geometric Optics
• When the angles between these rays and the refracting surfaces of the lens are assumed to be small
• Snell’s law becomes n1 α1 ≈ n2 α2

A Thin Lens
– Let us consider a lens with two spherical surfaces of radius R and index of refraction n. Assumptions: the lens is surrounded by vacuum and is thin.
– Rays passing through O are not refracted (from the paraxial form of Snell’s law)
– Rays passing through P are focused by the thin lens on the point P’ with depth z’
– Rays parallel to the optical axis are focused on the focal point F’
1/z’ - 1/z = 1/f,  where f = R / (2(n - 1)) is the focal length of the lens
Computer Vision – A Modern Approach Set: Introduction to Vision
Slides by D.A. Forsyth
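A small worked example of the thin-lens equation (my own numbers, following the sign convention above where the object depth z is negative): given R and n we get f, and solving 1/z’ = 1/f + 1/z gives the image depth.

  % Thin-lens equation 1/z' - 1/z = 1/f with f = R / (2(n-1)).
  R  = 0.05;  n = 1.5;                % assumed radius of curvature and index of refraction
  f  = R / (2 * (n - 1));             % focal length -> 0.05 m
  z  = -0.5;                          % object 0.5 m in front of the lens (negative depth)
  zp = 1 / (1/f + 1/z);               % image depth z'
  fprintf('f = %.3f m, image forms at z'' = %.4f m\n', f, zp);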

Real lenses: spherical aberration
Why? The paraxial refraction model is only an approximation.
Computer Vision – A Modern Approach Set: Introduction to Vision
Slides by D.A. Forsyth

Real lenses: aberrations
Computer Vision – A Modern Approach Set: Introduction to Vision
Slides by D.A. Forsyth

Other phenomena
• Chromatic aberration
– Light at different wavelengths follows different paths; hence, some wavelengths are defocussed
– Machines: coat the lens
– Humans: live with it
• Scattering at the lens surface
– Some light entering the lens system is reflected off each surface it encounters (Fresnel’s law gives details)
– Machines: coat the lens, interior
– Humans: live with it (various scattering phenomena are visible in the human eye)
• Geometric phenomena (Barrel distortion, etc.)
Computer Vision – A Modern Approach Set: Introduction to Vision
Slides by D.A. Forsyth

Compound lenses: Vignetting effect
Vignetting effect in a two-lens system.
– The shaded part of the beam never reaches the second lens (brightness drops in the image periphery).
Computer Vision – A Modern Approach Set: Introduction to Vision
Slides by D.A. Forsyth

Reference card: Camera models
• Perspective projection
• Weak-perspective projection
• Orthographic projection
• Snell’s law
• Paraxial (First-order) geometric optics
• Thin lens equation
• Real lenses and aberrations
Suggested reading:
D. Forsyth and J. Ponce, Computer Vision – A Modern Approach (Chapter 1, Cameras)
Pinhole cameras (projections)
Cameras with lenses (thin lenses, aberrations)

Geometric properties of projection
• Points go to points
• Lines go to lines
• Planes go to whole image
• Polygons go to polygons
• Degenerate cases
– line through focal point to point
– plane through focal point to line
Computer Vision – A Modern Approach Set: Introduction to Vision
Slides by D.A. Forsyth

Geometric properties of projection
• Polyhedra project to polygons
– because lines project to lines
Computer Vision – A Modern Approach Set: Introduction to Vision
Slides by D.A. Forsyth

Camera parameters (INTRINSIC AND EXTRINSIC)
• In practice, the world and camera coordinate systems are related by a set of physical parameters, such as
– the focal length of the lens
– the size of the pixels
– the position of the image center
– the position and orientation of the camera.
Computer Vision – A Modern Approach Set: Introduction to Vision
Slides by D.A. Forsyth

Camera parameters
• Issue
– one unit in camera coordinates may not be the same as one unit in world coordinates
• Intrinsic parameters
– relate the camera’s coordinate system to the idealized image coordinate system: focal length, principal point, aspect ratio, angle between axes, etc.
• Extrinsic parameters
– relate the camera’s coordinate system to a fixed world coordinate system and specify the camera’s position and orientation in space
(U, V, W)^T = [transformation representing intrinsic parameters] x [1 0 0 0; 0 1 0 0; 0 0 1 0] x [transformation representing extrinsic parameters] x (X, Y, Z, T)^T
Computer Vision – A Modern Approach Set: Introduction to Vision
Slides by D.A. Forsyth
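A hedged MATLAB sketch of this factorisation (my own illustration; every number is made up): an intrinsic 3x3 matrix K, the canonical projection [I | 0], and a 4x4 extrinsic rigid transformation applied to a homogeneous world point (X, Y, Z, T).

  % (U, V, W)' = K * [I|0] * [R t; 0 1] * (X, Y, Z, T)'
  fx = 800;  fy = 800;  cx = 320;  cy = 240;  % assumed intrinsics, in pixels
  K  = [fx 0 cx; 0 fy cy; 0 0 1];             % intrinsic parameters
  Rw = eye(3);  t = [0; 0; 5];                % assumed extrinsics (rotation and translation)
  E  = [Rw t; 0 0 0 1];                       % 4x4 extrinsic transformation
  C  = [eye(3) zeros(3, 1)];                  % canonical projection [I | 0]
  Xw = [0.2; -0.1; 0.0; 1];                   % homogeneous world point
  uvw = K * C * E * Xw;                       % homogeneous image coordinates (U, V, W)
  pix = uvw(1:2) / uvw(3);                    % pixel coordinates (u, v)
  fprintf('pixel: (%.1f, %.1f)\n', pix);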

Next lecture
• Key topics
– Camera geometry
– Camera parameters
– Image noise & filtering

Questions?