
6: Projective Rendering

Dr. Hamish Carr

COMP 5812M: Foundations of Modelling & Rendering
Alberti’s Window
Place glass sheet in front of the eye
Draw what you see on the glass sheet
Then take the sheet with you

Formally
Assume we have:
A triangle mesh M, with:
Vertex positions
Vertex normals
Texture coordinates (& a texture)
An eye E
An image plane Π made up of pixels
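Alberti's window is just similar triangles: a point at view-space depth z projects onto a sheet at distance d at d/z times its offsets. A minimal Python sketch (the distance and sample point are made-up illustrative values, not anything from the course):

```python
# Alberti's window as arithmetic: a view-space point (x, y, z) projects
# onto a glass sheet at distance d by similar triangles.

def project_to_sheet(x, y, z, d):
    """Project the point (x, y, z) onto the plane z = d."""
    assert z > 0, "point must be in front of the eye"
    return (d * x / z, d * y / z)

# A point twice as far away as the sheet lands at half its offsets.
print(project_to_sheet(2.0, 4.0, 10.0, 5.0))  # -> (1.0, 2.0)
```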

The Synthetic Camera


Synthetic Camera
Use Geometry to Place Camera
(View Transformation)
Use Geometry to Place Models
(Model Transformation)
Use Projection to Describe Camera
(Projection Transformation)
Use Projection to Describe Screen
(Viewport Transformation)
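The four transformations above compose into a single 4x4 matrix applied to every vertex, right to left. A small Python sketch of the composition order only — the matrices here are placeholders (a translation for the model stage, identities for the rest), not the course's actual matrices:

```python
# The four camera transformations compose into one 4x4 matrix,
# applied right-to-left: viewport * projection * view * model.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def identity():
    return [[float(i == j) for j in range(4)] for i in range(4)]

def translate(tx, ty, tz):
    M = identity()
    M[0][3], M[1][3], M[2][3] = tx, ty, tz
    return M

model      = translate(1.0, 2.0, 3.0)  # place the object in the world
view       = identity()                # placeholder view transformation
projection = identity()                # placeholder projection
viewport   = identity()                # placeholder screen mapping

combined = matmul(viewport, matmul(projection, matmul(view, model)))
# With identity stages, the combined matrix is just the model translation.
print(combined[0][3], combined[1][3], combined[2][3])  # -> 1.0 2.0 3.0
```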

Coordinate Systems
OCS is the Object Coordinate System
WCS is the World Coordinate System
VCS is the View Coordinate System
CCS is the Clipping Coordinate System
NDCS is the Normalized DCS
DCS is the Device Coordinate System

Why so many?

[Pipeline diagram] Object Model (OCS) → Model Matrix → World Model (WCS) → View Matrix → View Model (VCS) → Projection Matrix → Clipping (CCS) → Perspective Division → Image Plane (NDCS) → Viewport Transformation → Screen (DCS)

View Coordinates
Coordinates from camera’s viewpoint
Allows viewpoint to move in the world
Effectively, an extra rotation & translation
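That extra rotation and translation form a rigid motion: translate the eye to the origin, then rotate the world into the camera's frame. A deliberately minimal Python sketch in which the rotation is the identity (camera axes aligned with the world), so viewing reduces to subtracting the eye position:

```python
# View transformation V = R * T(-E): translate eye E to the origin, then
# rotate into the camera frame. Here R is the identity, so only the
# translation remains -- an illustrative special case, not a general camera.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def view_matrix(eye):
    ex, ey, ez = eye
    return [[1.0, 0.0, 0.0, -ex],
            [0.0, 1.0, 0.0, -ey],
            [0.0, 0.0, 1.0, -ez],
            [0.0, 0.0, 0.0, 1.0]]

V = view_matrix((0.0, 0.0, 5.0))   # camera 5 units along the z axis
p_world = [1.0, 2.0, 0.0, 1.0]     # homogeneous world-space point
print(matvec(V, p_world))          # -> [1.0, 2.0, -5.0, 1.0]
```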


View Frustum
For perspective projection, the view volume is
a view frustum (a truncated pyramid)
For orthographic projection, the view volume is a box
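A symmetric perspective frustum can be written as an OpenGL-style 4x4 matrix (half-width r and half-height t at the near plane, near distance n, far distance f); the parameter values below are illustrative only. The camera looks down the negative z axis, and points on the near and far planes land on the ends of the canonical depth range after the perspective divide:

```python
# OpenGL-style symmetric frustum matrix. After dividing by w, the near
# plane maps to z = -1 and the far plane to z = +1.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def frustum(r, t, n, f):
    return [[n / r, 0.0,    0.0,                0.0],
            [0.0,   n / t,  0.0,                0.0],
            [0.0,   0.0,   -(f + n) / (f - n), -2.0 * f * n / (f - n)],
            [0.0,   0.0,   -1.0,                0.0]]

P = frustum(1.0, 1.0, 1.0, 10.0)
near_pt = [0.0, 0.0, -1.0, 1.0]    # on the near plane
far_pt  = [0.0, 0.0, -10.0, 1.0]   # on the far plane
for p in (near_pt, far_pt):
    x, y, z, w = matvec(P, p)
    print(z / w)   # close to -1.0 for the near point, +1.0 for the far
```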

Clipping Coordinates
Coordinates after the projection matrix is applied

w carries the distance from the eye
(it becomes the divisor in the perspective division)
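Clipping is done in these coordinates, before the divide, because a point is inside the view volume exactly when -w <= x, y, z <= w; testing here also avoids dividing by w = 0 for points at the eye. A small Python sketch of the containment test (sample values are illustrative):

```python
# Clip-space inside test: a point is in the view volume iff
# -w <= x, y, z <= w (with w > 0, i.e. in front of the eye).

def inside_view_volume(x, y, z, w):
    return w > 0 and all(-w <= c <= w for c in (x, y, z))

print(inside_view_volume(0.5, -0.5, 0.2, 1.0))  # -> True
print(inside_view_volume(3.0, 0.0, 0.0, 2.0))   # -> False (x > w)
print(inside_view_volume(0.0, 0.0, 0.0, -1.0))  # -> False (behind the eye)
```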

Normalized DCS
Normalized Device Coordinate System
Big mouthful, but simple idea
Divide CCS through by w
converts homogeneous coords to Cartesian
z is divided too, and is kept for the z-buffer rather than discarded
Independent of screen coordinates
i.e. the range is [-1,1]x[-1,1]x[-1,1] (the OpenGL convention)
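The perspective division itself is a one-liner; the only subtlety is that all three coordinates are divided, and the divided z is retained for depth testing. A minimal Python sketch (sample values are illustrative):

```python
# Perspective division: divide a clipping-coordinate point through by w.
# x, y and z are all divided; the divided z feeds the z-buffer.

def perspective_divide(x, y, z, w):
    return (x / w, y / w, z / w)

print(perspective_divide(2.0, 4.0, 1.0, 2.0))  # -> (1.0, 2.0, 0.5)
```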

Device Coordinates (DCS)
Device coordinates are:
used for final render to screen
expressed in pixel coordinates
(x, y): position on image plane
z: distance in front of image plane
normalised from [0..1] to [0..255] or [0..65535]
I.e. pixel position & object depth
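The viewport transformation that produces these device coordinates can be sketched in a few lines of Python. It assumes an NDC range of [-1, 1] per axis (the OpenGL convention); the screen size and 8-bit depth range are illustrative values:

```python
# Viewport transformation: map NDC to pixel coordinates, and depth to an
# integer range for the z-buffer (here 8-bit, i.e. [0..255]).

def viewport(x_ndc, y_ndc, z_ndc, width, height, depth_max=255):
    x = (x_ndc + 1.0) * 0.5 * width
    y = (y_ndc + 1.0) * 0.5 * height
    z = round((z_ndc + 1.0) * 0.5 * depth_max)  # quantised object depth
    return (x, y, z)

# The centre of NDC lands at the centre of a 640x480 screen, mid depth.
print(viewport(0.0, 0.0, 0.0, 640, 480))  # -> (320.0, 240.0, 128)
```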

FF’s Biggest Mistake

[Pipeline diagram, repeated] Object Model → Modelview Matrix → View Model → Projection Matrix → Clipping → Perspective Division → Image Plane → Viewport Transformation → Screen

The fixed-function (FF) pipeline fused the Model and View matrices into a single Modelview Matrix, so the world coordinate system never exists explicitly

Projective Rendering
The fundamental trick behind the video card:
Project each triangle to the image plane
Triangles always project as triangles
We need to rasterise (draw) lines & triangles
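One standard way to rasterise a projected triangle is with edge functions: a pixel centre is covered when all three signed areas against the triangle's edges have the same sign. A small Python sketch over a tiny hypothetical screen (not the hardware's actual algorithm, which adds many optimisations):

```python
# Edge-function rasterisation of one counter-clockwise triangle:
# a pixel centre is inside when all three edge functions are >= 0.

def edge(ax, ay, bx, by, px, py):
    """Twice the signed area of triangle (a, b, p)."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterise(tri, width, height):
    (ax, ay), (bx, by), (cx, cy) = tri
    fragments = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5          # sample at the pixel centre
            w0 = edge(bx, by, cx, cy, px, py)
            w1 = edge(cx, cy, ax, ay, px, py)
            w2 = edge(ax, ay, bx, by, px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                fragments.append((x, y))
    return fragments

frags = rasterise([(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)], 4, 4)
print(len(frags))  # -> 10 covered pixels in the lower-left half
```

The normalised edge functions w0, w1, w2 are exactly the (unscaled) barycentric coordinates, which is what the hardware later uses to interpolate colour, depth and texture coordinates across the triangle.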

Painter’s Algorithm
If we draw objects from back to front
the back objects will be occluded
i.e. we will see only the front object
We have to sort all objects for each image
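The algorithm amounts to a depth sort followed by drawing in order. A Python sketch with one illustrative depth value per triangle ("drawing" just records the order):

```python
# Painter's algorithm: sort by depth, draw back to front, so nearer
# triangles overwrite farther ones. Depths here are illustrative.

triangles = [
    {"name": "near", "depth": 1.0},
    {"name": "far",  "depth": 9.0},
    {"name": "mid",  "depth": 5.0},
]

draw_order = sorted(triangles, key=lambda t: t["depth"], reverse=True)
print([t["name"] for t in draw_order])  # -> ['far', 'mid', 'near']
```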

Worst Case
Three triangles
Red overlaps Green
Green overlaps Blue
Blue overlaps Red

Painter’s Algorithm fails!
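Because no global back-to-front order exists for cyclically overlapping triangles, visibility can instead be decided per pixel: keep, at each pixel, only the fragment nearest the eye. A minimal Python sketch (depth taken as distance from the eye, buffer initialised to infinity; buffers are plain dictionaries purely for illustration):

```python
# Per-pixel visibility: a fragment wins only if it is nearer than
# whatever is already stored at its pixel.

import math

depth_buffer = {}   # (x, y) -> nearest depth seen so far
colour_buffer = {}  # (x, y) -> colour of that fragment

def write_fragment(x, y, z, colour):
    if z < depth_buffer.get((x, y), math.inf):  # nearer than stored?
        depth_buffer[(x, y)] = z
        colour_buffer[(x, y)] = colour

# Three fragments land on the same pixel in arbitrary order:
write_fragment(3, 4, 7.0, "red")
write_fragment(3, 4, 2.0, "blue")   # nearest, so it wins
write_fragment(3, 4, 5.0, "green")  # farther than blue, discarded
print(colour_buffer[(3, 4)])        # -> blue
```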

Depth (Z-) Buffering
For each pixel, set depth(X, Y) = +∞ (z is distance from the eye, so any fragment is nearer than this)
For each triangle
Transform to image coordinates
Rasterise to fragments – possible pixels
For each fragment (X, Y, Z, R, G, B)
If depth(X, Y) < Z, discard
Else depth(X, Y) = Z, colour(X, Y) = (R, G, B)

Projective Pipeline
[slide: pipeline diagram]

State & Streams
We assume a stream of vertices
Used to construct primitives
Everything else is state-based
Textures, matrices, render options
Vertex attributes
State is applied to passing vertices
And must be changed explicitly

Input Stage
Each set of vertices defines a primitive
Attributes are applied first
Because they are state-based
Optimisations:
Vertex Arrays
Vertex Buffer Objects (VBOs)
Tessellation (in a CPU library)

Geometry Stage
Geometric transformations are applied
Using homogeneous matrices
Lighting calculated per vertex
Gives one colour per vertex
Colours interpolated during rasterisation
Programmable with vertex shaders, &c.

Rasterisation Stage
Primitives are rasterised into fragments
Barycentric coordinates are computed
Attributes are interpolated:
colour after lighting
texture coordinates (later)
depth
Not yet programmable with shaders

Fragment Processing
Each fragment is now processed separately, in parallel
textures are computed
depth and other tests are performed to see if we keep the fragment
the fragment is written to the output image
Programmable with fragment shaders

1st Gen Video Hardware
Each stage uses different processors
So we get a deeply pipelined architecture
Vertex transformations can be parallel
Lighting computations can be parallel
Primitives can be rasterised in parallel
Fragments can be processed in parallel
Provided the depth test is atomic

Fixed Function Pipeline
This is the original hardware design
Deeply pipelined, massively parallel
Now simulated by general-purpose processors

Modern Shaders
Drop-in replacements for particular stages
Vertex Shaders
Tessellation Control Shaders
Tessellation Evaluation Shaders
Geometry Shaders
Fragment Shaders
Compute Shaders

Evolved Pipeline
[slide: pipeline diagram with Vertex Stream, Vertex Shader, Tessellation, Geometry Shader, Post Transform Cache, Rasteriser, Fragment Stream, Depth Test, Fragment Shader, Frame Buffer, Texture Storage, Image Input, Compute Shader]

Problems
What if we want to do things out of order?
E.g. do the depth test before the fragment shader
What if we want to preprocess all vertices?
Instead of using the post-transform cache
The pipeline model is breaking down
Progressively replaced by arbitrary compute
But it is still the basis of most graphics libraries