
cse3431-lecture10-rasterization

All transformations

Modeling transformation: OCS → WCS (Mmod)
Viewing transformation: WCS → VCS (M-1cam)
Projection transformation: VCS → CCS (Mproj)
Perspective division: CCS → NDCS
Viewport transformation: NDCS → DCS (Mvp)

Line Rendering Algorithm
Compute Mmod
Compute M-1cam
Compute Mmodelview = M-1cam Mmod
Compute MO
Compute MP // disregard MP here and below for the orthographic-only case
Compute Mproj = MO MP
Compute Mvp
Compute M = Mvp Mproj Mmodelview
for each line segment i between vertices Pi and Qi do
    P = M Pi ; Q = M Qi
    drawline(Px/hP, Py/hP, Qx/hQ, Qy/hQ) // hP, hQ are the 4th (homogeneous) coordinates of P, Q
end for
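As a concrete sketch of the inner loop above (the 4x4 transform followed by the perspective divide), assuming a column-vector convention; the helper names mat4, vec4, mat4_mul_vec4 and render_segment are made up for illustration and are not from the slides:

    typedef struct { float m[4][4]; } mat4;   /* row-major 4x4 matrix (assumption) */
    typedef struct { float x, y, z, w; } vec4;

    /* hypothetical helper: multiply a 4x4 matrix with a homogeneous point */
    static vec4 mat4_mul_vec4(const mat4 *M, vec4 p)
    {
        vec4 r;
        r.x = M->m[0][0]*p.x + M->m[0][1]*p.y + M->m[0][2]*p.z + M->m[0][3]*p.w;
        r.y = M->m[1][0]*p.x + M->m[1][1]*p.y + M->m[1][2]*p.z + M->m[1][3]*p.w;
        r.z = M->m[2][0]*p.x + M->m[2][1]*p.y + M->m[2][2]*p.z + M->m[2][3]*p.w;
        r.w = M->m[3][0]*p.x + M->m[3][1]*p.y + M->m[3][2]*p.z + M->m[3][3]*p.w;
        return r;
    }

    /* render one segment: transform both endpoints by the combined matrix M,
       then divide by the 4th (homogeneous) coordinate before rasterizing */
    void render_segment(const mat4 *M, vec4 Pi, vec4 Qi,
                        void (*drawline)(float, float, float, float))
    {
        vec4 P = mat4_mul_vec4(M, Pi);
        vec4 Q = mat4_mul_vec4(M, Qi);
        drawline(P.x / P.w, P.y / P.w, Q.x / Q.w, Q.y / Q.w);
    }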

Keep what is visible
We can clip:
1. In the WCS
   What are the six plane equations?
2. In the CCS
   Usually done here(!)
   Clipping in homogeneous coordinates: a 4D volume bounded by 3D planes.
   Still simple and efficient.
3. In the NDCS
   Singularity at Pz = 0.
In any case we must clip against planes.

3D Clipping

Orthographic view volume
Planes in viewing coordinates, normals pointing inside:
left:   x - left = 0
right:  -x + right = 0
bottom: y - bottom = 0
top:    -y + top = 0
front:  -z - near = 0
back:   z + far = 0

Perspective view volume
Planes in viewing coordinates, normals pointing inside:
left:   x + left*z/near = 0
right:  -x - right*z/near = 0
top:    -y - top*z/near = 0
bottom: y + bottom*z/near = 0
front:  -z - near = 0
back:   z + far = 0
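A minimal sketch (not from the slides) of testing a point in viewing coordinates against these six half-spaces; the struct and function names are made up, and the frustum parameters follow the sign conventions above (near and far positive, camera looking down -z, "inside" meaning a non-negative signed value since the normals point inward):

    typedef struct { float x, y, z; } vec3;

    typedef struct {
        float left, right, bottom, top, near, far;   /* frustum parameters at the near plane */
    } frustum;

    /* Returns 1 if the point is inside (or on the boundary of) the perspective
       view volume, 0 otherwise.  Each expression is one plane equation above. */
    int point_in_frustum(const frustum *f, vec3 p)
    {
        if ( p.x + f->left   * p.z / f->near < 0.0f) return 0;  /* left   */
        if (-p.x - f->right  * p.z / f->near < 0.0f) return 0;  /* right  */
        if ( p.y + f->bottom * p.z / f->near < 0.0f) return 0;  /* bottom */
        if (-p.y - f->top    * p.z / f->near < 0.0f) return 0;  /* top    */
        if (-p.z - f->near < 0.0f) return 0;                    /* front  */
        if ( p.z + f->far  < 0.0f) return 0;                    /* back   */
        return 1;
    }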

Clipping in NDCS (Aside)
Normalized view volume
• Constant planes
• Lines in VCS map to lines in NDCS

Problem
• The z coordinate loses its sign

Clipping in CCS (Aside)
We'll define the clipping region in CCS by first looking at the clipping region in NDCS:

-1 <= x/w <= 1

This means that in CCS we have:

-w <= x <= w

and similarly for y and z.

Example (Aside)
The perspective transformation creates w = -z.
(Typo on the slide: the example points should have different z.)

So far our Pipeline
Modeling transformation: OCS → WCS (translate(), …; ModelView matrix)
Viewing transformation: WCS → VCS (lookAt(); ModelView matrix)
Projection transformation: VCS → CCS (ortho(), frustum(), perspective())
Clipping
Perspective division: CCS → NDCS
Viewport transformation: NDCS → DCS, e.g. pixels (viewport())
Rasterization

Rasterization
• The rasterizer outputs the locations of fragments, i.e. pixel-sized screen elements. We can consider a fragment a pixel in most cases, but they are not exactly equivalent.
• The programmable fragment shader computes the colors of the fragments and how they affect the corresponding pixels.

Representations for lines and curves
Line (in 2D)
• Explicit
• Implicit
• Parametric
Circle
• Explicit
• Implicit
• Parametric

Primitives must be rasterized
• Mathematical form --> set of finite-size pixels

Line rasterization
Desired properties
• Straight
• Passes through the end points
• Smooth
• Independent of end point order
• Uniform brightness
• Brightness independent of slope
• Efficient

Straightforward Implementation
Line between two points (x1, y1), (x2, y2):

y(x) = y1 + ((y2 - y1) / (x2 - x1)) * (x - x1)
Straightforward Implementation
Line between two points (slope < 45°). Round the end points, then:

DrawLine(int x1, int y1, int x2, int y2)
{
    float y;
    int x;
    for (x = x1; x <= x2; x++) {
        y = y1 + (x - x1) * (y2 - y1) / (float)(x2 - x1);
        SetPixel(x, Round(y));
    }
}

Better Implementation
How can we improve this algorithm?

DrawLine(int x1, int y1, int x2, int y2)
{
    float y;
    int x;
    for (x = x1; x <= x2; x = x + 1) {
        y = y1 + (x - x1) * (y2 - y1) / (float)(x2 - x1);   // slope recomputed every iteration
        SetPixel(x, Round(y));
    }
}

Better Implementation
DrawLine(int x1, int y1, int x2, int y2)
{
    float y, m;
    int x, dx, dy;
    dx = x2 - x1;
    dy = y2 - y1;
    m = dy / (float)dx;          // slope computed once
    for (x = x1; x <= x2; x = x + 1) {
        y = y1 + m * (x - x1);   // still one multiplication per pixel
        SetPixel(x, Round(y));
    }
}

Even Better Implementation: Incremental
DrawLine(int x1, int y1, int x2, int y2)
{
    float y, m;
    int x, dx, dy;
    dx = x2 - x1;
    dy = y2 - y1;
    m = dy / (float)dx;
    y = y1 + 0.5;                // add 0.5 so that Floor(y) rounds to the nearest pixel
    for (x = x1; x <= x2; x = x + 1) {
        SetPixel(x, Floor(y));
        y = y + m;               // y(x) = mx + d --> y(x+1) = y(x) + m
    }
}

Midpoint algorithm (Bresenham) (ASIDE)
Line in the first quadrant (dx > 0, dy > 0 and dy/dx <= 1.0).
• Current choice: pixel P = (x, y).
• How do we choose the next pixel P' = (x+1, y')?
(Figure: pixel centers, with the two candidate pixels E and NE.)

If F(M) = F(x+1, y+0.5) < 0
    M is above the line, so choose E
else
    M is below the line, so choose NE

Midpoint algorithm (Bresenham)
DrawLine(int x1, float y1, int x2, float y2, int color)
{
    int x, y, dx, dy;

    y = Round(y1);
    for (x = x1; x <= x2; x++) {
        SetPixel(x, y, color);
        if (F(x + 1, y + 0.5) > 0) {   // M below the line: step diagonally (NE)
            y = y + 1;
        }
    }
}

Can we compute F in a smart way?
• We are at pixel (x, y); we evaluate F at M = (x+1, y+0.5) and choose
  E = (x+1, y) or NE = (x+1, y+1) accordingly.
  (Reminder: F(x, y) = x·dy - y·dx + c)

• If we choose E, the next criterion will be evaluated at M':
  F(x+2, y+0.5) = [(x+1)·dy + dy] - (y+0.5)·dx + c  →
  F(x+2, y+0.5) = F(x+1, y+0.5) + dy  →
  FE = F + dy = F + dFE

• If we choose NE, the next criterion will be evaluated at M'':
  F(x+2, y+1.5) = [(x+1)·dy + dy] - [(y+0.5)·dx + dx] + c  →
  F(x+2, y+1.5) = F(x+1, y+0.5) + dy - dx  →
  FNE = F + dy - dx = F + dFNE


Criterion update
FE  = F + dy       = F + dFE
FNE = F + dy - dx  = F + dFNE

Starting value?
Line equation: F(x, y) = x·dy - y·dx + c
Assume the line starts at pixel (x0, y0).

Fstart = F(x0+1, y0+0.5) = (x0+1)·dy - (y0+0.5)·dx + c
       = (x0·dy - y0·dx + c) + dy - 0.5·dx = F(x0, y0) + dy - 0.5·dx

(x0, y0) lies on the line, so F(x0, y0) = 0. Therefore:

Fstart = dy - 0.5·dx

Criterion update (Integer version)

Fstart = dy - 0.5·dx
FE  = F + dy       = F + dFE
FNE = F + dy - dx  = F + dFNE

Everything is an integer except Fstart.
Multiply everything by 2 →

Fstart = 2·dy - dx
dFE  = 2·dy
dFNE = 2·(dy - dx)
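A quick worked example (not from the slides) for the line from (0,0) to (5,2), so dx = 5, dy = 2:

Fstart = 2·2 - 5 = -1, dFE = 4, dFNE = 2·(2 - 5) = -6.

x = 0: plot (0,0); d = -1 ≤ 0 → E,  d = -1 + 4 = 3
x = 1: plot (1,0); d =  3 > 0 → NE, d =  3 - 6 = -3, y = 1
x = 2: plot (2,1); d = -3 ≤ 0 → E,  d = -3 + 4 = 1
x = 3: plot (3,1); d =  1 > 0 → NE, d =  1 - 6 = -5, y = 2
x = 4: plot (4,2); d = -5 ≤ 0 → E,  d = -5 + 4 = -1
x = 5: plot (5,2)

The chosen pixels (0,0), (1,0), (2,1), (3,1), (4,2), (5,2) match rounding the true line y = 0.4x at each x.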

Midpoint algorithm
DrawLine(int x1, float y1, int x2, float y2, int color)
{
    int x, y, dx, dy, d, dE, dNE;

    dx = x2 - x1;
    dy = y2 - y1;
    d   = 2*dy - dx;       // initialize the decision variable, d = Fstart
    dE  = 2*dy;
    dNE = 2*(dy - dx);
    y = Round(y1);
    for (x = x1; x <= x2; x++) {
        SetPixel(x, y, color);
        if (d > 0) {       // choose NE
            d = d + dNE;
            y = y + 1;
        } else {           // choose E
            d = d + dE;
        }
    }
}
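For reference, a self-contained, runnable version of the same first-octant midpoint algorithm; this is only a sketch, and the set_pixel callback and the demo in main are made up, not part of the slides:

    #include <stdio.h>

    /* Plot one pixel; here we just print its coordinates. */
    static void set_pixel(int x, int y)
    {
        printf("(%d, %d)\n", x, y);
    }

    /* Midpoint (Bresenham) line for the first octant: x1 < x2, 0 <= dy <= dx. */
    static void midpoint_line(int x1, int y1, int x2, int y2)
    {
        int dx  = x2 - x1;
        int dy  = y2 - y1;
        int d   = 2*dy - dx;      /* decision variable, Fstart = 2*dy - dx */
        int dE  = 2*dy;           /* increment when we step East           */
        int dNE = 2*(dy - dx);    /* increment when we step North-East     */
        int x, y = y1;

        for (x = x1; x <= x2; x++) {
            set_pixel(x, y);
            if (d > 0) {          /* midpoint below the line: go NE */
                d += dNE;
                y += 1;
            } else {              /* midpoint above (or on) the line: go E */
                d += dE;
            }
        }
    }

    int main(void)
    {
        midpoint_line(0, 0, 8, 3);   /* prints the 9 pixels of the line (0,0)-(8,3) */
        return 0;
    }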

General form of a polynomial of degree n:

F(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0,   a_n ≠ 0

where the lower-order terms are grouped as Q_{n-1}(x) = a_{n-1} x^{n-1} + ... + a_1 x + a_0, so

F(x) = a_n x^n + Q_{n-1}(x),   a_n ≠ 0

Incremental algorithms for polynomials (ASIDE)

F(x) = a_n x^n + Q_{n-1}(x),   a_n ≠ 0

F(x + d) = a_n (x + d)^n + Q_{n-1}(x + d) = a_n (x + d)^n + P_{n-1}(x)

         = a_n Σ_{k=0}^{n} [ n! / (k!(n-k)!) ] x^{n-k} d^k + P_{n-1}(x)

         = a_n x^n + Σ_{k=1}^{n} a_n [ n! / (k!(n-k)!) ] x^{n-k} d^k + P_{n-1}(x)

         = a_n x^n + R_{n-1}(x) + P_{n-1}(x)

         = a_n x^n + G_{n-1}(x)

Incremental algorithms for polynomials

Polynomial forms

F(x)     = a_n x^n + Q_{n-1}(x),   a_n ≠ 0
F(x + d) = a_n x^n + G_{n-1}(x)

First-order differences

ΔF(x) = F(x + d) - F(x) = a_n x^n + G_{n-1}(x) - a_n x^n - Q_{n-1}(x) = D_1^{n-1}(x)

Second-order differences

Δ²F(x) = ΔF(x + d) - ΔF(x) = D_2^{n-2}(x)

n-th order differences

Δ^n F(x) = Δ^{n-1}F(x + d) - Δ^{n-1}F(x) = D_n^0 = c   (a constant)

N-order differences (ASIDE)

N-order difference update
Computing the polynomial incrementally from the differences:

F(x)     = a_n x^n + Q_{n-1}(x),   a_n ≠ 0
F(x + d) = a_n x^n + G_{n-1}(x)

F(x + d)        = F(x) + ΔF(x)
ΔF(x + d)       = ΔF(x) + Δ²F(x)
Δ²F(x + d)      = Δ²F(x) + Δ³F(x)
...
Δ^{n-1}F(x + d) = Δ^{n-1}F(x) + Δ^n F(x)
Δ^n F(x + d)    = c

Example: y = x^2

The incremental algorithm to compute y = x^2 (END ASIDE)

void computePar(int d)
{
    float y   = 0;                // y(0) = 0
    float DY  = d*d;              // first difference at x = 0:  (x+d)^2 - x^2 = 2xd + d^2
    float DDY = 2*d*d;            // second difference (constant)
    int x;

    for (x = 0; x < X_MAX; x = x + d) {
        printf("%d, %f\n", x, y);
        y  = y  + DY;             // y(x+d)  = y(x)  + DY(x)
        DY = DY + DDY;            // DY(x+d) = DY(x) + DDY
    }
}

Polygons
Collection of points connected with lines
• Vertices: v1, v2, v3, v4
• Edges:
 e1 = v1v2 
 e2 = v2v3 
 e3 = v3v4 
e4 = v4v1

Polygons
• Open / closed
• Planar / non-planar
• Filled / wireframe
• Convex / concave
• Simple / non-simple

Triangles
The most common primitive
• Convex
• Planar
• Simple

Reminder: plane equations
• Implicit
• Parametric
• Explicit
• Point-normal form
Plane equation
Computing the point-normal form from 3 points

Intersection of line and plane
Implicit equation for the plane:  F(P) = N · P + D
Parametric equation for the line from Pa to Pb:  L(t) = Pa + t(Pb - Pa)
Plug L(t) into F(P) and solve for t = ti:
    N · [Pa + ti(Pb - Pa)] = -D
If N · (Pa - Pb) = 0 there are zero or infinitely many solutions (how?). Otherwise:
    ti = (-D - N · Pa) / (N · Pb - N · Pa) = -F(Pa) / (F(Pb) - F(Pa))
Finally, evaluate L(ti) to get the intersection point Pi:
    Pi = Pa + [-F(Pa) / (F(Pb) - F(Pa))] (Pb - Pa) = (Pa·F(Pb) - Pb·F(Pa)) / (F(Pb) - F(Pa))
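A small sketch of this line/plane intersection in code (the vec3 helpers and function names are hypothetical, not from the slides); it returns 0 when the segment is parallel to the plane:

    typedef struct { float x, y, z; } vec3;

    static float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static vec3  lerp(vec3 a, vec3 b, float t)
    {
        vec3 r = { a.x + t*(b.x - a.x), a.y + t*(b.y - a.y), a.z + t*(b.z - a.z) };
        return r;
    }

    /* Plane F(P) = N·P + D = 0, segment from Pa to Pb.
       On success writes the intersection point to *Pi and returns 1. */
    int intersect_line_plane(vec3 N, float D, vec3 Pa, vec3 Pb, vec3 *Pi)
    {
        float Fa = dot(N, Pa) + D;          /* F(Pa) */
        float Fb = dot(N, Pb) + D;          /* F(Pb) */
        float denom = Fb - Fa;              /* = N·(Pb - Pa) */

        if (denom == 0.0f)                  /* line parallel to (or inside) the plane */
            return 0;

        float ti = -Fa / denom;             /* ti = -F(Pa) / (F(Pb) - F(Pa)) */
        *Pi = lerp(Pa, Pb, ti);             /* Pi = Pa + ti (Pb - Pa) */
        return 1;
    }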
Polygons in OpenGL
New versions: ONLY TRIANGLES
Vertices have attributes (position, normal, color, etc.)
– Arrays of floats: GLfloat positions[] = { ... } ;
– Indexed arrays (element arrays)
Special functionality to store and interpret the arrays
– Vertex Buffer Objects
– Vertex Array Objects
– We will see the details later

Indexed Face Sets
OpenGL element arrays
• The ordering of the vertices in the faces should be consistent (clockwise or counter-clockwise).
• In OpenGL it defines the orientation (front or back) of the surface (not the normal! -- confusing, I know).

Vertex attributes
Generic attributes (user defined)
Commonly defined attributes
• Position
• Normal vector
• Color
• Texture coordinates
• Position has slightly special status, in the sense that a vertex shader must output a position.

Computing the normal of a polygon (vertices V0, V1, ..., Vn-1)
One way: N = (Vn-1 - V0) x (V1 - V0)
Newell's method
Normalize to get a unit normal.

Transformation of normal vectors
Given an affine transformation M applied to a surface S (v' = Mv):
• Is the new normal the M-transformed version of the original normal, i.e. n' = Mn?
• If dot(n, v) = 0, does it mean that dot(Mn, Mv) = 0?
• NOT in general
  – Non-uniform scale
  – Shear
So: n' = M^-T n
The inverse transpose M^-T of the ModelView matrix must be given to the shaders for transforming normals.
Note:
- If M is a pure rotation then M^-T = M.
- Vectors do not translate, so we can and should consider only the top-left 3x3 part of the matrix in this process.

Normalization
Unit normals may not stay unit length after transformation
• when the transformation includes scale or shear.
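As an illustration of the n' = M^-T n rule, a sketch (assumed helper code, not from the slides) that builds the 3x3 normal matrix as the inverse transpose of the upper-left 3x3 block of the ModelView matrix:

    /* Compute the 3x3 normal matrix N = (M3)^-T from the upper-left 3x3 block
       M3 of a 4x4 ModelView matrix (row-major). Returns 0 if M3 is singular. */
    int normal_matrix(const float M[4][4], float N[3][3])
    {
        /* cofactors of M3 (row i, column j) */
        float c00 =   M[1][1]*M[2][2] - M[1][2]*M[2][1];
        float c01 = -(M[1][0]*M[2][2] - M[1][2]*M[2][0]);
        float c02 =   M[1][0]*M[2][1] - M[1][1]*M[2][0];
        float c10 = -(M[0][1]*M[2][2] - M[0][2]*M[2][1]);
        float c11 =   M[0][0]*M[2][2] - M[0][2]*M[2][0];
        float c12 = -(M[0][0]*M[2][1] - M[0][1]*M[2][0]);
        float c20 =   M[0][1]*M[1][2] - M[0][2]*M[1][1];
        float c21 = -(M[0][0]*M[1][2] - M[0][2]*M[1][0]);
        float c22 =   M[0][0]*M[1][1] - M[0][1]*M[1][0];

        float det = M[0][0]*c00 + M[0][1]*c01 + M[0][2]*c02;
        if (det == 0.0f) return 0;

        /* inverse = adjugate^T / det, so the inverse transpose is the cofactor matrix / det */
        N[0][0] = c00/det; N[0][1] = c01/det; N[0][2] = c02/det;
        N[1][0] = c10/det; N[1][1] = c11/det; N[1][2] = c12/det;
        N[2][0] = c20/det; N[2][1] = c21/det; N[2][2] = c22/det;
        return 1;
    }

For a pure rotation this reduces to the rotation block itself; after a uniform scale the transformed normals only need to be renormalized.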
Polygon Rasterization (for OpenGL, only triangles)
We can render triangles in three different ways:
• As points: only the vertices.
  Rasterization produces the screen coordinates of the vertices.
• As lines: only the edges.
  Rasterization produces 3 lines using line scan conversion.
• As filled: all interior points (...coming up).
Polygon Rasterization (for OpenGL, only triangles)
Scan conversion: shade the pixels lying within a closed polygon efficiently.
Algorithm
• For each row of pixels, define a scanline through their centers.
• Intersect each scanline with all edges.
• Sort the intersections in x.
• Use the parity of the intersections to determine in/out.
• Fill the 'in' pixels.

Note
During rasterization
• Pixel centers are considered to be at integer values (n, k).
• Therefore scanlines are of the form y = k, k in (1, 2, …), and pixel columns are x = n, n in (1, 2, ...).

Special cases (ASIDE)
• Horizontal edges can be excluded.
• Vertices lying on scanlines:
  – Change in sign of yi - yi+1: count the intersection twice.
  – No change in sign: shorten the edge by one scanline.

Efficiency?
Many intersection tests can be eliminated by taking advantage of coherence between adjacent scanlines.
• Edges that intersect scanline y are likely to intersect y+1.
• x changes predictably from scanline y to y+1:
  y = mx + a  →  x = (1/m)(y - a)  →  x(y+1) = x(y) + 1/m

Attributes of interior pixels?
• We only have attributes at the vertices.
• What about the other points of the triangle (e.g. colors)?
• Most common approach: interpolation.

Interpolating information along a 2D line
Between (x1, y1, I1) and (x2, y2, I2):
Parametric form:
  x = (1 - t) x1 + t x2,   t in [0, 1]
  y = (1 - t) y1 + t y2
  I = (1 - t) I1 + t I2
Using ratios (for y1 ≠ y2):
  (I(ta) - I(tb)) / (y(ta) - y(tb)) = (I1 - I2) / (y1 - y2),   for all ta, tb with ta ≠ tb
Choosing ta and tb we get an efficient incremental version (the increment is constant along the line):
  (I_{y+1} - I_y) / ((y+1) - y) = (I1 - I2) / (y1 - y2)  →  I_{y+1} = I_y + (I1 - I2) / (y1 - y2)
Similarly for x.
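A tiny sketch (illustrative only, not the course's code) of this incremental attribute interpolation along one edge, stepping one scanline at a time; sub-pixel prestepping is ignored for clarity:

    #include <stdio.h>

    /* Walk an edge from (x1, y1, I1) to (x2, y2, I2) one scanline at a time,
       updating x and the attribute I by constant increments. */
    void walk_edge(float x1, float y1, float I1,
                   float x2, float y2, float I2)
    {
        float dx = (x2 - x1) / (y2 - y1);   /* x increment per scanline: 1/m    */
        float dI = (I2 - I1) / (y2 - y1);   /* attribute increment per scanline */
        float x = x1, I = I1;
        int y;

        for (y = (int)y1; y <= (int)y2; y++) {
            printf("scanline %d: x = %.2f, I = %.2f\n", y, x, I);
            x += dx;
            I += dI;
        }
    }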
Bilinear interpolation of information during scan conversion
Color, normal, texture coordinates
• Two levels of interpolation:
  – along the edges
  – along each scanline
• Remember: pixel centres are at integer values.
• First scanline y = 0, second y = 1, …
• Pixels along scanline y: (x1, y), (x1+1, y), (x1+2, y), …
• Incremental approach on both levels.

For a triangle with vertices (x1, y1, I1), (x2, y2, I2), (x3, y3, I3):

Right edge (1, 2):
  (Ir,y+1 - Ir,y) / ((y+1) - y) = (I1 - I2) / (y1 - y2)  →  Ir,y+1 = Ir,y + (I1 - I2) / (y1 - y2)

Left edge (1, 3):
  (Il,y+1 - Il,y) / ((y+1) - y) = (I1 - I3) / (y1 - y3)  →  Il,y+1 = Il,y + (I1 - I3) / (y1 - y3)

Along a scanline (from xl to xr):
  (Ix+1 - Ix) / ((x+1) - x) = (Ir - Il) / (xr - xl)  →  Ix+1 = Ix + (Ir - Il) / (xr - xl)

The increments are constant along each edge and along each scanline.

How does WebGL support this?
Vertex shader:
• varying vec4 vcolor
Fragment shader:
• varying vec4 vcolor
The rasterizer knows to interpolate
• vcolor

Z-buffer algorithm
Although part of the position, the z-value can be viewed as a special attribute of a vertex.
• for each polygon in the model
  • project the vertices of the polygon onto the viewing plane
  • for each pixel inside the projected polygon
    • calculate the pixel colour
    • calculate the pixel z-value
    • compare the pixel z-value to the value stored for that pixel in the z-buffer
    • if the pixel is closer, draw it in the frame-buffer and the z-buffer
  • end
• end

Depth Test
• gl.enable(gl.DEPTH_TEST) ;
• gl.disable(gl.DEPTH_TEST) ;
• void gl.depthFunc(func);
• func specifies the depth comparison function:
  gl.NEVER, gl.LESS, gl.EQUAL, gl.LEQUAL, gl.GREATER, gl.NOTEQUAL, gl.GEQUAL, gl.ALWAYS
• The default value is gl.LESS: the test passes if the incoming depth value is less than the stored one.
• Size of the z-buffer: canvas.height * canvas.width floats.

Z-fighting
A common problem with depth-test based systems:
• Intersections
• Overlaps
• Rendering highlights on top of geometry

Polygon Offset
• gl.enable(gl.POLYGON_OFFSET_FILL) ;
  gl.enable(gl.POLYGON_OFFSET_LINE) ;
  gl.enable(gl.POLYGON_OFFSET_POINT) ;
• void gl.polygonOffset(GLfloat factor, GLfloat units);
• Offsets the z-values before the depth comparison.
• Useful for rendering hidden-line images, for applying decals to surfaces, and for rendering solids with highlighted edges.
• See the online manual.

Depth Value Functions (aside)
• void gl.depthRange(nearVal, farVal);
• After clipping and division by w, depth coordinates range from -1 to 1, corresponding to the near and far clipping planes. gl.depthRange specifies a linear mapping of the normalized depth coordinates in this range to window depth coordinates. Regardless of the actual depth buffer implementation, window coordinate depth values are treated as though they range from 0 through 1 (like color components). Thus, the values accepted by gl.depthRange are both clamped to this range before they are accepted.
• The setting (0, 1) maps the near plane to 0 and the far plane to 1. With this mapping, the depth buffer range is fully utilized.

Pixel Region filling algorithms
Scan convert the boundary, then fill in the region (as in 2D paint programs).
http://www.cs.unc.edu/~mcmillan/comp136/Lecture8/areaFills.html

BoundaryFill
boundaryFill(int x, int y, int fill, int boundary)
{
    if ((x < 0) || (x >= raster.width)) return;
    if ((y < 0) || (y >= raster.height)) return;
    int current = raster.getPixel(x, y);
    if ((current != boundary) && (current != fill)) {
        raster.setPixel(fill, x, y);
        boundaryFill(x+1, y, fill, boundary);
        boundaryFill(x, y+1, fill, boundary);
        boundaryFill(x-1, y, fill, boundary);
        boundaryFill(x, y-1, fill, boundary);
    }
}

Flood Fill

public void floodFill(int x, int y, int fill, int old)
{
    if ((x < 0) || (x >= raster.width)) return;
    if ((y < 0) || (y >= raster.height)) return;
    if (raster.getPixel(x, y) == old) {
        raster.setPixel(fill, x, y);
        floodFill(x+1, y, fill, old);
        floodFill(x, y+1, fill, old);
        floodFill(x-1, y, fill, old);
        floodFill(x, y-1, fill, old);
    }
}
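One practical caveat (not from the slides): the recursive formulation can overflow the call stack on large regions. An equivalent sketch with an explicit stack is shown below; the Raster type and its accessors are hypothetical stand-ins for the slide's raster object.

    #include <stdlib.h>

    typedef struct { int width, height; int *pixels; } Raster;   /* hypothetical */

    static int  get_pixel(const Raster *r, int x, int y)          { return r->pixels[y*r->width + x]; }
    static void set_pixel_to(Raster *r, int x, int y, int v)      { r->pixels[y*r->width + x] = v; }

    /* Iterative 4-connected flood fill using an explicit stack of pixel coordinates.
       Each pixel is pushed at most once per neighbour, so 4*width*height + 1 entries suffice. */
    void flood_fill(Raster *r, int x, int y, int fill, int old)
    {
        if (fill == old) return;                       /* nothing to do */
        int cap = 4 * r->width * r->height + 1, top = 0;
        int (*stack)[2] = malloc(sizeof(int[2]) * cap);
        if (!stack) return;

        stack[top][0] = x; stack[top][1] = y; top++;
        while (top > 0) {
            top--;
            int px = stack[top][0], py = stack[top][1];
            if (px < 0 || px >= r->width || py < 0 || py >= r->height) continue;
            if (get_pixel(r, px, py) != old) continue;
            set_pixel_to(r, px, py, fill);
            /* push the 4-connected neighbours */
            stack[top][0] = px+1; stack[top][1] = py;   top++;
            stack[top][0] = px-1; stack[top][1] = py;   top++;
            stack[top][0] = px;   stack[top][1] = py+1; top++;
            stack[top][0] = px;   stack[top][1] = py-1; top++;
        }
        free(stack);
    }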

Adjacency
• 4-connected
• 8-connected

Polygon clipping (2D): Aside
Sutherland-Hodgman

for each side of the clipping window
    for each edge (first point, second point) of the polygon
        output points based on the following table:
        – both endpoints inside: output the second endpoint
        – first inside, second outside: output the intersection with the clip edge
        – both endpoints outside: output nothing
        – first outside, second inside: output the intersection, then the second endpoint
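A sketch (not the course's code) of clipping a polygon against a single half-plane, which is the inner step of Sutherland-Hodgman; clipping against all four window sides just repeats this with each side in turn. The pt type, the half-plane a*x + b*y + c >= 0 and the function names are assumptions for illustration:

    typedef struct { float x, y; } pt;

    /* Signed test: inside the half-plane a*x + b*y + c >= 0 ? */
    static float side(pt p, float a, float b, float c) { return a*p.x + b*p.y + c; }

    /* Intersection of segment p-q with the line a*x + b*y + c = 0. */
    static pt isect(pt p, pt q, float a, float b, float c)
    {
        float t = side(p, a, b, c) / (side(p, a, b, c) - side(q, a, b, c));
        pt r = { p.x + t*(q.x - p.x), p.y + t*(q.y - p.y) };
        return r;
    }

    /* Clip polygon in[0..n-1] against one half-plane; returns the new vertex count.
       The caller must provide out with room for up to 2*n points. */
    int clip_one_side(const pt *in, int n, pt *out, float a, float b, float c)
    {
        int i, m = 0;
        for (i = 0; i < n; i++) {
            pt P = in[i];
            pt S = in[(i + 1) % n];            /* edge P -> S */
            int Pin = side(P, a, b, c) >= 0;
            int Sin = side(S, a, b, c) >= 0;
            if (Pin && Sin)        out[m++] = S;                                    /* both inside */
            else if (Pin && !Sin)  out[m++] = isect(P, S, a, b, c);                 /* leaving     */
            else if (!Pin && Sin) { out[m++] = isect(P, S, a, b, c); out[m++] = S; }/* entering    */
            /* both outside: output nothing */
        }
        return m;
    }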

Example

Outcodes for trivial reject/accept [Hill 389]
A vertex outcode consists of four bits, TBRL, where:
• T is set if y > top,
• B is set if y < bottom,
• R is set if x > right, and
• L is set if x < left.
Trivial accept: all vertices are inside (the bitwise OR of all outcodes is 0000).
Trivial reject: all vertices are outside with respect to some one side (the bitwise AND of the outcodes is not 0000).
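A small sketch (illustrative, not from the slides) of computing these outcodes and the trivial accept/reject tests; the bit values and the window struct are assumptions:

    #define T_BIT 8   /* y > top    */
    #define B_BIT 4   /* y < bottom */
    #define R_BIT 2   /* x > right  */
    #define L_BIT 1   /* x < left   */

    typedef struct { float left, right, bottom, top; } window;

    static int outcode(const window *w, float x, float y)
    {
        int code = 0;
        if (y > w->top)    code |= T_BIT;
        if (y < w->bottom) code |= B_BIT;
        if (x > w->right)  code |= R_BIT;
        if (x < w->left)   code |= L_BIT;
        return code;
    }

    /* For a segment (or polygon) with outcodes c1, c2, ...:
       trivial accept if the OR of all codes is 0 (every vertex inside);
       trivial reject if the AND of all codes is non-zero (all vertices outside one side). */
    static int trivially_accept(int or_of_codes)  { return or_of_codes == 0; }
    static int trivially_reject(int and_of_codes) { return and_of_codes != 0; }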