Computer Graphics Notes
Jiechang Shi
Version 1.1, updated November 27, 2019

1 Class 1: Overview
1. Frame Buffer
Pixel: one element of the frame buffer.
Pixel depth: number of bytes per pixel in the buffer.
Resolution: width × height.
Buffer size: total memory allocated for the frame buffer.
Exam question: given the resolution and pixel depth, compute the buffer size.
Z value: used for solving the hidden-surface removal (HSR) problem.
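A quick worked example with made-up numbers: a 1920 × 1080 buffer at a pixel depth of 4 bytes needs 1920 · 1080 · 4 = 8,294,400 bytes (about 7.9 MB).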
2 Class 2,3: Rasterization
1. For each pixel on screen, the sample point of that pixel is its center point, which has integer coordinates, e.g. (3, 2).
2. A simple problem: rasterizing lines
Program description: given two endpoints P = (x0, y0) and R = (x1, y1), find the pixels that make up the line. Note that lines are infinitely thin, so they rarely fall on pixel sample points.
A feasible description: rasterize lines as the pixels closest to the actual line, with 2 requirements:
No gaps
Minimize error (distance to the true line)
To simplify the question, only consider the situation |x1 − x0| ≥ |y1 − y0| ≥ 0 ∧ |x1 − x0| > 0,
which means −1 ≤ slope ≤ 1. Otherwise we just exchange x and y.
A basic algorithm: k = (y1 − y0)/(x1 − x0), d = y0 − k·x0; for each x0 ≤ x ≤ x1, y = ROUND(kx + d). This brute-force method is inefficient because of the multiplication and the ROUND() function.
Basic incremental algorithm: for each x0 ≤ xi < xi+1 ≤ x1, yi+1 = yi + k. However, the successive addition of a real number can lead to cumulative error buildup!
Midpoint Line Algorithm:
For 0 ≤ k ≤ 1:
From the current point P = (x, y), we only have 2 choices for the next point, E = (x + 1, y) and NE = (x + 1, y + 1); we should choose the one that is closer to k(x + 1) + d.
Calculate the midpoint M = (x + 1, y + 1/2).
If the intersection point Q is below M, take E as next; otherwise take NE as next.
Note that we consider this equation (assuming a > 0) for a point (x, y):
f(x, y) = ax + by + c = (y1 − y0)x − (x1 − x0)y + (x1·y0 − y1·x0)
If f(x, y) = 0, (x, y) lies on the line.
If f(x, y) < 0, (x, y) lies above the line.
If f(x, y) > 0, (x, y) lies below the line.
So we have to test f(M) = a(x + 1) + b(y + 1/2) + c = f(Former) + a + b/2.
Assuming a > 0: if f(M) > 0, choose NE; otherwise choose E.
Update f(Former):
If we choose E, f(Former) = f(Former) + a.
If we choose NE, f(Former) = f(Former) + a + b.
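A minimal integer-only sketch of this loop, assuming x0 < x1 and 0 ≤ slope ≤ 1 (setPixel() is a stand-in for the actual framebuffer write; doubling f keeps the b/2 term an integer):

    void setPixel(int x, int y);  // hypothetical framebuffer write

    void midpointLine(int x0, int y0, int x1, int y1) {
        int a = y1 - y0;                 // a = dy
        int b = -(x1 - x0);              // b = -dx
        int d = 2 * a + b;               // 2 * f(M) at the first midpoint
        for (int x = x0, y = y0; x <= x1; ++x) {
            setPixel(x, y);
            if (d > 0) { ++y; d += 2 * (a + b); }  // NE step
            else       {      d += 2 * a;        } // E step
        }
    }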
Note that a and b are constant integers, so there is no cumulative error issue.
3. A harder problem: triangle rasterization
Why triangles:
Triangles (tris) are a simple explicit 3D surface representation.
Convex and concave polygons (polys) can be decomposed into triangles.
Tris are planar and unambiguously defined by three vertex (vert) coordinates (coords).
Definition: find and draw pixel samples inside the tri edges and interpolate parameters defined at the verts.
4. Rasterization and Hidden Surface Removal (HSR) algorithm classes:
Image-order rasterization: ray tracing / ray casting; traverse pixels, process each in world space; transform rays from image space to world space.
Object-order rasterization: scan-line / LEE; traverse triangles, process each in image space; transform objects from model space to image space.
5. LEE (Linear Expression Evaluation) Algorithm:
We already discussed in the Midpoint Line Algorithm how to determine whether a point is to the left of (above) or right of (below) a line; a quick review:
Assume the line has a positive slope.
For an edge equation E with start point (X, Y) and deltas (dX, dY), for a point (x, y): E(x, y) = dY(x − X) − dX(y − Y)
If E(x, y) = 0, (x, y) lies on the line.
If E(x, y) < 0, (x, y) lies to the right of (below) the line.
If E(x, y) > 0, (x, y) lies to the left of (above) the line.
For Rasterization:
Compute LEE result for all three edges.
Pixels with consistent sign for all three edges are inside the tri.
Include edge pixels on left or right edges.
LEE needs to check every pixel in the bounding box.
LEE works very well on parallel (SIMD) systems.
Furthermore: given 3 random verts, how to find the CW edge cycle?
Determine L/R and Top/Bot edges for edge-pixel ownership. (A minimal inside-test sketch follows.)
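A sketch of the LEE inside test (ownership rules for edge pixels omitted; insideTri and edge are my names, not from the course):

    float edge(float X, float Y, float dX, float dY, float x, float y) {
        return dY * (x - X) - dX * (y - Y);   // E(x, y) from above
    }

    bool insideTri(const float v[3][2], float x, float y) {
        float e[3];
        for (int i = 0; i < 3; ++i) {
            const float* p = v[i];
            const float* q = v[(i + 1) % 3];  // next vert in the edge cycle
            e[i] = edge(p[0], p[1], q[0] - p[0], q[1] - p[1], x, y);
        }
        // Consistent sign for all three edges => the sample is inside the tri.
        return (e[0] >= 0 && e[1] >= 0 && e[2] >= 0) ||
               (e[0] <= 0 && e[1] <= 0 && e[2] <= 0);
    }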
6. Scan-line rasterizer:
Sort verts by Y.
Set up edge DDAs for the edges.
Sort edges by L or R (whether the long edge is on the left or the right).
Start from the top vertex, and switch DDAs when hitting the middle vertex.
7. Interpolate Z:
A general 3D plane equation has 4 terms: Ax + By + Cz + D = 0.
(A, B, C) is the normal of that plane, so the cross product of two triangle edge vectors gives (A, B, C). Then plug any vertex coord into the equation and solve for D.
Given (A, B, C, D), any point (x, y) can be solved for z.
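A small sketch of the plane-equation approach (Vec3 and the helpers are mine, not the course API):

    struct Vec3 { float x, y, z; };
    Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    Vec3 cross(Vec3 a, Vec3 b) {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }

    float interpZ(Vec3 v0, Vec3 v1, Vec3 v2, float x, float y) {
        Vec3 n = cross(sub(v1, v0), sub(v2, v0));            // (A, B, C)
        float D = -(n.x * v0.x + n.y * v0.y + n.z * v0.z);   // plug v0 in, solve for D
        return -(n.x * x + n.y * y + D) / n.z;               // assumes C != 0
    }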
8. Use a Z-buffer to remove hidden surfaces:
Initialize the Z-buffer to MAXINT at the beginning of every frame.
Interpolate vertex Z values to get Zpix.
Only write a new pixel to the buffer if Zpix < Zbuffer.
Notice that Z should always be greater than or equal to zero!
9. Hidden Line Removal (HLR):
A simple z-buffer does not work when the renderer only draws edges (outlines of polygons). Edge-crossing and object-sorting methods are needed.
10. Painter's Algorithm: render in order from back to front.
One object being in front of another means the Z of all verts of one object is less than the other's. This algorithm does not work if the Z-sort is ambiguous.
11. Warnock Algorithm:
Subdivide the screen until a leaf region has a simple front/back relationship.
Leaf regions have one or zero surfaces visible; the smallest region is usually a pixel.
Quad-tree subdivision is usually used.
12. BSP-Tree:
A view-independent binary tree (pre-calculated) allows a view-dependent front-to-back or back-to-front traversal of surfaces.
Use the Painter's Algorithm to do the back-to-front traversal.
Useful for transparency, which needs a full depth sort of all surfaces.
13. Culling:
Culling with portals: pre-compute the invisible parts.
Culling by view frustum: skip a triangle iff all its vertices are beyond the same screen edge! Pitfall: if the vertices are beyond different edges, some part of the tri might still be on screen. Imagine a giant tri that covers the whole screen.
Backface culling: for closed (water-tight) objects, surfaces with oriented normals facing away from the camera are never visible.
Pitfall: BF culling only works for water-tight objects!
Frustum: only visible triangles are drawn into the frame buffer.
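A one-test sketch of the backface check (N is the face normal, V points from the surface toward the camera; the ≤ convention for edge-on faces is my choice, not from the notes):

    bool isBackface(float nx, float ny, float nz, float vx, float vy, float vz) {
        // Normal facing away from the camera => never visible on a closed object.
        return nx * vx + ny * vy + nz * vz <= 0.0f;
    }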
3 Class 4,5,6,7: Transformations
Linear transformations (Xforms) define a mapping of coordinates (coords) in one coordinate frame to another: Vb = Xba · Va.
A homogeneous vector V is a 4 × 1 column (x, y, z, w)^T; a homogeneous transform X is a 4 × 4 matrix.
From homogeneous vector to 3D vector:
x = x/w, y = y/w, z = z/w
Why do we use homogeneous vectors?
We want to unify the transform matrices, including translation, scaling, and rotation, in the same matrix form.
3.1 Transformation Matrices
1. Translation:
T(tx, ty, tz) =
[ 1 0 0 tx ]
[ 0 1 0 ty ]
[ 0 0 1 tz ]
[ 0 0 0 1 ]
T⁻¹(tx, ty, tz) = T(−tx, −ty, −tz)
2. Scaling:
S(sx, sy, sz) =
[ sx 0 0 0 ]
[ 0 sy 0 0 ]
[ 0 0 sz 0 ]
[ 0 0 0 1 ]
S⁻¹(sx, sy, sz) = S(1/sx, 1/sy, 1/sz)
3. Rotation, CCW:
Rx(θ) =
[ 1 0 0 0 ]
[ 0 cosθ −sinθ 0 ]
[ 0 sinθ cosθ 0 ]
[ 0 0 0 1 ]
Rx⁻¹(θ) = Rx(θ)^T
Ry(θ) =
[ cosθ 0 sinθ 0 ]
[ 0 1 0 0 ]
[ −sinθ 0 cosθ 0 ]
[ 0 0 0 1 ]
Ry⁻¹(θ) = Ry(θ)^T
Rz(θ) =
[ cosθ −sinθ 0 0 ]
[ sinθ cosθ 0 0 ]
[ 0 0 1 0 ]
[ 0 0 0 1 ]
Rz⁻¹(θ) = Rz(θ)^T
4. Pitfall: the commutative property holds for S and R only, and only assuming S is a uniform scale in all dimensions.
If S is not a uniform scaling matrix, S and R do not commute.
3.2 Space Transformations
1. NDC to output device:
Xsp =
[ xs/2 0 0 xs/2 ]
[ 0 −ys/2 0 ys/2 ]
[ 0 0 MAXINT 0 ]
[ 0 0 0 1 ]
Note that:
The output device uses RH coords with the origin at the upper left: X ∈ [0, xs), Y ∈ [0, ys), Z ∈ [0, MAXINT].
NDC uses LH coords with the origin at the screen center: X, Y ∈ [−1, 1], Z ∈ [0, 1].
2. Perspective projection
What is d, and why are there two 1/d entries?
Xpi =
[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 1/d 0 ]
[ 0 0 1/d 1 ]
Assume the camera is at (0, 0, −d), the perspective (also image) plane is z = 0, and an object in world space is at (X, Y, Z).
Define FOV (field of view) as the angle the camera can see.
Note: X ∈ [−1, 1], so the distance d from the focus point to the view plane can be calculated from this equation:
1/d = tan(FOV/2)
Furthermore, the projection of the object onto the view plane can be calculated by similar triangles:
X/(Z + d) = x/d ⇒ x = X/(Z/d + 1)
Y/(Z + d) = y/d ⇒ y = Y/(Z/d + 1)
Z/(Z + d) = z/d ⇒ z = Z/(Z/d + 1)
(x, y, z) = (X, Y, Z)/(Z/d + 1)
Writing this 3D vector as a homogeneous vector: (x, y, z, w) = (X, Y, Z, Z/d + 1).
We delete all vertices with Z < 0, because they cannot project onto the view plane.
For Z ≥ 0, we define z′ = z/d:
(x, y, z′, w) = (X, Y, Z/d, Z/d + 1) = Xpi · (X, Y, Z, 1)^T
Pitfall: do Z interpolation in perspective space!
Why do we need the far plane?
The curve of z vs. Z is asymptotic: z increases more slowly when Z is large, so limited precision might map different Z values to the same z.
Further: the range of z is now (−∞, d), but in NDC we want z ∈ [0, 1), hence the z′ = z/d rescale above.
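A tiny sketch of Xpi followed by the homogeneous divide (function and parameter names are mine; assumes Z ≥ 0 as above):

    #include <cmath>

    void project(float X, float Y, float Z, float fovRadians,
                 float& x, float& y, float& zPrime) {
        float invD = std::tan(fovRadians / 2.0f);   // 1/d = tan(FOV/2)
        float w = Z * invD + 1.0f;                  // w = Z/d + 1
        x = X / w;                                  // homogeneous divide
        y = Y / w;
        zPrime = (Z * invD) / w;                    // z' = (Z/d) / (Z/d + 1), in [0, 1)
    }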
3.3 Camera Matrix
1. Assume the camera position is c and the camera look-at point is l, where c and l are both in world coordinates, and the world up vector is up.
2. The camera Z-axis in world coordinates is:
Z = (l − c)/||l − c||
3. The camera Y-axis in world coordinates is the part of the world up vector orthogonal (perpendicular) to the Z-axis:
up′ = up − (up · Z)Z
Y = up′/||up′||
4. The camera X-axis in world coordinates is orthogonal to both the Y- and Z-axes, so: X = Y × Z.
5. Build Xwi from camera space to world space:
The X-axis vector [1, 0, 0] in camera space should be X in world space, and likewise for the Y- and Z-axis vectors, so:
Xwi =
[ Xx Yx Zx 0 ]
[ Xy Yy Zy 0 ]
[ Xz Yz Zz 0 ]
[ 0 0 0 1 ]
6. We also need to add the translation of the camera to the matrix:
Xwi =
[ Xx Yx Zx cx ]
[ Xy Yy Zy cy ]
[ Xz Yz Zz cz ]
[ 0 0 0 1 ]
7. Now we can get the inverse matrix Xiw:
Xiw =
[ Xx Xy Xz −X·c ]
[ Yx Yy Yz −Y·c ]
[ Zx Zy Zz −Z·c ]
[ 0 0 0 1 ]
8. From this proof we know: if we know the X-, Y-, and Z-axes of a specific space in world coordinates, we can easily build and invert the transform from or to that space.
This method can also be used to prove the general rotation matrix.
9. Orbit a model about a point:
The idea is the same as placing the camera.
Take care about which space you are currently in!
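A sketch of steps 2-7 (V3 and the helpers are mine, not the course API):

    #include <cmath>

    struct V3 { float x, y, z; };
    V3 sub(V3 a, V3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    V3 cross(V3 a, V3 b)  { return { a.y * b.z - a.z * b.y,
                                     a.z * b.x - a.x * b.z,
                                     a.x * b.y - a.y * b.x }; }
    V3 normalize(V3 a)    { float n = std::sqrt(dot(a, a));
                            return { a.x / n, a.y / n, a.z / n }; }

    void buildXiw(V3 c, V3 l, V3 up, float Xiw[4][4]) {
        V3 Z = normalize(sub(l, c));                     // camera Z points from c to l
        float s = dot(up, Z);
        V3 Y = normalize(sub(up, V3{ Z.x * s, Z.y * s, Z.z * s })); // up' = up - (up.Z)Z
        V3 X = cross(Y, Z);                              // X = Y x Z
        V3 rows[3] = { X, Y, Z };                        // world axes become rows
        for (int r = 0; r < 3; ++r) {
            Xiw[r][0] = rows[r].x;
            Xiw[r][1] = rows[r].y;
            Xiw[r][2] = rows[r].z;
            Xiw[r][3] = -dot(rows[r], c);                // -X.c, -Y.c, -Z.c
        }
        Xiw[3][0] = Xiw[3][1] = Xiw[3][2] = 0.0f;
        Xiw[3][3] = 1.0f;
    }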
4 Class 8-10: Illumination and Shading
1. Global vs. local illumination:
Global: models indirect illumination and occlusion.
Local: only models direct illumination.
2. Irradiance:
E = ∫Ω I(x, ω) cosθ dω
Note: I(x, ω) is the light intensity arriving from all directions entering the hemisphere Ω over a unit surface area.
Also, we only care about the component of the light along the normal; we dismiss all light parallel to the surface by using cosθ.
3. Simplified lighting:
Assume all lights are distant point lights.
Sources have a uniform intensity distribution.
Neglect distance falloff.
The direction to a source is constant within the scene.
Two parameters define a light:
direction: the (x, y, z) vector from the surface to the light source
intensity: the (r, g, b) color of the light
4. Specular reflection and diffuse reflection
Color shifts by attenuation of the RGB components for all reflection.
Specular reflection model (view-dependent):
Lj(V) = Le · Ks · (V · R)^spec, R = 2(N · L)N − L
Note: V and R should be normalized.
Direction: reflection occurs mainly in the "mirror" direction R, but there is some spread into similar directions V.
spec controls the distribution of intensity about R; higher values of spec model a smoother (shinier) surface.
Ks controls the color attenuation of the surface.
Diffuse reflection model (view-independent):
Lj = Le · Kd · (L · N)
Direction: all output directions are the same, but we only care about the normal component of the input light. L and N should be normalized.
Kd is the surface attenuation component.
Ambient light:
Lj = La · Ka
Direction: all input and output directions are the same. Only one ambient light is needed and allowed.
Complete shading equation:
Color = (Ks · Le · (V · R)^spec) + (Kd · Le · (L · N)) + (La · Ka)
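A per-channel sketch of this equation for one light (names are mine; the dot products are assumed precomputed and sign-checked as described in the HW4 notes below):

    #include <cmath>

    float shade(float Ks, float Kd, float Ka, float Le, float La,
                float VdotR, float LdotN, float spec) {
        float color = Ks * Le * std::pow(VdotR, spec)   // specular term
                    + Kd * Le * LdotN                   // diffuse term
                    + La * Ka;                          // ambient term
        return color > 1.0f ? 1.0f : color;             // clamp overflow
    }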
5. Details about HW4 (lighting implementation)
L denotes the direction to an infinitely far point light source.
E denotes the camera direction; if the camera is far away, E is constant (as in HW4).
N is specified at triangle vertices.
R must be computed for each lighting calculation (at a point):
R = 2(N · L)N − L
Avoid square roots in this calculation.
Choosing a shading space: we need all of L, E, N, R in some affine (pre-perspective) space.
Image space is suggested for HW4.
Model space is also a reasonable choice since normal vectors are already in that space; this is most efficient!
Image Space Lighting (ISL):
Create a transformation stack from model space to image space.
Each matrix must have its scale normalized and its translation deleted, keeping only the rotation, before being pushed onto this stack!
Check the signs of N · E and N · L:
Both positive: compute the lighting model.
Both negative: flip the normal N and compute the lighting model.
Different signs: skip it.
Check the sign of R · E: if negative, set it to 0.
Check for color overflow (> 1.0): set it to 1.
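These sign checks compress to a small helper (a sketch; the name is mine):

    // Returns 1 to use N as-is, -1 to use the flipped normal, 0 to skip the light.
    int normalSign(float nDotE, float nDotL) {
        if (nDotE > 0.0f && nDotL > 0.0f) return  1;  // both positive: keep N
        if (nDotE < 0.0f && nDotL < 0.0f) return -1;  // both negative: flip N
        return 0;                                     // mixed signs: skip
    }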
Compute color at all pixels:
Per face: flat shading.
Per vertex: interpolate vertex colors, Gouraud shading (specular highlights are undersampled, aliased).
Per pixel: interpolate normals, Phong shading (expensive computation, but better sampling).
Set the shading-mode parameter for the different lighting calculations.
Pitfall in Phong interpolation:
The interpolated normal vector needs to be renormalized.
4.1 Class 10: Something More About Shading
1. Non-uniform scaling:
A non-uniform scale alters the relationship between the surface orientation and the normal vector, so we cannot use the same matrix M to transform both the normals and the vertex coordinates.
We can fix this by using a different transformation Q = f(M) for transforming the normals.
2. How to create a matrix for normals:
In HW4, we created a matrix that dismisses all scaling. In detail:
By the definition of normals (P is any direction lying in the surface):
N^T · P = 0
After applying the transform matrices:
(QN)^T · (MP) = 0
By the definition of matrix multiplication:
N^T · Q^T · M · P = 0
Since we already know N^T · P = 0, we only need the inner part to equal the identity matrix:
Q^T · M = I, so Q = (M⁻¹)^T
Note: if we only use uniform scaling, S = I after normalization.
If we compute Q for each M pushed onto the Xim transform stack, the resulting Xn stack holds Q and therefore allows non-uniform scaling.
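A sketch of Q = (M⁻¹)^T for the upper-left 3 × 3. Using the cofactor form avoids an explicit division by det(M); the uniform factor it leaves behind disappears when the normal is renormalized:

    void normalMatrix(const float M[3][3], float Q[3][3]) {
        for (int i = 0; i < 3; ++i) {
            for (int j = 0; j < 3; ++j) {
                int i1 = (i + 1) % 3, i2 = (i + 2) % 3;
                int j1 = (j + 1) % 3, j2 = (j + 2) % 3;
                // Cofactor C_ij; cofactor(M) = det(M) * (M^-1)^T.
                Q[i][j] = M[i1][j1] * M[i2][j2] - M[i1][j2] * M[i2][j1];
            }
        }
    }

For a pure rotation this reduces to Q = M, which is why the rotation-only stack in HW4 works.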
3. Model Space Lighting (MSL):
Only the global lighting parameters need to be transformed, once per model. The eye/camera direction also needs to be transformed into model space.
5 Class 11-13: Texture Mapping

5.1 Screen-Space Parameter Interpolation
1. From Z-buffer interpolation, we know that linear interpolation of z is wrong in image space; we need to interpolate in perspective space.
2. Accurate interpolation of RGB colors or normal vectors should also take perspective into account.
3. But we can ignore the color and normal interpolation error.
Interpolation for a texture function (checkerboard example): using linear interpolation for u and v is also wrong!
4. How to compute perspective-correct interpolation of u, v at each pixel:
For each parameter P, we use Ps to denote its value in perspective space.
Note that for Z interpolation, the perspective-space depth is rescaled to Vzs ∈ [0, Zmax]:
Vzs = (Vz · d/(Vz + d)) · (Zmax/d) = Vz · Zmax/(Vz + d)
We can also get the inverse equation:
Vz = Vzs · d/(Zmax − Vzs)
For a parameter going from image space to perspective space:
Ps = P/(Vz/d + 1) = P · d/(Vz + d)
Also we can get the inverse equation:
P = Ps · (Vz + d)/d
We don't have Vz, but we already calculated Vzs in HW2, so we can use that:
Ps = P/(Vzs/(Zmax − Vzs) + 1)
Note that this equation only needs Vzs and Zmax, whose values we already know; we don't need d or any other parameter.
We use Vz′ = Vzs/(Zmax − Vzs) to simplify the equation:
Ps = P/(Vz′ + 1)
P = Ps · (Vz′ + 1)
5. The steps for parameter interpolation (see the sketch below):
Get Vzs for each vertex.
Transform P to perspective space Ps for each vertex.
Interpolate Vzs for each pixel.
Interpolate Ps for each pixel.
Transform Ps back to P using the pixel's Vzs (via Vz′).
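A sketch of the whole recipe for a single parameter between two verts (names are mine; t is the screen-space interpolation weight and Zmax plays the role of MAXINT):

    float perspInterp(float P0, float Vzs0, float P1, float Vzs1,
                      float t, float Zmax) {
        // Per vertex: transform P into perspective space, Ps = P / (Vz' + 1).
        float Vzp0 = Vzs0 / (Zmax - Vzs0);        // Vz' at vertex 0
        float Vzp1 = Vzs1 / (Zmax - Vzs1);
        float Ps0 = P0 / (Vzp0 + 1.0f);
        float Ps1 = P1 / (Vzp1 + 1.0f);
        // Per pixel: screen-space linear interpolation of Vzs and Ps...
        float Vzs = Vzs0 + t * (Vzs1 - Vzs0);
        float Ps  = Ps0  + t * (Ps1  - Ps0);
        // ...then transform back: P = Ps * (Vz' + 1).
        float Vzp = Vzs / (Zmax - Vzs);
        return Ps * (Vzp + 1.0f);
    }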
5.2 Texture
1. Scale u, v to the texture image size:
(u, v) coords range over [0, 1].
The 2D image is a pixel array indexed from (0, 0) to (xs − 1, ys − 1).
But u · (xs − 1) might not be an integer, so we need to interpolate the color for a non-integer (u, v) coordinate from the nearest 4 integer points:
Color(p) = (1 − s)(1 − t)A + s(1 − t)B + stC + (1 − s)tD
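A bilinear-fetch sketch (texel() is a hypothetical accessor for one channel of the xs × ys image; edge clamping omitted):

    float texel(int i, int j);  // assumed lookup into the texture image

    float sampleBilinear(float u, float v, int xs, int ys) {
        float fx = u * (xs - 1), fy = v * (ys - 1);
        int i = (int)fx, j = (int)fy;
        float s = fx - i, t = fy - j;
        float A = texel(i, j),     B = texel(i + 1, j);
        float D = texel(i, j + 1), C = texel(i + 1, j + 1);
        // Color(p) = (1-s)(1-t)A + s(1-t)B + stC + (1-s)tD
        return (1 - s) * (1 - t) * A + s * (1 - t) * B
             + s * t * C + (1 - s) * t * D;
    }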
2. For Phong shading, use the texture function f(u, v) to replace Kd and Ka.
3. For Gouraud shading, use f(u, v) to replace all of Ks, Kd, and Ka.
4. Procedural Texture
5. Bump texture:
Alter the normals at each pixel to create bumps.
Normal perturbation P: N′ = N + P. P should be in the same space as N, but P should not be in model space.
Better spaces for P are surface coordinates or tangent space.
6. Surface coordinates.
7. A normal map encodes the surface normal itself rather than a perturbation.
8. Noise texture:
Perlin noise (reference (in Chinese): https://www.cnblogs.com/leoin2012/p/7218033.html):
Input: (x, y, z) for 3D, (u, v) for 2D.
Output: a double value between 0 and 1.
We have 2 pseudo-random grids over the integer points (x, y, z are integers):
Noise matrix (d): the noise color of each point.
Gradient matrix (g): a random unit vector for each point.
For each input vector (u, v), if (u, v) isn't an integer point, we find the 4 corner integer points: (i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1).
For each integer point, we use the dot product of the distance vector (from (u, v) to the integer point) and the gradient vector to get the noise value.
In Perlin noise every interpolation is 1-D, so 2-D first needs interpolation along the y-axis (twice) and then along the x-axis; 3-D needs 7 interpolations. The slides use linear interpolation, but a fade function (ease curve) gives better interpolation.
Turbulence: sum noise with diminishing amplitude:
turbulence(x) = Σ_{i=0}^{k} |noise(2^i · x)| / 2^i
9. Environment (reflection) mapping:
Basic idea: during rendering, compute the reflection of the eye vector (not the light vector).
Ignore the position of the surface point in the scene; we assume all points are at the center point of the scene. Lights and scenery are all merged into the environment texture.
No object inter-reflection or shadows.
Blur the texture to simulate diffuse reflection.
Keep the texture sharp to simulate specular reflection.
10. Cube map:
Transform each eye reflection vector R back to world space.
Find the max component: it indicates which face of the cube the vector intersects.
Compute the intersection of R with the cube face:
Move all reflection ray tails to the center of the cube. Rescale the max component (for example, y) of the vector R to 1.0.
The other 2 components (x, z) indicate the texture pixel.
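A sketch of the face pick and rescale (the face indices are an arbitrary convention of mine):

    #include <cmath>

    void cubeLookup(float rx, float ry, float rz, int& face, float& s, float& t) {
        float ax = std::fabs(rx), ay = std::fabs(ry), az = std::fabs(rz);
        if (ax >= ay && ax >= az) {        // X face: rescale so |x| == 1
            face = rx > 0 ? 0 : 1; s = ry / ax; t = rz / ax;
        } else if (ay >= ax && ay >= az) { // Y face
            face = ry > 0 ? 2 : 3; s = rx / ay; t = rz / ay;
        } else {                           // Z face
            face = rz > 0 ? 4 : 5; s = rx / az; t = ry / az;
        }
        // s, t are in [-1, 1]; remap to [0, 1] texture coords as needed.
    }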
11. Refraction map:
Use Snell's law to compute the refraction vector.
Chromatic aberration is simulated with a wavelength-dependent refraction angle f(λ) for multiple color bands.
5.3 Implementation of Texture (HW5)
1. Step 1: texture coordinates: surface point → (u, v).
Input: vertex in image space. Output: (u, v).
2. Step 2: (u, v) → RGB color.
Input: (u, v). Output: RGB color from the image LUT.
3. Interpolation of (u, v) needs to be done in perspective space.
4. 4-corner (bilinear) interpolation is needed for non-integer (u, v).
6 Class 14-16: Antialiasing

6.1 The Source of Aliasing
1. Quantization error arises from insufficient accuracy per sample.
2. Aliasing error arises from insufficient samples.
3. Nyquist Theorem: sample at least at twice the rate of the highest frequency present in the signal.
f(t) is filtered with cutoff frequency ωF (remove high frequencies before sampling).
The sample rate 1/T0 is greater than 2ωF.
Reconstruct (interpolate) with the sinc function.
4. Solution: band-limit the input signal before sampling.

6.2 Implementing Antialiasing (HW6)
1. Antialiasing by jittered supersampling.
2. Sample a pixel several times with different centers and weights.

6.3 Texture Antialiasing
1. Sample rate mismatch: the texture sampling rate generally does not match the screen pixel sample rate (texel:pixel ratio).
Texture projected into the screen image should be sampled at nearly the same rate (1:1) as the texture map.
1 texel : many pixels: no aliasing problem, but blur. Fix by using higher-resolution textures.
1 pixel : many texels: aliasing problem. Fix by making the sample rate twice the highest frequency in the texture.
2. Mip map: pre-compute filtered versions of the texture image at octave scale/size intervals, using the average color of 2 × 2 texels.
The space cost is only 33% more.
The scale for each level of the mip map:
Scale = dU/dX = dV/dY
3. The pixel scale is more complex, since it is non-axis-aligned (after rotation and projection):
PixelScale = (dU/dX, dU/dY, dV/dX, dV/dY)
An approach to matching Scale and PixelScale is to choose the highest PixelScale component (blur is better than aliasing!).
4. 3D interpolation: if the pixel scale is between 2 texel scales, we need to interpolate between the 2 texture samples. (A small level-selection sketch follows this list.)
5. Anisotropic interpolation: combine more than 2 × 2 pixels in each texture sample.
6. Summed-Area Table (SAT): compute a texture table T so that each texel holds the sum of all texels above and to the left of it.
A SAT provides an approximate way to get the average color over a rectangle in O(1).
Note that the texel footprint might not be axis-aligned after the pixel-to-texel projection, but the difference is strictly less than 1/2.
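A small level-selection sketch under the "pick the highest component" rule (names are mine; the fractional part of the returned level is what the 3D/trilinear interpolation in item 4 blends across):

    #include <algorithm>
    #include <cmath>

    float mipLevel(float dudx, float dudy, float dvdx, float dvdy) {
        // PixelScale = (dU/dX, dU/dY, dV/dX, dV/dY); take the largest magnitude.
        float scale = std::max(std::max(std::fabs(dudx), std::fabs(dudy)),
                               std::max(std::fabs(dvdx), std::fabs(dvdy)));
        // Level 0 is full resolution; each level halves the texture.
        return std::max(std::log2(scale), 0.0f);
    }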
7 Final Exam Review
1. Shading equation:
Color = (Ks · Le · (E · R)^spec) + (Kd · Le · (L · N)) + (La · Ka)
Know the meaning of every term:
Ks, Ka, Kd
Le, La
spec
N, L, R, E
Equation 1: R = 2(N · L)N − L
2. Shading modes: Flat, Gouraud, Phong.
3. Texture.
4. Calculate the normals: with a non-translating and non-scaling matrix into image space.
5. Other topics:
Environment shading
8 BRDF

8.1 What is a BRDF
The basic idea of a BRDF: the reflectance of a surface depends on the input (light) vector and the camera vector.
f(l, v) = dL0(v)/dE (for non-point lights) = L0(v)/(EL · cosθi) (for point lights)
l: input light vector
v: camera vector
L0: output radiance (color)
EL: input irradiance (Color · π in Unity)
θi: angle between the input vector and the surface normal
Properties of f(l, v):
f(l, v) ≥ 0
f(l, v) = f(v, l)
R(l) = ∫φ f(l, v) cosθ0 dω ≤ 1 (discussed later)
Reference for this section (in Chinese): https://blog.csdn.net/yjr3426619/article/details/81098626
How to calculate color based on a BRDF, with point lights and a point camera:
L0(v) = Σk f(lk, v) × EL cosθik
Directional-hemispherical reflectance, definition:
R(l) = ∫φ f(l, v) cosθ0 dω
φ is the hemisphere on the surface-normal half.
θ0 is the angle between the camera view and the surface normal.
ω is the solid angle.
Basically, R(l) is the sum of the (directional) energy of all output light in the hemisphere. The output energy should be equal to or less than the input energy, so:
R(l) ≤ 1

8.2 How to Calculate a BRDF

8.2.1 From the Lambertian Model
Lambertian model: only diffuse color, no specular color. All output directions are the same.
In this model, we assume the output energy is a fixed fraction of the input energy, so we can build this equation:
R(l) = Cdiff
So:
Cdiff = R(l) = ∫φ f(l, v) cosθ0 dω
We know f(l, v) and Cdiff are both constants, and ∫φ cosθ0 dω = π:
f(l, v) · π = Cdiff
f(l, v) = Cdiff / π
8.2.2 A Little Bit Harder: the Phong Model
We assume the ambient light equals 0; then the shading equation is:
L0 = (cosθi · Cdiff + (cosαr)^spec · Cspec) × BL
The first part is the Lambertian model; αr is the angle between the reflection vector and the camera vector.
And we have the shading equation in BRDF format:
L0(v) = f(l, v) × EL cosθi
So we have a basic equation for f(l, v):
f(l, v) = Cdiff/π + ((cosαr)^spec · Cspec)/(π · cosθi)
Notice that if we need to normalize this equation, let Rspec(l) = Cspec.
First: when θi = π/2, f(l, v) = +∞, which is implausible, so we multiply by cosθi and a constant k:
fspec(l, v) = k · (cosαr)^spec · Cspec/π
Second:
Cspec = R(l) = ∫φ fspec(l, v) cosθ0 dω = k · (Cspec/π) · ∫φ (cosαr)^spec cosθ0 dω
It is very hard to do this calculation; by experience we can take:
k = (spec + 2)/2
So in the Phong model:
f(l, v) = Cdiff/π + ((spec + 2)/(2π)) · Cspec · (cosαr)^spec
8.2.3 A Little Bit Harder Still: the Blinn-Phong Model
We define H as the halfway vector (the normalized vector to the middle point) of the input vector and the camera vector:
H = (L + V)/|L + V|
Shading equation:
L0 = (cosθi · Cdiff + (cosβ)^spec · Cspec) × BL
β is the angle between H and the surface normal vector.
Following the steps of the Phong model:
k = (spec + 8)/8
And:
f(l, v) = Cdiff/π + ((spec + 8)/(8π)) · Cspec · (cosβ)^spec
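A per-channel sketch of this normalized Blinn-Phong BRDF (names are mine; vectors are assumed normalized):

    #include <cmath>

    float blinnPhongBRDF(float Cdiff, float Cspec, float spec, float cosBeta) {
        const float PI = 3.14159265358979f;
        float diffuse  = Cdiff / PI;
        float specular = (spec + 8.0f) / (8.0f * PI)
                       * Cspec * std::pow(cosBeta, spec);
        return diffuse + specular;   // f(l,v); multiply by BL * cos(theta_i) for L0
    }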
8.2.4 The Microfacet Model in the Disney Paper
Equation first:
f(l, v) = diffuse + D(θh) F(θd) G(θl, θv) / (4 cosθl cosθv)
Microfacet model: the surface is an aggregate of many microfacets; each microfacet has a different normal, you can only see part of the microfacets, and some microfacets might be blocked by others.
How many microfacets you can see is defined by D(θh).
How many microfacets are blocked by other microfacets is defined by G(θl, θv).
The fraction of reflected energy in the total input energy is defined by F(θd).
1/(4 cosθl cosθv) is a normalization factor.
8.2.5 BRDF with Physically Based Rendering
Sections 5.3-5.6 of Disney's paper present how roughness can affect the diffuse function and the G function.
Diffuse function:
diffuse = (baseColor/π)(1 + (FD90 − 1)(1 − cosθl)^5)(1 + (FD90 − 1)(1 − cosθv)^5)
FD90 = 0.5 + 2 · cos²θd · roughness
F function:
FSchlick = F0 + (1 − F0)(1 − cosθd)^5
D function:
DGTR = c / (α² cos²θh + sin²θh)^γ, with α = roughness²
G function: use the GGX function, with:
αg = (0.5 + roughness/2)²
Reference: Microfacet Models for Refraction through Rough Surfaces

9 Next Steps
Build the BRDF shading function in Unity.
Find better diffuse, D, F, G functions.
Put some new variables into diffuse, D, F, G.
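A sketch of the D and F terms as written above (c, γ, α, and the cosines are passed in; names are mine):

    #include <cmath>

    float D_GTR(float c, float gamma, float alpha, float cosThetaH) {
        float c2 = cosThetaH * cosThetaH;
        float s2 = 1.0f - c2;                       // sin^2(theta_h)
        return c / std::pow(alpha * alpha * c2 + s2, gamma);
    }

    float F_Schlick(float F0, float cosThetaD) {
        return F0 + (1.0f - F0) * std::pow(1.0f - cosThetaD, 5.0f);
    }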
10 Final Equation
fdisneyBRDF(L, N, V) = (1 − metallic) · ((baseColor/π) · (fd · (1 − subsurface) + fss · subsurface) + fsheen)
+ Fs · Gs · Ds / (4 (N·L)(N·V)) + (clearcoat/4) · Fc · Gc · Dc / (4 (N·L)(N·V))

fd = (1 + (FD90 − 1)(1 − N·L)^5)(1 + (FD90 − 1)(1 − N·V)^5)
FD90 = 0.5 + 2 (N·H)² roughness
fss = 1.25 · (((1 + (Fss90 − 1)(1 − N·L)^5)(1 + (Fss90 − 1)(1 − N·V)^5)) · (1/(N·L + N·V) − 0.5) + 0.5)
Fss90 = (N·H)² roughness
Ctint = baseColor / lum(baseColor)
fsheen = (white · (1 − sheenTint) + Ctint · sheenTint) · sheen · (1 − N·H)^5
Fs = Cs + (1 − Cs)(1 − N·H)^5
Cs = (1 − metallic) · 0.08 · specular · ((1 − specularTint) · white + specularTint · Ctint) + metallic · baseColor
Gs = 1 / (N·V + √(αG² + (N·V)² − αG² (N·V)²))
αG = ((1 + roughness)/2)²
a = √(1 − 0.9 · anisotropic)
Ds = 1 / (π · roughness⁴ · ((H·X / (roughness²/a))² + (H·Y / (roughness² · a))² + (N·H)²)²)
Fc = 0.04 + 0.96 (1 − V·H)^5
Gc = 1 / (N·V + √(αc² + (N·V)² − αc² (N·V)²)), with αc = (0.5 + roughness · 0.5)²
Dc = ((0.1 − 0.09 · clearCoatGloss)² − 1) / (2π · ln(0.1 − 0.09 · clearCoatGloss) · ((0.1 − 0.09 · clearCoatGloss)² (N·H)² + 1 − (N·H)²))
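A hedged transcription of the diffuse half of the equation above into C++ (function and parameter names are mine; the specular and clearcoat lobes follow the same Fs·Gs·Ds / Fc·Gc·Dc pattern):

    #include <cmath>

    float schlickWeight(float c) { return std::pow(1.0f - c, 5.0f); }

    // Returns the scalar diffuse lobe; multiply by baseColor/pi and by
    // (1 - metallic) as in the final equation. Assumes N.L + N.V > 0.
    float disneyDiffuse(float NL, float NV, float NH,
                        float roughness, float subsurface) {
        float FD90 = 0.5f + 2.0f * NH * NH * roughness;
        float fd = (1.0f + (FD90 - 1.0f) * schlickWeight(NL))
                 * (1.0f + (FD90 - 1.0f) * schlickWeight(NV));
        float Fss90 = NH * NH * roughness;
        float fssW = (1.0f + (Fss90 - 1.0f) * schlickWeight(NL))
                   * (1.0f + (Fss90 - 1.0f) * schlickWeight(NV));
        float fss = 1.25f * (fssW * (1.0f / (NL + NV) - 0.5f) + 0.5f);
        return fd * (1.0f - subsurface) + fss * subsurface;
    }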
10.1 Fully Expanded
Substituting every term above into fdisneyBRDF(L, N, V):

fdisneyBRDF(L, N, V) =
(1 − metallic) · ( (baseColor/π) · (
(1 + (FD90 − 1)(1 − N·L)^5)(1 + (FD90 − 1)(1 − N·V)^5) · (1 − subsurface)
+ 1.25 · (((1 + (Fss90 − 1)(1 − N·L)^5)(1 + (Fss90 − 1)(1 − N·V)^5)) · (1/(N·L + N·V) − 0.5) + 0.5) · subsurface )
+ (white · (1 − sheenTint) + (baseColor/lum(baseColor)) · sheenTint) · sheen · (1 − N·H)^5 )
+ (Cs + (1 − Cs)(1 − N·H)^5) · Gs · Ds / (4 (N·L)(N·V))
+ (clearcoat/4) · Fc · Gc · Dc / (4 (N·L)(N·V))

with FD90 = 0.5 + 2 (N·H)² roughness, Fss90 = (N·H)² roughness,
Cs = (1 − metallic) · 0.08 · specular · ((1 − specularTint) · white + specularTint · baseColor/lum(baseColor)) + metallic · baseColor,
and Gs, Ds, Fc, Gc, Dc expanded exactly as defined in Section 10.