
cse3431-lecture13-raytracing

Z-buffer algorithm

for each polygon in model
    project vertices of polygon onto viewing plane
    for each pixel inside the projected polygon
        calculate pixel colour
        calculate pixel z-value
        compare pixel z-value to value stored for pixel in z-buffer
        if pixel is closer, draw it in frame-buffer and z-buffer
    end
end
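The loop above can be sketched in a few lines. This is a minimal illustration, not production code: fragments are assumed to arrive already projected and rasterized as (x, y, z, colour) tuples, smaller z means closer, and all names are hypothetical.

```python
# Minimal z-buffer sketch: keep, per pixel, only the closest fragment seen so far.

WIDTH, HEIGHT = 4, 4
FAR = float("inf")

frame_buffer = [[None] * WIDTH for _ in range(HEIGHT)]
z_buffer = [[FAR] * WIDTH for _ in range(HEIGHT)]

def draw_fragment(x, y, z, colour):
    """Write the fragment only if it is closer than what is stored."""
    if z < z_buffer[y][x]:
        z_buffer[y][x] = z
        frame_buffer[y][x] = colour

# Three fragments landing on the same pixel, in arbitrary depth order:
draw_fragment(1, 1, 5.0, "red")
draw_fragment(1, 1, 2.0, "blue")   # closer: overwrites red
draw_fragment(1, 1, 9.0, "green")  # farther: rejected
```

Note that the result is independent of the order in which fragments arrive, which is what makes the z-buffer fit the feed-forward pipeline below.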


Z-buffer Graphics Pipeline

Modeling transformation → Viewing transformation → Projection transformation →
Perspective division → Viewport transformation → Rasterization

Coordinate systems along the pipeline: OCS → WCS → VCS → CCS → NDCS → DCS
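The last two coordinate-system hops (CCS → NDCS → DCS) can be sketched directly. This is a hedged illustration assuming the usual conventions (NDC in [-1, 1], viewport origin at the lower left); the function names are made up for this sketch.

```python
# Perspective division maps clip coordinates (CCS) to normalized device
# coordinates (NDCS); the viewport transformation maps NDCS to device/pixel
# coordinates (DCS).

def perspective_divide(x, y, z, w):
    """Clip coordinates -> normalized device coordinates in [-1, 1]."""
    return (x / w, y / w, z / w)

def viewport_transform(nx, ny, width, height):
    """NDC [-1, 1] -> pixel coordinates [0, width) x [0, height)."""
    px = (nx + 1.0) * 0.5 * width
    py = (ny + 1.0) * 0.5 * height
    return (px, py)

ndc = perspective_divide(2.0, -1.0, 0.5, 2.0)          # (1.0, -0.5, 0.25)
pixel = viewport_transform(ndc[0], ndc[1], 640, 480)   # (640.0, 120.0)
```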

Global Illumination

[Image: global-illumination scene rendered by LuxRender, © Simon Wendsche]

Light Transport Equation (LTE)

• Integral over directions

Physics-based Rendering

Lo(p, ωo) = Le(p, ωo) + ∫_{S²} f(p, ωo, ωi) Li(p, ωi) |ωi · n| dωi

Lo : Outgoing radiance at p in direction ωo
Le : Emitted radiance at p in direction ωo
Li : Incident radiance at p from direction ωi
n : Unit normal at p
f : Bidirectional scattering function
S² : The unit sphere centered at p

Bidirectional Scattering Distribution Function
• BRDF: Bidirectional Reflectance Distribution Function
• BTDF: Bidirectional Transmittance Distribution Function

BSDF

fr(p, ωo, ωi, λ) = fr(p, φi, θi, φr, θr, λ)

Examples of a few simple diffuse BRDFs (Wikipedia)

BSSRDF

L(p, ωo) = ∫_A ∫_{S²} S(p', ωi, p, ωo) cos θi dωi dA

• Bidirectional scattering-surface distribution function S()
• We now need to integrate over surface area as well

Rendering Equation
Simple but difficult to solve
• High dimensionality
  – b is a function of 6 parameters
  – ρ is a function of 9 parameters
  – and we have not even used a variable for colour (wavelength of light)

Solutions based on
• simplifying assumptions
• stochastic sampling
• FEM (finite element methods)

One solution
Ray-tracing

Open source ray tracers
• POV-Ray: www.povray.org
• LuxRender: http://www.luxrender.net
• MegaPov

Raytracing
Offline high-quality rendering
• Real-time versions for limited-complexity scenes

Tuned for specular and transparent objects
• Partly based on physics (optics)

Main idea:
• A pixel should have the colour of the object point that projects to it.

Forward and Backward methods
Forward: from light sources to eye
Backward: from eye to light sources

Our version
There are different variants of the backwards RT method that make different assumptions and simplifications.

We will study a specific version that is
• Efficient
• Simple to implement
• Educational

Original Backwards Raytracing
Turner Whitted 1979

Assumptions
• Reflection and refraction only along ideal direction cones
• Lambertian diffuse surfaces (no custom BRDFs)
• Diffuse objects do not receive reflected light

for each pixel on screen
    determine ray from eye through pixel
    find closest intersection of ray with an object
    compute direct illumination (shadow ray)
    trace reflected and refracted ray, recursively
    calculate pixel colour, draw pixel
end


Scene

[Diagram: Eye and Light in front of four spheres SA, SB, SC, SD]

SA shiny, transparent
SB, SD diffuse, opaque
SC shiny, opaque

Three sources of light

The light that point PA emits to the eye comes from:
– light sources
– other objects (reflection)
– other objects (refraction)


Directly from a light source
Local illumination model:

I = Ia + Idiff + Ispec
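The local model I = Ia + Idiff + Ispec can be sketched as a Phong-style function. This is a hedged illustration with scalar intensities only; the coefficient names (ka, kd, ks) and helper functions are conventional but chosen for this sketch, not taken from the course code.

```python
# Local illumination at a point: ambient + diffuse + specular.
# n: surface normal, l: direction to light, v: direction to eye (all 3-tuples).
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def local_illumination(n, l, v, ka, kd, ks, shininess, ia, il):
    n, l, v = normalize(n), normalize(l), normalize(v)
    ambient = ka * ia
    diffuse = kd * il * max(dot(n, l), 0.0)
    # Mirror reflection of l about n, for the specular lobe:
    r = tuple(2.0 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = ks * il * max(dot(r, v), 0.0) ** shininess
    return ambient + diffuse + specular

# Head-on case: light and eye along the normal, so every term is at its maximum.
i = local_illumination((0, 0, 1), (0, 0, 1), (0, 0, 1),
                       ka=0.1, kd=0.5, ks=0.4, shininess=10, ia=1.0, il=1.0)
```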


Reflection
What is the colour that is reflected to PA and that PA reflects back to the eye?
Whatever comes from the direction that eventually reflects towards the eye.


Reflection
That direction gets light from point PC.
What is the colour of PC?


Reflection
What is the colour of PC?
Just like PA: raytrace PC, i.e. compute the three contributions from
• Light sources
• Reflection
• Refraction


Refraction
Transparent materials

How do we compute the refracted contribution?

We raytrace the refracted ray:
1. Lights
2. Reflection
3. Refraction
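The reflected and refracted directions at a hit point can be sketched as follows. This assumes d is the incoming unit direction, n the unit normal facing the incoming ray, and eta = n1/n2 the ratio of refractive indices; the function names are chosen for this sketch.

```python
# Reflected direction (mirror about the normal) and refracted direction
# (Snell's law), as used when spawning secondary rays at an intersection.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(d, n):
    """Mirror the incoming direction d about the normal n."""
    k = 2.0 * dot(d, n)
    return tuple(di - k * ni for di, ni in zip(d, n))

def refract(d, n, eta):
    """Refracted direction by Snell's law; None on total internal reflection."""
    cos_i = -dot(d, n)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * di + (eta * cos_i - cos_t) * ni for di, ni in zip(d, n))

# Straight-on hit: the ray reflects straight back and refracts straight through.
r = reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))
t = refract((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), 1.0 / 1.5)
```

Returning None on total internal reflection mirrors what the raytracer must do: skip the refracted contribution for that ray.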


What have we ignored?
Diffuse objects do not receive light from other objects.

Three sources of light together

The colour that the pixel is assigned comes from:
– light sources
– other objects (reflection)
– other objects (refraction)

It is more convenient to trace the rays from the eye to the scene (backwards).


Backwards Raytracing Algorithm
For each pixel construct a ray: eye → pixel

raytrace( ray )
    P = compute_closest_intersection(ray)
    color_local = ShadowRay(light1, P) + … + ShadowRay(lightN, P)
    color_reflect = raytrace(reflected_ray)
    color_refract = raytrace(refracted_ray)
    color = color_local + kre*color_reflect + kra*color_refract
    return( color )

How many levels of recursion do we use?

The more the better; infinite reflections at the limit.

Stages of raytracing
We need to do the following:
• Set the camera and the image plane
• Compute a ray from the eye to every pixel and trace it in the scene
• Compute object-ray intersections
• Cast shadow, reflected and refracted rays at each intersection

Setting up the camera

Set the viewport and resolution in camera coordinates.

Image parameters
• Width 2W, Height 2H
• Number of pixels nCols x nRows

Camera parameters (view point)
• Camera coordinate system (eye, u, v, n)
• Image plane at n = -N

Pixel coordinates in camera coordinate system

Lower left corner of pixel P(r,c) has coordinates in camera space:

uc = W(2c/nCols - 1),  vr = H(2r/nRows - 1)

Ray through pixel

Camera coordinates: P(r,c) = (uc, vr, -N)
World coordinates: P(r,c) = eye - N n + uc u + vr v

ray(r,c,t) = eye + t (P(r,c) - eye)

ray(r,c,t) = eye + t ( -N n + W(2c/nCols - 1) u + H(2r/nRows - 1) v )
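The ray construction above translates almost line for line into code. This is a sketch under an example camera (eye at the origin, axis-aligned basis u, v, n, looking down -z); the helper names and the specific W, H, N values are illustrative only.

```python
# Eye ray through pixel (r, c):
# P(r,c) = eye - N n + W(2c/nCols - 1) u + H(2r/nRows - 1) v

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(s, a):
    return tuple(s * x for x in a)

def ray_through_pixel(r, c, eye, u, v, n, W, H, N, nCols, nRows):
    """Return (origin, direction) of the ray from the eye through pixel (r, c)."""
    uc = W * (2.0 * c / nCols - 1.0)
    vr = H * (2.0 * r / nRows - 1.0)
    # Pixel position in world coordinates:
    p = add(eye, add(scale(-N, n), add(scale(uc, u), scale(vr, v))))
    direction = tuple(pi - ei for pi, ei in zip(p, eye))
    return eye, direction

# Centre pixel of a 640 x 240 image: the ray goes straight down the view axis.
origin, d = ray_through_pixel(
    r=120, c=320, eye=(0.0, 0.0, 0.0),
    u=(1, 0, 0), v=(0, 1, 0), n=(0, 0, 1),
    W=1.0, H=0.75, N=1.0, nCols=640, nRows=240)
```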

Ray-object intersections
Objects
• Spheres, cylinders, surfaces, etc.
• Any object for which we can mathematically compute intersections with lines

For us
• Only affine-transformed spheres


Ray – Canonical sphere intersection

Pray(t) = S + ct

Sphere(P) = |P|² - 1 = 0

Sphere(ray(t)) = 0 ⇒
|S + ct|² - 1 = 0 ⇒ (S + ct)·(S + ct) - 1 = 0 ⇒
|c|² t² + 2(S·c) t + |S|² - 1 = 0

This is a quadratic equation in t.

Solving a quadratic equation

Note!
Can I use homogeneous coordinates?
No. The equation must be solved in Cartesian space.
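Solving |c|² t² + 2(S·c) t + |S|² - 1 = 0 for t can be sketched as below. This is a minimal illustration in Cartesian (not homogeneous) coordinates; the function name is chosen for this sketch.

```python
# Ray(t) = S + c t against the canonical (unit) sphere at the origin.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def intersect_unit_sphere(S, c):
    """Return the smallest t >= 0 where the ray hits, or None if it misses."""
    A = dot(c, c)
    B = 2.0 * dot(S, c)
    C = dot(S, S) - 1.0
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return None                      # ray misses the sphere
    sq = math.sqrt(disc)
    t1 = (-B - sq) / (2.0 * A)           # nearer root
    t2 = (-B + sq) / (2.0 * A)
    for t in (t1, t2):                   # first intersection in front of S
        if t >= 0.0:
            return t
    return None

# Ray from (0, 0, 5) looking down -z hits the unit sphere at t = 4 (z = 1):
t_hit = intersect_unit_sphere((0.0, 0.0, 5.0), (0.0, 0.0, -1.0))
```

Returning the smaller non-negative root implements the "first intersection, t1 < t2" rule discussed next.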

First intersection?

[Diagram: Ray(t) parameterized from t = 0 to t = ∞, with the intersections marked along the ray]
First intersection?
The first intersection is the one with the smaller parameter: t1 < t2.

Transformed primitives?
• That was a canonical sphere.
• Where does S + ct hit the transformed sphere G = T(F)?

Inverse transformed ray
• Apply the inverse transformation T⁻¹ to the ray instead of the object.
• Drop the 1 and 0 homogeneous coordinates to get S' + c't.

So ... for each object
• Inverse transform the ray to get S' + c't.
• Find the intersection t_h between the inverse ray and the canonical object.
• Use t_h in the untransformed ray S + ct to find the intersection.

Final Intersection

r̃(t) = M⁻¹ (Sx, Sy, Sz, 1)ᵀ + t M⁻¹ (cx, cy, cz, 0)ᵀ = S̃' + c̃' t

Shadow ray
• For each light, intersect the shadow ray with all objects.
• If no intersection is found, apply local illumination at the intersection.
• If the point is in shadow, that light makes no contribution.

Reflected ray
Raytrace the reflected ray Rayrf(t).

Refracted ray
Raytrace the refracted ray (Snell's law).

Add all together
• color(r,c) = color_shadow_ray + Kre*color_re + Kra*color_ra

Efficiency issues
Computationally expensive
• avoid intersection calculations
  – Voxel grids
  – BSP trees
  – Octrees
  – Bounding volume trees
• optimize intersection calculations
  – try recent hit first
  – reuse info from numerical methods

Summary: Raytracing
Recursive algorithm

function Main
    for each pixel (c,r) on screen
        determine ray r(c,r) from eye through pixel
        ray.setDepth(1)
        color(c,r) = raytrace(r(c,r))
    end for
end

function raytrace(ray)
    if (ray.depth() > MAX_DEPTH) return black

    P = closest intersection of ray with all objects
    if ( no intersection ) return backgroundColor
    clocal = Sum( shadowRays(P, Light_i) )
    cre = raytrace(r_re)
    cra = raytrace(r_ra)
    return ( clocal + kre*cre + kra*cra )
end
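The recursive algorithm above can be turned into a small runnable skeleton. This is a hedged sketch, not the course's reference code: it uses scalar intensities instead of RGB, unit spheres only, omits shadow rays and refraction for brevity, and all scene contents and names are made up.

```python
# Recursive backwards raytracing skeleton: local colour + kre * reflected colour,
# cut off at MAX_DEPTH exactly as in the pseudocode above.
import math

MAX_DEPTH = 3
BACKGROUND = 0.0
EPS = 1e-6            # avoid re-hitting the surface we just left

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Scene: unit spheres given by (center, local_colour).
scene = [((0.0, 0.0, -5.0), 0.8)]

def hit_sphere(S, c, center):
    """Smallest t > EPS where ray S + ct hits a unit sphere at center, else None."""
    o = tuple(si - ci for si, ci in zip(S, center))
    A, B, C = dot(c, c), 2.0 * dot(o, c), dot(o, o) - 1.0
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return None
    sq = math.sqrt(disc)
    for t in ((-B - sq) / (2 * A), (-B + sq) / (2 * A)):
        if t > EPS:
            return t
    return None

def raytrace(S, c, depth=1, kre=0.3):
    if depth > MAX_DEPTH:
        return 0.0                               # "return black"
    best = None
    for center, col in scene:                    # closest intersection
        t = hit_sphere(S, c, center)
        if t is not None and (best is None or t < best[0]):
            best = (t, center, col)
    if best is None:
        return BACKGROUND
    t, center, c_local = best
    p = tuple(si + t * ci for si, ci in zip(S, c))      # hit point
    n = tuple(pi - ci for pi, ci in zip(p, center))     # unit-sphere normal
    k = 2.0 * dot(c, n)
    refl = tuple(ci - k * ni for ci, ni in zip(c, n))   # reflected direction
    return c_local + kre * raytrace(p, refl, depth + 1)

colour = raytrace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0))    # ray that hits the sphere
```

The EPS offset stands in for the usual self-intersection guard; without it the reflected ray would immediately re-intersect its own surface at t = 0.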

Advanced concepts
Participating media
Translucency
Sub-surface scattering (e.g. human skin)
Photon mapping

Photon Mapping
(slides courtesy of Shawn Singh)

Create a sampling of the illumination
– For scenes that do not change, it can be pre-computed
– View independent

Photon Mapping
• "Photons" are sample points traced from light sources
• For every ray, a "radiance estimate" is computed using photons

[Diagram: a light source emits photons into the scene geometry; camera rays gather a k-nearest-neighbour radiance estimate]
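The k-nearest-neighbour radiance estimate can be sketched in its simplest form: gather the k photons closest to the query point and average their power over the disc that just encloses them. This is a toy illustration; the photon data, the brute-force sort (a real photon map uses a kd-tree), and all names are made up.

```python
# Toy k-nearest-neighbour radiance estimate over a flat list of photons.
import math

# Photons: (position, power) pairs, e.g. deposited on the z = 0 floor plane.
photons = [((0.1, 0.0, 0.0), 0.2), ((0.0, 0.2, 0.0), 0.2),
           ((0.3, 0.1, 0.0), 0.2), ((5.0, 5.0, 0.0), 0.2)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def radiance_estimate(p, k):
    """Average the k nearest photon powers over the enclosing disc area."""
    nearest = sorted(photons, key=lambda ph: dist(ph[0], p))[:k]
    r_k = dist(nearest[-1][0], p)        # radius of the gathering disc
    total_power = sum(power for _, power in nearest)
    return total_power / (math.pi * r_k * r_k)

estimate = radiance_estimate((0.0, 0.0, 0.0), k=3)
```

The brute-force sort here is exactly the per-query search that the "Improve performance of each k-nearest neighbor query" references below aim to accelerate.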

Resources
• Real-time Ray Tracing

Our implementation based on [Wald 2004]

• GPU Photon Mapping
[Purcell et al 2003], [Larsen and Christensen 2004]

• Specialized Hardware
[Steinhurst 2007], [Singh 2007]

• Improve performance of each k-nearest neighbor query
[Ma and McCool 2002], [Gunther et al 2004], [Wald et al. 2004]

• Improve coherence of queries
[Steinhurst et al. 2005], [Havran et al. 2005]

• Reverse photon mapping
[Havran et al 2005]

A Useful Analogy
Ray Tracing
• Each ray performs a search through scene geometry
• Each search performs many ray-triangle intersection operations
• SIMD packets

Photon Mapping
• Each query point performs a search through the photon map
• Each search performs many shader operations
• SIMD packets ???

Raytracing summary

View dependent
Computationally expensive
Good for refraction and reflection effects