ME-C3100 Computer Graphics, Fall 2016 Lehtinen / Kemppinen, Ollikainen, Granskog
Programming Assignment 5: Ray Tracing
Due Sun Dec 4th at 23:59.
In this assignment, you will first implement a basic ray tracer. As seen in class, a ray tracer sends a ray for each pixel and intersects it with all the objects in the scene. Your ray tracer will support orthographic and perspective cameras as well as several primitives (spheres, planes, and triangles). You will also have to support several shading modes and visualizations (constant and Phong shading, depth and normal visualization). Then you will extend the implementation with recursive ray tracing with shadows and reflective materials.
Requirements (maximum 10p) on top of which you can do extra credit
1. Generating rays; ambient lighting (1p)
2. Visualizing depth (0.5p)
3. Perspective camera (0.5p)
4. Phong shading; directional and point lights (3p)
5. Plane intersection (0.5p)
6. Triangle intersection (1p)
7. Shadows (1p)
8. Mirror reflection (1p)
9. Antialiasing (1.5p)

1 Getting Started
This time, in addition to being a graphical OpenGL application, the assignment can be run as a console application that takes arguments on the command line, reads a scene file and outputs a PNG image. There’s an interactive preview mode that lets you fly around the scene and ray trace an image from any point, but for any test you want to repeat, it’s more convenient to use a test script. We supply such scripts in the exe folder; you can copy and modify them to make your own. As usual, you also get an example.exe binary whose results you can compare to yours. The scene files are plain text, so you can easily open and read them to know exactly what a particular scene is supposed to be and what features it uses. You won’t have to write any loading code.
The interactive preview lets you move around using the WASD keys and look around with the mouse by dragging with the right mouse button down. There are buttons and sliders for most of the important ray tracer and camera settings. Pressing enter renders an image using the ray tracer from the current viewpoint and using all of the current settings. Pressing space toggles between the ray traced image and interactive preview. If you click anywhere on the ray traced image, a debug visualisation ray is traced and its path (and intersection normals) appears as line segments in the preview – this is especially useful when debugging geometry intersections, reflections and refractions.
The scripts work as follows: render_default.bat renders a single scene with the default settings. You can also drag a scene file onto the bat file to start it. render_options.bat also renders a single scene, but it takes a set of arguments so you can change the rendering settings. render_all.bat uses these and contains manually tweaked details for some scenes. You can add or change some of the calls in it to render a specific scene in higher resolution or add a test scene of your own, for example. The example folder includes versions of the scripts that call the model solution and render into the same folder as your code but with different file names so you can compare the results.
If you plan on doing extra credit, it’s a very good idea to create a test scene where your new feature is clearly visible (and maybe have two render calls to it; one with and one without the feature). These already exist for some of the recommended extras. Also add the scenes to your render_all.bat.
The batch files assume that you have an exe file compiled in x64 Release mode. Switch that on in Visual Studio, or edit the exe path accordingly (.bats are just text files).
Note that this assignment is more work than the previous ones. Please start as early as possible.
2 Application structure
The render function in main.cpp is the main driver routine for your application.
The base code is designed to render individual scanlines in parallel on multiple processor cores by using OpenMP. It speeds up rendering a lot and we recommend you use it. It is initially disabled to prevent any surprises; to enable it, uncomment the line #pragma omp parallel for inside main.cpp, then go to Project -> Properties -> Configuration -> C/C++ -> Language and set OpenMP support on. If you choose to enable multithreading, you have to keep your own code thread-safe. All the requirement code should be thread-safe if you write it naturally, but with some extras like Film/Filter, special care will be needed to keep them thread-safe. If you get strange bugs, first try disabling multithreading to rule it out as a cause.
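For reference, the parallelized render loop looks roughly like this (a minimal sketch; image, width, height and tracePixel are illustrative names, not the exact base code API):

#pragma omp parallel for
for (int y = 0; y < height; ++y) {          // scanlines are independent
    for (int x = 0; x < width; ++x) {
        Vec3f color = tracePixel(x, y);     // must only touch thread-local state
        image.setPixel(x, y, color);        // distinct pixels, so writes don't race
    }
}

As long as each iteration only reads shared scene data and writes its own pixels, this pattern is thread-safe.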
The different shape primitives are designed in an object hierarchy. A generic Object3D class serves as the parent class for all 3D primitives. The individual primitives (such as Sphere, Plane, Triangle, Group, and Transform) are subclassed from the generic Object3D.
We provide you with a Ray class and a Hit class to manipulate camera rays and their intersection points, and an abstract Material class. A Ray is represented by its origin and direction vectors. The Hit class stores information about the closest intersection point and normal, the value of the ray parameter t, and a pointer to the Material of the object at the intersection. The Hit data structure must be initialized with a very large t value (such as FLT_MAX). It is modified by the intersection computation to store the new closest t and the Material of the intersected object.
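In other words, the intended usage pattern is roughly the following (a hypothetical sketch; the exact constructor and accessor signatures are assumptions, not the base code API):

Hit hit(FLT_MAX, nullptr);               // "no intersection yet": t = FLT_MAX
Ray ray = camera->generateRay(point);    // origin and direction
if (group->intersect(ray, hit, camera->getTMin())) {
    float t = hit.getT();                // ray parameter of the closest hit
    Material* m = hit.getMaterial();     // material at the intersection
    Vec3f p = ray.getOrigin() + t * ray.getDirection();  // hit point
}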
3 Detailed instructions
R1 Generating rays; ambient lighting (1p)
Our first task is to cast some rays into a simple scene with a red sphere and an orthographic camera (r1+r2_01_single_sphere).
The abstract Camera class is subclassed by OrthographicCamera and PerspectiveCamera. The Camera class has two pure virtual methods:
• virtual Ray generateRay(const Vec2f& point) = 0;
• virtual float getTMin() const = 0;
The first is used to generate rays for each screen-space coordinate, described as a Vec2f. The direction of the rays generated by an orthographic camera is always the same, but the origin varies. The getTMin() method will be useful when tracing rays through the scene. For an orthographic camera, rays always start at infinity, so tmin will be a large negative value.
An orthographic camera is described by an orthonormal basis (one point and three vectors) and an image size (one float). The constructor takes as input the center of the image, the direction vector, an up vector, and the image size. The input direction might not be a unit vector and must be normalized. The input up vector might not be a unit vector or perpendicular to the direction; it must be adjusted to be orthonormal to the direction.
The third basis vector, the horizontal vector of the image plane, is deduced from the direction and the up vector (hint: remember your linear algebra and cross products).
The origins of the rays generated by the camera span the whole image plane. The screen coordinates vary from (−1, −1) to (1, 1). The corresponding world coordinates (where the origins live) vary from center − (size ∗ up)/2 − (size ∗ horizontal)/2 to center + (size ∗ up)/2 + (size ∗ horizontal)/2.
The camera does not know about screen resolution. Image resolution is handled in your main loop. For non-square image ratios, just crop the screen coordinates accordingly.
Implement the normalizedImageCoordinateFromPixelCoordinate method in Camera as well as OrthographicCamera::generateRay. Because the base code already lets you intersect rays with Group and Sphere objects, you should now see the sphere as a flat white circle. To complete the requirement, head to RayTracer::traceRay and add one line of code that creates ambient lighting for the object using the ambient light of the scene and the diffuse color of the object. After this is done, you should see the sphere in its actual color:
Figure 1: r1+r2_01_single_sphere: colors, depth, and normals
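Your OrthographicCamera::generateRay should end up looking something like this sketch (the member names center, direction, up, horizontal and size, and the point.x/point.y accessors, are assumptions about the base code):

Ray OrthographicCamera::generateRay(const Vec2f& point) {
    // point is in [-1,1]^2; map it onto the image plane in world space.
    Vec3f origin = center
                 + point.x * (size * 0.5f) * horizontal
                 + point.y * (size * 0.5f) * up;
    // All rays of an orthographic camera share the same (unit) direction.
    return Ray(origin, direction);
}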
There is also another test scene with five spheres that end up overlapping each other in the image:
Figure 2: r1+r2_02_five_spheres: colors, depth
R2 Visualizing depth (0.5p)
In the render function, implement a second rendering style to visualize the depth t of objects in the scene. Depth arguments to the application are given as -depth 9 10 depth_file.png, where the two numbers specify the range of depth values that should be mapped to shades of gray in the visualization (depth values outside this range should be clamped), and the filename specifies the output file. The depth rendering can be performed simultaneously with the normal output image rendering.
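The mapping itself is a simple linear ramp, along these lines (a hypothetical helper; whether near maps to white or black is a convention, so compare your output against example.exe):

#include <algorithm>

Vec3f depthToGray(float t, float depthMin, float depthMax) {
    float f = (t - depthMin) / (depthMax - depthMin);
    f = std::min(std::max(f, 0.0f), 1.0f);   // clamp values outside the range
    float v = 1.0f - f;                      // here: closer surfaces are brighter
    return Vec3f(v, v, v);
}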
See the ready-made scripts in the exe folder for details and good depth values for a few scenes. Feel free to fill in your own.
Note that the base code already supports normal visualization. Try visualizing surface normals by adding another input argument for the executable, -normals, to specify the output file for this visualization. This may prove useful when debugging shading and intersection code.
R3 Perspective camera (0.5p)
To complete this requirement, implement the generateRay method for PerspectiveCamera. Note that in a perspective camera, the value of tmin has to be zero to correctly clip objects behind the viewpoint.
Hint: In class, we often talk about a “virtual screen” in space. You can calculate the location and extents of this “virtual screen” using some simple trigonometry. You can then interpolate over points on the virtual screen in the same way you interpolated over points on the screen for the orthographic camera. Direction vectors can then be calculated by subtracting the camera center point from the screen point. Don’t forget to normalize! In contrast, if you interpolate over the camera angle to obtain your direction vectors, your scene will look distorted – especially for large camera angles, which will give the appearance of a fisheye lens. (The distance to the image plane and the size of the image plane are unnecessary. Why?)
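A sketch of the resulting method (the members center, direction, up, horizontal and the field of view fov, in radians, are assumptions about the base code):

#include <cmath>

Ray PerspectiveCamera::generateRay(const Vec2f& point) {
    // Fix the virtual screen's half-size to 1; trigonometry then gives the
    // screen's distance d from the camera: tan(fov/2) = 1/d.
    float d = 1.0f / tanf(0.5f * fov);
    // Screen point minus camera center, expressed in the camera basis:
    Vec3f dir = d * direction + point.x * horizontal + point.y * up;
    dir.normalize();                         // don't forget to normalize!
    return Ray(center, dir);
}

This also hints at the question above: scaling both the screen size and its distance leaves the ray direction unchanged, so only their ratio (the angle) matters.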
Figure 3: r3_spheres_perspective: a familiar scene again with a perspective camera (colors and normals)
R4 Phong shading; directional and point lights (3p)
• Provide an implementation for DirectionalLight::getIncidentIllumination to support directional lights.
• Implement diffuse shading in PhongMaterial::shade.
• Extend RayTracer::traceRay to make use of the new implementations. The class variable scene (in RayTracer) is a pointer to a SceneParser. Use the SceneParser to loop through the light sources in the scene. For each light source, ask for the incident illumination with Light::getIncidentIllumination.
Diffuse shading is our first step toward modeling the interaction of light and materials. Given the direction to the light L and the normal N, we can compute the diffuse shading term as a clamped dot product:

d = clamp(L · N) = L · N if L · N > 0, and 0 otherwise.

If the visible object has color c_object = (r, g, b), and the light source has color c_light = (L_r, L_g, L_b), then the pixel color is c_pixel = (r L_r d, g L_g d, b L_b d). Multiple light sources are handled by simply summing their contributions. We can also include an ambient light with color c_ambient, which can be very helpful for debugging; without it, parts facing away from the light source appear completely black. Putting this all together, the formula is:

c_pixel = c_ambient ∗ c_object + Σ_i [ clamp(L_i · N) ∗ c_light_i ∗ c_object ]

Color vectors are multiplied term by term. (The framework’s vector multiplication is already implemented as element-wise multiplication, so c_object * c_pixel is enough.)
Note that if the ambient light color is (1, 1, 1) and the light source color is (0, 0, 0), then you have constant shading.
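The diffuse term of one light then amounts to a few lines (a minimal sketch; the dot helper and parameter names are assumptions, and L and N must be unit vectors):

Vec3f diffuseTerm(const Vec3f& N, const Vec3f& L,
                  const Vec3f& lightColor, const Vec3f& diffuseColor) {
    float d = std::max(0.0f, N.dot(L));      // d = clamp(L . N)
    return d * lightColor * diffuseColor;    // element-wise color product
}

The ambient term c_ambient ∗ c_object is added once, outside the loop over lights.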
• Implement Phong shading in PhongMaterial::shade
• Implement PointLight::getIncidentIllumination
Directional lights have no falloff. That is, the distance to the light source has no impact on the intensity of light received at a particular point in space. With point light sources, the distance from the surface to the light source will be important. The getIncidentIllumination method in PointLight, which you should implement, will return the light color scaled with this distance factored in.
The shading equation in the previous section for diffuse shading can be written

I = A + Σ_i D_i

where

A = c_ambient ∗ c_object_diffuse
D_i = c_light_i ∗ clamp(L_i · N) ∗ c_object_diffuse

i.e., the computed intensity is the sum of an ambient and a diffuse term. Now, for Phong shading, you will have

I = A + Σ_i (D_i + S_i)

where S_i is the specular term for the i-th light source:

S_i = c_light_i ∗ k_s ∗ clamp(v · r_i)^q

Here, k_s is the specular coefficient, r_i is the ideal reflection vector of light i, v is the viewer direction (the direction to the camera), and q is the specular reflection exponent. k_s is the specularColor parameter in the PhongMaterial constructor, and q is the exponent parameter. Refer to the lecture notes for obtaining the ideal reflection vector.
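Put together, the specular term of one light looks roughly like this (a hedged sketch; dot and the parameter names are assumptions, and r = 2(N · L)N − L is the ideal reflection of the unit light direction L about the normal N):

#include <cmath>

Vec3f specularTerm(const Vec3f& N, const Vec3f& L, const Vec3f& v,
                   const Vec3f& lightColor, const Vec3f& ks, float q) {
    Vec3f r = 2.0f * N.dot(L) * N - L;       // ideal reflection vector
    float s = std::max(0.0f, v.dot(r));      // clamp(v . r)
    return lightColor * ks * powf(s, q);     // element-wise color product
}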
R5 Plane intersection (0.5p)
Implement the intersect method for Plane.
With the intersect routine, we are looking for the closest intersection along a Ray, parameterized by t. tmin is used to restrict the range of intersection. If an intersection is found such that t > tmin and t is less than the value of the intersection currently stored in the Hit data structure, Hit is updated as necessary. Note that if the new intersection is closer than the previous one, both t and the Material must be modified. It is important that your intersection routine verifies that t >= tmin. tmin depends on the type of camera (see above) and is not modified by the intersection routine.

Figure 4: r4_diffuse_ball, r4_diffuse+ambient_ball: only diffuse, and both diffuse and ambient shading, respectively

Figure 5: r4_colored_lights: three different directional lights shading a white sphere

Figure 6: r4_exponent_variations: spheres with different specular exponents. The light is coming from somewhere in the top right.
You can look at Group and Sphere intersection code to see how those are implemented.
Then implement the intersect method for Plane and test with r5_spheres_plane. The r4_point_light_circle scene also includes a plane. Note that the preview approximates the plane with a finite square; it will look different from the ray traced plane.
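For a plane given in the implicit form normal · x = offset, the routine is short; here is a sketch (the member names and the Hit::set signature are assumptions about the base code):

bool Plane::intersect(const Ray& r, Hit& h, float tmin) {
    float denom = normal.dot(r.getDirection());
    if (fabsf(denom) < 1e-6f)
        return false;                        // ray parallel to the plane
    float t = (offset - normal.dot(r.getOrigin())) / denom;
    if (t > tmin && t < h.getT()) {          // closer than the stored hit?
        h.set(t, material, normal);          // update t, material and normal
        return true;
    }
    return false;
}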
Figure 7: r4_exponent_variations_back: spheres with different specular exponents, seen from the back. Note that you’ll have to disable shadows when rendering from the GL window to get this look.
Figure 8: r4_point_light_circle, r4_point_light_circle_d2: constant and quadratic attenuation, respectively. Note that this scene requires transforms as well as sphere, plane and triangle intersections. The light on the plane will look the same without transforms as well, so pay attention to that.
Figure 9: r5_spheres_plane: a familiar scene again with a plane as a floor. Colors, depth, and normals.
R6 Triangle intersection (1p)
Implement the intersect method for Triangle. Simple test scenes are e.g. the plain cube scenes; more complicated ones (with many, many triangles) are the bunnies. The r4_point_light_circle scene also includes boxes composed of triangles, so you can try it too; it also verifies the shading result.
Use the method of your choice to implement the ray-triangle intersection: a general polygon with an in-polygon test, barycentric coordinates, etc. You can compute the normal by taking the cross product of two edges, but note that the normal direction for a triangle is ambiguous. We’ll use the usual convention that counter-clockwise vertex ordering indicates the outward-facing side. If your renderings look incorrect, just flip the cross product to match the convention.
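If you go the barycentric route, the standard Möller–Trumbore formulation of Cramer’s rule is a compact choice. A sketch (cross and dot are assumed Vec3f helpers), solving orig + t∗dir = (1−b−c)∗v0 + b∗v1 + c∗v2 for (t, b, c):

bool intersectTriangle(const Vec3f& v0, const Vec3f& v1, const Vec3f& v2,
                       const Vec3f& orig, const Vec3f& dir,
                       float& t, float& b, float& c) {
    Vec3f e1 = v1 - v0, e2 = v2 - v0;
    Vec3f p = dir.cross(e2);
    float det = e1.dot(p);
    if (fabsf(det) < 1e-8f) return false;    // ray parallel to the triangle
    float inv = 1.0f / det;
    Vec3f s = orig - v0;
    b = s.dot(p) * inv;                      // barycentric coordinate of v1
    if (b < 0.0f || b > 1.0f) return false;
    Vec3f q = s.cross(e1);
    c = dir.dot(q) * inv;                    // barycentric coordinate of v2
    if (c < 0.0f || b + c > 1.0f) return false;
    t = e2.dot(q) * inv;                     // caller still checks t against tmin
    return true;
}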
Figure 10: r6_bunny_mesh_200 and …_1000: bunnies consisting of 200 and 1000 triangles, respectively.
Figure 11: r6_cube_orthographic, r6_cube_perspective: a cube (consisting of just triangles) with the two different cameras
R7 Shadows (1p)
Next, you will add some global illumination effects to your ray caster. Once you cast secondary rays to account for shadows and reflection (plus refraction for extra credit), you can call your application a ray tracer. For this requirement, extend the implementation of RayTracer::traceRay to account for shadows. A new command line argument -shadows will indicate that shadow rays are to be cast.
To implement cast shadows, send rays from the hit point toward each light source and test whether the line segment joining the intersection point and the light source intersects an object. If so, the hit point is in shadow and you should discard the contribution of that light source. Recall that you must displace the ray origin slightly away from the surface, or equivalently set tmin to some ε. You might also want to add the shadow ray to the debug visualisation vector to make it visible among the other rays.
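Schematically, the test inside the light loop looks like this (a hedged sketch; getGroup, the Hit constructor and the exact getIncidentIllumination/shade signatures are assumptions about the base code):

const float kShadowEps = 1e-3f;              // offset to avoid self-shadowing
Vec3f dirToLight, lightColor;
float distToLight;
light->getIncidentIllumination(hitPoint, dirToLight, lightColor, distToLight);
Ray shadowRay(hitPoint, dirToLight);         // from the surface toward the light
Hit shadowHit(distToLight, nullptr);         // only blockers nearer than the light count
if (!scene->getGroup()->intersect(shadowRay, shadowHit, kShadowEps)) {
    color += material->shade(ray, hit, dirToLight, lightColor);  // not in shadow
}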
Figure 12: r7_simple_shadow: light coming from above
Figure 13: r7_colored_shadows: three different lights casting somewhat intersecting shadows
R8 Mirror reflection (1p)
To add reflection (and refraction) effects, you need to send secondary rays in the mirror (and transmitted) directions, as explained in lecture. The computation is recursive to account for multiple reflections and/or refractions.
In traceRay, implement mirror reflections by sending a ray from the current intersection point in the mirror direction. For this, you should implement the function:
Vec3f mirrorDirection(const Vec3f& normal, const Vec3f& incoming);
Trace the secondary ray with a recursive call to traceRay using a decremented value for the recursion depth. Modulate the returned color with the reflective color of the material at the hit point.
We need a stopping criterion to prevent infinite recursion – the maximum number of bounces the ray will make. This argument is set like so: -bounces 5. When you make a recursive traceRay call, you need to remember to decrement the bounce value.
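A sketch of both pieces (assuming incoming points toward the surface and both vectors are unit length; the reflectiveColor lookup and the traceRay signature are assumptions):

Vec3f mirrorDirection(const Vec3f& normal, const Vec3f& incoming) {
    return incoming - 2.0f * normal.dot(incoming) * normal;
}

// ...inside traceRay, after shading the hit point:
if (bounces > 0) {
    Ray reflected(hitPoint, mirrorDirection(normal, ray.getDirection()));
    Hit reflHit(FLT_MAX, nullptr);
    color += reflectiveColor *
             traceRay(reflected, 1e-3f, bounces - 1, refr_index, reflHit);
}

As with shadow rays, the small tmin keeps the reflected ray from immediately re-hitting the surface it starts on.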
The ray visualisation might be very useful for debugging reflections. In the preview application, click on the rendered result and fly around the scene to see how the rays have bounced.
The parameter refr_index in traceRay is the index of refraction for a material, needed for extra credit.
Figure 14: r8_reflective_sphere: shown here at four different levels of reflection (0, 1, 2, and 3 bounces) with weight 0.01
R9 Antialiasing (1.5p)
Next, you will add simple anti-aliasing to your ray tracer. You will use supersampling and filtering to alleviate jaggies and Moiré patterns.
For each pixel, instead of directly storing the color computed with RayTracer::traceRay into the Image class, you’ll compute many color samples (each computed with RayTracer::traceRay) and average them.
You are required to implement simple box filtering with uniform, regular, and jittered sampling. To use a sampler, provide one of the following as additional command line arguments:

• -uniform samples
• -regular samples
• -jittered samples

The box filter has already been implemented. You should implement the sampling in

• UniformSampler::getSamplePosition
• RegularSampler::getSamplePosition
• JitteredSampler::getSamplePosition
In your rendering loop (render in main.cpp), cast multiple rays for each pixel as specified by the argument (the innermost for loop iterates as many times as num_samples specifies). If you are sampling uniformly, the sample rays should be distributed in a uniform grid pattern within the pixel. If you are jittering samples, you should add a random offset (such that the sample stays within the appropriate grid location) to the uniform position. To get the final color for the pixel, simply average the resulting samples.
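A possible JitteredSampler::getSamplePosition, sketched for sample index i out of n (assumed to be a perfect square); it returns an offset in [0,1)^2 within the pixel, and dropping the random term gives the regular sampler:

#include <cmath>
#include <random>

Vec2f jitteredSample(int i, int n, std::mt19937& rng) {
    int side = (int)(sqrtf((float)n) + 0.5f);  // grid resolution, e.g. 3 for n = 9
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    int gx = i % side, gy = i / side;          // which grid cell this sample fills
    float x = (gx + u(rng)) / side;            // random offset inside the cell
    float y = (gy + u(rng)) / side;
    return Vec2f(x, y);
}

If you enable OpenMP, remember that the random number generator must be per-thread to stay thread-safe.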
Figure 15: r9_sphere_triangle, 200×200
Figure 16: r9_sphere_triangle, just 9×9 resolution: none, uniform, regular, and jittered sampling (sample count 9)
4 Extra Credit
Some of these extensions require that you modify the parser to take into account the extra specification required by your technique. Make sure that you create (and turn in) appropriate input scenes to show off your extension.
4.1 Recommended
• Implement refraction through transparent objects. See handout and code comments around where reflection is implemented. For full points, make sure you handle total internal reflection situations. (1-2p)
• Add simple fog to your ray tracer by attenuating rays according to their length. Allow the color of the fog to be specified by the user in the scene file. (1p)
• Add other types of simple primitives to your ray tracer, and extend the file format and parser if necessary. At least provide implementations for Transform and Box; there are skeletons for them in the base code. (3p)
• Make it possible to use arbitrary filters by filling in the addSample function of the class Film. Its function is to center a filter function at each incoming sample, see which pixel centers lie within the filter’s support, and add the color, modulated by the filter weight, into all those pixels. Further, accumulate the filter weight in the 4th color channel of the image to make it easy to divide the sum of weights out once the image is done. Demonstrate your filtering approach with tent and Gaussian filters. Be aware that the trivial Filter/Film implementation is not thread-safe. You have to disable OpenMP, or for more points, figure out how to keep your code thread-safe and make it concurrent. (1-3p)
4.2 Transparency
Given that you have a recursive ray tracer, it is now fairly easy to include transparent objects in your scene. The parser already handles material definitions that have an index of refraction and transparency color. Also, there are some scene files that have transparent objects that you can render with the sample solution to test against your implementation.

• Enable transparent shadows by attenuating light according to the traversal length through transparent objects. We suggest using an exponential on that length. (1.5p)

• Add the Fresnel term to reflection and refraction. (1p)

4.3 Advanced Texturing
So far you’ve only played around with procedural texturing techniques but there are many more ways to incorporate textures into your scene. For example, you can use a texture map to define the normal for your surface or render an image on your surface.
• Image textures: render an image on a triangle mesh based on per-vertex texture coordinates and barycentric interpolation. You need to modify the parser to add textures and coordinates. Some features you might want to support are tiling (normal tiling and with mirroring) and bilinear interpolation of the texture. (2-4p)
• Bump and Normal mapping: perturb (bump map) or look up (normal map) the normals for your surface in a texture map. This needs the above texture coordinate computation and derivation of a tangent frame, which is relatively easy. The hardest part is to come up with a good normal map image. Produce a scene demonstrating your work. (2-3p)
• Isotropic texture filtering for anti-aliasing using summed-area tables or mip maps. Make sure you compute the appropriate footprint (kernel size). This isn’t too hard, but of course, requires texture mapping. (Medium)
• Adding anisotropic texture filtering using EWA or FELINE on top of mip-mapping (a little tricky to understand, easy to program). (Easy)
4.4 Advanced Modeling
Your scenes have very simple geometric primitives so far. Add some new Object3D subclasses and the corresponding ray intersection code.
• Combine simple primitives into more interesting shapes using constructive solid geometry (CSG) with union and difference operators. Make sure to update the parser. Make sure you do the appropriate things for materials (this should enable different materials for the parts belonging to each primitive). (4-5p)
• Implement a torus or higher order implicit surfaces by solving for t with a numerical root finder. (2-3p)
• Raytrace implicit surfaces for blobby modeling. Implement Blinn’s blobs and their intersection with rays. Use regula falsi solving (binary search), compute the appropriate normal and create an interesting blobby object (debugging can be tricky). Be careful at the beginning of the search; there can be multiple intersections. (4-6p)
4.5 Advanced Shading
Phong shading is a little boring. I mean, come on, they can do it in hardware. Go above and beyond Phong. Check http://people.csail.mit.edu/addy/research/ngan05_brdf_supplemental_doc.pdf for cool parameters.
• Cook-Torrance or other BRDF (2p).
• Bidirectional Texture Functions (BTFs): make your texture lookups depend on the viewing angle. There are datasets available for this http://btf.cs.uni-bonn.de/. (3p)
• Write a wood shader that uses Perlin Noise. (2p)
• Add more interesting lights to your scenes, e.g. a spotlight with angular falloff. (1p)
• Replace RGB colors by spectral representations (just tabulate with something like one bin per 10nm). Find interesting light sources and material spectra and show how your spectral representation does better than RGB. (3-4p)
• Simulate dispersion (and rainbows). The rainbow is difficult, as is the Newton prism demo. (3-4p)
4.6 Global Illumination and Integration
Photons have a complicated life and travel a lot. Simulate interesting parts of their voyage.
• Add area light sources and Monte-Carlo integration of soft shadows. (4-5p)
• Add motion blur. This requires a representation of motion: 3 points if only the camera moves (not too difficult), 3 more points if scene objects can have independent motion (more work, more code design). We advise that you add a time variable to the Ray class and update transformation nodes to include a description of linear motion. Then all you need to do is transform each ray according to its time value.
• Depth of field from a finite aperture. (2-3p)
• Photon mapping. (Hard)
• Distribution ray tracing of indirect lighting (very slow). Cast tons of random secondary rays to sample the hemisphere around the visible point. It is advised to stop after one bounce. Sample uniformly or according to the cosine term (careful, it’s not trivial to sample the hemisphere uniformly). (3-5p)
• Irradiance caching (Hard).
• Path tracing with importance sampling, path termination with Russian Roulette, etc. (Hard)
• Metropolis Light Transport. Probably the toughest of all. Very difficult to debug, took a graduate student multiple months full time. (Very Hard)
• Raytracing through a volume. Given a regular grid encoding the density of a participating medium such as fog, step through the grid to simulate attenuation due to fog. Send rays towards the light source and take into account shadowing by other objects as well as attenuation due to the medium. This will give you nice shafts of light. (Hard)
4.7 Interactive Editing
• Allow the user to interactively model the scene using direct manipulation. The basic tool you need is a picking procedure to find which object is under the mouse when you click. Some coding is required to get a decent UI. But once you have the mouse click, just trace a ray to find the object. Then use this picker for translating objects, and for scaling and rotation. Allow the user to edit the radius and center of a sphere, and manipulate triangle vertices. All of this is easy once you’ve figured out a good architecture, but requires a significant amount of programming. (up to 7p)
4.8 Nonlinear Ray Tracing
We’ve had enough of linearity already! Let’s get rid of the limitation of linear rays.

• Mirages and other non-linear ray propagation effects: given a description of a spatially-varying index of refraction, simulate the non-linear propagation of rays. Trace the ray step by step, pretty much an Euler integration of the corresponding differential equation. Use an analytical or discretized representation of the index of refraction function. Add Perlin Noise to make the index of refraction more interesting. (Hard)

• Simulate the geometry of special relativity. You need to assign each object a velocity and to take into account the Lorentz metric. I suggest you recycle your transformation code and adapt it to create a Lorentz node that encodes velocity and applies the appropriate Lorentz transformation to the ray. Then intersection proceeds as usual. Surprisingly, this is not too difficult; that is, once you remember how special relativity works. In case you’re wondering, there does exist a symplectic raytracer (http://yokoya.naist.jp/paper/datas/267/skapps_0132.pdf) that simulates light transport near the event horizon of a black hole. (Hard)
4.9 Multithreaded and Distributed Ray Tracing
Raytracing complicated scenes takes a long time. Fortunately, it is easy to parallelize since each camera ray is independent. We already provide an easy OpenMP implementation for distributing the load on local processor cores, but you can take it further.
• Create a raytracer running on the GPU. Since replicating all requirement features would be a huge amount of work, you can make your GPU raytracer separate and give it only a smaller amount of functionality. You can use your choice of API – CUDA, OpenCL, GLSL shaders. We make an exception here and allow you to use technology that is not supported on the classroom computers; if necessary, we’ll call you in to demonstrate the code you submitted. (Medium/Hard)
• Distribute the render job to multiple computers in a brute force manner. Split the image into one sub-region per machine, send them off to individual machines for rendering and collect the results when done. (Hard)
4.10 Acceleration Techniques
• Use a Bounding Volume Hierarchy to accelerate your raytracer. (Hard)
4.11 More anti-aliasing

• Add blue-noise or Poisson-disk distributed random sampling and discuss in your README the differences you observe from random sampling and jittered sampling. (2p)

5 Submission
Make sure your code compiles and runs both in Release and Debug modes on Visual Studio 2015. Comment out any functionality that is so buggy it would prevent us seeing the good parts. Check that your README.txt (which you hopefully have been updating throughout your work) accurately describes the final state of your code.
Fill in whatever else is missing, including any feedback you want to share. We were not kidding when we said we prefer brutally honest feedback.
Package all the code, project and solution files required to build your submission, the README.txt and any screenshots, logs or other files you want to share into a ZIP archive.
Sanity check: look inside your ZIP archive. Are the files there? (Better yet, unpack the archive into another folder, and see if you can still open the solution, compile the code and run.)
Submit your archive to the MyCourses folder “Assignment 5”.