COMP5822M Coursework 2
1 Getting started
2.1 Prep and Debug
2.2 Modified Blinn-Phong
2.3 Physically-inspired model
2.4 Animated light position
2.5 Multiple light sources
2.6 Custom PBR model
3 Submission & Marking
A Demo Receipt
Coursework 2 focuses on shading, specifically lighting models. You will mainly write shaders in GLSL (although some setup in Vulkan is still required to get the necessary data into place). CW 2 utilizes the material covered by the first four exercises, the lectures, and (optionally) the previous coursework. In CW 2, you will:
• Implement modified Blinn-Phong shading for reference
• Implement a physically-based shading model
• Animate the light source, and potentially shade with multiple light sources
• Implement a custom PBR model with some of your own choices
Before starting work on the CW, make sure to study this document in its entirety and plan your work. Pay particular attention to Section 3, which contains information about submission and marking.
If you have not completed Exercises 1.1 through 1.4 and CW 1, it is heavily recommended that you do so before attacking CW 2. When requesting support for CW 2, it is assumed that you are familiar with the material demonstrated in the exercises! As noted in the module’s introduction, you are allowed to re-use any code that you have written yourself for the exercises and previous courseworks in the CW submission. It is expected that you understand all code that you hand in, and are able to explain its purpose and function when asked.
While you are encouraged to use version control software/source code management software (such as git or subversion), you must not make your solutions publicly available. In particular, if you wish to use Github, you must use a private repository. You should be the only user with access to that repository.
You are encouraged to build on the code provided with the coursework. If you have completed the exercises, you will already be familiar with the code and have some of the missing parts. If you wish to “roll your own”, carefully read the additional instructions that were presented in CW 1 (the same rules apply). In particular, do pay attention to the section that states that you must get an OK from the module instructor before submitting/demoing!
CW 2 uses the same project template as CW 1. See CW 1 and the exercises for details on this template.
(a) NewShip.obj (b) materialtest.obj
Figure 1: Starting point, prior to implementing any shading.
1 Getting started
CW 2 focuses on shading, and to keep things simple, the sample models use only solid per-material colors (no textures). Nevertheless, you have two choices for working with CW 2. If you have completed CW 1 successfully, you can continue to work on your code base from it (but make sure to keep the CW 1 solution around, and submit it separately!). In this case, download the cw2.zip and transfer your CW 1 solution there. Make sure to update your code to load one of the CW 2 models instead (see Figure 1; models are located in the assets/cw2 directory).
Warning: CW 2 uses a slightly different set of model.{hpp,cpp}, where the MaterialInfo struct has been updated to match the CW 2 requirements. This will likely require minor updates to your CW 1 code.
Alternatively, you can start with the cw2-alt.zip project. This project contains a pre-built DLL (Windows) / shared object (Linux) that implements a basic Vulkan rendering infrastructure to help you get going with CW 2 regardless of your status in CW 1.
Regardless of your choice, make sure you see something akin to Figure 1 when you run your program (you may have to move the camera a bit, depending on where you set the initial location – the models are centered around the origin).
The total achievable score for CW 2 is 30 marks. CW 2 is split into the tasks described in Sections 2.1 to 2.6. Each section lists the maximum marks for the corresponding task.
Do not forget to check your application with the Vulkan validation layers enabled. Incorrect Vulkan usage (even if the application runs otherwise) may result in deductions. Do not forget to also check for synchronization errors via the Vulkan Configurator tool discussed in the lectures and exercises.
General submission quality
Up to 4 marks are awarded based on general submission quality. Submission quality will be determined from several factors. Examples include overall readability/structure of your code (e.g., consistent style, naming and commenting), good coding practices, and how easy it is to build your code. Additionally, carefully read Section 3. Minor errors with respect to the instructions there may result in deductions of submission quality.
Your submission should solve the tasks presented in the coursework. Inclusion of large amounts of unnecessary code may result in deductions.
If you go digging into the material definitions of the OBJ files, you may find a few non-standard things. The provided OBJ files use a PBR extension, which defines a few new PBR-related material properties. You can find the details here. The OBJ loader that CW 2 uses, tinyobj, supports this extension.
Additionally, if you go digging into the source code (and later, into the solutions), you will find some more weirdness. In particular, the Pr (=roughness) material parameter’s value is used as shininess instead. Further, the authors of the models intended to use the standard specular color, Ks, as a “reflectivity” parameter. However, we will not be interpreting it that way.
(a) Normal (n) (b) View direction (v) (c) Light direction (l)
Figure 2: Visualization of the (normalized) normal vector, view direction and light direction. The visualization uses the materialtest.obj object, and the vectors are visualized with their world space values. You can use the fact that red maps to the x coordinate, green to y and blue to z to verify that the displayed values make sense – e.g., do the normals point in the direction you expect at each point?
2.1 Prep and Debug
Before you start implementing either shading model, you will need to do some prep work. First, decide which space you want to perform your shading computations in. The document will refer to this space as the shading space. The recommendation is to perform shading in world space, but you can pick a different space if you wish (make sure you’ve studied all tasks before deciding).
All shading should be done per fragment, i.e., in CW 2 we will not perform any per-vertex shading. Make sure you get the necessary data into the right place.
• Make sure you pass the normals through the vertex shader to the fragment shader. (You may continue to assume that the models are defined in world space, and hence omit the model-to-world transform.)
• Make sure that the fragment shader has access to the fragment’s position in the shading space.
• The fragment shader will need to have access to the camera’s position in the shading space. (Note: you can just extend the per-scene transform information to include the camera’s position and make sure it is accessible in both the SHADER_STAGE_VERTEX and SHADER_STAGE_FRAGMENT shader stages.)
• The fragment shader will need to have information about the scene’s lighting data (e.g., a per-scene ambient light value and information about one or more light sources; light sources are defined by their position and color).
• The fragment shader will need to get the correct material information.
For now, place the light source at the coordinates [0, 9.3, −3] in world space, and give it a color of [1.0, 1.0, 0.8]. Screenshots in this document use those settings. Information about the light must be passed to the relevant shaders using a uniform buffer/interface block (i.e., do not hard-code the light’s properties in the shaders).
Decide on a reasonable setup with descriptor sets and descriptor bindings. Introduce additional uniform buffer objects as necessary. Recall the std140 layout rules (see Lecture 15’s slides if you need a quick refresher).
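For example, the light data could be grouped into a single std140 uniform block. The sketch below is one possible layout only – the set/binding indices and member names are placeholders, not a required convention. Note that under the std140 rules a vec3 member is padded to 16 bytes, so using vec4 members avoids alignment surprises:

```glsl
// Fragment shader side -- illustrative layout, adapt to your own setup.
layout( set = 0, binding = 1 ) uniform ULight
{
	vec4 position; // .xyz = light position in the shading space
	vec4 color;    // .rgb = light color
	vec4 ambient;  // .rgb = per-scene ambient light value
} uLight;
```

The matching C++ struct on the host side should mirror this layout exactly (e.g., use four-component vectors there as well) so that no implicit padding mismatches occur.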
It is a good idea to verify that things are working as they should. Change your shader to visualize the per-fragment normals, view direction and light direction. Make sure the values behave as expected (e.g., should they change with the camera position or not?). You can compare your results to the screenshots in Figure 2.
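A minimal debug fragment shader for this might look as follows (a sketch – the input location and name are placeholders). Outputting the vector directly clamps negative components to black, as in Figure 2; remapping via n * 0.5 + 0.5 instead would make negative components visible too:

```glsl
#version 450

layout( location = 0 ) in vec3 v2fNormal; // assumed vertex shader output

layout( location = 0 ) out vec4 oColor;

void main()
{
	// Re-normalize: interpolation across the triangle denormalizes the vector.
	vec3 n = normalize( v2fNormal );

	// Direct output: red = x, green = y, blue = z; negatives clamp to black.
	oColor = vec4( n, 1.0 );
}
```

The same shader, with the vector replaced by the view or light direction, covers the other two visualizations.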
In your submission’s README, document the following:
• Your choice of shading space
• Your choice of descriptor set layout and descriptor bindings. Motivate this choice briefly.
2.2 Modified Blinn-Phong
Now, it’s time to implement the modified Blinn-Phong shading for reference. The modified version includes an extra normalization term that makes the model somewhat energy conserving when it comes to the specular highlights. This is a first step towards a more physically-inspired shading model.
You may be asked to briefly show the visualizations of the above values (and/or other debug visu- alizations) in the live version of your program in the demo session.
(a) NewShip.obj (b) materialtest.obj
Figure 3: Shading with the modified Blinn-Phong model. In the material test, there is no observable difference between the metals and dielectrics (non-metals). This is expected, as the Blinn-Phong model does not distinguish between metals and non-metals. (Instead, one could try to tweak the material specular to emulate the look of metallic materials; however, the author of the models here has clearly not done so.)
The modified Blinn-Phong model is defined as follows:
L_o = c_emit
    + c_ambient ⊗ c_diff
    + (n · l)+ (c_diff / π) ⊗ c_light
    + ((α_p + 2) / 8) (n · h)+^(α_p) (n · l)+ c_spec ⊗ c_light

    = c_emit + c_ambient ⊗ c_diff + [ c_diff / π + ((α_p + 2) / 8) (n · h)+^(α_p) c_spec ] (n · l)+ ⊗ c_light    (1)
where
• α_p — Material shininess
• c_emit — Material emissive color
• c_diff — Material diffuse color
• c_spec — Material specular color
• c_light — Light color
• c_ambient — Scene ambient light color
• n — Surface normal (normalized)
• l — Light direction (normalized), pointing towards the light
• v — View direction (normalized), pointing towards the camera/viewer
• h — Half vector (normalized), computed from the light and view directions
The ⊗ operator denotes an element-wise multiplication of two vectors/tuples, and (a · b)+ denotes a “clamped” dot product, (a · b)+ = max(0, a · b).
Implement the modified Blinn-Phong model (Equation (1)) and use it for shading.
Compare your output to Figure 3. If you need to debug your shaders, see Figure 4 for visualizations of some of the intermediate values (also check that the various values are behaving as they should!).
Important: In your submission, make sure to include a separate pair of shaders (vertex and fragment) that implement the Blinn-Phong shading. (If you use the CW 2 project template, use the BlinnPhong.vert and BlinnPhong.frag shaders).
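As a sketch, the fragment-side evaluation of Equation (1) might look like the following. All names, locations and constants here are placeholders – in your actual shader the material and light values should arrive via your own uniform buffers, not hard-coded constants:

```glsl
#version 450

// Placeholder inputs -- adapt locations/names to your own pipeline.
layout( location = 0 ) in vec3 v2fNormal;   // shading-space normal
layout( location = 1 ) in vec3 v2fPosition; // shading-space fragment position

layout( location = 0 ) out vec4 oColor;

const float kPi = 3.1415926535;

// Equation (1): modified Blinn-Phong with the (alpha_p + 2)/8 normalization.
vec3 blinnPhong( vec3 cEmit, vec3 cDiff, vec3 cSpec, float alphaP,
                 vec3 cAmbient, vec3 cLight, vec3 n, vec3 l, vec3 v )
{
	vec3 h = normalize( l + v );
	float nl = max( dot( n, l ), 0.0 ); // (n . l)+
	float nh = max( dot( n, h ), 0.0 ); // (n . h)+

	vec3 brdf = cDiff / kPi
	          + (alphaP + 2.0) / 8.0 * pow( nh, alphaP ) * cSpec;

	return cEmit + cAmbient * cDiff + brdf * nl * cLight;
}

void main()
{
	vec3 n = normalize( v2fNormal );

	// Stand-in camera position; pass the real one via a uniform buffer.
	vec3 v = normalize( vec3( 0.0, 2.0, 5.0 ) - v2fPosition );
	// Light position from Section 2.1; again, do not hard-code in practice.
	vec3 l = normalize( vec3( 0.0, 9.3, -3.0 ) - v2fPosition );

	vec3 color = blinnPhong( vec3(0.0), vec3(0.8), vec3(0.04), 32.0,
	                         vec3(0.02), vec3(1.0, 1.0, 0.8), n, l, v );
	oColor = vec4( color, 1.0 );
}
```

Note the re-normalization of the interpolated normal and the clamped dot products, both of which are easy to forget.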
Note: The factor (α_p + 2)/8 is the normalization factor mentioned earlier. Additionally, the specular term is multiplied with (n · l)+. You can (optionally) read more about the derivation of this modified model in the course notes for Crafting Physically Motivated Shading Models for Game Development by Hoffman [2010], where the reasons for the additional terms are explained in detail. The version presented there includes an additional Fresnel term that is omitted here. Computation of the normalization factor involves some simplification and assumptions, meaning that you can occasionally find different variants of it.

(a) (n · l)+ (b) (n · h)+ (c) Blinn-Phong specular term
Figure 4: Visualization of some of the components of the modified Blinn-Phong model.

2.3 Physically-inspired model

You will now implement a slightly more complicated shading model, specifically a physically-inspired model that has a Lambertian diffuse component and a specular component based on a microfacet BRDF using the Blinn-Phong normal distribution function.
To introduce the model, we will start with the Rendering Equation, as shown in the lectures:

L_o = L_e + ∫_Ω f_r L_i(ω) (n · l) dω
We are dealing with discrete point lights (the only points in space that emit light) plus ambient light. Consequently, we can sum over the light sources instead of integrating over a hemisphere (where the ambient light approximates all indirect illumination):

L_o = L_e + L_ambient + Σ_n f_r c_light,n (n · l_n)+

For now, we’ll focus on a single light source, meaning we can drop the sum (and the index n) altogether:

L_o = L_e + L_ambient + f_r c_light (n · l)+    (2)
The general form of the equation is already somewhat reminiscent of the modified Blinn-Phong model introduced earlier. In short, the BRDF (f_r) describes how much of the incoming light (c_light) is reflected towards the camera/viewer.
For the BRDF, we will use a general microfacet model for isotropic materials [Burley, 2012]:

f_r(l, v) = L_diffuse + ( D(n, h) F(l, h) G(n, l, v) ) / ( 4 (n · v)+ (n · l)+ ),
where L_diffuse is the diffuse contribution, and the specular contribution is constructed from the Fresnel term F, the normal distribution function D (Figure 5a), and the masking function G (Figure 5b).
In theory, we need to differentiate between metals and dielectrics (non-metals), as these behave quite differently. Metals only reflect on the surface, meaning that they have a zero diffuse contribution. Additionally, the (specular) reflection from a metal is tinted. In contrast, dielectrics have both a diffuse and specular contribution, where only the diffuse one is colored. The specular reflection tends to have the same spectrum/color as the incoming light.
In practice, it is possible to merge the two cases into a single approximative method. The underlying idea revolves around the observation that the specular base reflectivity, F_0, of dielectrics is relatively similar for most materials. The value 0.04 is frequently used as an approximation across the board. For dielectrics, the material’s color then determines the diffuse color. For metals, the material’s color is instead used to control the specular base reflectivity:
F_0 = (1 − M) [0.04, 0.04, 0.04] + M c_mat
M describes the metalness of the material. In reality, a material is either a metal (M = 1) or a dielectric (M = 0). However, in computer graphics, we may encounter cases where this isn’t true – for example, when the material properties of a metal and a dielectric are interpolated between. The above formula deals with non-binary metalness.
The amount of specular reflection is given by the Fresnel term F, which we evaluate using the Schlick approximation:

F(v) = F_0 + (1 − F_0) (1 − h · v)^5.
(a) NDF (b) Geometry term
Figure 5: Illustration of the D and G components of the microfacet model. The NDF, D(ω), describes the distribution of microfacets. It specifically tells us the density of facets that are oriented such that their normal points in the direction ω. The masking function G(ωi,ωo) describes self-shadowing where microfacets block each other. © Graphics Group. Used with permission.
For the diffuse term we will just use a simple Lambertian. Only light that wasn’t reflected specularly will participate in the diffuse term (hence the 1 − F term). Additionally, as previously mentioned, metals have a zero diffuse contribution. We model this with the 1 − M term:
L_diffuse = (c_mat / π) ⊗ ([1, 1, 1] − F(v)) (1 − M).
For the normal distribution function D, we will use the Blinn-Phong distribution. As the name indicates, it is very similar to the Blinn-Phong specular term from the previous section:
D(h) = ((α_p + 2) / (2π)) (n · h)+^(α_p).
The masking term from the Cook-Torrance model [Cook and Torrance, 1982] that you should use looks as follows:

G(l, v) = min( 1, min( 2 (n · h)(n · v) / (v · h), 2 (n · h)(n · l) / (v · h) ) ).

Note: More complex diffuse reflectance models exist. However, for now, the Lambertian is sufficient. Other models can be quite a bit more expensive and may only contribute minor improvements [Karis, 2013].
For the ambient term, L_ambient, you can assume a constant ambient illumination and just modulate it with the material’s color:

L_ambient = c_ambient ⊗ c_mat

This model relies on a different set of material parameters:
• α_p — Material shininess
• M — Material metalness
• c_emit — Material emissive color
• c_mat — Material color
(Other quantities such as the various vectors and the light color are the same as in Section 2.2.)
Implement the shading model for one light source (Equation (2)). Use a separate pair of shaders (vertex and fragment – if you use the provided project template, use the included PBR.vert and PBR.frag shaders).

Compare your output to Figure 6. If you need to debug your shaders, see Figure 7 for visualizations of some of the intermediate values (also check that the various values are behaving as they should!).
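The individual terms map quite directly to GLSL. The following is a sketch only – function and parameter names are placeholders, and the small epsilons guarding the divisions are a practical choice, not part of the model:

```glsl
// Sketch of the microfacet terms from Section 2.3; names are placeholders.
const float kPi = 3.1415926535;

vec3 fresnelSchlick( vec3 F0, float hv )
{
	// F(v) = F0 + (1 - F0) (1 - h.v)^5
	return F0 + (vec3(1.0) - F0) * pow( 1.0 - hv, 5.0 );
}

float dBlinnPhong( float nh, float alphaP )
{
	// D(h) = (alpha_p + 2) / (2 pi) * (n.h)+^alpha_p
	return (alphaP + 2.0) / (2.0 * kPi) * pow( nh, alphaP );
}

float gCookTorrance( float nh, float nv, float nl, float vh )
{
	return min( 1.0, min( 2.0 * nh * nv / vh, 2.0 * nh * nl / vh ) );
}

// Per-light term of Equation (2): f_r * cLight * (n.l)+
// (the emissive and ambient parts are added separately).
vec3 pbrLight( vec3 cMat, float metalness, float alphaP,
               vec3 cLight, vec3 n, vec3 l, vec3 v )
{
	vec3 h = normalize( l + v );
	float nl = max( dot( n, l ), 0.0 );
	float nv = max( dot( n, v ), 0.0 );
	float nh = max( dot( n, h ), 0.0 );
	float vh = max( dot( v, h ), 1e-4 ); // avoid division by zero in G

	vec3 F0 = mix( vec3(0.04), cMat, metalness ); // (1-M)*0.04 + M*cMat
	vec3 F  = fresnelSchlick( F0, vh );

	vec3 diffuse  = cMat / kPi * (vec3(1.0) - F) * (1.0 - metalness);
	vec3 specular = dBlinnPhong( nh, alphaP ) * F
	              * gCookTorrance( nh, nv, nl, vh )
	              / max( 4.0 * nv * nl, 1e-4 ); // guard the denominator too

	return (diffuse + specular) * cLight * nl;
}
```

Visualizing the F, D and G factors individually (as in Figure 7) is a good way to isolate bugs before combining them.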
2.4 Animated light position
Animate the light position. For example, make the light orbit around the Y axis above the model(s). Make sure that the animation is frame-rate independent (e.g., like the camera controls in CW 1).
Let the user start/stop the animation by pressing spacebar.
In the submission’s README, document what light path you choose.
(a) NewShip.obj (b) materialtest.obj
Figure 6: Shading with the PBR model. In the material test, there’s now a clear difference between the metals and the dielectrics. Metals benefit immensely from e.g. image-based lighting approaches, due to their lack of diffuse contribution. Note that we no longer have any variation in the “reflectivity” direction, as the corresponding parameter is not used by the PBR method.
(a) F (b) D (c) G (d) PBR specular term
Figure 7: Visualization of some of the components of the PBR model used in CW 2.
2.5 Multiple light sources
Extend the application to support multiple (≥ 3) light sources.
Make sure that the light sources are placed in reasonable places (e.g., not inside the models and not on top of each other). Give each light a separate color, so that the different lights can be distinguished from each other.
Let the user select the number of active light sources via the number keys (use the number keys in the row at the top of the keyboard).
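One possible approach (a sketch; the array size, names and binding are placeholders) is to store a fixed-size array of lights in a uniform block together with an active-light count that the host updates from the number keys, and to loop over the active lights in the fragment shader:

```glsl
// Fragment shader side -- illustrative layout, adapt to your own setup.
#define MAX_LIGHTS 8

layout( set = 0, binding = 1 ) uniform ULights
{
	vec4 positions[MAX_LIGHTS]; // .xyz = position in the shading space
	vec4 colors[MAX_LIGHTS];    // .rgb = light color
	int count;                  // number of active lights, set by the host
} uLights;

// Inside main(), after computing n, v and the material values:
//
//   vec3 accum = /* emissive + ambient terms */;
//   for( int i = 0; i < uLights.count; ++i )
//   {
//       vec3 l = normalize( uLights.positions[i].xyz - v2fPosition );
//       accum += /* per-light term, evaluated with l and colors[i].rgb */;
//   }
```

Remember the std140 rules: an array element is padded to 16 bytes, which is why vec4 is used for the array members above.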
2.6 Custom PBR model
Pick one of the following terms/factors in the PBR shading model to improve on:
• Diffuse term
• Normal distribution (D)
• Masking/geometry term (G)
Replace that term/factor with a different compatible term (e.g., replace Section 2.3’s choice of D with a different normal distribution function). You must use an established model/function (i.e., you can’t just come up with your own). This will likely require some research online.
In the submission’s README, detail which term/factor you chose to replace, and describe the new choice (using words and equations, not code). Cite the source from which it stems. Describe how your chosen term/factor differs from the one selected in Section 2.3.
3 Submission & Marking
In order to receive marks for the coursework, follow the instructions listed herein carefully. Failure to do so may result in zero marks.
Submission specifics are unchanged from CW 1, but are included here again to avoid any misunderstandings. If you studied the corresponding section in CW 1, you should already be familiar with the following.
Your coursework will be marked once you have
1. Submitted project files as detailed below on Minerva. (Do not send your solutions by email!)
2. Successfully completed a one-on-one demo session for the coursework with one of the instructors. Details are below – in particular, during this demo session, you must be able to demonstrate your understanding of your code. For example, the instructor may ask you about (parts of) your submission and you must be able to identify and explain the relevant code.
3. If deemed necessary, participated in an extended interview with the instructor(s) where you explain your submission in detail.
You do not have to submit y