Computer Graphics
COMP3421/9415 2021 Term 3 Lecture 18
What did we learn last lecture?
Shadow Mapping
● Rendering depth from the light’s perspective
● Determine whether light can reach a particular fragment
● Also a lot of sampling issues and how to fix them
Deferred Rendering
● Lighting in post processing
● Rendering lights as volumes (geometry)
● Big efficiency benefits
What are we covering today?
Optimisation
● What are our goals?
● How has optimisation driven graphics?
● Culling non-visible parts of the scene
Wrapping up the course
● Where are we now?
● Where can we go?
Graphics Goals
Perfect Graphics?
Can we recreate reality?
● A Turing Test for Graphics?
● If we believe, is that enough?
● Will we stop developing Graphics once we’ve tricked enough people?
● Example of a deepfake:
Image credit: Lucasfilm/Disney
Ray Tracing
An idea that mimics physics in lighting
● Can it be the answer to our question?
● With enough rays, will we match reality?
● How close is it to “perfect”?
● How long does it take to get there?
● Nvidia realtime ray tracing demo:
Image credit: Unreal Engine
Voxels
Shapes without Polygons
● Polygons are integral to our systems
● Everything we’ve taught has assumed polygon-based shapes
● These are inherently unrealistic!
● Voxels attempt to bring pixelation to 3D spaces
● Tech showcase:
Can any of these techniques function in realtime?
● Human belief relies on persistence of vision
● We hope for at least 60 frames per second (but will accept 24)
● Framerate dropping breaks us out of believing
● Realistic visual techniques must also maintain framerate
● A constant struggle for algorithm optimisation and hardware development
What have we learnt this term?
Everything is an approximation
● Tricks that take less time than simulation
● Angle calculations instead of real lighting
● Polygons instead of real surfaces
● What do we have to do because our hardware isn’t capable?
● How much work goes into maintaining frame rate?
Optimisation
The Balancing Act
Maximum Visual Quality
● Lighting
○ Multiple Lights, different shaders per light/material
● Voxels or extremely high polygon count
● Transparency, Reflections
● High quality motion effects, animations etc
Frames per Second
● Blinn-Phong lighting approximation
● Deferred Rendering
● Low polygon count
● Simple effects or outright tricks so that humans don’t expect effects!
● Intelligent removal of non-visible elements
Optimisations in this course
Optimisations hidden in techniques we taught
● Polygons
○ Low poly count approximates curves etc
○ Linear interpolation between vertices instead of real data
● Textures and Maps
○ Surface data instead of genuine geometry
● Depth Buffer
○ Approximation of depth instead of actually measuring visibility
● Key Framed Animation
○ Not genuine movement, interpolating between positions
Optimisations in this course
● Blinn-Phong Lighting
○ Ambient Lighting in Blinn-Phong is just a rough guess
○ Specular Lighting is also a rough estimate
● Lightmapping
○ Attempting to push work into pre-processing
● Reflections
○ Using rendering instead of real reflection calculations
● Shadow Mapping
○ Slightly inaccurate depth mapping instead of tracing real light paths
● Deferred Rendering
○ Careful removal of calculations for non-visible or irrelevant fragments
Realtime Graphics and Optimisation
It’s more than Optimisation
● So many of our techniques have been designed specifically for realtime use
● Efficiency first, quality second!
● We are ruled by the 1/60th of a second (roughly 16.7 ms) frame budget
● and by the specific optimisations of GPU hardware
Culling
An optimisation we haven’t covered!
● Removing polygons from our render path
● Any polygon that isn’t visible should not waste processing power
● This is known as “culling”
Back Face Culling
Remember vertex winding?
● Anti-clockwise is the front face
● Clockwise is the back of the polygon
● For solid objects, the back face shouldn’t exist
● So we only render triangles that appear anti-clockwise to the camera
● Removes roughly 50% of polygons from rendering
Images credit: learnopengl.com
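In OpenGL, back face culling is built into the pipeline rather than something we code by hand. A minimal sketch, assuming a working context (e.g. created via GLFW/GLAD):

```cpp
// Back face culling in OpenGL: three calls, all standard API.
glEnable(GL_CULL_FACE);   // turn on face culling
glCullFace(GL_BACK);      // discard back faces (this is the default)
glFrontFace(GL_CCW);      // anti-clockwise winding is the front face (also the default)
```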
Frustum Culling
We don’t want to render what’s outside the camera’s view
● The frustum is conveniently made up of planes
● It is easy to tell which side of a plane an object is on
● Usually we use “bounding boxes” (or bounding spheres) for complex objects to simplify calculations
● If something is outside the frustum, we can cull it, making it not render
● Objects on the border can either be fully rendered or “clipped” into visible parts
Objects in red are culled, green are rendered
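A common way to implement the test is plane-versus-bounding-sphere. A minimal C++ sketch, assuming GLM for vector maths and frustum planes stored with their normals pointing into the frustum (all names here are illustrative):

```cpp
#include <array>
#include <glm/glm.hpp>

// A plane stored as (normal, d): a point p is on the inside when
// dot(normal, p) + d >= 0.
struct Plane { glm::vec3 normal; float d; };

// Returns false only when the sphere is entirely outside the frustum.
bool sphereInFrustum(const std::array<Plane, 6>& frustum,
                     const glm::vec3& centre, float radius) {
    for (const Plane& p : frustum) {
        // Entirely outside any one plane means we can cull the object
        if (glm::dot(p.normal, centre) + p.d < -radius)
            return false;
    }
    return true; // inside, or on the border (render fully or clip)
}
```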
Break Time
What do you want to make?
● Sometimes it’s about the feeling...
● Sometimes it’s a game
● Sometimes it’s CG effects for a short film
● Or this was just so that you could learn what was behind the games/films you love
● No matter what, this course will hopefully have given you a chance to learn something that you can take further... if you want.
● Never stop creating!
Where are we now?
What did we learn this term?
An introduction to Computer Graphics
● Approximate simulation of human vision
● Polygon Rendering as a basis
● Maths that supports it
● Blinn-Phong lighting (with maps)
● Graphics as a medium for art
What else did we learn?
The extras that make the graphics pop
● Visual effects like:
○ Reflections
○ Transparency
○ Shadows
○ Post Processing Effects
● An introduction, but not necessarily a full education
● It would take a lot more study to have full mastery over these
● For example, one could do an entire PhD on algorithms for shadowing
Where do we go from here?
Marc’s List of Possibilities
Things we haven’t covered (this was just off the top of my head):
● Anti Aliasing and Anisotropy
● Geometry Shader
● Particle Systems
● Tessellation
● Physically Based Rendering (PBR)
● Alternative lighting techniques (cel-shading, edge detection effects)
● HUDs and GUIs
● VR – Stereoscopy
● Rendering to non-flat monitors (curved frustums)
● Advanced Transparency
● Advanced Animation Techniques (Inverse Kinematics)
● Physics simulation for realistic animation
● Applying Machine Learning to Graphics Techniques
Any% Speed Run
What’s next in Graphics?
● Let’s try to cover some of the things in Marc’s list
● Very quickly!
Anti Aliasing and Anisotropy
Eliminating the “jaggies”
● Jagged edges where diagonal lines are made into pixels (aliasing)
● or awkward sampling of a texture at an angle to the view (anisotropy)
● Both are generally corrected using multi-sampling techniques
Images credit: learnopengl.com
Image credit: Wikipedia users Lampak and THOMAS
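As a taste of multi-sampling, here is a sketch of requesting 4x MSAA, assuming GLFW creates the window (the hint must be set before window creation):

```cpp
// Request a multi-sampled framebuffer, then enable MSAA in GL.
glfwWindowHint(GLFW_SAMPLES, 4);   // ask for 4 samples per pixel
GLFWwindow* window = glfwCreateWindow(800, 600, "MSAA", nullptr, nullptr);
// ...after making the context current and loading GL functions:
glEnable(GL_MULTISAMPLE);          // usually on by default, but be explicit
```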
Geometry Shader and Particle Systems
Between the Vertex and Fragment Shaders
● Draws Geometry in the shader
● We can specify vertices, and the shader can add extra verts and make shapes
Particle Systems
● Visual effect for things like smoke, fire and other volumetric substances
● Usually made up of hundreds or thousands of rectangles rotated to aim at the camera (billboarding, sketched below)
● These rectangles can be created in the geometry shader or can be reused geometry
Image: The OpenGL 4.x shader pipeline
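A sketch of the billboarding idea as a GLSL geometry shader held in a C++ string literal. It assumes the vertex shader passes view-space positions through gl_Position; the `projection` and `size` uniforms are illustrative names:

```cpp
// Hypothetical geometry shader: expand each particle point into a
// camera-facing quad (billboard) in view space, then project it.
const char* billboardGeomSrc = R"(
#version 330 core
layout (points) in;
layout (triangle_strip, max_vertices = 4) out;
uniform mat4 projection;  // applied here, not in the vertex shader
uniform float size;       // half-width of the particle quad
void main() {
    // gl_in[0] holds the particle centre in view space; offsetting in
    // view-space x/y keeps the quad facing the camera.
    vec4 centre = gl_in[0].gl_Position;
    gl_Position = projection * (centre + vec4(-size, -size, 0.0, 0.0)); EmitVertex();
    gl_Position = projection * (centre + vec4( size, -size, 0.0, 0.0)); EmitVertex();
    gl_Position = projection * (centre + vec4(-size,  size, 0.0, 0.0)); EmitVertex();
    gl_Position = projection * (centre + vec4( size,  size, 0.0, 0.0)); EmitVertex();
    EndPrimitive();
}
)";
```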
Tessellation
Adding geometry data to objects
● A shader stage that works alongside the vertex shader
● Able to subdivide triangles and create new vertices
● Often used to add detail to curved surfaces
● Also useful for terrain systems
Image credit: Nvidia
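From the application side, tessellation mostly changes the draw call. A minimal sketch, assuming a shader program with tessellation control and evaluation shaders already attached:

```cpp
// Draw through the tessellation stages (OpenGL 4.0+).
glPatchParameteri(GL_PATCH_VERTICES, 3);   // each input patch is one triangle
glDrawArrays(GL_PATCHES, 0, vertexCount);  // the tessellator subdivides each patch
```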
Physically Based Rendering
Realism in surface details
● An attempt to render complicated surfaces more realistically
● Uses multiple buffers
● Originally used for metallics
● Surface microfacets (roughness)
● Reflectance and Radiance (techniques for how light reflects)
● Other ideas like Fresnel and subsurface scattering
Images credit: learnopengl.com
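One small, well-known building block of PBR is Schlick’s approximation to the Fresnel term. A sketch in C++, using the scalar form for illustration:

```cpp
#include <cmath>

// Schlick's approximation: how much light reflects at a grazing angle.
// f0 is the base reflectivity at normal incidence (roughly 0.04 for
// dielectrics; higher, and tinted, for metals).
float fresnelSchlick(float cosTheta, float f0) {
    return f0 + (1.0f - f0) * std::pow(1.0f - cosTheta, 5.0f);
}
```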
Stylistic Rendering
Cel Shading, Edge detection etc
● Modification of lighting algorithm to an “on-off” or two tone scheme
● Use of post processing kernels to detect edges of objects and colour them black
● Classic anime or comic feel
● Not limited to a hand drawn feel!
● Many interesting effects are possible!
Image credit: Mihoyo
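The core of the two-tone scheme is a single branch in the fragment shader. A hypothetical GLSL excerpt (held in a C++ string literal; all variable names are illustrative):

```cpp
// Quantise the diffuse term into two bands for a cel-shaded look.
const char* celShadingSnippet = R"(
    float diff = max(dot(normal, lightDir), 0.0);
    float band = (diff > 0.5) ? 1.0 : 0.3;   // "on-off" two tone scheme
    vec3 colour = band * lightColour * materialColour;
)";
```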
HUDs and GUIs
2D overlays over the final frame
● The tech is easy! Blend a texture with the final framebuffer (sketched below)
● Difficulty is in design
● Useful information with minimal distraction
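A sketch of the compositing step. The blending calls are standard OpenGL; the textured-quad helper `drawTexturedQuad` is hypothetical:

```cpp
// Composite a HUD texture over the final frame.
glDisable(GL_DEPTH_TEST);                           // HUD draws on top of the scene
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // standard alpha blending
drawTexturedQuad(hudTexture);                       // hypothetical helper: one quad, one shader
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
```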
In-game GUI elements
● Overlays, GPS paths, 3D highlighting and info
● A lot of interesting possibilities for making UI exist in the game itself!
Dead Space (EA 2008)
Image credit: (UI Designer)
VR – Stereoscopy
It’s all already in 3D right?
● Two eyes mean two screens, possibly on one framebuffer
● Simple post processing:
○ Render two different cameras, offset from the centre
○ Render textures write to two different sides of the final framebuffer (or two separate framebuffers depending on your tech)
○ Uses asymmetric frustums to avoid near/far plane clipping issues
○ Some warping of the final image to match custom lenses in headsets
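A rough sketch of the per-eye render loop, drawing each eye into one half of a shared framebuffer. `renderScene`, `asymmetricFrustum`, `lookFrom` and the camera variables are all hypothetical names:

```cpp
// Render the scene twice: once per eye, offset from the centre.
for (int eye = 0; eye < 2; ++eye) {
    glViewport(eye * width / 2, 0, width / 2, height);   // left or right half
    float side = (eye == 0) ? -1.0f : 1.0f;
    glm::mat4 view = lookFrom(cameraPos + side * eyeSeparation * cameraRight);
    renderScene(view, asymmetricFrustum(eye));           // per-eye projection
}
```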
Rendering to Curved Monitors
Also used for curved projection screens etc
● Multiple virtual cameras
● Approximate the curve with multiple renders
● Perspective shift issues in the transition between cameras
Advanced Transparency
Order Independent Transparency
● We learnt that transparent objects need to be rendered last (and sorted, as sketched below)
● Our entire pipeline is awkward for transparency
● Some techniques include:
○ Rendering to a 3D framebuffer then blending together afterwards
○ Hardware optimised sorting of depth of objects
○ Depth Peeling (using multiple Z buffers to be able to render at different depths without necessarily discarding objects behind)
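For reference, the baseline sorting step we learnt might look like this; `Object`, `transparent` and `cameraPos` are illustrative names:

```cpp
#include <algorithm>
#include <glm/glm.hpp>

// Sort transparent objects back-to-front by distance to the camera,
// so blending composites them in the correct order.
std::sort(transparent.begin(), transparent.end(),
          [&](const Object& a, const Object& b) {
              return glm::length(a.position - cameraPos)
                   > glm::length(b.position - cameraPos);
          });
```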
Advanced Animation
Inverse Kinematics
● What if our animations aren’t pre-baked with keyframes, but are reliant on geometry in the scene?
● Pressing buttons, opening doors, picking up objects, walking on stairs
● (Lucky for us, Robotics research can advance this field for us)
● Inverse Kinematics asks:
○ The hand goes here; what do all the joints back to the shoulder need to do?
● Potentially very complex mathematical solutions, compounded by the number of joints
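For a single two-bone arm in a plane, the elbow angle actually has a closed-form solution via the law of cosines. A small illustrative sketch:

```cpp
#include <algorithm>
#include <cmath>

// Two-bone IK: given upper-arm length l1, forearm length l2 and a target at
// distance d from the shoulder, the interior elbow angle follows from the
// law of cosines: d^2 = l1^2 + l2^2 - 2*l1*l2*cos(elbow).
float elbowAngle(float l1, float l2, float d) {
    float c = (l1 * l1 + l2 * l2 - d * d) / (2.0f * l1 * l2);
    c = std::clamp(c, -1.0f, 1.0f);  // clamp for unreachable targets
    return std::acos(c);             // radians
}
```

Chains with more joints have no unique answer, which is where the complexity (and the iterative solvers) comes in.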
Physics Simulation
Animation based on physical rules
● Particularly liquids
● Also fluid movement for cloth and hair
● Useful for realtime destruction of objects
● Attempt to have realistic simulation of gravity and collisions
● As well as wind and tensile force
● Similar to lighting:
○ Very hard to accurately simulate
○ Most techniques are fast approximations (see the sketch below)
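The simplest such approximation is Euler integration, stepped once per frame. A sketch for a single particle under gravity; `velocity`, `position` and `dt` are illustrative:

```cpp
// Euler integration of gravity for one particle; dt is the frame time
// in seconds (about 1/60 at 60 fps).
velocity += glm::vec3(0.0f, -9.8f, 0.0f) * dt;  // accelerate downwards
position += velocity * dt;                      // move by the updated velocity
```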
Machine Learning for Graphics
What can we do with the new Deep Learning hotness?
● Realtime application of art styles
● Human-like animation based on learnt movement patterns
● Ray tracing needing fewer total rays while AI predicts likely colours in the gaps
● Deep Learning Super Sampling (DLSS, the new hype word)
○ Applies ray tracing on a lower resolution frame
○ Uses a trained deep learning algorithm to super sample to a higher resolution output
○ Should result in a high quality output while only needing to process a much lower number of pixels
What did we learn this term?
Computer Graphics
● We started with the idea of tricking human vision
● We built up a lot of technology to do this!
● From primitives to multiple renders and post processing
● We learnt how digital art and technology drives our algorithms
● And we developed those algorithms into realtime implementations
Thanks for coming along on the journey!