R. Mukundan, Department of Computer Science and Software Engineering, University of Canterbury.
Motivation
The ability to program the graphics hardware allows you to achieve a wider range of rendering effects with optimal performance.
Traditional lighting functions and the fixed functionality of the graphics pipeline are fine only for ‘common things’. They have now been removed from the core profile.
Developers have more freedom to define the actions to be taken at different stages of processing.
Downside: The user needs to specify the computations to be performed at each stage.
Requires significantly more code.
COSC363
Traditional OpenGL (OpenGL-1,2): Fixed Function Pipeline

[Figure: Vertex data (3D primitives) → Transformation and Lighting → Vertices in clip coords → Prim. Assembly, Clipping, Viewport Trans. → Rasterization → Fragment ops and Texturing → Frame Buffer (2D pixels). User control ends after submitting vertex data.]

Limitations:
Transformed vertices not accessible.
Fixed lighting model
Limited vertex attributes
Primitives cannot be added, modified or removed
Fragments cannot be accessed, modified or removed (other than by using alpha and stencil tests)
Only default frame buffer
Modern OpenGL (OpenGL-4): Programmable Pipeline

[Figure: the Application communicates with the OpenGL 4 State Machine; the Vertex Shader and Fragment Shader run on the Graphics Processing Unit (GPU), with data held in GPU Memory.]
OpenGL-4 Shader Stages
[Figure: pipeline order: Vertex Shader (.vert) → Tessellation Control Shader (.cont) → Primitive Generator → Tessellation Evaluation Shader (.eval) → Geometry Shader (.geom) → Prim. Assembly, Clipping, Viewport Trans. → Rasterization → Fragment Shader (.frag) → Fragment Tests → Frame Buffer. The tessellation and geometry shader stages are optional.]
OpenGL Context, Version: Example
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB);
    glutInitWindowSize(500, 500);
    glutInitContextVersion(4, 2);      //must be set before the window is created
    glutInitContextProfile(GLUT_CORE_PROFILE);
    glutCreateWindow("A Triangle");

    const GLubyte *version = glGetString(GL_VERSION);
    const GLubyte *renderer = glGetString(GL_RENDERER);
    const GLubyte *vendor = glGetString(GL_VENDOR);
    cout << "OpenGL version: " << version << endl;
    cout << "OpenGL vendor: " << vendor << endl;
    cout << "OpenGL renderer: " << renderer << endl;
}
Primitive Drawing (OpenGL 1)
(Immediate Mode Rendering)
void display()
{
    ...
    glBegin(GL_TRIANGLES);
        glVertex3f(x1, y1, z1);
        glVertex3f(x2, y2, z2);
        glVertex3f(x3, y3, z3);
    glEnd();
}

Deprecated!
[Figure: in immediate mode, vertex data is sent from app/client memory (system memory) to the graphics processor every frame.]
Primitive Drawing (OpenGL 4)
(Non-Immediate Mode Rendering)
void initialise()
{
    ...
    glBufferData(...);
    glBufferSubData(...);
}

void display()
{
    ...
    glDrawArrays(GL_TRIANGLES, 0, 3);
}

[Figure: vertex data is copied once from app/client memory (system memory) to graphics memory; subsequent draw calls use the data already available to the graphics processor.]
Organising Data
[Figure: each per-vertex attribute (vertex coords, vertex colour, vertex normal) is copied with glBufferData(..) into its own Vertex Buffer Object (vbo[0], vbo[1], vbo[2]) in graphics memory; the VBOs are grouped under a single Vertex Array Object (vao).]

Vertex Buffer Objects
A vertex buffer object (VBO) represents the data for a particular vertex attribute in video memory.
Creating VBOs:
1. Generate a new buffer object “vbo”
2. Bind the buffer object to a target
3. Copy vertex data to the buffer
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, NULL);
Multiple VBOs
GLuint vbo[2];
glGenBuffers(2, vbo);                    //Two VBOs

glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);   //First VBO
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);

glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);   //Second VBO
glBufferData(GL_ARRAY_BUFFER, sizeof(cols), cols, GL_STATIC_DRAW);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(1);
Packing Several Attributes in 1 VBO
GLuint vbo;
glGenBuffers(1, &vbo);                   //Only 1 vbo
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts)+sizeof(cols), verts, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, sizeof(verts), sizeof(cols), cols);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, (GLvoid *)sizeof(verts));

[Figure: the buffer holds the vertex coords followed by the vertex colors; the colors start at byte offset sizeof(verts).]
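The offset arithmetic above can be checked without a GL context. The following is a minimal C++ sketch (the function name packAttributes is ours, not an OpenGL call) that mimics what glBufferData followed by glBufferSubData do: the colour data ends up at byte offset sizeof(verts), which is exactly the offset passed to glVertexAttribPointer for attribute 1.

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Simulate laying out two attribute arrays back-to-back in one buffer,
// as glBufferData + glBufferSubData do in graphics memory.
std::vector<unsigned char> packAttributes(const float* verts, std::size_t vertsBytes,
                                          const float* cols, std::size_t colsBytes)
{
    std::vector<unsigned char> buffer(vertsBytes + colsBytes);
    std::memcpy(buffer.data(), verts, vertsBytes);             // glBufferData(.., verts, ..)
    std::memcpy(buffer.data() + vertsBytes, cols, colsBytes);  // glBufferSubData(.., sizeof(verts), ..)
    return buffer;
}
```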
Vertex Array Object
A vertex array object (VAO) encapsulates all the state needed to specify vertex data of an object.
Creating VAOs:
1. Generate a new vertex array object “vao”
2. Bind the vertex array object (initially empty)
3. Create constituent VBOs and transfer data

glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
...
glGenBuffers(3, vbo);

Rendering with VAOs:
1. Bind the VAO representing the vertex data
2. Render the collection of primitives using the glDrawArrays() command:

glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);   //primitive type, start index in the enabled arrays, vertex count
Drawing Using Vertex Indices
Mesh data is often represented using vertex indices to avoid repetition of vertices
[Figure: a mesh in which adjacent polygons share vertices, each referenced by its index.]
The VBO for indices is defined using GL_ELEMENT_ARRAY_BUFFER as the target.
Rendering of the mesh is done using the command glDrawElements(..)
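A sketch of the idea (plain C++, no GL context; the struct and function names are illustrative only): four unique vertices and six indices describe a quad as two triangles, and expanding the index list reproduces the vertex stream that glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0) would generate from the bound element array buffer.

```cpp
#include <vector>

struct Vec2 { float x, y; };

// Expand an index list into the vertex stream the GPU assembles:
// each index selects a stored vertex, so shared vertices are stored once.
std::vector<Vec2> expandIndices(const std::vector<Vec2>& verts,
                                const std::vector<unsigned int>& indices)
{
    std::vector<Vec2> out;
    for (unsigned int i : indices)
        out.push_back(verts[i]);
    return out;
}
```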
Download and install glew (http://glew.sourceforge.net)

Run the following programs:
Version.cpp
Draw1.cpp
Draw2.cpp (uses shader code)
Draw3.cpp (Simple.vert, Simple.frag)
Draw4.cpp

Discuss any issues using the class forum.
Vertex Shader
The vertex shader will execute once for every vertex.
The position and any other attributes (normal, colour, texture coords etc) of the current vertex, if specified, will be available in the shader.
Positions and attributes of other vertices are not available.
A vertex shader normally outputs the clip coordinates of the current vertex, and also performs lighting calculations on the vertex.
gl_Position is a built-in out variable for the vertex shader. A vertex shader must define its value.
The Vertex Shader

[Figure: the application issues glBindVertexArray(vao); glDrawArrays(GL_TRIANGLES, 0, 3); the vertex data enters the programmable pipeline, where Myshader.vert outputs clip coordinates and, optionally, a per-vertex colour.]
Vertex Shader: Example
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, NULL);

Simple.vert:

#version 330
layout (location = 0) in vec4 position;
layout (location = 1) in vec4 color;
out vec4 theColor;

void main()
{
    gl_Position = position;
    theColor = color;
}
Vertex and Fragment Shaders
[Figure: Application → Vertex Shader → Primitive Assembly, Clipping, Rasterization → Fragment Shader → Frame Buffer. Each shader reads in variables and writes out variables; uniform variables set by the application are available to both the vertex and fragment shaders.]
Rasterization is the process of scan-converting a primitive into a set of fragments.
A fragment is a pixel-sized element that belongs to a primitive and could be potentially displayed as a pixel.
The number of fragments generated for a primitive depends on the projected area of the primitive in the screen coordinate space.
[Figure: a primitive (in 3D space) is scan-converted into fragments (in 2D space).]
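To make the area/fragment relationship concrete, here is a small C++ sketch (not part of OpenGL; the function names are ours) that counts the pixel centres covered by a 2D triangle using edge functions, the half-space test a rasterizer applies. Doubling the triangle's linear size roughly quadruples the fragment count.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Signed area test: on which side of edge a->b does point p lie?
static float edge(float ax, float ay, float bx, float by, float px, float py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// Count pixel centres inside (or on the boundary of) the triangle,
// scanning its bounding box as a rasterizer does.
int countFragments(float x0, float y0, float x1, float y1, float x2, float y2)
{
    int minX = (int)std::floor(std::min({x0, x1, x2}));
    int maxX = (int)std::ceil (std::max({x0, x1, x2}));
    int minY = (int)std::floor(std::min({y0, y1, y2}));
    int maxY = (int)std::ceil (std::max({y0, y1, y2}));
    int count = 0;
    for (int y = minY; y <= maxY; y++)
        for (int x = minX; x <= maxX; x++) {
            float px = x + 0.5f, py = y + 0.5f;   // pixel centre
            float e0 = edge(x0, y0, x1, y1, px, py);
            float e1 = edge(x1, y1, x2, y2, px, py);
            float e2 = edge(x2, y2, x0, y0, px, py);
            // inside if all edge functions agree in sign (either winding)
            if ((e0 >= 0 && e1 >= 0 && e2 >= 0) || (e0 <= 0 && e1 <= 0 && e2 <= 0))
                count++;
        }
    return count;
}
```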
Fragment Shader
A fragment shader is executed for each fragment generated by the rasterizer.
A fragment shader outputs the colour of a fragment and optionally the depth value.
Several colour computations (texture mapping, colour sum etc.), and depth offsets can be performed inside a fragment shader.
A fragment shader can also discard a fragment.
A fragment shader has the built-in in variable gl_FragCoord and built-in out variables gl_FragColor and gl_FragDepth
Fragment Shader: Example

Vertex Shader (Simple.vert):

#version 330
layout (location = 0) in vec4 position;
layout (location = 1) in vec4 color;
out vec4 theColor;
void main()
{
    gl_Position = position;
    theColor = color;
}

Fragment Shader (Simple.frag):

#version 330
in vec4 theColor;
void main()
{
    gl_FragColor = theColor;
}
GLSL Aggregate Types
Vector Types: vec2, vec3, vec4
Component Accessors: (x,y,z,w), (r,g,b,a), (s,t,p,q)

vec2 posn2D;
vec3 grey, norm, color, view;
vec4 posnA, posnB;
float zcoord, d;

posnA = vec4(-1, 2, 0.5, 1);
posnB = vec4(posnA.yxx, 1);     //Same as (2, -1, -1, 1)
norm = normalize(vec3(1));      //(.577, .577, .577)
view = vec3(1.6);               //(1.6, 1.6, 1.6)
d = dot(norm, view);
zcoord = posnA.z;
color = vec3(0.9, 0.2, 0.2);
grey = vec3(0.2, color.gb);     //(0.2, 0.2, 0.2)
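The values noted in the comments can be verified numerically. This is a plain C++ sketch (our own Vec3 type and helper names, not GLSL) reproducing normalize(vec3(1)) and its dot product with vec3(1.6):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// normalize(v): scale v to unit length, as the GLSL built-in does.
Vec3 normalize3(Vec3 v)
{
    double len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return {v.x/len, v.y/len, v.z/len};
}

double dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
```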
GLSL Aggregate Types

Matrix Types: mat2, mat3, mat4

mat2 matA, matB, matC;
mat3 scale, identity;
float det;
vec2 v1, v2, v3, v4;

v1 = vec2(-6, 4);
v2 = vec2(3);
matA = mat2(3, 0, -2, 5);        //1st Column = (3, 0)
matB = mat2(v1, v2);             //v1, v2 column vectors
matC = matA * transpose(matB);   //Product matrix
v3 = matC[1];                    //Second column of matC
v4 = matA * v3;
identity = mat3(1.0);
scale = mat3(3.0);               //3.0 along diagonal
det = determinant(matC);
matC = inverse(matC);
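GLSL matrices are column-major: mat2(3, 0, -2, 5) fills columns (3,0) and (-2,5). The following plain C++ sketch (our own Mat2 type, not GLSL) mirrors that convention so the example above can be checked numerically; matC works out to columns (-24,15) and (6,15), so matC[1] = (6,15).

```cpp
struct Mat2 {
    // m[c][r]: column c, row r — the GLSL storage convention
    float m[2][2];
};

// Matrix product C = A * B, entry (r,c) = sum_k A(r,k) * B(k,c).
Mat2 mul(const Mat2& A, const Mat2& B)
{
    Mat2 C{};
    for (int c = 0; c < 2; c++)
        for (int r = 0; r < 2; r++)
            C.m[c][r] = A.m[0][r] * B.m[c][0] + A.m[1][r] * B.m[c][1];
    return C;
}

// Swap rows and columns, as the GLSL transpose() built-in does.
Mat2 transpose(const Mat2& A)
{
    Mat2 T{};
    for (int c = 0; c < 2; c++)
        for (int r = 0; r < 2; r++)
            T.m[c][r] = A.m[r][c];
    return T;
}
```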
Defining Transformations
We will need to define transformations and projections using our own functions!
The GLM (GL Mathematics) library provides functionality similar to the deprecated functions.

GLM is a header-only library that can be downloaded from http://glm.g-truc.net

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
Defining Transformations
The Model-view-projection matrix must be made available in the vertex shader for transforming vertices to clip coordinates.
Uniform variables provide a mechanism for transferring matrices and other values from your application to the shader.
Uniform variables change less frequently compared to vertex attributes. They remain constant for every primitive.
Important matrices:
Model-View Matrix (VM)
Model-View-Projection Matrix (PVM)
Model-View-Projection Matrix

Old version (deprecated functions):
Projection: glFrustum(...), gluPerspective(...), glOrtho(...)
View: gluLookAt(...)
Transformation: glTranslatef(...), glRotatef(...), glScalef(...)

(xc, yc, zc, wc)T = Projection Matrix * View Matrix * Transformation Matrix * (x, y, z, 1)T

where (xc, yc, zc, wc) is the vertex position in clip coordinates and (x, y, z, 1) is the vertex position in world coordinates.
Defining Transformations
GLuint matrixLoc;
matrixLoc = glGetUniformLocation(program, "mvpMatrix");

void display()
{
    float cdr = 3.14159265/180.0;   //degrees to radians conversion
    glm::mat4 proj = glm::perspective(60*cdr, 1.0f, 100.0f, 1000.0f);
    glm::mat4 view = glm::lookAt(glm::vec3(0.0, 0.0, 150.0),
                                 glm::vec3(0.0, 0.0, 0.0),
                                 glm::vec3(0.0, 1.0, 0.0));
    glm::mat4 matrix = glm::mat4(1.0);   //Identity matrix
    matrix = glm::rotate(matrix, angle, glm::vec3(0.0, 1.0, 0.0));
    glm::mat4 prodMatrix = proj*view*matrix;
    glUniformMatrix4fv(matrixLoc, 1, GL_FALSE, &prodMatrix[0][0]);
    ...
}
Defining Transformations

Vertex Shader (Tetrahedron.vert): input in world coordinates, output in clip coordinates.

#version 330
layout (location = 0) in vec4 position;
uniform mat4 mvpMatrix;
void main()
{
    gl_Position = mvpMatrix * position;
}

Fragment Shader (Tetrahedron.frag):

void main()
{
    gl_FragColor = vec4(0.0, 1.0, 1.0, 1.0);
}
Lighting Calculations
Lighting calculations are usually performed in eye-coordinate space.
[Figure: the Model-View matrix (View Matrix * Transformation Matrix) maps points into eye coordinates (xe, ye, ze). The camera is at the origin (0,0,0); for a surface point Pe with normal ne and light position Le, the light vector is l = Le - Pe and the view vector is v = -Pe.]
Transformation of Normal Vector
When primitives (or objects) are transformed by a matrix M, their surface normal vectors undergo a transformation by the matrix M-T (the inverse-transpose of M).

[Figure: under a shear transformation (x' = x + ky) or a scale transformation (x' = kx), a normal transformed by M itself is no longer perpendicular to the surface.]
Transformation of Normal Vector
Consider a vector V = (vx, vy, vz)T and its normal vector N = (nx, ny, nz)T.

The vectors are perpendicular: vxnx + vyny + vznz = 0.

In matrix notation, VTN = 0.
Let V be transformed using matrix A, and the normal using matrix B. After the transformation, the vectors will remain perpendicular only if (AV)T (BN) = 0.
Transformation of Normal Vector
The previous equation gives VTATBN = 0 VT (ATB) N = 0.
But, VTN = 0.
Therefore, ATB = I (identity matrix). Hence, B = (AT)-1, the inverse-transpose of A.
The transformation applied to the normal is the inverse- transpose of the transformation applied to the vectors (or points).
For lighting calculations, we need to multiply the normal vectors by the inverse-transpose of the model-view matrix.
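The result can be verified numerically. In this minimal 2D C++ sketch (the helper names mul2 and dot2 are ours), a shear A applied naively to the normal breaks perpendicularity, while the inverse-transpose B = (AT)-1 preserves it:

```cpp
// 2D dot product.
float dot2(const float a[2], const float b[2]) { return a[0]*b[0] + a[1]*b[1]; }

// out = M v, with M stored row-major.
void mul2(const float M[2][2], const float v[2], float out[2])
{
    out[0] = M[0][0]*v[0] + M[0][1]*v[1];
    out[1] = M[1][0]*v[0] + M[1][1]*v[1];
}
```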
Lighting Calculations
TorusDraw.cpp
Lighting calculations are performed in eye-coordinates.
We compute the following (using GLM) in our application:
Model-View matrix (VM)
Light's position in eye coordinates: Le = VM L
Inverse-transpose matrix for the normals: (VM)-T
void display()
{
    ...
    glm::mat4 prodMatrix1 = view*matrix;
    glm::mat4 prodMatrix2 = proj*prodMatrix1;
    glm::vec4 lightEye = view*light;
    glm::mat4 invMatrix = glm::inverse(prodMatrix1);
    glUniformMatrix4fv(matrixLoc1, 1, GL_FALSE, &prodMatrix1[0][0]);
    glUniformMatrix4fv(matrixLoc2, 1, GL_FALSE, &prodMatrix2[0][0]);
    glUniformMatrix4fv(matrixLoc3, 1, GL_TRUE, &invMatrix[0][0]);   //GL_TRUE transposes: inverse-transpose
    glUniform4fv(lgtLoc, 1, &lightEye[0]);
    ...
}
Lighting Calculations (Vertex Shader)
Inside the vertex shader, we add the code to output the colour value using the Phong-Blinn model.
Vertex shader:
Torus.vert
layout (location = 0) in vec4 position;
layout (location = 1) in vec3 normal;
uniform mat4 mvMatrix;
uniform mat4 mvpMatrix;
uniform mat4 norMatrix;
uniform vec4 lightPos;   //in eye coords
out vec4 theColour;

void main()
{
    vec4 white = vec4(1.0);   //Light's colour (diffuse & specular)
    vec4 grey = vec4(0.2);    //Ambient light

(Continued on next slide)
Lighting Calculations (Vertex Shader)
    vec4 posnEye = mvMatrix * position;         //point in eye coords
    vec4 normalEye = norMatrix * vec4(normal, 0);
    vec4 lgtVec = normalize(lightPos - posnEye);
    vec4 viewVec = normalize(vec4(-posnEye.xyz, 0));
    vec4 halfVec = normalize(lgtVec + viewVec);
    vec4 material = vec4(0.0, 1.0, 1.0, 1.0);   //cyan
    vec4 ambOut = grey * material;
    float shininess = 100.0;
    float diffTerm = max(dot(lgtVec, normalEye), 0);
    vec4 diffOut = material * diffTerm;
    float specTerm = max(dot(halfVec, normalEye), 0);
    vec4 specOut = white * pow(specTerm, shininess);
    gl_Position = mvpMatrix * position;
    theColour = ambOut + diffOut + specOut;
}
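The diffuse and specular terms used in Torus.vert can be checked on the CPU. A C++ sketch (function names are ours; all vectors are assumed unit-length and in eye coordinates) of the Blinn-Phong terms:

```cpp
#include <cmath>

// max(dot(l, n), 0): diffuse term for light vector l and normal n.
float diffuseTerm(const float l[3], const float n[3])
{
    float d = l[0]*n[0] + l[1]*n[1] + l[2]*n[2];
    return d > 0.0f ? d : 0.0f;
}

// pow(max(dot(h, n), 0), shininess): specular term for half vector h.
float specularTerm(const float h[3], const float n[3], float shininess)
{
    float s = h[0]*n[0] + h[1]*n[1] + h[2]*n[2];
    s = s > 0.0f ? s : 0.0f;
    return std::pow(s, shininess);
}
```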
Fragment shader (Torus.frag):

in vec4 theColour;
void main()
{
    gl_FragColor = theColour;
}
Texturing

1. Select an active texture unit
2. Load texture image
3. Set texture parameters
4. Create a sampler2D variable in the fragment shader and assign this uniform variable the index of the texture unit.

Application:

glGenTextures(1, &texID);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texID);
loadTGA("myImage.tga");
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
...
GLuint texLoc = glGetUniformLocation(program, "txSampler");
glUniform1i(texLoc, 0);
Texture coordinates are stored in a vertex buffer object:

glBindBuffer(GL_ARRAY_BUFFER, vboID[0]);
glBufferData(GL_ARRAY_BUFFER, (indx) * sizeof(float), verts, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);

glBindBuffer(GL_ARRAY_BUFFER, vboID[1]);
glBufferData(GL_ARRAY_BUFFER, (indx) * sizeof(float), normals, GL_STATIC_DRAW);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(1);

glBindBuffer(GL_ARRAY_BUFFER, vboID[2]);
glBufferData(GL_ARRAY_BUFFER, (indx) * sizeof(float), texCoords, GL_STATIC_DRAW);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(2);   //texture coords

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboID[3]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, ...
The vertex shader passes the texture coords of each vertex to the fragment shader.

Vertex shader:

layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec2 texCoord;
out vec4 diffRefl;
out vec2 TexCoord;

void main()
{
    gl_Position = mvpMatrix * vec4(position, 1.0);
    ...   //lighting calculations
    diffRefl = ...
    TexCoord = texCoord;
}
The fragment shader receives the interpolated texture coordinates for each fragment, and uses a Sampler2D object to retrieve the colour values from texture memory.
Fragment shader:
uniform sampler2D txSampler;
in vec4 diffRefl;
in vec2 TexCoord;

void main()
{
    vec4 tColor = texture(txSampler, TexCoord);
    gl_FragColor = diffRefl * tColor;
}
Multi-Texturing
Texture Unit 0:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex[0]);
texLoc1 = glGetUniformLocation(program, "tex1");
glUniform1i(texLoc1, 0);

Texture Unit 1:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tex[1]);
texLoc2 = glGetUniformLocation(program, "tex2");
glUniform1i(texLoc2, 1);
Texture Coordinates:

glBindBuffer(GL_ARRAY_BUFFER, vboID[2]);
glBufferData(GL_ARRAY_BUFFER, num * sizeof(float), texC, GL_STATIC_DRAW);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(2);
Fragment Shader:

uniform sampler2D tex1;
uniform sampler2D tex2;
in vec4 diffRefl;
in vec2 TexCoord;

void main()
{
    vec4 tColor1 = texture(tex1, TexCoord);
    vec4 tColor2 = texture(tex2, TexCoord);
    gl_FragColor = diffRefl*(0.8*tColor1 + 0.2*tColor2);
}
Alpha Texturing
A textured image of a tree should appear as being part of the surrounding scene, and not part of a rectangular ‘board’.

Use the alpha channel of the image (if available) to transfer only those pixels belonging to the object. The fragment shader discards fully transparent fragments:

uniform sampler2D texTree;
in vec2 TexCoord;

void main()
{
    vec4 treeColor = texture(texTree, TexCoord);
    if (treeColor.a == 0) discard;
    gl_FragColor = treeColor;
}
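The effect of discard can be simulated on the CPU. In this C++ sketch (types and names are ours), fragments whose sampled alpha is zero never reach the frame buffer, so the transparent parts of the tree image leave the background untouched:

```cpp
#include <vector>

struct RGBA { float r, g, b, a; };

// Keep only the fragments a shader would not discard (alpha != 0),
// mirroring: if (treeColor.a == 0) discard;
std::vector<RGBA> keptFragments(const std::vector<RGBA>& samples)
{
    std::vector<RGBA> kept;
    for (const RGBA& c : samples)
        if (c.a != 0.0f)
            kept.push_back(c);
    return kept;
}
```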