CSCI-396 Jeff Bush
In general: data on the GPU
Two major types of buffers in WebGL:
▪ Array (element) buffers: 1-dimensional data
▪ VBO: array buffers hold vectors of 1-4 integers or floats each
▪ IBO: array element buffers hold only single integers
▪ Image buffer (or just buffer): 2-dimensional data
▪ Each value corresponds to a pixel
▪ Usually bytes, but other formats used in special situations
▪ Used for final color display, depth information, and textures
The buffers are on the GPU:
▪ Front and back color buffers
▪ Under control of the local window system
▪ Off-screen buffers
▪ Under our control
▪ Depth buffer
▪ Stencil buffer (holds masks)
▪ Texture buffers
▪ Hold images (and other data) that can be incorporated into the vertex/fragment processing
Typically 24-bit RGB or 32-bit RGBA
▪ 24-bit: 3x 8-bit (byte) channels for R, G, and B
▪ 32-bit: 4x 8-bit (byte) channels for R, G, B, and A
▪ Less common: floating-point buffers and single channel buffers
All buffers are unformatted and uncompressed
Images can be:
▪ compressed or uncompressed
▪ indexed or RGB
▪ big or little endian
▪…
WebGL lacks any sort of direct in-memory manipulation of image formats
▪ But the browser can load many image formats and hand off the pixel data to WebGL
Modern graphics cards can render over 10 million triangles per second
That is still insufficient to render many phenomena like clouds, grass, terrain, skin, …
▪ To render a flat photograph you would need to have one triangle for every pixel
▪ To render a video you would need to have one triangle for every pixel for every frame!
▪ 1080p video at 60 Hz is 124,416,000 triangles per second!
Start with a light-gray colored sphere
▪ Too simple
Replace the sphere with a more complex shape
▪ Too many triangles to model every single dimple
Take a picture of a real golf ball and “paste” it onto the simple sphere
▪ Known as texture mapping – mapping a texture (image) onto the surface of an object
Doesn’t work with lighting in the dimples
▪ Need to change the local “shape”
▪ Bump mapping – mapping a texture that tells how to distort the normals slightly in the dimples
Texture Mapping
▪ Use an image to fill in the colors inside a triangle
▪ A replacement for using vertex or uniform colors
Bump Mapping
▪ Alter normal vectors during the rendering process
Environment/Reflection Mapping
▪ Use an image of the environment for texture maps
▪ Simulates a highly reflective/specular surface
[Figure: a geometric model beside its texture-mapped version]
Mapping techniques are implemented at the end of the rendering pipeline
▪ Very efficient because few polygons make it past the clipper, culler, and depth-testing
▪ Images and geometry flow through separate pipelines that join during fragment processing
Fragment Shader:
▪ Need a uniform sampler2D variable
▪ Need a vec2 containing the texture coordinate
▪ Need to acquire the color from the sampler: texture(uTexture, vTexCoord)
▪ Returns a vec4 (see the sketch below)
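A minimal sketch putting those three pieces together, assuming GLSL ES 3.00 (WebGL 2) since the texture() call is used here; with WebGL 1 the call is texture2D() as shown later:

    #version 300 es
    precision mediump float;

    uniform sampler2D uTexture; // which texture unit to sample (an integer uniform)
    in vec2 vTexCoord;          // texture coordinate interpolated from the vertex shader
    out vec4 fragColor;

    void main() {
        // texture() looks up the texel at vTexCoord and returns a vec4
        fragColor = texture(uTexture, vTexCoord);
    }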
Need to then get the texture coordinates into the fragment shader
▪ First demo: use an attribute in the vertex shader and then pass it through to the fragment shader
Need to load the texture onto the GPU
▪ First demo: will use createCheckerboardImage() to create the image and then you must write the loadTexture() function
▪ Next slide
Need to set the sampler2D uniform to an integer based on which texture unit the texture is loaded into
▪ How do you set an integer uniform with 1 value? Set this uniform to 0.
Create the texture buffer (returns a buffer reference):
    gl.createTexture();
Set the active texture index:
    gl.activeTexture(gl.TEXTURE0);
Bind the created texture buffer to the active texture:
    gl.bindTexture(gl.TEXTURE_2D, texture);
Load the image data into the texture buffer:
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
Set basic filtering options:
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
Cleanup and return
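Putting those steps together, a minimal sketch of the loadTexture() function the earlier slide asks for (assuming the cleanup step means unbinding the texture):

    function loadTexture(gl, img) {
        // Create the texture buffer on the GPU
        let texture = gl.createTexture();
        // Make texture unit 0 active and bind the new texture to it
        gl.activeTexture(gl.TEXTURE0);
        gl.bindTexture(gl.TEXTURE_2D, texture);
        // Copy the image data into the bound texture
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
        // Basic filtering options
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
        // Cleanup (unbind) and return the texture reference
        gl.bindTexture(gl.TEXTURE_2D, null);
        return texture;
    }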
The texture color in the fragment shader can be used like any other color
Combine it with the materialColor defined in the shader
▪ What is the result?
▪ How did you combine the colors?
What happens when you filter with gl.LINEAR instead of gl.NEAREST?
▪ What do you think the terms linear and nearest stand for?
Concept is simple
▪ Map an image to a surface
Implementing it is difficult
▪ There are 3 or 4 coordinate systems involved…
Optional: Parametric coordinates
▪ May be used to model curves and surfaces
Texture coordinates
▪ Used to identify points in the image to be mapped
Object or World Coordinates
▪ Conceptually where the mapping takes place
Window Coordinates
▪ Where the final image is really produced
Map from (𝑥, 𝑦, 𝑧) coordinates in 3D space to (𝑠, 𝑡) coordinates on the 2D surface of an object
▪ Given a point on an object, we want to know which point in the texture to use
▪ Function: (𝑠, 𝑡) = 𝑓(𝑥, 𝑦, 𝑧)
▪ Such functions are difficult to find in general
[Figure: a point (𝑥, 𝑦, 𝑧) on the 3D object and its corresponding (𝑠, 𝑡) location in the 2D texture]
We can read pixel data from the color framebuffer with gl.readPixels
▪ Only gets 8-bit RGBA values
▪ This can be a bit slow as the data must be copied off of the GPU and packed/unpacked
gl.readPixels(x, y, w, h, format, type, out)
▪ x, y, w, h: the region of the framebuffer to read
▪ format, type: the data format and type
▪ out: the array to save the data to

    let img = new Uint8Array(512*512*4);
    gl.readPixels(0, 0, 512, 512, gl.RGBA, gl.UNSIGNED_BYTE, img);
We can also render directly to a texture buffer instead of the color buffer
We don’t need to read the data into JS to use it – instead we can use the data directly from the shaders
Much faster as the data stays completely on the GPU
One solution to the mapping problem is to first map the texture to a simple intermediate surface
Example: map to cylinder
Parametric Cylinder:
𝑥 = 𝑟 cos(2𝜋𝑢)
𝑦 = 𝑟 sin(2𝜋𝑢)
𝑧 = 𝑣/h
which maps a rectangle in (𝑢, 𝑣) space to a cylinder of radius 𝑟 and height h in world coordinates
To map from texture space: 𝑠 = 𝑢 and 𝑡 = 𝑣
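For a cylinder the inverse mapping (𝑠, 𝑡) = 𝑓(𝑥, 𝑦, 𝑧) is also easy to write down. A small sketch (the helper name is hypothetical, and it assumes the point lies on a cylinder with 𝑧 in [0, h]):

    // Given a point (x, y, z) on a cylinder of height h,
    // recover the texture coordinates (s, t)
    function cylinderTexCoord(x, y, z, h) {
        // s comes from the angle around the axis; atan2 returns (-pi, pi]
        let s = (Math.atan2(y, x) / (2 * Math.PI) + 1) % 1;
        // t comes from the height along the axis
        let t = z / h;
        return [s, t];
    }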
Parametric Sphere:
𝑥 = 𝑟 cos(2𝜋𝑢)
𝑦 = 𝑟 sin(2𝜋𝑢) cos(2𝜋𝑣)
𝑧 = 𝑟 sin(2𝜋𝑢) sin(2𝜋𝑣)
which maps a rectangle in 𝑢, 𝑣 space to a sphere of radius 𝑟 in world coordinates (however there will be a region of distortion near the poles of the sphere)
Spheres are frequently used in environmental maps
Box:
▪ Easy to use with a simple orthographic projection
▪ Also frequently used in environment maps
Map from the intermediate object to the actual object:
▪ Normals from intermediate to actual
▪ Normals from actual to intermediate
▪ Vectors from center of intermediate
Point sampling of the texture can lead to aliasing errors
A better but slower option is to use area averaging
[Figure: a pixel's preimage in the texture under point sampling vs. area averaging]
Three steps to applying a texture:
1. Specify the texture
▪ read or generate image
▪ assign to texture
▪ enable texturing
2. Assign texture coordinates to vertices
▪ proper mapping function is left to the application
3. Specify texture parameters
▪ wrapping, filtering
Define a texture image from an array of texels (texture elements) in CPU memory
▪ Load an image file (such as PNG or JPEG)
▪ Scanned image
▪ Generated by application code
WebGL only supports 2D texture maps
▪ Unlike desktop OpenGL, it is always enabled
▪ Desktop OpenGL supports 1-4D texture maps
gl.texImage2D(target, level, internalFormat, format, type, image);
▪ target: the type of texture, usually gl.TEXTURE_2D
▪ level: used for mipmapping (covered later; for now use 0)
▪ internalFormat, format: image format, e.g. gl.RGBA
▪ type: image data type, e.g. gl.UNSIGNED_BYTE
▪ image: the image data
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
We can compute an image in JS code using the ImageData type:
let img = new ImageData(512, 512);
▪ Creates a 512×512 RGBA ‘image’ that has all pixels initially transparent black
▪ Just a wrapper around a Uint8ClampedArray
▪ Get the underlying array with img.data
▪ It is a 1D array so you have to use a bit of math to use it as a 2D image
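For example, a sketch of that indexing math (the checkerboard pattern here is just illustrative):

    let img = new ImageData(512, 512);
    for (let y = 0; y < img.height; y++) {
        for (let x = 0; x < img.width; x++) {
            // Each pixel is 4 bytes (RGBA); rows are laid out one after another
            let off = 4 * (y * img.width + x);
            let white = ((x >> 6) + (y >> 6)) % 2 === 0; // 64×64-pixel squares
            img.data[off] = img.data[off + 1] = img.data[off + 2] = white ? 255 : 0;
            img.data[off + 3] = 255; // fully opaque
        }
    }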
In the fragment shader (or vertex shader for other purposes) add the global:
uniform sampler2D texture;
Then we can get a specific value of the texture using:
texture2D(texture, texCoord);
▪ Where texCoord is a vec2
Let's make a checkerboard cube!
As we have seen, the texture coordinates go from 0.0 to 1.0 in the 𝑠 and 𝑡 dimensions
▪ These are internally converted to a value from 0 to the width or height of the image in the 𝑥 and 𝑦 dimensions
What happens if we give a texture coordinate outside of the 0.0 to 1.0 range?
▪ Wrapping
What happens if we ask for a pixel that is in-between two pixels?
▪ Sampling and Mipmapping
The texture options are set as follows:
    gl.texParameteri(gl.TEXTURE_2D, name, value);
Where name and value are the name and value of the option to set
This will set the option for the currently active and same-type texture only
▪ For example, if gl.TEXTURE0 is active and we give gl.TEXTURE_2D, then it only affects the #0 texture of 2D type
The options for wrapping are:
▪ gl.TEXTURE_WRAP_S for the 𝑠 dimension
▪ gl.TEXTURE_WRAP_T for the 𝑡 dimension
The possible values are:
▪ gl.REPEAT (default)
▪ gl.MIRRORED_REPEAT
▪ gl.CLAMP_TO_EDGE
[Figure: a texture shown tiled with gl.REPEAT wrapping and stretched with gl.CLAMP_TO_EDGE wrapping along the s and t axes]
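In code, for example (a sketch; these calls affect whatever texture is currently bound):

    // Tile the texture horizontally but clamp it vertically
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.REPEAT);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);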
Frequently texels and pixels don’t line up:
▪ Magnification: a single texel covers more than one pixel
▪ Minification: a single pixel covers more than one texel
There are two choices for getting a value:
▪ Linear: weighted average of the 4 nearest values
▪ Slight blurring/smoothing of the data
▪ Nearest: uses just the nearest value
▪ Will preserve sharp edges
For magnification the option is gl.TEXTURE_MAG_FILTER:
▪ gl.LINEAR (default)
▪ gl.NEAREST
A little more complex because there can be mipmapping
Option is gl.TEXTURE_MIN_FILTER, values are:
▪ gl.LINEAR
▪ gl.NEAREST
▪ gl.NEAREST_MIPMAP_NEAREST
▪ gl.LINEAR_MIPMAP_NEAREST
▪ gl.NEAREST_MIPMAP_LINEAR (default)
▪ gl.LINEAR_MIPMAP_LINEAR
The last 4 use mipmapping
Multiple scaled versions of the image are stored in the same texture object
▪ The level 0 version is the full-size version
▪ Each level higher halves the size of the image in each dimension
▪ Manually:
gl.texImage2D(gl.TEXTURE_2D, 1, …)
▪ Auto: gl.generateMipmap(gl.TEXTURE_2D)
▪ Reduces interpolation errors in smaller textured objects
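A sketch of the automatic route, assuming the level 0 image has already been loaded with texImage2D:

    // Build all the smaller levels from level 0...
    gl.generateMipmap(gl.TEXTURE_2D);
    // ...and choose a minification filter that actually uses them
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);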
First part says how to look up a value once a mipmap is selected and second part is how to select the mipmap itself
Examples:
gl.LINEAR_MIPMAP_NEAREST
▪ Picks the mipmap nearest to the correct minification level then chooses the value based on linear interpolation
gl.NEAREST_MIPMAP_LINEAR
▪ Picks the two mipmaps nearest to the correct minification level, chooses the nearest value within each, then linearly blends the two results together
If the texture’s dimensions are not a power of two (e.g. a 65×63 image) then several options are not available:
▪ gl.TEXTURE_WRAP_S and _T must be gl.CLAMP_TO_EDGE
▪ gl.TEXTURE_MIN_FILTER must be gl.NEAREST or gl.LINEAR (none of the mipmap variants)
▪ Mipmapping cannot be used at all
In general all images should be made powers of two, since they will work faster and all the options can be used
▪ However, this isn’t always possible; you can still use NPOT images, but the default wrapping and sampling options do not support NPOT textures, so they must be set explicitly (see the sketch below)
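A sketch of the settings an NPOT texture needs in WebGL 1:

    // Required for non-power-of-two textures: clamp both axes, no mipmaps
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);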
Let's play with the various options available for wrapping and sampling
The value we get from texture2D is like any other color
▪ It can be used in our lighting system or modified with other colors
Modify the program to support basic lighting with the texture providing the ambient and diffuse colors
▪ Make the specular color white
▪ Make the light color white
▪ 𝑘𝑎 = 0.2, 𝑘𝑑 = 𝑘𝑠 = 1.0
▪ Light position = (1, 1, −1)
Then modify it so the ambient and diffuse colors are green times the texture color
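One way the base combination might look in the fragment shader (a sketch only; the varying names, light uniform, and shininess value are assumptions, not the course's exact code):

    precision mediump float;

    uniform sampler2D texture;   // provides the ambient and diffuse colors
    uniform vec3 lightPosition;  // assumed uniform; set to (1, 1, -1)
    varying vec3 fNormal;        // assumed varyings from the vertex shader
    varying vec3 fPosition;
    varying vec2 fTexCoord;

    void main() {
        vec3 N = normalize(fNormal);
        vec3 L = normalize(lightPosition - fPosition);
        vec3 V = normalize(-fPosition);  // eye at the origin in eye space
        vec3 H = normalize(L + V);       // halfway vector

        vec4 texColor = texture2D(texture, fTexCoord);
        float diff = max(dot(N, L), 0.0);
        float spec = pow(max(dot(N, H), 0.0), 100.0); // shininess assumed

        vec3 color = 0.2 * texColor.rgb   // ambient: ka = 0.2
                   + diff * texColor.rgb  // diffuse: kd = 1.0
                   + spec * vec3(1.0);    // specular: ks = 1.0, white
        gl_FragColor = vec4(color, 1.0);
    }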
We can load more than one texture onto the GPU
A single output pixel can use more than one texture to generate its color
load_texture takes an optional argument for which texture to load into
▪ Default is 0, max is very large (my test showed 35660 in the Atom Browser)
The sampler2D uniform needs to be set to an integer value for which texture it should use
▪ Just like any other integer uniform
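A sketch of wiring two samplers to two texture units (the uniform names here are assumptions):

    // Each sampler2D uniform holds the index of the texture unit it reads from
    gl.uniform1i(gl.getUniformLocation(program, 'uTexture0'), 0); // gl.TEXTURE0
    gl.uniform1i(gl.getUniformLocation(program, 'uTexture1'), 1); // gl.TEXTURE1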
Make a new function that creates a sinusoidal image where each pixel is colored as:
0.5 + 0.5 sin(4𝜋(𝑖 + 𝑗) / 𝑡𝑒𝑥𝑆𝑖𝑧𝑒)
▪ Remember that the image data needs to be bytes, not floats, so the equation above needs to be adjusted
Create this texture and load it as texture 1
Modify the fragment shader to use 2 textures, and set the second texture to #1
Use both textures to get a color for the ambient and diffuse colors
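A sketch of that generator (the function name and grayscale choice are assumptions; the ×255 scaling converts the [0, 1] result to bytes):

    function createSinusoidImage(texSize) {
        let img = new ImageData(texSize, texSize);
        for (let i = 0; i < texSize; i++) {
            for (let j = 0; j < texSize; j++) {
                // 0.5 + 0.5*sin(4*pi*(i+j)/texSize), scaled to a 0-255 byte
                let v = Math.round(255 * (0.5 + 0.5 * Math.sin(4 * Math.PI * (i + j) / texSize)));
                let off = 4 * (i * texSize + j);
                img.data[off] = img.data[off + 1] = img.data[off + 2] = v;
                img.data[off + 3] = 255; // opaque
            }
        }
        return img;
    }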
Instead of using a single 2D image as the texture we can use 6 different 2D images – one for each face of the cube
▪ WebGL will make a texture “volume” that is a solid cube
▪ Texture coordinates now must be vec3 since it is grabbing from a volume
Given a new function: load_cubemap_texture
▪ Need to give it 6 images:
▪ X-positive, X-negative, Y-positive, …
▪ Optionally the texture number to use
Need to update shaders to use 3D textures…
sampler2D → samplerCube
texture2D → textureCube
The texture coordinates need to be vec3s instead of vec2s:
▪ Update vTexCoord and fTexCoord (twice)
▪ Update the texture coordinates to (right to left): (-1,-1,-1), (1,-1,-1), (1,1,-1), (-1,1,-1), (-1,1,1), (1,1,1), (1,-1,1), (-1,-1,1)
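A sketch of the resulting fragment shader (names follow the earlier snippets; WebGL 1 style GLSL assumed, since the slide names textureCube):

    precision mediump float;

    uniform samplerCube texture;
    varying vec3 fTexCoord;  // now a 3D direction into the cube volume

    void main() {
        // Sample the cube map with a direction instead of a 2D coordinate
        gl_FragColor = textureCube(texture, fTexCoord);
    }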
Since a cubemap is a solid “volume” of texture we can texture a sphere fairly easily with it
Switch to drawing a sphere:
▪ Use the unit_sphere function
▪ Add false argument to calc_normal (since not a TRIANGLE_STRIP anymore)
▪ Switch drawElements to use TRIANGLES
Texture coordinates are a bit more of a problem…
▪ We could just use the coordinates of the vertices as the texture coordinates
▪ Instead of actually allocating these in JS and sending them over just modify the vertex shader to convert the vPosition to a vec3 and forward that onto the fragment shader
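A sketch of that vertex-shader change (the matrix uniform names are assumptions):

    attribute vec4 vPosition;
    varying vec3 fTexCoord;
    uniform mat4 modelViewMatrix;  // assumed name for the usual transform
    uniform mat4 projectionMatrix; // assumed name

    void main() {
        // Use the model-space vertex position itself as the cube-map direction
        fTexCoord = vPosition.xyz;
        gl_Position = projectionMatrix * modelViewMatrix * vPosition;
    }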
Besides creating our image data algorithmically we can load image files
Adds some complications…
▪ Loading external files (images, 3D models, …) in JS cannot be done synchronously
▪ Means that we have to tell it to load and then at some point in the future it will call one of our functions when it is done
▪ We cannot render until that other function is called
// Create a new image object
let img = new Image();
// Add a listener for its load event
img.addEventListener('load', function () {
    // Load the texture onto the GPU
    // ("this" is the Image object)
    load_cubemap_texture(this, this, this, this, this, this);
    // Render the scene now that we have all the data
    render();
});
// Tell it where to get the image; loading starts now
img.src = 'my_image.png';
Render the cube with the WebGL_white image
What is wrong?
The image isn’t black and white; it is white and transparent
▪ The transparent pixels aren’t a consistent color
▪ For example, vec4(1.0, 0.0, 0.0, 0.0) is a completely transparent RED pixel
▪ We are discarding the transparency information right now
▪ We can’t deal with transparency yet (we will talk about blending soon)
For the moment we will simply do the following in the fragment shader:
    color = color.a * color;
Also try loading both the red and white WebGL logos
Half of the faces should use one while the other half use the other
Switch the cube to a sphere
The cube map still works!
▪ Mostly; there is ‘distortion’ though… where?