
cse3431-lecture11-textures

Pasting textures (2D images) on surfaces

Texture Mapping

Conceptual Steps Involved
Texture to Object Mapping
• User defines where the texture maps onto an object’s surface
• In our pipeline this happens at the vertex level

Texture to Screen Mapping (through object)
• The rendering system has to project the texture in some way to the screen
• That is, each pixel on the screen has to get the right piece of the texture

• With programmable OpenGL we can manipulate textures at the fragment level, i.e. during the second step

The two steps as coordinate system transformations

Systems and transformations involved
• (s,t) : 2D Texture space
• (sx,sy) : 2D Screen space
• sTw: world to screen (viewing and projection)
• wTt: texture to world

(sx,sy) = sTw(wTt(s,t))

User Defined Viewing+Projection

Approach one: Texture to Screen

(sx,sy) = sTw(wTt(s,t))

For each pixel covered by the texture we would have to calculate coverage (overlap).

Approach two: Screen to texture

(s,t) = tTw(wTs(sx,sy))

For each texel covered by the pixel’s projection we would have to compute coverage.

We also need to invert the projection process

OpenGL Approach: Screen to texture

In programmable OpenGL

For each fragment we compute texture coordinates with which we can fetch texels.

For each fragment we can fetch as many texels as we want (texture lookup).
For simple cases, one texture lookup per pixel might be enough.

We keep texture coordinates per vertex and the rasterizer interpolates them for intermediate pixels: (s,t)

Fetching and using a single texel corresponds to what?


Approach two: Screen to texture
How do we address texture minification and magnification?

• Filtering; we will discuss it later


2D Textures: image abstraction

They are always assigned the shown parametric coordinates (s,t).

[Figure: the texture square in (s,t) space with corners (0,0), (1,0), (1,1), (0,1); the point (0.5,1) is the midpoint of the t=1 edge]

From texture to world (object)
To apply a texture to an object we have to find a correspondence between (s,t) and some object coordinate system.
• Mapping via a parametric representation of the object space (points)
• By hand

• Notice: we want to map a 2D image on the surface of the object
• Most often we need to parameterize the object in 2D

Mapping from texture to a 2D parametric form of the object space

Linear transformation from texture space (s,t) to object space parameterization (u,v):

u = u(s,t) = a_u s + b_u t + c_u
v = v(s,t) = a_v s + b_v t + c_v

s, t in [0,1]
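A minimal JavaScript sketch of such an affine map (the helper name makeAffineMap is hypothetical, not from the slides):

// Affine map from texture space (s,t) to object parameter space (u,v):
// u = au*s + bu*t + cu,  v = av*s + bv*t + cv
function makeAffineMap(au, bu, cu, av, bv, cv) {
    return function (s, t) {
        return { u: au * s + bu * t + cu,
                 v: av * s + bv * t + cv };
    };
}

// The identity map used in the first example below: u = s, v = t
var uvFromST = makeAffineMap(1, 0, 0, 0, 1, 0);
uvFromST(0.5, 1.0); // -> { u: 0.5, v: 1.0 }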

Example: Image to a quadrilateral

Choose a convenient 2D object system:
Origin: P1
x axis: P2 - P1
y axis: P4 - P1

Now parameterize it. A point P on the object in (u,v) coordinates is:

P_x(u) = P1_x (1-u) + P2_x u = P1_x + u (P2_x - P1_x)
P_y(v) = P1_y (1-v) + P4_y v = P1_y + v (P4_y - P1_y)

with u, v in [0,1].

A couple of sanity checks (in the (P1, x, y) coordinate system):

P(0,0) = P1, P(1,1) = (P2_x, P4_y) = P3, P(1,0) = P2, P(0,1) = P4

[Figure: quadrilateral with corners P1 (origin), P2, P3, P4; x axis along P2 - P1, y axis along P4 - P1]

Example: Image to a quadrilateral

Then the mapping is simply
u = u(s,t) = s
v = v(s,t) = t
s,t in [0,1]


Example 2: Piece of an image to a quadrilateral

Use only the left half of the image:

s = 0.5 u
t = v
u, v in [0,1]

so for v3 = P(u=1, v=1) the texture coordinates are (s,t) = (0.5, 1)

[Figure: quad with vertices v1, v2, v3, v4; only the left half of the texture (s ≤ 0.5) is used]
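A minimal WebGL-style sketch of this example (the vertex ordering and buffer names are assumptions, not from the slides): each vertex carries the (s,t) values derived above, and the rasterizer interpolates them for the interior fragments.

// Texture coordinates selecting the left half of the image (s in [0, 0.5])
var texCoords = new Float32Array([
    0.0, 0.0,  // v1 = P(u=0, v=0) -> (s,t) = (0,   0)
    0.5, 0.0,  // v2 = P(u=1, v=0) -> (s,t) = (0.5, 0)
    0.5, 1.0,  // v3 = P(u=1, v=1) -> (s,t) = (0.5, 1)
    0.0, 1.0   // v4 = P(u=0, v=1) -> (s,t) = (0,   1)
]);
var texBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, texBuffer);
gl.bufferData(gl.ARRAY_BUFFER, texCoords, gl.STATIC_DRAW);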

Parameterizing the Cylinder
Cylinder has height h, radius r, centered at 0
2D parameterization of Object Space

In (u,v) space the quarter cylinder is a quad

Advanced example: 2D Parameterization of a curved surface

[Figure: a curved surface in (x,y,z) space with a 2D (u,v) parameterization]

Now map (u,v) to (s,t). We choose:

Mapping a square texture to the quarter cylinder

[Figure: the unit texture square in (s,t), corners (0,0) to (1,1), mapped onto the cylinder's (u,v) domain with corners (0,0), (π/2, 0), and (0, h)]
(s,t) = (0,0) → (u,v) = (0,0)
(s,t) = (1,1) → (u,v) = (π/2, h)

that is

u = sπ/2, v = ht

or inverted

s = 2u/π, t = v/h
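A minimal JavaScript sketch of generating texture coordinates for the quarter cylinder (the axis assignment x = r cos u, z = r sin u, y = v is an assumption; the slides only show it in the figure):

// Positions and texture coordinates for a quarter cylinder of radius r
// and height h, tessellated into (nu+1) x (nv+1) vertices.
function quarterCylinder(r, h, nu, nv) {
    var positions = [], texCoords = [];
    for (var i = 0; i <= nu; i++) {
        var u = (i / nu) * Math.PI / 2;   // angle in [0, pi/2]
        for (var j = 0; j <= nv; j++) {
            var v = (j / nv) * h;         // height in [0, h]
            positions.push(r * Math.cos(u), v, r * Math.sin(u));
            // Inverted parameterization: s = 2u/pi, t = v/h
            texCoords.push(2 * u / Math.PI, v / h);
        }
    }
    return { positions: positions, texCoords: texCoords };
}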

Example: Wrapping textures on polygonal approximations of curved surfaces

However, we only have polygons in the graphics pipeline.

How does that work with the graphics pipeline?

Only vertices go down the graphics pipeline.
Texture coordinates for interior points of polygons?

Calculate texture coordinates by interpolation along scanlines (see the sketch below).
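A minimal sketch of this naive screen-space interpolation (hypothetical helper names, assuming xRight > xLeft); as the next slides show, plain linear interpolation in screen space is not correct under perspective:

// Linearly interpolate (s,t) across one scanline in screen space.
// sLeft/tLeft and sRight/tRight are the values at the scanline endpoints.
function lerp(a, b, f) { return (1 - f) * a + f * b; }

function scanlineTexCoords(xLeft, xRight, sLeft, tLeft, sRight, tRight) {
    var coords = [];
    for (var x = xLeft; x <= xRight; x++) {
        var f = (x - xLeft) / (xRight - xLeft); // equal steps in screen space
        coords.push([lerp(sLeft, sRight, f), lerp(tLeft, tRight, f)]);
    }
    return coords;
}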

Rendering the texture
Scanline in screen space
• Generating s,t coordinates for each pixel

Interpolation of texture
coordinates

Problem

Perspective foreshortening
• Scan conversion takes equal steps along the scanline (screen space), specifically by 1, i.e. x, x+1, x+2, …
• Equal steps in screen space are not equal steps in world space

Reminder: In-between points

[Figure: perspective division; A, B, R on a segment in viewing space map through M to homogeneous points a, b, r, and after division to A', B', R']

R(g) = (1-g)A + gB, g ∈ ℝ

r = MR, where M is a WebGL perspective transformation

After perspective division (NDCS): R'(f) = (1-f)A' + fB', f ∈ ℝ

How do g and f relate?

First step, viewing space to homogeneous space (4D): a = MA, b = MB, r = MR
Second step, perspective division: A' = a/a4, B' = b/b4, R' = r/r4

r = (1-g)a + gb
r = (r1, r2, r3, r4)
a = (a1, a2, a3, a4)
b = (b1, b2, b3, b4)

⇒ R'1 = r1/r4 = ((1-g) a1 + g b1) / ((1-g) a4 + g b4)

Putting all together


Relation between the fractions

R'1(g) = lerp(a1, b1, g) / lerp(a4, b4, g)
R'1(f) = lerp(a1/a4, b1/b4, f)

⇒ g = f / lerp(b4/a4, 1, f)

Substituting this into R(g) = (1-g)A + gB yields

R1 = lerp(A1/a4, B1/b4, f) / lerp(1/a4, 1/b4, f)

THAT MEANS: for a given f in screen space and A, B in viewing space we can find the corresponding R (or g) in viewing space using the above formula.

"A, B" can be texture coordinates, position, color, normal, etc.
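A minimal numeric sketch of this formula (hypothetical helper names): interpolate attribute/w and 1/w linearly in screen space, then divide.

// Perspective-correct (hyperbolic) interpolation of one attribute.
// A1, B1: attribute values at the endpoints (viewing space)
// a4, b4: the endpoints' homogeneous w components (after multiplying by M)
// f:      screen-space interpolation fraction along A'B'
function lerp(a, b, f) { return (1 - f) * a + f * b; }

function perspCorrect(A1, B1, a4, b4, f) {
    // R1 = lerp(A1/a4, B1/b4, f) / lerp(1/a4, 1/b4, f)
    return lerp(A1 / a4, B1 / b4, f) / lerp(1 / a4, 1 / b4, f);
}

// The screen-space midpoint (f = 0.5) is NOT the attribute midpoint
// when a4 != b4 (perspective foreshortening):
perspCorrect(0, 1, 1, 4, 0.5); // -> 0.2, not 0.5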


Rendering images incrementally

A maps to a (homogeneous), B maps to b, C maps to c, D maps to d.

For scanline y and two edges: once we have sleft and sright, another hyperbolic interpolation fills in the scanline.

[Figure: quad ABCD in viewing space and its projection abcd in screen space; sleft and sright are the values at the scanline's edge crossings]

Interpolation along the scanline

[Figure: pixel (x,y) on the scanline between (xleft,y) and (xright,y) of the projected quad abcd]

What are the f's and h's?


Example: Checkerboard image on a flat quad in the x-y plane

• The left image (linear interpolation) would be correct for a trapezoid that is parallel to the image plane
• You can think of it as follows: linear interpolation pastes the image on the projected object; hyperbolic interpolation pastes the image on the object before the projection

Pipeline with hyperbolic interpolation

What does the texture do?
Textures are accessed in the fragment shader
• vec4 texColor = texture2D(stexture, fTexCoord);
• and we can do whatever we like with the result (assign it as the fragment's color, blend it with other values, anything)

Texture mapping in OpenGL
Creating a texture
void glTexImage2D(
GLenum target, // must be GL_TEXTURE_2D
GLint level,
GLint internalformat, // e.g. GL_RGB
GLsizei width,
GLsizei height,
GLint border,
GLenum format, // e.g. GL_RGB
GLenum type, // e.g. GL_UNSIGNED_BYTE
const GLvoid *pixels // size powers of 2 !!
);

Need to load an image first; various libraries exist for that.

The internalformat and format parameters MUST MATCH!
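For instance, a WebGL sketch under the assumption that an Image object named img has already been loaded:

// Upload a loaded image to the currently bound 2D texture.
// In WebGL 1, internalformat and format must be the same enum.
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D,     // target
              0,                 // level 0 = base image
              gl.RGB,            // internalformat
              gl.RGB,            // format: must match internalformat
              gl.UNSIGNED_BYTE,  // type
              img);              // pixels (Image, canvas, or typed array)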

Textures Have Many Parameters

Dealing with out-of-range texture coordinates:
void glTexParameteri(
GLenum target, // e.g. GL_TEXTURE_2D
GLenum pname, // e.g. GL_TEXTURE_WRAP_S
GLint param // value, e.g. GL_CLAMP_TO_EDGE
);

e.g.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.REPEAT);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.REPEAT);



Wrapping Mode

Clamping: if s,t > 1 use 1; if s,t < 0 use 0
Wrapping: use s,t modulo 1

gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.REPEAT);

[Figure: a texture shown with clamped wrapping vs. gl.REPEAT wrapping. Angel and Shreiner: Interactive Computer Graphics 7E © Addison-Wesley 2015]

Texture filtering

Texture images consist of pixels (texels). Therefore:
• Magnification: a pixel on the screen covers only part of a texel (a texel stretches to cover multiple pixels)
• Minification: a pixel on the screen covers more than one texel (a texel is squeezed to fit in an area smaller than a pixel)
Solution: Filtering

Texture filtering in OpenGL

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

GL_TEXTURE_MAG_FILTER: GL_NEAREST or GL_LINEAR
GL_TEXTURE_MIN_FILTER: GL_NEAREST, GL_LINEAR,
GL_NEAREST_MIPMAP_NEAREST,
GL_LINEAR_MIPMAP_NEAREST,
GL_LINEAR_MIPMAP_LINEAR
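For example, a minimal WebGL sketch of trilinear filtering (assuming a power-of-two texture is currently bound, since WebGL 1 cannot mipmap NPOT textures):

gl.generateMipmap(gl.TEXTURE_2D); // build the prefiltered levels
// Minification: interpolate within and between mipmap levels
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
// Magnification: plain bilinear (mipmaps do not apply)
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);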
Filtering textures

Addresses texture minification and magnification.
• vec4 texColor = texture(s,t) returns what, really?

FILTERING
• GL_NEAREST: no filtering; returns the texture element closest (in Manhattan distance) to the texture coordinates provided
• GL_LINEAR: returns the weighted average of the four texture elements that are closest to the texture coordinates provided

• Nearest: returns the color of a single texel
• Linear: returns the average of the nearest four texels, which captures better how the pixel actually covers texels

Mipmapped Textures

• Mipmapping allows for prefiltered texture maps of decreasing resolutions
• Lessens interpolation errors for smaller textured objects
• Declare mipmap level during texture definition:
gl.texImage2D(gl.TEXTURE_*D, level, … )

[Figure: the same scene with point sampling, mipmapped point sampling, linear filtering, and mipmapped linear filtering. Angel and Shreiner: Interactive Computer Graphics 7E © Addison-Wesley 2015]

Texture Coordinate Transforms

Texture coordinates are, in fact, 2D coordinates in texture space and they can be transformed with affine transformations.

Texture Objects

Copying an image from main memory to video memory is very expensive (gl.texImage2D).
• Create texture names
• Bind (create) texture objects to texture data:
– Image arrays + texture properties
• Bind and rebind texture objects

Texture Object Creation

function loadFileTexture(tex, filename)
{
    tex.textureWebGL = gl.createTexture();
    tex.image = new Image();
    tex.image.src = filename;
    tex.isTextureReady = false;
    tex.image.onload = function() {
        // The image is loaded asynchronously (lazily), which could be
        // after the program continues to the next functions. OUCH!
        handleTextureLoaded(tex);
    };
}

Texture Object Creation

function handleTextureLoaded(textureObj)
{
    gl.bindTexture(gl.TEXTURE_2D, textureObj.textureWebGL);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA,
                  gl.UNSIGNED_BYTE, textureObj.image);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER,
                     gl.LINEAR_MIPMAP_NEAREST);
    gl.generateMipmap(gl.TEXTURE_2D);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S,
                     gl.CLAMP_TO_EDGE); // prevents s-coordinate wrapping
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T,
                     gl.CLAMP_TO_EDGE); // prevents t-coordinate wrapping
    gl.bindTexture(gl.TEXTURE_2D, null);
}

Using Textures
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture1);
gl.uniform1i(gl.getUniformLocation(program, "stexture1"), 0);
drawCube(); // uses texture1

gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture2);
gl.uniform1i(gl.getUniformLocation(program, "stexture1"), 0);
drawSphere(); // different texture for the sphere

Fragment shader

precision mediump float;
uniform sampler2D stexture1;
varying vec4 fColor;
varying vec2 fTexCoord;

void main()
{
    gl_FragColor = vec4(fColor.rgb, 1.0);
    gl_FragColor = texture2D(stexture1, fTexCoord);
}
Multiple Textures

gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture1);
gl.uniform1i(gl.getUniformLocation(program, "stexture1"), 0);

gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, texture2);
gl.uniform1i(gl.getUniformLocation(program, "stexture2"), 1);
drawSphere(); // two active textures

Multiple Textures Fragment Shader

precision mediump float;
uniform sampler2D stexture1;
uniform sampler2D stexture2;
varying vec4 fColor;
varying vec2 fTexCoord;

void main()
{
    gl_FragColor = vec4(fColor.rgb, 1.0);
    vec4 c1 = texture2D(stexture1, fTexCoord);
    vec4 c2 = texture2D(stexture2, fTexCoord);
    gl_FragColor = mix(c1, c2, 0.5);
}

Filters!
 
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);

DO NOT FORGET TO SET THE FILTERS!! You get black textures because of the default settings.

Procedural textures

Fragment shaders can generate textures on the fly.
Define textures through a process (function) instead of predefined samples:
– 2D: T = F(s,t), 3D: T = F(x,y,z)

Advantages
• The process can be parameterized
• Needs less memory, especially in the 3D case
• No predefined resolution

Disadvantages
• Slower texture lookup

In practice
• Combinations of both approaches
• Complex objects will use multiple textures, some based on images, some procedural

Common use of Textures: Light maps

For static objects we can simulate lighting by blending textures.
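Returning to the procedural-texture idea above, a minimal sketch (a hypothetical fragment shader carried in a JavaScript string, not from the slides): the pattern is computed from (s,t) instead of being fetched from an image.

// Procedural checkerboard: T = F(s,t), no image data needed.
var proceduralFS =
    "precision mediump float;\n" +
    "varying vec2 fTexCoord;\n" +
    "void main() {\n" +
    "    // 8x8 checkerboard computed from the texture coordinates alone\n" +
    "    float c = mod(floor(8.0 * fTexCoord.s) + floor(8.0 * fTexCoord.t), 2.0);\n" +
    "    gl_FragColor = vec4(c, c, c, 1.0);\n" +
    "}\n";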