
COMP3421
Introduction to 3D Graphics

3D coordinates

Moving to 3D is simply a matter of adding an extra dimension to our points and vectors:

3D coordinates

3D coordinate systems can be left- or right-handed. We typically use right-handed systems, but left-handed ones can arise (e.g. if one of the axes has negative scale).

[Figure: left-handed vs right-handed coordinate axes]

Depth

Let's try First3DExample.java

By default OpenGL draws objects in the order in which they are generated in the code.

To make sure closer objects are drawn in front of objects behind them:

gl.glEnable(GL2.GL_DEPTH_TEST);
gl.glClear(GL2.GL_DEPTH_BUFFER_BIT);

(We will talk in more detail about depth soon.)

3D objects

We represent 3D objects as polygonal meshes. A mesh is a collection of polygons in 3D space that form the skin of an object.

Let's try the default glut teapot in Second3DExample.java

Lighting

Without lighting, our 3D objects look flat.

gl.glEnable(GL2.GL_LIGHTING);

Once lighting is enabled, by default, gl.glColor does not work. You need to specify material properties.

Enabling lighting does not actually turn any lights on. You also need something like:

gl.glEnable(GL2.GL_LIGHT0);

3D Transformations in OpenGL

gl.glTranslated(dx, dy, dz);
gl.glScaled(sx, sy, sz);
gl.glRotated(angle, vx, vy, vz);

// angle: the angle of rotation, in degrees
// (vx, vy, vz): the axis of rotation

3D transformations

3D affine transformations have the same structure as 2D but have an extra axis:

3D Transformations

Translation:

Scale: Shear:
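The matrices themselves did not survive extraction; the standard homogeneous forms for translation and scale (shear fills the off-diagonal entries of the upper-left block) are:

```latex
T(d_x,d_y,d_z) = \begin{pmatrix} 1 & 0 & 0 & d_x \\ 0 & 1 & 0 & d_y \\ 0 & 0 & 1 & d_z \\ 0 & 0 & 0 & 1 \end{pmatrix}
\qquad
S(s_x,s_y,s_z) = \begin{pmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
```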

3D Rotation

The rotation matrix depends on the axis of rotation.

We can decompose any rotation into a sequence of rotations about the x, y and z axes.

Conversely, any sequence of rotations can be expressed as a single rotation about an axis.

3D Rotation

In each case, positive rotation is CCW from the next axis towards the previous axis.

• Mx rotates y towards z
• My rotates z towards x
• Mz rotates x towards y

This works no matter whether the frame is left or right handed.


Right Hand Rule

For any axis, if the right thumb points in the positive direction of the axis, the right fingers curl in the direction of rotation.

3D Rotation
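The rotation matrices shown on these slides were lost in extraction; the standard forms, consistent with "Mx rotates y towards z" etc., are:

```latex
M_x(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix}
\quad
M_y(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix}
\quad
M_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}
```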

Our Own 3D Mesh: A Cube

If we create a cube, we want to make sure each face is drawn so the quad is facing outwards. In a right-handed frame, this means the points are in anticlockwise order.

Note: If you use your right hand, your curved fingers represent the winding order and your thumb the outwards direction.

Our Own 3D Mesh: A Cube

gl.glBegin(GL2.GL_QUADS);

// front
gl.glVertex3d(0, 0, 0);
gl.glVertex3d(1, 0, 0);
gl.glVertex3d(1, 1, 0);
gl.glVertex3d(0, 1, 0);

// back
gl.glVertex3d(0, 0, -1);
gl.glVertex3d(0, 1, -1);
gl.glVertex3d(1, 1, -1);
gl.glVertex3d(1, 0, -1);

// etc. (remaining four faces)

gl.glEnd();

Exercise

// top face
// bottom face
// left face
// right face

// see code (rotateCube.java) for solns
Back face culling

An optimisation called face culling allows non-visible triangles of closed surfaces (back faces) to be culled before rasterisation. This avoids unnecessary fragments being created.

This is based on the winding order of the polygon: front faces are CCW by default.

Back face culling

// Disabled by default
// To turn on culling:
gl.glEnable(GL2.GL_CULL_FACE);
gl.glCullFace(GL2.GL_BACK);

Lighting and Normals

Once lighting is on, it is no longer enough to model the coordinates of your vertices; you need to provide normals as well. E.g.:

gl.glNormal3d(0, 0, 1);

These are used during the lighting calculations. Otherwise lighting does not work properly.

Note: The glut teapot already has normals defined, but we will need to add these ourselves for our own meshes.

OpenGL

gl.glBegin(GL2.GL_POLYGON);
{
    // set normal before vertex
    gl.glNormal3d(nx, ny, nz);
    gl.glVertex3d(px, py, pz);
    // etc...
}
gl.glEnd();

A Cube With Normals

gl.glBegin(GL2.GL_POLYGON); // front
gl.glNormal3f(0, 0, 1);
gl.glVertex3d(0, 0, 0);
gl.glVertex3d(1, 0, 0);
gl.glVertex3d(1, 1, 0);
gl.glVertex3d(0, 1, 0);
gl.glEnd();

gl.glBegin(GL2.GL_POLYGON); // back
gl.glNormal3f(0, 0, -1);
gl.glVertex3d(0, 0, -1);
gl.glVertex3d(0, 1, -1);
gl.glVertex3d(1, 1, -1);
gl.glVertex3d(1, 0, -1);
gl.glEnd();

Lighting and Normals

For the lighting calculations to work as expected, normals passed to them must be unit length.

OpenGL transforms normals using a version of the modelview matrix called the inverse transpose modelview matrix (more on this later). This means scaling also changes the length of normals.

To avoid this problem use:

gl.glEnable(GL2.GL_NORMALIZE);

Mesh Data Structures

It is common to represent a mesh in terms of three lists:

• vertex list: all the vertices used in the mesh
• normal list: all the normals used in the object
• face list: each face's vertices and normals as indices into the above lists

Cube

vertex   x  y  z
0        0  0  0
1        0  0  1
2        0  1  0
3        0  1  1
4        1  0  0
5        1  0  1
6        1  1  0
7        1  1  1

Cube

normal    x   y   z
0        -1   0   0
1         1   0   0
2         0  -1   0
3         0   1   0
4         0   0  -1
5         0   0   1

Cube

face   vertices   normals
0      0,1,3,2    0,0,0,0
1      4,6,7,5    1,1,1,1
2      0,4,5,1    2,2,2,2
3      2,3,7,6    3,3,3,3
4      0,2,6,4    4,4,4,4
5      7,3,1,5    5,5,5,5
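The three tables above can be collected into parallel arrays. A minimal sketch in Java (the layout and class name are hypothetical, and the JOGL drawing calls are only described in comments):

```java
// A minimal indexed-mesh sketch for the cube: vertices and normals are
// stored once, and each face refers to them by index.
public class CubeMesh {
    // vertex list: index -> (x, y, z)
    static final double[][] VERTICES = {
        {0,0,0}, {0,0,1}, {0,1,0}, {0,1,1},
        {1,0,0}, {1,0,1}, {1,1,0}, {1,1,1},
    };
    // normal list: index -> (x, y, z)
    static final double[][] NORMALS = {
        {-1,0,0}, {1,0,0}, {0,-1,0}, {0,1,0}, {0,0,-1}, {0,0,1},
    };
    // face list: 4 vertex indices per face (CCW seen from outside);
    // each face uses the same normal at all four vertices
    static final int[][] FACE_VERTS = {
        {0,1,3,2}, {4,6,7,5}, {0,4,5,1}, {2,3,7,6}, {0,2,6,4}, {7,3,1,5},
    };
    static final int[] FACE_NORMAL = {0, 1, 2, 3, 4, 5};

    // Drawing would loop over faces, calling glNormal3d then glVertex3d
    // for each resolved index; here we just print the resolved data.
    public static void main(String[] args) {
        for (int f = 0; f < FACE_VERTS.length; f++) {
            double[] n = NORMALS[FACE_NORMAL[f]];
            System.out.printf("face %d normal (%g, %g, %g)%n", f, n[0], n[1], n[2]);
            for (int vi : FACE_VERTS[f]) {
                double[] v = VERTICES[vi];
                System.out.printf("  vertex (%g, %g, %g)%n", v[0], v[1], v[2]);
            }
        }
    }
}
```

Because faces only store indices, a shared vertex is stored once and updated in one place.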

Modeling Normals

Every vertex has an associated normal. The default normal is (0,0,1).

On flat surfaces, we want to use face normals: set the normals perpendicular to the face (this is what we did with our cube).

On curved surfaces, we may want to specify a different value for the normal, so the normals change more gradually over the curvature.

Face Normals

Smooth vs Flat Normals

Imagine this is a top-down view of a prism.

Calculation of Face Normals

Every vertex for a given face will be given the same normal. This normal can be calculated by:

• finding the cross product of 2 sides if the face is planar (triangles are always planar)
• using Newell's method for arbitrary polygons which may not be planar

Newell's Method

A robust approach to computing the face normal for arbitrary polygons:
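The formula on the slide did not survive extraction; Newell's method sums one term per polygon edge for each component of the normal. A self-contained sketch in plain Java (no JOGL needed):

```java
// Newell's method: per-edge sums give a sensible normal even for polygons
// that are slightly non-planar. CCW winding yields the outward normal.
public class Newell {
    static double[] faceNormal(double[][] verts) {
        double nx = 0, ny = 0, nz = 0;
        for (int i = 0; i < verts.length; i++) {
            double[] cur = verts[i];
            double[] next = verts[(i + 1) % verts.length];
            nx += (cur[1] - next[1]) * (cur[2] + next[2]);
            ny += (cur[2] - next[2]) * (cur[0] + next[0]);
            nz += (cur[0] - next[0]) * (cur[1] + next[1]);
        }
        // normalise to unit length, as the lighting calculations expect
        double len = Math.sqrt(nx*nx + ny*ny + nz*nz);
        return new double[] {nx/len, ny/len, nz/len};
    }

    public static void main(String[] args) {
        // The front face of our unit cube, CCW in a right-handed frame:
        // the result is the unit +z normal, matching gl.glNormal3f(0,0,1)
        double[][] front = {{0,0,0}, {1,0,0}, {1,1,0}, {0,1,0}};
        double[] n = faceNormal(front);
        System.out.println(n[0] + " " + n[1] + " " + n[2]);
    }
}
```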

Vertex Normals

For smooth surfaces we can calculate each normal based on:

• maths, if it is a surface with a mathematical formula
• averaging the face normals of the faces adjacent to the vertex (if this is done without normalising the face normals you get a weighted average). This is the basic way and can be fine-tuned to exclude averaging normals that meet at a sharp edge, etc.
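The averaging step can be sketched in a few lines of plain Java (the class name is hypothetical; sharp-edge handling is omitted):

```java
// Vertex normal by averaging adjacent face normals, then renormalising.
// Feeding in *unnormalised* face normals (raw cross products) instead
// would weight the average by face area, as noted above.
public class VertexNormals {
    static double[] average(double[][] faceNormals) {
        double[] n = new double[3];
        for (double[] f : faceNormals)
            for (int i = 0; i < 3; i++) n[i] += f[i];
        double len = Math.sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
        return new double[] {n[0]/len, n[1]/len, n[2]/len};
    }

    public static void main(String[] args) {
        // A vertex shared by two perpendicular faces: the averaged
        // normal points diagonally between the two face normals.
        double[] n = average(new double[][] {{0,0,1}, {1,0,0}});
        System.out.println(n[0] + " " + n[1] + " " + n[2]);
    }
}
```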

Cylinder Example

For a cylinder we want smooth vertex normals for the curved surface, as we do not want to see edges there. But face normals for the top and bottom, where there should be a distinct edge.

See code Cylinder.java for implementation or try it yourself.

The view volume

A 2D camera shows a 2D world window. A 3D camera shows a 3D view volume.

This is the region of space that will be displayed in the viewport. Objects outside the view volume are clipped.

The view volume is in camera coordinates.

Orthographic view volume

[Figure: the orthographic view volume is a box between the near plane and the far plane, with the camera on the z axis]

glOrtho()

// create a 3D orthographic projection
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrtho(left, right,
           bottom, top,
           near, far);

glOrtho

The camera is located at the origin in camera co-ordinates and oriented down the negative z-axis.

Using a value of 2 for near means to place the near plane at z = -2. Similarly, far = 8 would place it at z = -8.

Projection

We want to project a point from our 3D view volume onto the near plane, which will then be mapped to the viewport.

Projection happens after the model-view transformation has been applied, so all points are in camera coordinates. Points with negative z values in camera coordinates are in front of the camera.

Orthographic projection

The orthographic projection is simple:

[Figure: a point P is projected parallel to the z axis onto the point Q on the near plane]

What about depth?

We still need depth information attached to each point so that later we can work out which points are in front.

The projection matrix maps z values of visible points to between -1 for near and 1 for far.

So we are still working in 4D (x,y,z,w) homogeneous co-ordinates, called clipping co-ordinates.

Canonical View Volume

It is convenient for clipping if we scale all coordinates so that visible points lie within the range (-1,1). Note the z axis signs are flipped. It is now a left-handed system.

This is called the canonical view volume (CVV).

[Figure: the box with corners (l,b,-n) and (r,t,-f) maps to the cube with corners (-1,-1,-1) and (1,1,1)]

Orthographic transformation matrix

This maps the points in camera/eye co-ordinates into clip coordinates:
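The matrix itself was an image on the slide; the standard glOrtho form, which maps (l,b,-n) to (-1,-1,-1) and (r,t,-f) to (1,1,1), is:

```latex
M_{ortho} = \begin{pmatrix}
\frac{2}{r-l} & 0 & 0 & -\frac{r+l}{r-l} \\
0 & \frac{2}{t-b} & 0 & -\frac{t+b}{t-b} \\
0 & 0 & \frac{-2}{f-n} & -\frac{f+n}{f-n} \\
0 & 0 & 0 & 1
\end{pmatrix}
```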

Orthographic Projections

Orthographic projections are commonly used in:

• design: the projection maintains parallel lines and describes shapes of objects completely and exactly
• some computer games

Foreshortening

Foreshortening is the name for the experience of things appearing smaller as they get further away.

[Figure: light from an object passing through the pupil onto the retina]


Orthographic camera

The orthographic camera does not perform foreshortening. Objects' size is independent of their distance from the camera.

[Figure: objects at different distances project to the same size on the near plane]


Perspective

Perspective camera

We can define a different kind of projection that implements a perspective camera. The view volume is a frustum.

[Figure: the view frustum between the near plane and the far plane, with its apex at the camera]

glFrustum()

// create a perspective projection
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustum(left, right,
             bottom, top,
             near, far);

// left, right, bottom, top are the
// sides of the *near* clip plane
// near and far are *positive*

glFrustum

gluPerspective()

// a more convenient form instead of glFrustum
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glLoadIdentity();
GLU glu = new GLU();
// fieldOfView is in the y direction
glu.gluPerspective(fieldOfView,
                   aspectRatio, near, far);

gluPerspective

Side on View

Assuming a symmetric view volume:

tan(FOV/2) = height / (2 * near)
width = height * aspectRatio
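The two relations above are enough to derive glFrustum-style extents from gluPerspective's arguments. A sketch in plain Java (the class and method names are hypothetical):

```java
// Convert gluPerspective-style arguments (vertical FOV in degrees,
// aspect ratio, near distance) into symmetric near-plane extents,
// using tan(FOV/2) = height / (2 * near) and width = height * aspect.
public class FrustumFromFov {
    static double[] frustum(double fovYDegrees, double aspect, double near) {
        double top = near * Math.tan(Math.toRadians(fovYDegrees) / 2);
        double right = top * aspect;
        return new double[] {-right, right, -top, top}; // left, right, bottom, top
    }

    public static void main(String[] args) {
        // A 90-degree field of view at near = 1: the near plane
        // runs from -1 to 1 in both x and y (tan 45 degrees = 1)
        double[] f = frustum(90, 1, 1);
        System.out.println(f[0] + " " + f[1] + " " + f[2] + " " + f[3]);
    }
}
```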

Perspective projection

The perspective projection:

[Figure: a point P is projected along the line through the camera onto the point Q on the near plane at z = -n]

Pseudodepth

We still need depth information attached to each point so that later we can work out which points are in front. And we want this information to lie between -1 and 1.

We need to give each point a pseudodepth value that preserves front-to-back ordering. We want the equation for q3 to have the same denominator (-p3) as q1 and q2.

Pseudodepth

These constraints yield an equation for pseudodepth:
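The equation was an image on the slide; the standard OpenGL form is q3 = (a*p3 + b) / (-p3) with a = -(f+n)/(f-n) and b = -2fn/(f-n), so that z = -n maps to -1 and z = -f maps to 1. A quick check in plain Java:

```java
// Pseudodepth q3 = (a*p3 + b) / (-p3): same denominator -p3 as q1 and q2,
// with a and b chosen so the near plane maps to -1 and the far plane to 1.
public class Pseudodepth {
    static double pseudodepth(double p3, double n, double f) {
        double a = -(f + n) / (f - n);
        double b = -2 * f * n / (f - n);
        return (a * p3 + b) / (-p3);
    }

    public static void main(String[] args) {
        double n = 2, f = 8;
        System.out.println(pseudodepth(-2, n, f)); // near plane maps to -1
        System.out.println(pseudodepth(-8, n, f)); // far plane maps to 1
        // Not linear: z = -5, halfway through the volume, is already
        // well past pseudodepth 0
        System.out.println(pseudodepth(-5, n, f));
    }
}
```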

Pseudodepth

Not linear. More precision for objects closer to the near plane. Rounding errors are worse towards the far plane.

Tip: Avoid setting near and far needlessly small/big, for better use of precision.

Homogeneous coordinates

We extend our representation for homogeneous coordinates to allow values with a fourth component other than zero or one.

We define an equivalence: (x, y, z, w) is equivalent to (x/w, y/w, z/w, 1) for w != 0. These two values represent the same point.

Example

(1,3,-2,1) is equivalent to (2,6,-4,2) in homogeneous co-ordinates.

This also means I can divide by, say, 3 using matrix multiplication. So if multiplying by M sets my w value to 3 and leaves the other values unchanged:

M (12,6,-3,1) = (12,6,-3,3)

which is equivalent to (4,2,-1,1).
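The same example in plain Java (the class name is hypothetical; the effect of M is written out directly rather than as a full 4x4 multiply):

```java
// Homogeneous equivalence: (x,y,z,w) represents (x/w, y/w, z/w, 1).
public class Homogeneous {
    static double[] divide(double[] p) {
        return new double[] {p[0]/p[3], p[1]/p[3], p[2]/p[3], 1};
    }

    public static void main(String[] args) {
        // A matrix M that scales w by 3 and leaves x, y, z alone
        // effectively "divides by 3" once we convert back to w = 1:
        double[] p = {12, 6, -3, 1};
        double[] q = {p[0], p[1], p[2], 3 * p[3]}; // after multiplying by M
        double[] r = divide(q);                    // the point (4, 2, -1, 1)
        System.out.println(r[0] + " " + r[1] + " " + r[2] + " " + r[3]);
    }
}
```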

Perspective transform

We can now express the perspective equations as a single matrix:

Note that this matrix is not affine.
Perspective transform

To transform a point:

Perspective transform

This matrix maps the perspective view volume to an axis-aligned cube. Note the z-axis has been flipped.

[Figure: the frustum with near-plane corners (left, top, -n) and (right, bottom, -n) and far-plane corners (l', t', -f) and (r', b', -f) maps to the cube with corners (left, top, -1) and (right, bottom, 1)]

Perspective projection matrix

We combine the perspective transformation and scaling into a single matrix, to map it into the canonical view volume for clipping:
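The combined matrix was an image on the slide; the standard form (the matrix built by glFrustum) is:

```latex
M = \begin{pmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{pmatrix}
```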

Clipping

We can now clip points against the CVV.

The Liang-Barsky algorithm is a variant of Cyrus-Beck extended to handle 3D and homogeneous coordinates. Details are in the textbook if you're interested.

Perspective division

After clipping we need to convert all points to the form with the fourth component equal to one. These are called normalized device co-ordinates.

This is called the perspective division step.

Viewport transformation

Finally we scale points into window coordinates corresponding to pixels on the screen. It also maps pseudodepth from -1..1 to 0..1.

[Figure: the cube with corners (-1,-1,-1) and (1,1,1) maps to the viewport from (sx, sy) to (sx+ws, sy+hs)]

Viewport transformation

Again, we can do this with a matrix, where ns is 0 and fs is 1:
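The matrix was an image on the slide; the standard form, which sends x, y in (-1,1) to the viewport rectangle and z in (-1,1) to (ns, fs), is:

```latex
M_{vp} = \begin{pmatrix}
\frac{w_s}{2} & 0 & 0 & s_x + \frac{w_s}{2} \\
0 & \frac{h_s}{2} & 0 & s_y + \frac{h_s}{2} \\
0 & 0 & \frac{f_s - n_s}{2} & \frac{f_s + n_s}{2} \\
0 & 0 & 0 & 1
\end{pmatrix}
```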

The graphics pipeline

[Diagram: pipeline stages including Illumination, Projection transformation, Clipping, Perspective division, Viewport, Rasterisation, Texturing, Hidden surface removal, Frame buffer, Display]

Model-View Transform

[Diagram: the user's Model, in local coordinates, goes through the Model Transform to world coordinates, then through the View Transform to eye coordinates, then on to 4D clip coords, NDC and window coords]

The graphics pipeline

To transform a point, extend it to homogeneous coordinates:

Multiply by the model matrix to get world coordinates:

Multiply by the view matrix to get camera (eye) coordinates:

The graphics pipeline

Multiply by the projection matrix to get clip coordinates (with fourth component):

Clip to remove points outside the CVV.

Perspective division to eliminate the fourth component.

Viewport transformation to window coordinates.
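The steps above can be summarised as one composition (writing the matrices as M_model, M_view, M_proj and M_vp):

```latex
\begin{aligned}
P_{clip} &= M_{proj}\, M_{view}\, M_{model}
\begin{pmatrix} p_x \\ p_y \\ p_z \\ 1 \end{pmatrix}
= \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} \\
P_{ndc} &= \begin{pmatrix} x/w \\ y/w \\ z/w \end{pmatrix}
\qquad \text{(perspective division, after clipping)} \\
P_{window} &= M_{vp}\, P_{ndc}
\end{aligned}
```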

Positioning a 3D Camera

A 3D camera can be at any 3D point and orientation.

As before, the view transform is the world-to-camera transform, which is the inverse of the usual local-to-global transformation for objects.

The camera points backwards down its local z-axis.

Setting the Camera view

gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glRotated(-45, 0, 0, 1); // Roll
gl.glRotated(-10, 0, 1, 0); // Heading
gl.glRotated( 15, 1, 0, 0); // Pitch
gl.glTranslated(0, -1, -3);

Camera View Demo

http://www.songho.ca/opengl/files/matrixModelView.zip

gluLookAt

A convenient shortcut for setting the camera view transform:

gluLookAt(eyeX, eyeY, eyeZ,
          centerX, centerY, centerZ,
          upX, upY, upZ)

This places a camera at (eyeX, eyeY, eyeZ) looking towards (centerX, centerY, centerZ).

gluLookAt

A position and a target alone do not provide a complete 3D coordinate frame. k is fixed but i and j can rotate.

[Figure: the camera frame vectors i, j and k at the Eye, with k pointing away from the Center]


gluLookAt

Generally we want the camera angled so that the image is vertical, so we don't (unless we want to) have our camera upside down or on its side.

We achieve this by specifying an 'up' vector in world coordinates, which points upwards.

gluLookAt

Now we want the camera's i vector to be at right angles to both the k vector and the up vector:

[Figure: the up vector at the Eye, with the frame vectors i, j and k, looking towards the Centre]

gluLookAt

We can calculate this as:

This is the view transformation computed by gluLookAt.
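The calculation itself was an image on the slide; the standard construction takes k from eye to centre (reversed), i from up x k, and j to complete the frame. A sketch in plain Java (the class name is hypothetical, and building the final 4x4 view matrix from i, j, k and -eye is omitted):

```java
// Building the camera frame used by gluLookAt:
//   k = normalise(eye - center)   (camera looks down -k)
//   i = normalise(up x k)         (perpendicular to both up and k)
//   j = k x i                     (completes the right-handed frame)
public class LookAtFrame {
    static double[] normalize(double[] v) {
        double len = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        return new double[] {v[0]/len, v[1]/len, v[2]/len};
    }
    static double[] cross(double[] a, double[] b) {
        return new double[] {a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]};
    }
    static double[][] frame(double[] eye, double[] center, double[] up) {
        double[] k = normalize(new double[] {
            eye[0]-center[0], eye[1]-center[1], eye[2]-center[2]});
        double[] i = normalize(cross(up, k));
        double[] j = cross(k, i); // already unit length
        return new double[][] {i, j, k};
    }

    public static void main(String[] args) {
        // Camera at (0,0,5) looking at the origin with y up:
        // gives i = (1,0,0), j = (0,1,0), k = (0,0,1)
        double[][] f = frame(new double[]{0,0,5},
                             new double[]{0,0,0},
                             new double[]{0,1,0});
        System.out.println(f[0][0] + " " + f[1][1] + " " + f[2][2]);
    }
}
```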

'Zoom'

'Zoom': increasing/decreasing the FOV (changes the perspective)

Dolly: translating the camera along the z-axis

Dolly Zoom (Hitchcock effect): zooming in/out and dollying backwards/forwards at the same time

https://docs.unity3d.com/Manual/DollyZoom.html

Exercises 1

Write a snippet of jogl code to draw a triangle with vertices:

(2,1,-4)
(0,-1,-3)
(-2,1,-4)

Make sure you specify face normals for the vertices.

Exercises 2

We want to use a perspective camera to view our triangle. Which command/s would work?

gl.glOrtho(-3,3,-3,3,0,8);
gl.glFrustum(-3,3,-3,3,0,8);
gl.glFrustum(-3,3,-3,3,-2,8);
glu.gluPerspective(60,1,2,8);
glu.gluPerspective(60,1,0,8);

Exercises 3

What would be an equivalent way to specify your perspective camera?

Where would the x and y vertices in our triangle be projected to on the near plane?

What would the pseudo-depth of our vertices be in CVV co-ordinates (-1..1)?

Exercises 4

Suppose we wanted to add another triangle with vertices:

(-0.5,0,0)
(0.5,0.5,0)
(0.5,-0.5,0)

Would this appear on the screen? How could we fix this?