
UNIVERSITY OF HONG KONG
DEPARTMENT OF MATHEMATICS
MATH4602 Scientific Computing
Computer programming project
Please select four of the following six problems. Scripts should be runnable on their own: after running the code, the indicated input and output will be printed in the Command
Window. Functions should be illustrated in detail with scripts showing each feature. Each problem is worth 25% of the credit. If you finish more than four problems, you will get extra credit. Please try your best.
Due Date: 30 April 2019 (Tue.)
1. (Gaussian elimination method)
(a) Write function x = gauss(A,b) that uses Gaussian elimination without pivoting to solve the n × n system Ax = b. Note that the function does not perform the LU decomposition but just applies straightforward row operations. If a pivot element is zero, print an error message and exit.
(b) Use your function to solve three random systems of size 3 × 3, 4 × 4, and 10 × 10. For example, you can use A = rand(10,10) to generate matrix A and b = rand(10,1) to generate vector b, which have uniformly distributed random numbers. You can also try A = randn(10,10) and b = randn(10,1), which have normally distributed random numbers.
2. (Gram-Schmidt Process) The Classical Gram-Schmidt method may be error-prone. Define ε = 0.5 × 10⁻⁸ and build the matrix

        [ 1  1  1  1  1 ]
        [ ε  0  0  0  0 ]
    A = [ 0  ε  0  0  0 ]
        [ 0  0  ε  0  0 ]
        [ 0  0  0  ε  0 ]
        [ 0  0  0  0  ε ]

Please write programs to implement the Classical Gram-Schmidt Process and the Modified Gram-Schmidt Process, respectively. (See the appendix file.)
[Q1 R1] = ClassicalGS(A);
[Q2 R2] = ModifiedGS(A).
For each decomposition, compute ‖Q_i^T Q_i − I‖ for i = 1, 2. Explain the results.
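Both processes are short enough to prototype; here is a hedged NumPy sketch (the required deliverables are the MATLAB functions ClassicalGS/ModifiedGS; these are the same algorithms under Python names):

```python
import numpy as np

def classical_gs(A):
    """QR by Classical Gram-Schmidt: r_ij = q_i' * a_j (original columns)."""
    m, n = A.shape
    Q = np.zeros((m, n)); R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

def modified_gs(A):
    """QR by Modified Gram-Schmidt: r_ij = q_i' * v_j (updated columns)."""
    m, n = A.shape
    Q = np.zeros((m, n)); R = np.zeros((n, n))
    V = np.array(A, dtype=float)
    for j in range(n):
        for i in range(j):
            R[i, j] = Q[:, i] @ V[:, j]
            V[:, j] -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(V[:, j])
        Q[:, j] = V[:, j] / R[j, j]
    return Q, R

eps = 0.5e-8
A = np.vstack([np.ones((1, 5)), eps * np.eye(5)])   # the 6x5 matrix above
Q1, R1 = classical_gs(A)
Q2, R2 = modified_gs(A)
print(np.linalg.norm(Q1.T @ Q1 - np.eye(5)))        # classical: orthogonality lost
print(np.linalg.norm(Q2.T @ Q2 - np.eye(5)))        # modified: far closer to 0
```

Since ε² is below machine epsilon, 1 + ε² rounds to 1, which is exactly the regime where the classical process fails.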
3. (Singular-value decomposition) If A is nonsingular, the SVD A = U Σ V^T can be used to solve a linear system Ax = b.
(a) Explain why Σ is invertible. What is its inverse?
(b) Show that
x = V Σ^{-1} U^T b.
(c) Develop a one-line MATLAB command to compute Σ^{-1} that uses only the function diag.

(d) Solve

    [ 1  1  0 ] [x1]   [ 1 ]
    [ 8  4  1 ] [x2] = [ 2 ]
    [ 9  0  3 ] [x3]   [ 3 ]

and

    [ 1   3  0   1 ] [x1]   [ 1 ]
    [ 1   5  3   8 ] [x2] = [ 0 ]
    [ 12  5  7   0 ] [x3]   [ 1 ]
    [ 6   7  7  15 ] [x4]   [ 2 ]

using the results of parts (a)-(c).
(e) Does using the SVD seem like a practical means of solving Ax = b? Explain your answer.
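Parts (a)-(c) combine into a short computation. A NumPy sketch using the 3 × 3 system from part (d) (in MATLAB, part (c) is the one-liner diag(1./diag(S))):

```python
import numpy as np

# Sigma is invertible because A is nonsingular: all singular values are nonzero,
# and the inverse of a diagonal matrix is the diagonal of reciprocals.
A = np.array([[1., 1., 0.],
              [8., 4., 1.],
              [9., 0., 3.]])
b = np.array([1., 2., 3.])

U, s, Vt = np.linalg.svd(A)        # A = U @ diag(s) @ Vt
Sinv = np.diag(1.0 / s)            # analogue of MATLAB's diag(1./diag(S))
x = Vt.T @ Sinv @ U.T @ b          # x = V * Sigma^{-1} * U^T * b
print(x)
print(np.max(np.abs(A @ x - b)))   # residual; essentially zero
```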
4. (Finite difference method for boundary value problem) This problem applies finite differences to the column buckling application. Apply a compressive axial force, or load, P, to the top of a vertical thin elastic column of uniform cross-section having length L. See [Moodle\Related Learning Materials\ Matlab–The Algebraic Eigenvalue Problem\Section 18.1.3.] for more details.
(a) If the beam is hinged at both ends, we obtain the boundary value problem for the deflection y(x),

    EI d²y/dx² + P y = 0,   y(0) = 0,   y(L) = 0,

where E is Young's modulus of elasticity and I is the area moment of inertia of the column cross-section. Approach the problem using finite differences with n intervals of length h = L/n and the central finite difference approximation. Show that the result is a matrix eigenvalue problem of the form

    A y = λ y,

where A is a symmetric tridiagonal matrix. What is the relationship between λ and the critical load?
(b) For copper, E = 117 × 10⁹ N/m². Assume I = 0.0052 m⁴ and L = 1.52 m. Using h = 1.52/25, find the smallest three values of λ, compute the critical loads, and graph (xi, yi) for each value of λ on the same axes. Relate the results to the discussion in Section 18.1.3.
Note: Suppose that EV is a 26×3 matrix containing the eigenvectors corresponding to λ1, λ2, and λ3, including the zero values at the endpoints. Each column has the form [0 y2 y3 ··· y24 y25 0]^T. To create a good-looking graph, place these statements in your program.
minval = min(min(EV)); maxval = max(max(EV));
axis equal;
axis([0 L minval maxval]);
hold on;
% graph each displacement using different line styles
% add a legend
hold off;
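The eigenvalue setup itself can be prototyped quickly. A NumPy sketch (the course uses MATLAB; n = 25 reflects h = 1.52/25, and central differences on EI y'' + P y = 0 give −y'' = λy with λ = P/(EI)):

```python
import numpy as np

# A = (1/h^2) * tridiag(-1, 2, -1) acting on the interior points y2..y25.
E, I, L = 117e9, 0.0052, 1.52
n = 25                                  # h = 1.52/25, so 26 grid points
h = L / n
A = (np.diag(2.0 * np.ones(n - 1))
     + np.diag(-1.0 * np.ones(n - 2), 1)
     + np.diag(-1.0 * np.ones(n - 2), -1)) / h**2

lam = np.sort(np.linalg.eigvalsh(A))[:3]   # three smallest eigenvalues
P = E * I * lam                            # corresponding critical loads P = E*I*lambda
print(lam)                                 # close to (k*pi/L)^2 for k = 1, 2, 3
print(P)
```

The smallest load approximates the Euler buckling load E·I·(π/L)²; plotting the three eigenvectors padded with endpoint zeros reproduces the EV graph described in the note above.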
5. (Finite difference methods for Parabolic equations) Consider the following problem
    u_t = u_xx + u(1 − u),        (x,t) ∈ (0,1) × (0,tF),
    u_x(0,t) = u_x(1,t) = 0,      t ∈ [0,tF],
    u(x,0) = f(x),                x ∈ [0,1],
where u = u(x, t). Note that this problem comes from the population growth model,
which is also called Fisher’s equation.
(a) Develop an explicit scheme to solve this problem with f(x) = sin2(πx), ∆x =
0.02, ∆t = 0.001, tF = 5.
(b) What do you observe as tF → ∞? (Hint: in practice you cannot reach tF = ∞, but you can compute the solution at tF = 10, tF = 20, tF = 40, etc., and observe the behavior of the solutions.)
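A sketch of an explicit (forward Euler in time, central in space) scheme for part (a), with one caveat flagged as an assumption: with ∆x = 0.02 the explicit scheme is stable only for ∆t ≤ ∆x²/2 = 2 × 10⁻⁴, so this sketch uses ∆t = 10⁻⁴ rather than the stated 10⁻³. The Neumann conditions are imposed with reflected ghost points.

```python
import numpy as np

dx, dt, tF = 0.02, 1e-4, 5.0      # dt chosen to satisfy dt <= dx^2/2 (stability)
x = np.arange(0.0, 1.0 + dx / 2, dx)
u = np.sin(np.pi * x) ** 2        # f(x) = sin^2(pi*x)
r = dt / dx**2

for _ in range(round(tF / dt)):
    un = np.empty_like(u)
    # interior points: forward Euler on u_t = u_xx + u*(1 - u)
    un[1:-1] = (u[1:-1] + r * (u[:-2] - 2 * u[1:-1] + u[2:])
                + dt * u[1:-1] * (1 - u[1:-1]))
    # u_x = 0 at both ends via reflected ghost points
    un[0] = u[0] + 2 * r * (u[1] - u[0]) + dt * u[0] * (1 - u[0])
    un[-1] = u[-1] + 2 * r * (u[-2] - u[-1]) + dt * u[-1] * (1 - u[-1])
    u = un

print(u.min(), u.max())   # by t = 5 the profile has nearly flattened toward u = 1
```

This already hints at part (b): the logistic reaction term drives the solution toward the steady state u ≡ 1.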
6. (Finite difference methods for Hyperbolic equations) Consider the following 2D hy- perbolic equation
    u_tt = u_xx + u_yy,                             (x,y,t) ∈ (0,1) × (0,1) × (0,tF),
    u(x,y,0) = f(x,y),  u_t(x,y,0) = g(x,y),        (x,y) ∈ (0,1) × (0,1),
    u(0,y,t) = uL(y,t),  u(1,y,t) = uR(y,t),        t ∈ [0,tF],
    u(x,0,t) = uB(x,t),  u(x,1,t) = uT(x,t),        t ∈ [0,tF].
Implement the explicit scheme for solving the above problem with the corresponding initial and boundary conditions such that the analytic solution is

u(x, y, t) = sin(πx) sin(πy) cos(√2 πt).
For example, the initial conditions are u(x, y, 0) = sin(πx) sin(πy) and ut(x, y, 0) =
0. The boundary conditions can be obtained similarly.
(a) Let h = 1/10 and tF = 0.5. Solve the problem with ∆t = 0.5h and ∆t = 0.8h.
Plot the numerical and analytic solutions at tF . What do you observe?
(b) What happens if you try a smaller mesh size, say h = 1/20 and ∆t = 0.5h and ∆t = 0.8h?
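A sketch of part (a) with the stable choice ∆t = 0.5h (the CFL bound for this explicit scheme is ∆t ≤ h/√2 ≈ 0.707h, which is why ∆t = 0.8h is the interesting comparison); boundary values are those of the exact solution, which vanish on the whole boundary:

```python
import numpy as np

h, tF = 1 / 10, 0.5
dt = 0.5 * h                     # stable; dt = 0.8*h violates dt <= h/sqrt(2)
x = np.linspace(0.0, 1.0, 11)
X, Y = np.meshgrid(x, x, indexing="ij")

def exact(t):
    return np.sin(np.pi * X) * np.sin(np.pi * Y) * np.cos(np.sqrt(2) * np.pi * t)

def lap(u):                      # 5-point discrete Laplacian times h^2
    L = np.zeros_like(u)
    L[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
                     - 4 * u[1:-1, 1:-1])
    return L

r2 = (dt / h) ** 2
u0 = exact(0.0)                  # u(x,y,0); here u_t(x,y,0) = 0
u1 = u0 + 0.5 * r2 * lap(u0)     # first step from a Taylor expansion (u_t = 0)
uprev, u, t = u0, u1, dt
for _ in range(round(tF / dt) - 1):
    uprev, u = u, 2 * u - uprev + r2 * lap(u)   # standard explicit update
    t += dt

print(np.max(np.abs(u - exact(t))))   # max error vs. the analytic solution at tF
```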

Lecture 5 Gram-Schmidt Orthogonalization
MIT 18.335J / 6.337J Introduction to Numerical Methods
Per-Olof Persson September 21, 2006

Gram-Schmidt Projections

• The orthogonal vectors produced by Gram-Schmidt can be written in terms of projectors

    q1 = P1 a1 / ‖P1 a1‖,   q2 = P2 a2 / ‖P2 a2‖,   …,   qn = Pn an / ‖Pn an‖

where

    Pj = I − Q̂_{j−1} Q̂_{j−1}^*   with   Q̂_{j−1} = [ q1 | q2 | ··· | q_{j−1} ]

• Pj projects orthogonally onto the space orthogonal to ⟨q1, …, q_{j−1}⟩, and rank(Pj) = m − (j − 1)
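Both properties are easy to confirm numerically; a small NumPy sketch (m = 6 and j = 3 are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
m, j = 6, 3
# Build orthonormal q1, ..., q_{j-1} as the columns of Qhat.
Qhat, _ = np.linalg.qr(rng.standard_normal((m, j - 1)))
P = np.eye(m) - Qhat @ Qhat.T          # P_j = I - Qhat * Qhat^*

a = rng.standard_normal(m)
print(np.abs(Qhat.T @ (P @ a)).max())  # ~0: P_j a is orthogonal to q1, ..., q_{j-1}
print(np.linalg.matrix_rank(P))        # m - (j - 1)
```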

The Modified Gram-Schmidt Algorithm
• The projection Pj can equivalently be written as

    Pj = P⊥q_{j−1} ··· P⊥q_2 P⊥q_1

where (last lecture)

    P⊥q = I − q q^*

• P⊥q projects orthogonally onto the space orthogonal to q, and rank(P⊥q) = m − 1

• The Classical Gram-Schmidt algorithm computes an orthogonal vector by

    vj = Pj aj

while the Modified Gram-Schmidt algorithm uses

    vj = P⊥q_{j−1} ··· P⊥q_2 P⊥q_1 aj
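The equivalence of the two forms of Pj can be checked directly, again with arbitrary small dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)
m, j = 6, 4
Q, _ = np.linalg.qr(rng.standard_normal((m, j - 1)))   # orthonormal q1..q_{j-1}
I = np.eye(m)

Pj = I - Q @ Q.T                       # P_j = I - Qhat * Qhat^*
prod = I.copy()
for i in range(j - 1):                 # apply P_{perp q1} first, P_{perp q_{j-1}} last
    q = Q[:, i:i + 1]
    prod = (I - q @ q.T) @ prod
print(np.max(np.abs(Pj - prod)))       # ~0: the product form agrees
```

The cross terms vanish because the q's are orthonormal, which is exactly why the two forms coincide in exact arithmetic.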

Classical vs. Modified Gram-Schmidt
• Small modification of classical G-S gives modified G-S (but see next slide)
• Modified G-S is numerically stable (less sensitive to rounding errors)
Classical/Modified Gram-Schmidt

for j = 1 to n
    vj = aj
    for i = 1 to j − 1
        rij = qi^* aj      (CGS)
        rij = qi^* vj      (MGS)
        vj = vj − rij qi
    rjj = ‖vj‖2
    qj = vj / rjj

Implementation of Modified Gram-Schmidt
• In modified G-S, P⊥qi can be applied to all vj as soon as qi is known
• Makes the inner loop iterations independent (like in classical G-S)
Classical Gram-Schmidt

for j = 1 to n
    vj = aj
    for i = 1 to j − 1
        rij = qi^* aj
        vj = vj − rij qi
    rjj = ‖vj‖2
    qj = vj / rjj

Modified Gram-Schmidt

for i = 1 to n
    vi = ai
for i = 1 to n
    rii = ‖vi‖
    qi = vi / rii
    for j = i + 1 to n
        rij = qi^* vj
        vj = vj − rij qi

Example: Classical vs. Modified Gram-Schmidt
• Compare classical and modified G-S for the vectors

    a1 = (1, ε, 0, 0)^T,   a2 = (1, 0, ε, 0)^T,   a3 = (1, 0, 0, ε)^T

making the approximation 1 + ε² ≈ 1

• Classical:

    v1 = (1, ε, 0, 0)^T,  r11 = √(1 + ε²) ≈ 1,  q1 = v1/1 = (1, ε, 0, 0)^T
    v2 = (1, 0, ε, 0)^T,  r12 = q1^T a2 = 1,  v2 = v2 − 1·q1 = (0, −ε, ε, 0)^T
    r22 = √2 ε,  q2 = v2/r22 = (0, −1, 1, 0)^T/√2
    v3 = (1, 0, 0, ε)^T,  r13 = q1^T a3 = 1,  v3 = v3 − 1·q1 = (0, −ε, 0, ε)^T
    r23 = q2^T a3 = 0,  v3 = v3 − 0·q2 = (0, −ε, 0, ε)^T
    r33 = √2 ε,  q3 = v3/r33 = (0, −1, 0, 1)^T/√2

Example: Classical vs. Modified Gram-Schmidt
• Modified:

    v1 = (1, ε, 0, 0)^T,  r11 = √(1 + ε²) ≈ 1,  q1 = v1/1 = (1, ε, 0, 0)^T
    v2 = (1, 0, ε, 0)^T,  r12 = q1^T v2 = 1,  v2 = v2 − 1·q1 = (0, −ε, ε, 0)^T
    r22 = √2 ε,  q2 = v2/r22 = (0, −1, 1, 0)^T/√2
    v3 = (1, 0, 0, ε)^T,  r13 = q1^T v3 = 1,  v3 = v3 − 1·q1 = (0, −ε, 0, ε)^T
    r23 = q2^T v3 = ε/√2,  v3 = v3 − r23 q2 = (0, −ε/2, −ε/2, ε)^T
    r33 = √6 ε/2,  q3 = v3/r33 = (0, −1, −1, 2)^T/√6

• Check orthogonality:
    – Classical: q2^T q3 = (0, −1, 1, 0)(0, −1, 0, 1)^T / 2 = 1/2
    – Modified: q2^T q3 = (0, −1, 1, 0)(0, −1, −1, 2)^T / √12 = 0
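The worked example reproduces in floating point. A NumPy sketch with ε = 10⁻⁹ (any ε whose square falls below machine epsilon behaves the same way):

```python
import numpy as np

eps = 1e-9                 # eps^2 = 1e-18 < machine epsilon, so fl(1 + eps^2) = 1
A = np.array([[1.0, 1.0, 1.0],
              [eps, 0.0, 0.0],
              [0.0, eps, 0.0],
              [0.0, 0.0, eps]])

def gram_schmidt(A, modified):
    m, n = A.shape
    Q = np.zeros((m, n)); V = np.array(A)
    for j in range(n):
        for i in range(j):
            # CGS projects out the ORIGINAL column, MGS the UPDATED one.
            r = Q[:, i] @ (V[:, j] if modified else A[:, j])
            V[:, j] -= r * Q[:, i]
        Q[:, j] = V[:, j] / np.linalg.norm(V[:, j])
    return Q

Qc = gram_schmidt(A, modified=False)
Qm = gram_schmidt(A, modified=True)
print(Qc[:, 1] @ Qc[:, 2])      # classical: about 1/2
print(Qm[:, 1] @ Qm[:, 2])      # modified: about 0
```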

Operation Count
• Count the number of floating point operations – "flops" – in an algorithm
• Each +, −, ×, /, or √ counts as one flop
• No distinction between real and complex
• No consideration of memory accesses or other performance aspects

• Example: count all +, −, ×, / in the Modified Gram-Schmidt algorithm (not just the leading term)

(1)  for i = 1 to n
(2)      vi = ai
(3)  for i = 1 to n
(4)      rii = ‖vi‖              m multiplications, m − 1 additions
(5)      qi = vi / rii           m divisions
(6)      for j = i + 1 to n
(7)          rij = qi^* vj       m multiplications, m − 1 additions
(8)          vj = vj − rij qi    m multiplications, m subtractions

Operation Count – Modified G-S
• The total for each operation is
    #A = Σ_{i=1}^{n} [ (m − 1) + Σ_{j=i+1}^{n} (m − 1) ] = n(m − 1) + Σ_{i=1}^{n} (m − 1)(n − i)
       = n(m − 1) + n(n − 1)(m − 1)/2 = (1/2) n(n + 1)(m − 1)

    #S = Σ_{i=1}^{n} Σ_{j=i+1}^{n} m = Σ_{i=1}^{n} m(n − i) = (1/2) m n(n − 1)

    #M = Σ_{i=1}^{n} [ m + Σ_{j=i+1}^{n} 2m ] = mn + Σ_{i=1}^{n} 2m(n − i)
       = mn + m n(n − 1) = m n²

    #D = Σ_{i=1}^{n} m = mn

Operation Count – Modified G-S
and the total flop count is
    (1/2) n(n + 1)(m − 1) + (1/2) m n(n − 1) + m n² + mn
        = 2mn² + mn − (1/2)n² − (1/2)n  ∼  2mn²

• The symbol ∼ indicates asymptotic value as m, n → ∞ (leading term)
• Easier to find just the leading term:
    – Most work is done in lines (7) and (8), with 4m flops per iteration
    – Including the loops, the total becomes

        Σ_{i=1}^{n} Σ_{j=i+1}^{n} 4m = 4m Σ_{i=1}^{n} (n − i) ∼ 4m Σ_{i=1}^{n} i = 2mn²
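The exact totals can be sanity-checked by tallying the per-line counts from the annotated listing (a sketch; the square root in line (4) is left out, matching the itemized counts above):

```python
def mgs_flops(m, n):
    """Tally +, -, *, / for Modified Gram-Schmidt per the annotated listing."""
    flops = 0
    for i in range(n):
        flops += m + (m - 1)        # line (4): r_ii = ||v_i||, m mults, m-1 adds
        flops += m                  # line (5): q_i = v_i / r_ii, m divisions
        for j in range(i + 1, n):
            flops += m + (m - 1)    # line (7): r_ij = q_i^* v_j
            flops += m + m          # line (8): v_j = v_j - r_ij q_i
    return flops

m, n = 50, 20
exact = n*(n + 1)*(m - 1)//2 + m*n*(n - 1)//2 + m*n*n + m*n
print(mgs_flops(m, n) == exact)     # True: matches #A + #S + #M + #D
print(2*m*n*n)                      # leading term 2mn^2
```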