
ENGR20005
Numerical Methods in Engineering
Workshop 5
Part A: MATLAB Livescripts
5.1 The livescript ENGR20005 Workshop5p1.mlx runs through the convergence of iterative methods and how to accelerate this process.
(a) Read through the livescript and understand what each command does.
(b) Modify the livescript to solve the following system of equations
2x − 4y + 4z = 12
3x + y − 8z = 4          (5.1)
−5x + 11y + z = −32

Does the method converge? Can you explain why?
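A minimal sketch (separate from the livescript, with my own variable names) of how to check convergence before iterating: build the Jacobi and Gauss–Seidel iteration matrices for Eq. (5.1) and inspect their spectral radii.

    % Convergence check for Eq. (5.1): an iterative scheme x(k+1) = P*x(k) + q
    % converges for any initial guess iff the spectral radius rho(P) < 1.
    A = [ 2  -4   4;
          3   1  -8;
         -5  11   1];
    D = diag(diag(A));             % diagonal part of A
    L = tril(A, -1);               % strictly lower triangular part
    U = triu(A,  1);               % strictly upper triangular part

    P_jac = -D \ (L + U);          % Jacobi iteration matrix
    P_gs  = -(D + L) \ U;          % Gauss-Seidel iteration matrix

    rho_jac = max(abs(eig(P_jac)))
    rho_gs  = max(abs(eig(P_gs)))
    % Note that no ordering of these three equations makes A strictly
    % diagonally dominant, so convergence is not guaranteed a priori;
    % the spectral radii above give the definitive answer.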
5.2 The livescript ENGR20005 Workshop5p2.mlx runs through the use of MATLAB functions to perform least squares regression. Read through the livescript and make sure you understand what each line of code does.
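The livescript itself isn't reproduced here; as a reference point only, the sketch below shows two standard built-in routes to linear least squares in MATLAB (the data is made up for illustration and is not from the livescript).

    % Illustration only: two equivalent ways to fit a straight line.
    x = (0:5)';
    y = [0.1; 0.9; 2.1; 2.9; 4.2; 4.8];    % made-up sample data

    p = polyfit(x, y, 1);                  % least squares line: p(1)*x + p(2)
    yfit = polyval(p, x);                  % evaluate the fit

    A  = [x, ones(size(x))];               % design matrix for the same model
    ab = A \ y;                            % backslash on a tall system
                                           % returns the least squares solution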
5.3 The livescript ENGR20005 Workshop5p3.mlx runs through least squares regression.
(a) Read through the livescript and make sure you understand what each line of code does.
(b) Modify the livescript to perform quadratic and cubic least squares regression.
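One possible shape for part (b), assuming the livescript exposes data vectors x and y (names assumed here): polyfit does the fit directly, and the normal-equations route connects it back to the theory.

    % Quadratic and cubic least squares regression for 5.3(b).
    p2 = polyfit(x, y, 2);                    % quadratic fit
    p3 = polyfit(x, y, 3);                    % cubic fit

    % The same quadratic fit via the normal equations, minimising ||V*a - y||^2:
    V = [x(:).^2, x(:), ones(numel(x), 1)];   % design matrix
    a = (V'*V) \ (V'*y(:));                   % normal equations solve

    xx = linspace(min(x), max(x), 200);
    plot(x, y, 'o', xx, polyval(p2, xx), '-', xx, polyval(p3, xx), '--')
    legend('data', 'quadratic', 'cubic')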

Part B: Problems
5.4 In the livescript ENGR20005 Workshop5p1.mlx, you would have seen that successive over-relaxation applied to the Gauss–Seidel method doesn't converge for all 0 < ω < 2. In the special case when [A] is symmetric positive definite, it can be shown that ρ([P]) < 1 for all 0 < ω < 2, ensuring convergence for any initial guess {x}(0). Consider the following system of linear equations

2x − y = 1
−x + 2y − z = 4          (5.2)
−y + 2z = 7

Recall that a matrix [A] is positive definite if and only if {z}T[A]{z} > 0 for any {z} ∈ Rⁿ excluding the zero vector.
(a) Determine the matrix [A] and the vector {c}.
(b) Prove that [A] is positive definite.
(c) Apply the Gauss–Seidel method to solve Eq. (5.2).
(d) Apply successive over-relaxation to your answer in (c) and determine the spectral radius for 0 ≤ ω ≤ 2. Plot your results and use this to find the optimal relaxation factor ωopt.
(e) Solve Eq. (5.2) using successive over-relaxation and verify that ωopt is the optimal value of ω.
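A starting-point sketch for parts (b) and (d) (variable names are my own; the eig/chol lines are a numerical sanity check, not the proof that part (b) asks for).

    % Problem 5.4: [A] and {c} assembled from Eq. (5.2).
    A = [ 2 -1  0;
         -1  2 -1;
          0 -1  2];
    c = [1; 4; 7];

    % (b) Sanity check: A is symmetric positive definite iff all its
    % eigenvalues are positive (equivalently, chol succeeds with flag == 0).
    eig(A)
    [~, flag] = chol(A);

    % (d) Spectral radius of the SOR iteration matrix as a function of omega.
    D = diag(diag(A));  L = tril(A, -1);  U = triu(A, 1);
    omega = linspace(0.05, 1.95, 200);
    rho   = zeros(size(omega));
    for k = 1:numel(omega)
        w = omega(k);
        P = (D + w*L) \ ((1 - w)*D - w*U);   % SOR iteration matrix
        rho(k) = max(abs(eig(P)));
    end
    plot(omega, rho), xlabel('\omega'), ylabel('\rho([P])')
    [~, idx] = min(rho);
    omega_opt = omega(idx)                   % estimate of the optimal factor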
5.5 When determining the least squares regression, our goal is to minimise the squared error S. We do this by computing the parameters aᵢ such that the gradient is zero,

∂S/∂aᵢ = 0          (5.3)

However, for complicated models, the resulting set of equations is nonlinear.
(a) Apply the Newton–Raphson method to Eq. (5.3) and show that the parameter vector {a} can be found by the iterative formula

{a}(n+1) = {a}(n) − ([H](n))⁻¹{d}(n)          (5.4)

where dᵢ = ∂S/∂aᵢ and [H] is the Hessian matrix of S, given by

        ⎡ ∂²S/∂a₁²       ∂²S/∂a₁∂a₂    ···   ∂²S/∂a₁∂aₙ ⎤
[H] =   ⎢ ∂²S/∂a₂∂a₁     ∂²S/∂a₂²      ···   ∂²S/∂a₂∂aₙ ⎥          (5.5)
        ⎢      ⋮              ⋮          ⋱        ⋮     ⎥
        ⎣ ∂²S/∂aₙ∂a₁     ∂²S/∂aₙ∂a₂    ···   ∂²S/∂aₙ²   ⎦

(b) Show that another way of writing Eq. (5.4) is
[H](n)({a}(n+1) − {a}(n)) = −{d}(n) (5.6)
From a computational perspective, why is Eq. (5.6) a better way of writing Eq. (5.4)?
(c) Apply Eq. (5.6) to a power function y = axᵇ.
(d) Write a MATLAB function (.m file) to implement your answer to (c).
(e) Consider the following data set
   x         y
0.7860    1.0113
1.5916    4.4279
2.1917   11.4360
2.9606   22.2894
3.6441   37.0793
4.2554   56.9877
5.0294   81.0653
Use your answer to part (d) to fit a power function of the form y = axᵇ. Compare your answer with the linearization of nonlinear relationships technique you learnt in lectures.
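To make parts (c)–(e) concrete, here is a hedged sketch of the full Newton iteration for this model, using the data above. The gradient and Hessian entries were derived by hand for S(a, b) = Σ(yᵢ − axᵢᵇ)² and should be checked against your own working in part (c); the line dp = -H\d is Eq. (5.6) in code, a linear solve rather than an explicit inverse, which is exactly why (5.6) is preferred over (5.4).

    % Sketch for 5.5(c)-(e): Newton iteration for y = a*x^b by least squares.
    x = [0.7860 1.5916 2.1917 2.9606 3.6441 4.2554 5.0294]';
    y = [1.0113 4.4279 11.4360 22.2894 37.0793 56.9877 81.0653]';

    p = [1; 2];                            % initial guess [a; b]; adjust if
    for n = 1:50                           % the iteration fails to converge
        a = p(1);  b = p(2);
        f  = x.^b;  lx = log(x);  r = y - a*f;
        d  = [-2*sum(r.*f);                % dS/da
              -2*a*sum(r.*f.*lx)];         % dS/db
        H  = [ 2*sum(f.^2),               -2*sum(f.*lx.*(y - 2*a*f));
              -2*sum(f.*lx.*(y - 2*a*f)), -2*a*sum(f.*lx.^2.*(y - 2*a*f))];
        dp = -H \ d;                       % Eq. (5.6): solve, don't invert
        p  = p + dp;
        if norm(dp) < 1e-10, break, end
    end
    a = p(1), b = p(2)

For the comparison asked for in (e), the linearization route fits a straight line to (log x, log y), e.g. with polyfit.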
5.6 * From lectures, you would have seen that fixed point iteration and the Newton– Raphson method have linear and quadratic orders of convergence respectively. In case you don’t believe it, we’ll conduct some numerical experiments to verify this.
Hopefully, you still have your MATLAB functions from workshops 2 and 3. In case you don’t, the data is provided in ENGR20005 Workshop5p6.mat.
(a) The error at step i is given by

Eᵢ = |xᵢ − xᵣ|

where xᵣ is the root. Write a MATLAB function (.m file) that determines the error of each method.
(b) Plot the error of the (i + 1)st iteration against the error in the ith iteration, i.e. Eᵢ₊₁ vs Eᵢ.
(c) The theoretical analyses conducted in lectures have shown that these methods obey a power law, Eᵢ₊₁ = aEᵢᵇ. Perform a log transformation on the data and fit a linear curve to determine the parameters a and b.
(d) Check your answer to part (c) with MATLAB’s inbuilt functions and your answer to Q. 5.5. Are there any differences? Do you know why?
(e) Do your results here match with what you have seen in lectures? What can you conclude about the order of convergence of the Secant and Bracketing methods?
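A sketch tying parts (a)–(c) together, assuming your workshop 2 and 3 functions return (or can be modified to return) the full vector of iterates; the names x_hist, xr and E below are my own.

    % (a) Error at each iteration, E_i = |x_i - x_r|.
    E = abs(x_hist - xr);
    E = E(E > 0);                         % drop exact zeros before taking logs

    % (b) E_{i+1} against E_i on logarithmic axes.
    Ei   = E(1:end-1);
    Eip1 = E(2:end);
    loglog(Ei, Eip1, 'o'), xlabel('E_i'), ylabel('E_{i+1}')

    % (c) Log transform of E_{i+1} = a*E_i^b:
    % log(E_{i+1}) = log(a) + b*log(E_i), a straight line in the logs.
    c = polyfit(log(Ei), log(Eip1), 1);
    b = c(1)                              % slope: the order of convergence
    a = exp(c(2))                         % intercept recovers the constant a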