
School of Mathematics and Statistics MAST30028 Numerical Methods & Scientific Computing Week 9
Drag and drop the folder Week9 from L:\MAST30028 to C:\…\MATLAB and include it in the path. Now MATLAB knows how to find the files in Week9.
Recall: there are 2 common ways to solve Linear Least Squares problems: the normal equations

    A^T A x = A^T b

or, preferably, the QR factorization of A. From A = QR, we solve the triangular system

    R x = Q^T b
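As a quick illustration, here is a minimal sketch (on a made-up random overdetermined system, not one of the Week 9 files) of the QR approach, compared against backslash:

    % Minimal sketch: solve an overdetermined system by QR and compare with backslash
    m = 10; n = 4;
    A = randn(m, n);  b = randn(m, 1);
    [Q, R] = qr(A, 0);        % economy-size QR: Q is m-by-n, R is n-by-n
    x_qr = R \ (Q'*b);        % solve the triangular system R x = Q'*b
    x_bs = A \ b;             % backslash on the same system
    norm(x_qr - x_bs)         % the two solutions agree to roundoff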
Solving via the QR factorization is what the MATLAB \ command does in the case that A is rectangular with m > n (overdetermined).

Exercise Set 1
This relates to material in Lecture 16.
Sensitivity of the least squares problem
Run the script ShowLSqV2.m and enter 10 4 7 as input values. This solves 2 least squares problems using a 10 × 4 design matrix with condition number 10^7, first as a zero-residual problem and next as a non-zero-residual problem.
This illustrates how the sensitivity of least squares problems depends on the residual of the fit. The relative error is magnified by ≈ κ(A) for small-residual problems, and by ≈ κ(A)^2 for large-residual problems.
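A rough sketch of the kind of experiment ShowLSqV2.m performs (the script itself will differ in detail): build a matrix with a prescribed condition number, then compare a zero-residual problem with a large-residual one whose exact least squares solution is the same.

    % Sketch only: 10-by-4 matrix with condition number 1e7
    m = 10; n = 4; kappa = 1e7;
    [U, ~] = qr(randn(m));  [V, ~] = qr(randn(n));
    S = [diag(logspace(0, -log10(kappa), n)); zeros(m-n, n)];
    A = U*S*V';                            % cond(A) is kappa
    x_true = ones(n, 1);
    b0 = A*x_true;                         % zero-residual right-hand side
    b1 = b0 + U(:, n+1:m)*randn(m-n, 1);   % add a component orthogonal to range(A);
                                           % the exact LS solution is still x_true
    err0 = norm(A\b0 - x_true)/norm(x_true)   % roughly kappa*eps
    err1 = norm(A\b1 - x_true)/norm(x_true)   % can grow like kappa^2*eps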
QR factorization
There are 2 ways to do the QR factorization. Each repeatedly applies orthogonal transformations to the matrix A to create zeroes under the diagonal, leaving an upper triangular matrix R. Suitable transformations can be rotations — called Givens rotations — or reflections — called Householder reflections.
a. Run the script ShowQR to see how Givens rotations are used to create zeroes under the diagonal. You don’t need to worry about how the Givens matrices are constructed, i.e. how Rotate.m works (a sketch of a single Givens rotation is given after item b below).
b. MATLAB actually uses Householder reflections. Run the function qrExample.m which works through the example on p. 150 of Moler, illustrating the use of QR factorization to find the least squares fit to a quadratic model. You don’t need to worry about how the Householder matrices are constructed i.e. how qrsteps.m works.
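For instance, a single Givens rotation chooses c and s so that an orthogonal 2 × 2 matrix maps (a, b) to (r, 0). A minimal sketch (Rotate.m may construct it differently):

    % Zero out the second entry of [a; b] with a Givens rotation
    a = 3; b = 4;
    r = hypot(a, b);          % r = 5
    c = a/r;  s = b/r;
    G = [c s; -s c];          % orthogonal: G'*G = I
    G*[a; b]                  % returns [5; 0]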

Four ways
The system
!”111$% !”89$%
“1 1 0% “67% “#0 1 1%&x=”#53%& 100 35 001 20
has the least squares solution (35.125, 32.5, 20.625)^T. Obtain this solution by:
a. solving the normal equations by Cholesky factorization
b. using the QR factorization, and using the matrix Q
c. using \
d. using the Singular Value Decomposition (SVD) of A, A = UΣV^T, to get x = V y, where Σ y = U^T b
Which methods give the smallest error for this simple problem?
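For reference, a minimal plain-MATLAB sketch of the four approaches (not one of the Week 9 files; the variable names are my own) might look like:

    A = [1 1 1; 1 1 0; 0 1 1; 1 0 0; 0 0 1];
    b = [89; 67; 53; 35; 20];

    % (a) normal equations via Cholesky: A'*A = G'*G with G upper triangular
    G = chol(A'*A);
    x_ne = G \ (G' \ (A'*b));

    % (b) QR factorization, using the matrix Q explicitly
    [Q, R] = qr(A, 0);
    x_qr = R \ (Q'*b);

    % (c) backslash
    x_bs = A \ b;

    % (d) SVD: solve Sigma*y = U'*b, then x = V*y
    [U, S, V] = svd(A, 0);
    y = S \ (U'*b);
    x_svd = V*y;

Comparing each result with (35.125, 32.5, 20.625)^T indicates which method loses the least accuracy.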
Exercise Set 2
This relates to material in Lecture 17.
Steepest descent
The simplest optimization method using gradient information is steepest descent. It can be applied to any objective function f(x), not just the least squares case f = (1/2) R(x)^T R(x). The method uses the descent direction −∇f and must choose a stepsize to ensure f decreases sufficiently at each step. It is not necessary to solve this line search problem exactly, i.e. accurately.
The method can slow down on certain functions with narrow, steep valleys, of which the Rosenbrock ‘banana’ function is the prime example.
    f(x) = 100(x2 − x1^2)^2 + (1 − x1)^2
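A minimal sketch of steepest descent with a backtracking (Armijo) line search on this function (not the course scripts; the starting point is just a common choice):

    f     = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
    gradf = @(x) [-400*x(1)*(x(2) - x(1)^2) - 2*(1 - x(1));
                   200*(x(2) - x(1)^2)];
    x = [-1.9; 2];                        % example starting point
    for k = 1:40
        d = -gradf(x);                    % steepest descent direction
        gamma = 1;
        while f(x + gamma*d) > f(x) - 1e-4*gamma*(d'*d)
            gamma = gamma/2;              % backtrack until sufficient decrease
        end
        x = x + gamma*d;
    end
    x                                     % typically still far from the minimizer (1, 1)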
a. To see how steepest descent fails on such a function, run bandem.m in MATLAB R2015b or earlier, which comes from the Optimization Toolbox. In the GUI which pops up, click Info to see a description of the problem. Then click on the 3rd button Steepest to run steepest descent. It fails to find the minimum after 250 function evaluations. Rosenbrock’s function has the form of a sum of squares, so we can apply methods designed for nonlinear least squares to it. Click on the last button L-M to run the Levenberg-Marquardt method for this problem. It finds the minimum after 34 function evaluations.
b. Run the script ShowSDbanana to see how steepest descent with line search performs in 40 iterations. Then zoom in to see the characteristic zig-zag trajectory that occurs when steepest descent stalls.
Note: If you’re running MATLAB R2016a or later, instead investigate the example Banana Function Minimization in the MATLAB documentation Examples → Optimization Toolbox → Nonlinear Optimization. Click on the Open Script button and run the first three cells. The third cell runs steepest descent and fails to find the minimum after 600 function evaluations.

Table 1: Data for Gauss-Newton
    t    0     1     2     3
    y    2.0   0.7   0.3   0.1
Gauss-Newton method
Here we try the Gauss-Newton method on a simple fitting problem from Heath, Scientific Computing, 2nd ed. The task is to fit the data above
to the exponential model
    y ∼ x1 e^(x2 t)
a. I wrote the code GNdriver.m to show how the undamped Gauss-Newton method performs, both for a good initial guess and a poor one. By undamped, we mean that we compute the GN direction s by solving

    J s = −R

and update x by x → x + γs, with γ = 1, i.e. we take a full GN step. (A rough sketch of such an iteration is given after item b below.)
b. It is more common to instead step along the GN direction a fraction γ < 1 of the full step, to ensure a minimum reduction in the cost function, i.e. to avoid overshooting the minimum along the search direction. The damping factor is chosen by an approximate line search along the GN direction. An implementation by Kelley is given by gaussn.m, with a driver for the same problem given in kelleyGNdriver. You don’t need to understand the line search used in gaussn.m. How does each method perform for good and poor initial guesses?
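A rough sketch of the undamped Gauss-Newton iteration for this model (not GNdriver.m; the initial guess here is only an example):

    t = [0 1 2 3]';
    y = [2.0 0.7 0.3 0.1]';
    x = [1; 0];                                  % example initial guess
    for k = 1:10
        R = x(1)*exp(x(2)*t) - y;                % residual vector
        J = [exp(x(2)*t), x(1)*t.*exp(x(2)*t)];  % Jacobian of R with respect to x
        s = J \ (-R);                            % GN direction: least squares solution of J s = -R
        x = x + s;                               % full (undamped) step, gamma = 1
    end
    x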