
Numerical Methods & Scientific Computing: lecture notes
Linear Systems
Error analysis
Ill-conditioning means ‘nearly singular’
Using any subordinate matrix norm,
\[
\frac{1}{\kappa_p(A)} = \min\left\{ \frac{\|\Delta A\|_p}{\|A\|_p} \,:\, A + \Delta A \text{ is singular} \right\}
\]
so if $\kappa(A)$ is large, $A$ is relatively close to a singular matrix.
Example

The determinant does not help
Note: $\det A$ is NOT a good measure of how close a matrix is to being singular. It gives false negatives
Example
and false positives
Example
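Both failure modes are easy to reproduce in MATLAB. The following sketch uses stock matrices of my own choosing (not necessarily those from the lecture): a scaled identity whose determinant is tiny although it is perfectly conditioned, and a unit triangular matrix whose determinant is exactly 1 although it is violently ill-conditioned.

    % "False positive": tiny determinant, perfectly conditioned
    A = 0.1*eye(50);
    det(A)     % 1e-50, looks nearly singular
    cond(A)    % 1: perfectly conditioned

    % "False negative": determinant exactly 1, nearly singular
    n = 50;
    B = eye(n) - triu(ones(n),1);   % unit diagonal, -1 above it
    det(B)     % exactly 1
    cond(B)    % ~1e15: effectively singular in double precision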

Geometric interpretation
An ill-conditioned matrix $\Rightarrow$ a system of hyperplanes that are almost parallel
$\Rightarrow$ we wouldn't have to tilt them much to make them exactly parallel, and hence have no solutions or infinitely many.

In MATLAB
$\kappa(A)$ is computed in MATLAB with the cond command. It is not cheap to compute $\kappa(A)$,
so one usually estimates its value instead.
In MATLAB one can use either:
condest to estimate $\kappa_1(A)$,
or rcond, which estimates its reciprocal.
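For example (a sketch; the Hilbert matrix is a classically ill-conditioned family built into MATLAB):

    A = hilb(10);   % 10x10 Hilbert matrix
    cond(A)         % 2-norm condition number via the SVD: about 1.6e13
    condest(A)      % cheap estimate of the 1-norm condition number
    rcond(A)        % estimates 1/kappa_1(A); values near eps signal trouble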

Backwards error and residuals
Suppose all the backward error is in $A$, not $b$. Then
\[
(A + \Delta A)\,\hat{x} = b
\]
so that
\[
\Delta A\,\hat{x} = b - A\hat{x} = r
\]
where $r = b - A\hat{x}$ is the residual. Then
\[
\|r\| \le \|\Delta A\|\,\|\hat{x}\| \implies \|\Delta A\| \ge \frac{\|r\|}{\|\hat{x}\|}
\]
so the relative backward error
\[
\frac{\|\Delta A\|}{\|A\|} \ge \frac{\|r\|}{\|A\|\,\|\hat{x}\|} \equiv \eta
\]
is measured by the relative residual $\eta$.
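In MATLAB $\eta$ can be computed directly from its definition. A sketch, using a random (hence usually well-conditioned) system:

    n = 100;
    A = randn(n); x = ones(n,1); b = A*x;
    xhat = A\b;                             % solve by GEPP
    r = b - A*xhat;                         % residual
    eta = norm(r)/(norm(A)*norm(xhat))      % relative residual: O(eps)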

Hence
large relative residuals $\Rightarrow$ large (normwise) relative backward error:
a stable method (small backward error) produces small relative residuals.
The bound can be attained, so that
small relative residuals signal small (normwise) relative backward error.
BUT small residuals do NOT $\Rightarrow$ small errors in $x$
(that depends on the size of the condition number of $A$).
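A numerical illustration of the last point (a sketch; the Hilbert matrix again stands in for any ill-conditioned $A$): the relative residual sits at rounding level while the forward error is many orders of magnitude larger.

    A = hilb(12); x = ones(12,1); b = A*x;
    xhat = A\b;                                  % MATLAB may warn: nearly singular
    eta = norm(b - A*xhat)/(norm(A)*norm(xhat))  % ~1e-17: backward error is tiny
    err = norm(xhat - x)/norm(x)                 % large: roughly eps*cond(A)
    cond(A)                                      % ~1.7e16 explains the gap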

Backward error analysis: triangular systems
Under the standard model of floating point arithmetic:
Theorem
If the $n \times n$ triangular system
\[
Tx = b
\]
is solved by substitution, the computed solution $\hat{x}$ satisfies
\[
(T + \Delta T)\,\hat{x} = b
\]
where (componentwise bound)
\[
|\Delta T_{ij}| \le \gamma_n\,|T_{ij}| \approx nu\,|T_{ij}|, \qquad \text{since } \gamma_n \equiv \frac{nu}{1 - nu} \approx nu
\]
i.e. it solves a system with a nearby matrix (and the correct b!).
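For concreteness, a minimal MATLAB sketch of forward substitution for a lower triangular system (the function name forwardsub is mine; backsubstitution for an upper triangular system is the mirror image, and backslash already dispatches to such a routine when it detects a triangular matrix):

    function x = forwardsub(L, b)
    % Solve L*x = b by forward substitution, L lower triangular.
    n = length(b);
    x = zeros(n,1);
    for i = 1:n
        % subtract off the already-computed unknowns, then divide
        x(i) = (b(i) - L(i,1:i-1)*x(1:i-1)) / L(i,i);
    end
    end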

Solving triangular systems by forward/backsubstitution is backward stable
In general, componentwise stability is a more stringent requirement than normwise stability, since for any monotone norm
\[
|\Delta T_{ij}| \le \gamma_n\,|T_{ij}| \implies \|\Delta T\| \le \gamma_n\,\|T\|
\]
which is the usual usage of the term backward stable.
$\Rightarrow$ the final stages of solving a linear system are no problem; any
problems must arise in the factorization stage!
Solving triangular systems by forward/backsubstitution is normwise backward stable.
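This is easy to check numerically (a sketch; the test matrix is an arbitrary well-scaled choice): the relative residual of a triangular solve sits at rounding level, as the theorem predicts.

    n = 1000;
    T = triu(randn(n)) + n*eye(n);   % upper triangular test matrix
    b = randn(n,1);
    xhat = T\b;                      % backslash backsubstitutes here
    eta = norm(b - T*xhat)/(norm(T,1)*norm(xhat,1))   % O(eps)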

Gauss elimination: any pivoting strategy
Theorem
If the $n \times n$ system
\[
Ax = b
\]
is solved by Gauss elimination, the computed solution $\hat{x}$ satisfies
\[
(A + \Delta A)\,\hat{x} = b
\]
where (componentwise again)
\[
|\Delta A_{ij}| \le \gamma_{3n} \sum_k |\hat{L}_{ik}|\,|\hat{U}_{kj}|
\]
which is not what we're after: we want a bound for $\Delta A$ involving $|A|$.

Gauss elimination with partial pivoting (GEPP)
The problem with Gauss elimination is that $\sum_k |\hat{L}_{ik}|\,|\hat{U}_{kj}|$ can be much
bigger than $|A_{ij}| = \bigl|\sum_k \hat{L}_{ik}\hat{U}_{kj}\bigr|$,
i.e. $A$ must have been produced from $L$, $U$ using lots of cancellation.
Partial pivoting ensures that
\[
|\hat{L}_{ik}| \le 1
\]
but what can we say about $|\hat{U}_{kj}|$?
To progress, go to a normwise analysis:
\[
\|\Delta A\| \le \gamma_{3n}\,\|\hat{L}\|\,\|\hat{U}\|
\]
for a monotone norm.

The growth factor
Definition
The growth factor (using the max-norm):
\[
\rho = \|\hat{L}\|_M\,\|\hat{U}\|_M \,/\, \|A\|_M
\]
Using the definition $\Rightarrow$ for the 1 or $\infty$ norms
\[
\|\Delta A\| \le n^2\,\gamma_{3n}\,\rho\,\|A\|
\]
$\Rightarrow$ the normwise backward error is determined by the growth factor.
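With MATLAB's lu, the growth factor of a computed factorization can be evaluated straight from the definition (a sketch; note that with partial pivoting $\max_{ik}|\hat{L}_{ik}| = 1$, so $\rho$ is essentially the growth in $\hat{U}$):

    A = randn(500);
    [L, U, P] = lu(A);                 % GEPP: P*A = L*U
    normM = @(M) max(abs(M(:)));       % the max-norm in the definition
    rho = normM(L)*normM(U)/normM(A)   % modest for random matrices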

Without pivoting…
there is no bound for the growth factor.
Example
Thematrix 1 for⌧1has⇢⇡2 whichcanbearbitrarily 11
large.
even for 2 ⇥ 2 system!
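A hedged numerical demonstration of this example: with $\epsilon$ below rounding level, the factors computed without pivoting no longer reproduce $A$, because $\mathrm{fl}(1 - 1/\epsilon) = -1/\epsilon$.

    ep = 1e-20;
    A = [ep 1; 1 1];
    L = [1 0; 1/ep 1];        % exact factors without pivoting
    U = [ep 1; 0 1 - 1/ep];   % fl(1 - 1/ep) rounds to -1/ep
    L*U                       % = [ep 1; 1 0]: the (2,2) entry of A is lost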

For Gauss elimination with partial pivoting
There are matrices that have $\rho = 2^{\,n-1}$; for them GEPP is not backward stable.
These matrices appear to be very rare in practice. For most matrices $\rho$ is very modest in size, e.g. $< 10$.
For random matrices, e.g. matrices with components drawn from a standard normal distribution, $\rho \sim n^{1/2}$.
Hence experts say that, for practical purposes, GEPP is (normwise) backward stable: it is the default direct method for solving general dense linear systems.

SUMMARY
1 Gauss elimination without pivoting is not backward stable.
2 GEPP is, for practical purposes, (normwise) backward stable.
3 Cholesky factorization is (normwise) backward stable.
4 There are other factorizations (QR) that are (normwise) backward stable for general matrices but more expensive than GEPP.
5 A backward stable method produces small (relative) residuals.
6 This does not imply small (forward) errors if $\kappa(A) \gg 1$, i.e. your problem is ill-conditioned.

What if A is ill-conditioned?
Remedy:
1 Do you really need $x$, or just $f(x)$ or $r$?
2 Do the problem another way, i.e. change $A$.
3 Live with the reduced accuracy.

End of Week 7