Solution of Week 5 Lab (Prepared by Yuan Yin)
December 22, 2019
1 Error Propagation
1.1 a.
• Prove the recursive relation (with boundary condition) and the restriction on $I_n$:
$\dfrac{x^n}{x+2} = \dfrac{x^{n-1}(x+2) - 2x^{n-1}}{x+2} = x^{n-1} - \dfrac{2x^{n-1}}{x+2}$;
$I_n = \displaystyle\int_0^1 \frac{x^n}{x+2}\,dx = \int_0^1 x^{n-1}\,dx - 2\int_0^1 \frac{x^{n-1}}{x+2}\,dx = \frac{1}{n} - 2I_{n-1}$, as required.
$I_0 = \displaystyle\int_0^1 \frac{1}{x+2}\,dx = \big[\log(x+2)\big]_0^1 = \log\tfrac{3}{2} > 0$.
Also, since $\dfrac{x^n}{x+2} > 0$ when $x \in [0,1]$ for all $n$, we have $I_n > 0$. Because $I_{n-1} > 0$ as well, $I_n = \frac{1}{n} - 2I_{n-1} < \frac{1}{n}$, as required.
• Explain the magnitude of the error you find when running BadRecurrence.m:
The output shows that $I_{100} \approx 6.0580 \times 10^{12}$, which contradicts the fact that $I_n < \frac{1}{n} = \frac{1}{100}$. This is because of the $-2$ in the recursive relation: every pass through the for loop magnifies the existing error by $-2$.
If you switch on the debug mode, you can actually see that when $n \sim 50$, $I_n$ starts to oscillate between negative and positive values. Why? Assume that the initial relative input error is the unit roundoff, $2^{-53}$. Then, after $n = 53$ iterations, the relative error is roughly $|2^{-53} \times (-2)^{53}| = 1$. Hence, already when $n \sim 50$, the absolute error carried by $I_n$ can be of the same magnitude as $I_n$ itself, and it is clear that we should not expect this algorithm to give us any reliable answer (see the sketch at the end of this part).
• How to run the recurrence backwards?
If we rearrange the recursive relation, we get $I_{n-1} = \dfrac{\frac{1}{n} - I_n}{2}$. Then, by guessing the initial condition to be $I_{200} = \frac{1}{200}$, we can get $I_{100}$ through iterations.
• What do you find when running GoodRecurrence.m?
The output is $I_{100} = 0.003311185913527$. If you use the MATLAB command, ‘integral’, to check the result, you will see that GoodRecurrence.m gives a quite accurate answer.
Why? This is because of the $-\frac{1}{2}$ factor in the good recursive relation: each time you go through the for loop, the error is halved. Therefore, although our initial guess, $I_{200} = \frac{1}{200}$, may not be accurate, after 100 iterations the absolute error will be so small that we should expect this algorithm to give us a reliable answer. (A small sketch of both the forward and backward recurrences is given below.)
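For concreteness, here is a minimal sketch of both recurrences side by side. It is only an illustration of the discussion above, not the actual BadRecurrence.m or GoodRecurrence.m; the target $I_{100}$, the starting guess $I_{200} = \frac{1}{200}$ and the final check against ‘integral’ follow the choices described above.

% Forward (unstable) recurrence: any error in I is multiplied by -2 at each step
I_fwd = log(3/2);                     % I_0 = integral of 1/(x+2) over [0,1]
for n = 1 : 100
    I_fwd = 1/n - 2*I_fwd;
end

% Backward (stable) recurrence: any error in I is halved at each step
I_bwd = 1/200;                        % rough guess for I_200 (true value lies in (0, 1/200))
for n = 200 : -1 : 101
    I_bwd = (1/n - I_bwd)/2;
end

fprintf('forward  I_100: %g\n', I_fwd)      % huge, although the true I_100 < 1/100
fprintf('backward I_100: %.15f\n', I_bwd)   % accurate to essentially full precision
fprintf('quadrature    : %.15f\n', integral(@(x) x.^100 ./ (x + 2), 0, 1))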
1.2 b.
• Observation:
As $h$ decreases, $|\text{total error}|$ first gets smaller, until $h \sim 10^{-8}$. This optimal value of $h$ gives $\min(|\text{total error}|) \approx 10^{-8}$. After that, $|\text{total error}|$ keeps increasing even though $h$ gets very small.
• Explain why the error first falls as $h$ is reduced, then rises:
In the lecture, we proved that $|\text{total error}| = |\text{Truncation Error}| + |\text{Roundoff Error}| \le K_1 h + K_2 \frac{u}{h} + K_3 u$. We can consider this bound as a function of $h$, say $g(h)$. Working out $\min(|\text{total error}|)$ is then equivalent to finding the $h$ with $g'(h) = 0$, and $g'(h) = 0 \iff K_1 - K_2 \frac{u}{h^2} = 0 \iff h \approx u^{1/2} \approx 10^{-8}$. The truncation error dominates when $h > 10^{-8}$, while the roundoff error dominates when $h < 10^{-8}$: as $h$ decreases, the roundoff error grows and the truncation error shrinks.
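To see this trade-off concretely, you can tabulate the forward-difference error over a range of $h$ values. The sketch below is only an illustration: $f(x) = \sin x$ at $x = 1$ is an assumed test function, and the lab script may use a different function and different $h$ values.

% Forward-difference error versus h (illustrative sketch; test function assumed)
f   = @(x) sin(x);
dfe = cos(1);                       % exact derivative of sin at x = 1
h   = 10.^(-(1:16))';               % h = 1e-1, 1e-2, ..., 1e-16
fd  = (f(1 + h) - f(1)) ./ h;       % forward-difference approximations
err = abs(fd - dfe);                % total error for each h
disp([h err])                       % error falls until h ~ 1e-8, then rises again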
1.3 c.
1. $I_n = \displaystyle\int_0^1 x^n e^{x-1}\,dx = \big[x^n e^{x-1}\big]_0^1 - n\int_0^1 x^{n-1} e^{x-1}\,dx = 1 - nI_{n-1}$ (integration by parts).
$x^n e^{x-1} > 0$ for all $n$ when $x \in [0,1]$, so $I_n > 0$;
$e^{x-1} < 1$ when $x \in (0,1)$, so $I_n = \displaystyle\int_0^1 x^n e^{x-1}\,dx < \int_0^1 x^n\,dx = \frac{1}{n+1}$.
So, indeed, $0 < I_n < \frac{1}{n+1}$, as required.
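If you want a quick numerical sanity check of this recurrence and these bounds, a comparison against MATLAB's ‘integral’ is enough; the value $n = 5$ below is an arbitrary choice for illustration.

% Check I_n = 1 - n*I_{n-1} and 0 < I_n < 1/(n+1) numerically (n = 5 assumed)
n   = 5;
In  = integral(@(x) x.^n     .* exp(x - 1), 0, 1);
Im1 = integral(@(x) x.^(n-1) .* exp(x - 1), 0, 1);
fprintf('I_n = %.15f,  1 - n*I_{n-1} = %.15f\n', In, 1 - n*Im1)
fprintf('bounds: 0 < %.6f < 1/(n+1) = %.6f\n', In, 1/(n + 1))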
2 Exercise Set 2: Fixed-point Iteration
b.
For fixed-point iteration, if it converges, the absolute error behaves like $e_{n+1} \approx k\,e_n$ (i.e. linear convergence), where $k$ differs between different $g$ (in fact $k = |g'(x_{\text{root}})|$) and $k \neq 0$. The smaller $k$ is, the faster the convergence. If $k = 0$, we would expect quadratic convergence.
From the outputs of ‘FixedPointErrors.m’, we can see that before arriving at the root, $k \approx 0.512$ for $g_3(x)$ and $k \approx 0.127$ for $g_4(x)$. For $g_5(x)$, $k$ starts at $0.00395$ and shrinks dramatically. Therefore, we can conclude that $g_5(x)$ exhibits better-than-linear convergence.
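One way to see these ratios yourself is to compute $|e_{n+1}|/|e_n|$ directly from a run of iterates. The sketch below is only an illustration: it uses $g_2$ from Ex2D.m (not one of the functions in ‘FixedPointErrors.m’), an assumed starting point $x_0 = 2$, 20 iterates, and a reference root obtained with ‘fzero’.

% Estimate the linear-convergence ratio k ~ |e_{n+1}| / |e_n| from iterates
g    = @(x) exp(exp(-x));                  % iteration function (g2 from Ex2D.m)
N    = 20;
x    = zeros(N, 1);
x(1) = g(2);                               % assumed starting point x0 = 2
for n = 2 : N
    x(n) = g(x(n - 1));
end
root = fzero(@(x) log(x) - exp(-x), 1.3);  % reference root of log(x) = exp(-x)
e    = abs(x - root);                      % absolute errors of the iterates
disp(e(2:end) ./ e(1:end-1))               % ratios settle near |g'(root)|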
c.
Call ‘cobweb’ and try to understand the result.
d.
The last three iteration functions, $g_2$, $g_3$ and $g_4$, all have the same fixed point, $1.3097995858041$, because each of them is just a rearrangement of the same equation $\log x = e^{-x}$. (Note from the output below that the iterates of $g_4$ nevertheless diverge from this fixed point, since $|g_4'(x)| > 1$ near it.)
[3]:
%%file Ex2D.m
function Ex2D
clc
format long
N = 30;

x0_1 = 1;
x1_1 = zeros(N, 1);   % preallocate memory
x1_1(1) = g1(x0_1);
for n = 2 : N
    x1_1(n) = g1(x1_1(n - 1));
end

x0_2 = 3;
x1_2 = zeros(N, 1);   % preallocate memory
x1_2(1) = g1(x0_2);
for n = 2 : N
    x1_2(n) = g1(x1_2(n - 1));
end

x0_3 = 6;
x1_3 = zeros(N, 1);   % preallocate memory
x1_3(1) = g1(x0_3);
for n = 2 : N
    x1_3(n) = g1(x1_3(n - 1));
end

Iterates = [x1_1 x1_2 x1_3];
disp(Iterates)

%%
clc
format long
x0 = 2; N = 30;

x2 = zeros(N, 1);   % preallocate memory
x2(1) = g2(x0);
for n = 2 : N
    x2(n) = g2(x2(n - 1));
end

x3 = zeros(N, 1);   % preallocate memory
x3(1) = g3(x0);
for n = 2 : N
    x3(n) = g3(x3(n - 1));
end

x4 = zeros(N, 1);   % preallocate memory
x4(1) = g4(x0);
for n = 2 : N
    x4(n) = g4(x4(n - 1));
end

Iterates = [x2 x3 x4];
disp(Iterates)
end
% -------------------------------------
% subfunctions
function y = g1(x)
y = cos(x);
end
function y = g2(x)
y = exp(exp(-x));
end
function y = g3(x)
y = x - log(x) + exp(-x);
end
function y = g4(x)
y = x + log(x) - exp(-x);
end
Created file ‘/Users/RebeccaYinYuan/MAST30028 Tutorial Answers Yuan Yin/WEEK 5/Ex2D.m’.
[7]: Ex2D
0.540302305868140 -0.989992496600445 0.960170286650366
0.857553215846393 0.548696133603097 0.573380480369621
0.654289790497779 0.853205311505747 0.840071952619900
0.793480358742566 0.657571671944072 0.667409245090195
0.701368773622757 0.791478749684416 0.785427856067595
0.763959682900654 0.702794111808299 0.707085784986426
0.722102425026708 0.763039187796815 0.760258236815235
0.750417761763761 0.722738904784978 0.724658081694652
0.731404042422510 0.749996919694713 0.748726116355687
0.744237354900557 0.731690968525826 0.732556603421911
0.735604740436347 0.744045681952540 0.743467047700358
0.741425086610109 0.735734568286841 0.736126336713148
0.737506890513243 0.741337961246103 0.741074976073316
0.740147335567876 0.737565726923219 0.737743288820334
0.738369204122323 0.740107770052690 0.739988350091130
0.739567202212256 0.738395886397535 0.738476414072198
0.738760319874211 0.739549242570510 0.739495036797005
0.739303892396906 0.738772423983223 0.738808955148466
0.738937756715344 0.739295741775515 0.739271141894108
0.739184399771494 0.738943248365088 0.738959822746324
0.739018262427412 0.739180701117231 0.739169538051310
0.739130176529671 0.739020754151705 0.739028274469591
0.739054790746917 0.739128498195072 0.739123432755420
0.739105571926536 0.739055921348123 0.739059333642071
0.739071365298945 0.739104810364846 0.739102511871966
0.739094407379091 0.739071878307350 0.739073426631278
0.739078885994992 0.739094061815581 0.739093018860241
0.739089341403393 0.739079118773054 0.739079821326797
0.739082298522402 0.739089184602345 0.739088711356633
0.739087042695332 0.739082404145953 0.739082722931292
1.144920592687449 1.442188102676667 2.557811897323333
1.374718782110840 1.312436529823259 3.419489986371418
1.287768285352150 1.309714605776453 4.616252276551578
1.317697367725113 1.309802422943473 6.135945666000545
1.307021814569891 1.309799491189784 7.947946196938755
1.310783209399275 1.309799588959517 10.020506365014210
1.309452110560984 1.309799585698920 12.325095516401717
1.309922438815369 1.309799585807660 14.836728547172431
1.309756162981972 1.309799585804033 17.533833952149614
1.309814935371413 1.309799585804154 20.397966310946892
1.309794160076128 1.309799585804150 23.413401514803315
1.309801503702707 1.309799585804150 26.566710087468685
1.309798907864212 1.309799585804150 29.846369019001621
1.309799825443164 1.309799585804150 33.242432210536805
1.309799501096317 1.309799585804150 36.746259349042617
1.309799615746765 1.309799585804150 40.350295783100265
1.309799575220004 1.309799585804150 44.047894508225546
1.309799589545445 1.309799585804150 47.833172061695251
1.309799584481674 1.309799585804150 51.700891436688629
1.309799586271621 1.309799585804150 55.646366460543504
1.309799585638909 1.309799585804150 59.665383243420791
1.309799585862560 1.309799585804150 63.754135250475294
1.309799585783504 1.309799585804150 67.909169299084198
1.309799585811449 1.309799585804150 72.127340365755231
1.309799585801571 1.309799585804150 76.405773538802350
1.309799585805062 1.309799585804150 80.741831801999183
1.309799585803828 1.309799585804150 85.133088604830718
1.309799585804265 1.309799585804150 89.577304385107311
1.309799585804110 1.309799585804150 94.072406373729166
1.309799585804165 1.309799585804150 98.616471140056930
3 Exercise Set 3: Bisection
Read the Tute sheet and run ‘driverBisect.m’
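For reference, here is a minimal bisection sketch. It is not ‘driverBisect.m’ itself; the function, bracket and tolerance are assumptions, chosen so that the root matches the fixed point found in Exercise 2.

% Minimal bisection sketch (function, bracket and tolerance are assumed)
f   = @(x) log(x) - exp(-x);        % same root as Exercise 2, about 1.3098
a   = 1;  b = 2;                    % bracket with f(a) < 0 < f(b)
tol = 1e-12;
while (b - a)/2 > tol
    c = (a + b)/2;                  % midpoint
    if sign(f(c)) == sign(f(a))
        a = c;                      % root lies in [c, b]
    else
        b = c;                      % root lies in [a, c]
    end
end
disp((a + b)/2)                     % approximately 1.309799585804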