Numerical Integration
In this livescript, you will learn how to:
- Apply the trapezoidal rule and Simpson's rule to approximate integrals.
- Apply Gaussian quadrature to approximate integrals.
Numerical Integration
The main idea behind numerical integration (or quadrature) is to approximate a complicated function at some set of points by a Lagrange polynomial, which we can then integrate easily.
Trapezoidal Rule
The easiest way to approximate a function is to assume that it varies linearly between the end points,
f_{1}(x)\approx f(x_{0})\frac{x-x_{1}}{x_{0}-x_{1}}+f(x_{1})\frac{x-x_{0}}{x_{1}-x_{0}}
which we may then integrate to find
\int^{x_{1}}_{x_{0}}{f(x)dx}\approx\int^{x_{1}}_{x_{0}}{f_{1}(x)dx}=\frac{\Delta}{2}[f(x_{0})+f(x_{1})]
where \Delta=x_{1}-x_{0} . For example, with f(x)=\exp(x) :
x = linspace(0,1,100);
f = @(x)exp(x);
plot(x,f(x))
hold on
plot([0 1],[f(0) f(1)],'b--')
legend('Exact','Linear Approximation')
hold off
However, we see that the linear approximation overestimates the function, and so the integral will be overestimated as well. In fact, the error can be found by integrating the error in the Lagrange polynomial from last week,
E(t)=\frac{(t-x_{0})(t-x_{1})}{2}f''(\xi)
(a) Show that the error in the integral is given by \frac{\Delta^{3}}{12}f''(\xi) for some \xi in [x_{0},x_{1}] .
(b) Using your answer to part (a), obtain a maximal bound for the error in applying the trapezoidal rule to \int^{1}_{0}{\exp{(x)}dx} .
Implementing this in MATLAB, we have
f = @(x) exp(x);
b = 1;
a = 0;
delta = (b-a);
I = (delta/2)*(f(a)+f(b))
The error is therefore given by
Error = abs((f(1)-f(0))-I)
(c) Run this command and compare it with your answer in part (b). A sketch for evaluating the bound from part (a) numerically is given below.
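For reference, here is a minimal sketch of that comparison. It assumes the result of part (a), an error of \frac{\Delta^{3}}{12}f''(\xi) , together with the fact that f''(x)=\exp(x) attains its maximum on [0,1] at x=1 .
% Sketch only: theoretical bound assuming part (a), with Delta = b - a = 1
% and max|f''| = exp(1) on [0,1]; the computed Error above should lie below it.
bound = (1^3/12)*exp(1)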
We see that the trapezoidal rule provides a poor approximation to the integral. One way of improving the accuracy is to divide the domain into smaller intervals and apply a linear approximation on each interval. For our example, with two intervals, we can see that the large error in the linear Lagrange polynomial at the centre is avoided.
x = linspace(0,1,100);
f = @(x)exp(x);
plot(x,f(x))
hold on
plot([0 0.5 1],[f(0) f(0.5) f(1)],'b--')
legend('Exact','Piecewise Linear Approximation')
hold off
So, decomposing the integral into two integrals over these smaller sub-intervals, we have
\int^{1}_{0}{\exp(x)dx}=\int^{0.5}_{0}{\exp(x)dx}+\int^{1}_{0.5}{\exp(x)dx}
and applying the trapezoidal rule to each, with nodes x_{0}=0 , x_{1}=0.5 , x_{2}=1 and \Delta=0.5 ,
\int^{1}_{0}{\exp(x)dx}\approx\frac{\Delta}{2}[f(x_{0})+f(x_{1})]+\frac{\Delta}{2}[f(x_{1})+f(x_{2})]
For an arbitrary function f , we can extend this argument to N intervals to give
\int^{x_{N}}_{x_{0}}{f(x)dx}=\sum^{N-1}_{i=0}\int^{x_{i+1}}_{x_{i}}{f(x)dx}
(d) Assume that the sub-intervals are equally spaced. Show that applying the trapezoidal rule to each sub-interval gives
\int^{x_{N}}_{x_{0}}{f(x)dx}\approx \frac{\Delta}{2}\Big(f(x_{0})+2\sum^{N-1}_{i=1}{f(x_{i})}+f(x_{N})\Big)
Coding this in MATLAB gives the following piece of code.
a = 0;
b = 1;
N = 2;
f = @(x) exp(x)
delta = (b-a)/N;
xi = linspace(a,b,N+1)
fi = f(xi);
I = (delta/2)*(fi(1)+2*sum(fi(2:end-1))+fi(end))
(e) Run this piece of code for N=2,4,8,16,32 intervals.
As for the error, since we have N intervals, the error in the integral is
\sum^{N}_{i=1}\frac{\Delta^{3}}{12}f''(\xi_{i})=\frac{b-a}{12}\Delta^{2}f''(\mu)
for some \mu\in[a,b] . To verify this, the following piece of code computes the error in the integral for N=2^{k} intervals, k=1,\dots,10 .
a = 0;
b = 1;
f = @(x) exp(x)
Ivals = zeros(1,10);   % store the composite trapezoidal estimate for each N
for k = 1:10
N = 2^k;
delta = (b-a)/N;
xi = linspace(a,b,N+1);
fi = f(xi);
I = (delta/2)*(fi(1)+2*sum(fi(2:end-1))+fi(end));   % composite trapezoidal estimate
Ivals(k) = I;
end
g = fitlm(log10(2.^(1:10)),log10(abs((f(1)-f(0))-Ivals)))   % fit a line to log(error) vs log(N)
plot(g)
(f) Run the code and check whether the fit matches the predicted result: since the error scales as \Delta^{2} and \Delta\propto 1/N , the slope of the fit should be close to -2 .
Simpson's Rule
Another way we could refine the integral is to increase the order of the interpolating polynomial. If we add an interior node at the midpoint, the Lagrange polynomial is
f_{2}(x)=f(x_{0})\frac{(x-x_{1})(x-x_{2})}{(x_{0}-x_{1})(x_{0}-x_{2})}+f(x_{1})\frac{(x-x_{0})(x-x_{2})}{(x_{1}-x_{0})(x_{1}-x_{2})}+f(x_{2})\frac{(x-x_{0})(x-x_{1})}{(x_{2}-x_{0})(x_{2}-x_{1})}
Integrating over the interval x_{0}\leq x \leq x_{2} gives
\int^{x_{2}}_{x_{0}}{f_{2}(x)dx}=\frac{\Delta}{3}[f(x_{0})+4f(x_{1})+f(x_{2})]
where \Delta=x_{1}-x_{0} is the node spacing. Furthermore, using the Taylor series expansion of f , it is possible to show that the error in Simpson's rule is given by
E_{\text{int}}=\Big|\frac{\Delta^{5}}{90}f^{(4)}(\xi)\Big|
Turning back to the example f(x)=\exp(x) , if we plot the quadratic interpolating polynomial at the nodes x=0,0.5,1 , we find
x = linspace(0,1,100);
f = @(x)exp(x);
a = polyfit([0 0.5 1],exp([0 0.5 1]),2);
plot(x,f(x))
hold on
plot(x,polyval(a,x),'b--')
legend('Exact','Quadratic Approximation')
hold off
This is much more accurate than the linear interpolating polynomial, so we expect the integral to be more accurate as well.
f = @(x) exp(x);
b = 1;
a = 0;
c = (a+b)/2;   % midpoint of the interval
delta = (b-a)/2;
I = (delta/3)*(f(a)+4*f(c)+f(b))
The error is therefore given by
Error = abs((f(1)-f(0))-I)
which is much smaller than the error from the trapezoidal rule on a single interval. A quick comparison against the theoretical bound is sketched below.
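As a rough check (again a sketch, assuming the error formula above), with \Delta=0.5 and \max_{[0,1]}|f^{(4)}(x)|=\exp(1) the bound is
% Sketch only: Simpson's rule error bound (Delta^5/90)*max|f^(4)| with Delta = 0.5;
% the computed Error above should sit below it.
bound = (0.5^5/90)*exp(1)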
Again, we can also refine the integral by using smaller sub-intervals. For two sub-intervals:
x = linspace(0,1,100);
f = @(x)exp(x);
a1 = polyfit([0 0.25 0.5],exp([0 0.25 0.5]),2);
a2 = polyfit([0.5 0.75 1],exp([0.5 0.75 1]),2);
plot(x,f(x))
hold on
plot(x,[polyval(a1,x(1:50)),polyval(a2,x(51:100))],'b--')
legend('Exact','Quadratic Approximation')
hold off
Applying Simpson's rule on each pair of sub-intervals gives the composite rule
\int^{x_{N}}_{x_{0}}{f(x)dx}\approx\frac{\Delta}{3}\Big(f(x_{0})+4\sum^{N-1}_{i=1,\,i\ \text{odd}}{f(x_{i})}+2\sum^{N-2}_{i=2,\,i\ \text{even}}{f(x_{i})}+f(x_{N})\Big)
and the error
E_{\text{int}}=\frac{\Delta^{5}}{90}\sum^{N/2}_{j=1}{|f^{(4)}(\xi_{j})|}=\Big|\frac{(b-a)}{180}\Delta^{4}f^{(4)}(\mu)\Big|
In MATLAB, a piece of code that computes this is given below. Note that in the code, N counts the pairs of sub-intervals: xi holds the outer (even-indexed) nodes and c holds the midpoints (odd-indexed nodes) of the formula above.
a = 0;
b = 1;
N = 2;
f = @(x) exp(x)
delta = (b-a)/(2*N);
xi = linspace(a,b,N+1);
c = (xi(2:end)+xi(1:end-1))/2;
fout = f(xi);
fint = f(c);
I = (delta/3)*(fout(1)+2*sum(fout(2:end-1))+4*sum(fint)+fout(end))
(g) Re-run the code with N=2,4,8,16,32,64,128,256,512,1024 intervals.
(h) Perform a numerical experiment with the values from part (g) to check the predicted order of convergence; one possible experiment is sketched below.
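One possible experiment (a sketch, mirroring the trapezoidal convergence study above) fits a straight line to the logarithm of the error; since the error scales as \Delta^{4} , the slope should be close to -4 .
% Sketch only: convergence study for composite Simpson's rule on exp(x) over [0,1]
a = 0;
b = 1;
f = @(x) exp(x);
err = zeros(1,10);
for k = 1:10
    N = 2^k;                                  % number of Simpson panels
    delta = (b-a)/(2*N);
    xi = linspace(a,b,N+1);
    c = (xi(2:end)+xi(1:end-1))/2;            % panel midpoints
    fout = f(xi);
    fint = f(c);
    I = (delta/3)*(fout(1)+2*sum(fout(2:end-1))+4*sum(fint)+fout(end));
    err(k) = abs((f(1)-f(0))-I);              % error against the exact value
end
g = fitlm(log10(2.^(1:10)),log10(err))
plot(g)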
Gaussian Quadrature
As we saw last week, using equally spaced nodes to interpolate a polynomial can lead to large oscillations near the end points. For example, approximating the Witch of Agnesi with an 8th-order interpolating polynomial:
x = linspace(-1,1,1000);
f = @(x) 1./(1+25*x.^2);
x_even = linspace(-1,1,9);
f_even_interp = polyval(polyfit(x_even,f(x_even),8),x);
plot(x,f(x),x,f_even_interp)
This would not provide a very good approximation to integrate. Therefore, we seek a set of points \{x\} such that an N-point quadrature rule integrates polynomials of degree 2N-1 exactly. In lectures, you will have seen that this set of points corresponds to the roots of the Legendre polynomial of order N . The only problem is that finding the roots of the Legendre polynomial can be tricky. In MATLAB the problem isn't too severe, as the \texttt{legendreP(n,x)} function is available, and we can solve for its roots to obtain the quadrature points. As for the quadrature weights, analytical expressions do exist, but they are a bit complicated and won't be given here. So, we've provided the function \texttt{gauss\_legendre\_points(k)} to give the weights and points for the k-point Gauss-Legendre quadrature rule (a sketch of how such a helper might be built is shown after the example below).
[w,x_GL] = gauss_legendre_points(4)
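For the curious, here is a minimal sketch of how such a helper might be written; it is purely illustrative, and you should use the provided \texttt{gauss\_legendre\_points} . It solves \texttt{legendreP(N,x)}=0 with \texttt{vpasolve} and applies the standard weight formula w_{i}=2/[(1-x_{i}^{2})P_{N}'(x_{i})^{2}] . It requires the Symbolic Math Toolbox, and in a script such a local function would go at the end of the file.
% Illustrative sketch only (hypothetical helper, not the provided function)
function [w,x] = my_gauss_legendre_points(N)
syms t
P = legendreP(N,t);                   % Legendre polynomial of order N
x = sort(double(vpasolve(P == 0)));   % its N roots are the quadrature points
dP = matlabFunction(diff(P,t));       % P_N'(t) as a numeric function
w = 2./((1 - x.^2).*dP(x).^2);        % standard Gauss-Legendre weights
end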
So, using the Gauss-Legendre points to approximate the Witch of Agnesi, we find
[w,x_GL] = gauss_legendre_points(9);
f_GL_interp = polyval(polyfit(x_GL,f(x_GL),8),x);
plot(x,f(x),x,f_GL_interp)
which is a much better approximation. To illustrate how one would use Gauss-Legendre quadrature to evaluate an integral, consider
I=\int^{b}_{a}{\exp{(x)}dx}
where b=1 and a=0 . Notice that the interval is 0\leq x\leq1 , whereas the Gauss-Legendre quadrature rules apply only on the standard interval -1\leq \xi\leq 1 .
Step 1. Relate the global coordinates x to the local coordinates \xi using the mapping
x(\xi)=\frac{1}{2}(1-\xi)a+\frac{1}{2}(1+\xi)b
Step 2. To convert the integral from global to local coordinates, determine the Jacobian of the mapping,
\frac{dx}{d\xi}=\frac{b-a}{2}
Hence the integral is given by
I=\int^{1}_{-1}{\exp{\{x(\xi)\}}\frac{b-a}{2}d\xi}
Step 3. Use the Gauss-Legendre rule to approximate I :
I\approx\sum^{k}_{i=1}{w_{i}\exp{\{x(\xi_{i})\}}}\frac{b-a}{2}
A piece of code that uses the \texttt{sum} function to compute I is given below.
a = 0;
b = 1;
N = 10;
[w,xi] = gauss_legendre_points(N);
x = 0.5.*(1-xi).*a+0.5.*(1+xi).*b;
f = exp(x);
I = sum(w.*f)*(b-a)/2
(i) Run this piece of code and see how well it matches the exact value.
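For example (following the same pattern as the earlier checks), the error against the exact value \exp(1)-\exp(0) is
Error = abs((exp(1)-exp(0))-I)
Note that in the block above f holds the vector of function values at the quadrature points rather than a function handle, which is why exp is called directly here.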