MATH 210 Quiz 3¶
In [1]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as spi
Notation¶
\begin{align*} \Delta x &= \frac{b - a}{N} \\ x_n &= a + n \Delta x \\ S_N(f) &= \frac{\Delta x}{3} \sum_{n=1}^{N/2} \left( f(x_{2n-2}) + 4f(x_{2n-1}) + f(x_{2n}) \right) \\ E^S_N(f) &= \left| \ \int_a^b f(x) \, dx - S_N(f) \ \right| \leq \frac{(b - a)^5}{180 N^4} K_4 \ \ , \ \ | f''''(x) | \leq K_4 \ \ \text{for all} \ \ x \in [a,b] \end{align*}
Q2¶
Consider a definite integral $I = \int_a^b f(x)dx$ such that $f(x)$ is not a polynomial of degree 3 or less. If we know the exact value $I$ and we plot $\log(E_N^S(f))$ as a function of $\log(N)$ then we expect a line with slope __.
The error bound scales like $N^{-4}$, so taking logarithms we expect a line with slope approximately $-4$. The cell below verifies this for $\int_0^1 \sin(x^2) dx \approx 0.31026830172338105$.
In [2]:
N = [10,100,1000,10000]
E = np.zeros(len(N))
for n in range(0,len(N)):
    x = np.linspace(0,1,N[n]+1)
    y = np.sin(x**2)
    I = spi.simps(y,x)
    E[n] = np.abs(I - 0.31026830172338105)
plt.loglog(N,E,'.-')
plt.show()
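As a rough check we can estimate the slope of the line with np.polyfit (reusing the arrays N and E from the cell above); we expect a value near $-4$:
slope = np.polyfit(np.log(N),np.log(E),1)[0]
print(slope)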

Q3¶
If we know that $|f'''(x)| \leq A$ and $|f''''(x)| \leq B$ for all $x \in [0,1]$, find an integer $N$ which guarantees $E_N^S \leq E$ for $\int_0^1 x f(x) dx$.
Compute the fourth derivative of $g(x) = x f(x)$:
\begin{align*} g'(x) &= f(x) + xf'(x) \\ g''(x) &= 2f'(x) + xf''(x) \\ g'''(x) &= 3f''(x) + xf'''(x) \\ g''''(x) &= 4f'''(x) + xf''''(x) \end{align*}
Then $|g''''(x)| \leq 4A + B$ for $x \in [0,1]$, and the error formula with $b - a = 1$ gives $E_N^S \leq \frac{4A + B}{180 N^4} \leq E$, therefore $N \geq \left( \frac{4A + B}{180 E} \right)^{1/4}$, rounded up to an even integer since Simpson's rule requires $N$ even.
In [3]:
B = 5
A = 2
E = 0.001
((B+4*A)/180/E)**0.25
Out[3]:
2.915195680565539
In [4]:
B = 4
A = 1
E = 0.0001
((B+4*A)/180/E)**0.25
Out[4]:
4.591497693322865
In [5]:
B = 10
A = 4
E = 0.0001
((B+4*A)/180/E)**0.25
Out[5]:
6.164888279872118
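The three computations above give $N = 4$, $N = 6$ and $N = 8$ after rounding up to an even integer. A small helper function (a sketch, not part of the quiz) makes the rounding explicit:
def simpson_N(A,B,E):
    # smallest even N with (4A + B)/(180 N^4) <= E
    N = int(np.ceil(((4*A + B)/180/E)**0.25))
    return N if N % 2 == 0 else N + 1
print(simpson_N(2,5,0.001),simpson_N(1,4,0.0001),simpson_N(4,10,0.0001))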
Q4¶
Simpson's rule computes the exact answer for any even $N$ if $f(x)$ is a polynomial of degree 3 or less. TRUE
Q5¶
Consider a first order differential equation $y' = f(t,y)$, $y(0) = y_0$. Let $h$ be the time step in Euler's method. If we reduce the step size $h$ then:
• the number of computations to approximate the solution $y(t)$ on the interval $[0,1]$ increases. TRUE
• the number of computations to compute one iteration of Euler's method increases. FALSE
• the error of our approximation $y_N$ decreases. TRUE
• the error of our approximation $y_1$ decreases. TRUE
Q6¶
Consider a first order differential equation $y'=f(t,y)$, $y(0)=1$ with unique solution $y(t)$. Suppose we implement one iteration of Euler's method $y_1$ with step size $h=0.01$ and compute the error $E = |y(h) - y_1| = 0.05$. If we then implement one iteration of Euler's method $y_1$ with step size $h=0.005$, we expect the error $E = |y(h) - y_1|$ to be approximately __.
The local error of one Euler step is $O(h^2)$, so the slope of the $\log E$ versus $\log h$ graph is 2. Therefore if we reduce $h$ by a factor of 2 we expect $E$ to decrease by a factor of 4.
In [6]:
0.05/4
Out[6]:
0.0125
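As a quick sanity check (a hypothetical example, not part of the quiz), apply one Euler step to $y' = y$, $y(0) = 1$, where the exact value is $y(h) = e^h$; halving $h$ cuts the error by roughly a factor of 4:
for h in [0.01,0.005]:
    y1 = 1 + h  # one Euler step: y1 = y0 + f(0,y0)*h with f(t,y) = y
    print(h,np.abs(np.exp(h) - y1))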
Q7¶
Consider the differential equation $y' = y^2 + t$, $y(0)=0.5$ with unique solution $y(t)$. If we implement $N$ iterations of Euler's method over the interval $[0,1]$ with step size $h = 1/N$ then we expect:
• $y(1) > y_N$
• $y(1) < y_N$
• $y(1) = y_N$
• Not enough information to make a conclusion.
The solution $y(t)$ is concave up because $y'' = 2yy' + 1 = 2y(y^2 + t) + 1 > 0$ (here $y(0) = 0.5 > 0$ and $y' > 0$, so $y$ stays positive), therefore we expect $y_N$ to be an underestimate: $y(1) > y_N$. The cell below confirms that Euler's method gives a smaller value than spi.odeint.
In [7]:
def odeEuler(f,t,y0):
    y = np.zeros(len(t))
    y[0] = y0
    for n in range(0,len(t)-1):
        y[n+1] = y[n] + f(t[n],y[n])*(t[n+1] - t[n])
    return y
t = np.linspace(0,1,100)
f = lambda t,y: y**2 + t
y0 = 0.5
y = odeEuler(f,t,y0)
print(y[-1])
y1 = spi.odeint(f,y0,t,tfirst=True)
print(y1[-1,0])
2.067150966382936
2.1278722514427244
Consider the differential equation $y' = \cos(y)$, $y(0)=0$ with unique solution $y(t)$. If we implement $N$ iterations of Euler's method over the interval $[0,1]$ with step size $h = 1/N$ then we expect the approximation $y_N$ at the $N$th iteration to satisfy:
• $y(1) > y_N$
• $y(1) < y_N$
• $y(1) = y_N$
• Not enough information to make a conclusion.
The solution $y(t)$ is concave down because $y'' = -\sin(y) y' = -\sin(y)\cos(y) < 0$ for $0 < y < \pi/2$ (the solution increases from $0$ but stays below $\pi/2$), therefore we expect $y_N$ to be an overestimate: $y(1) < y_N$. The cell below confirms that Euler's method gives a larger value than spi.odeint.
In [8]:
def odeEuler(f,t,y0):
    y = np.zeros(len(t))
    y[0] = y0
    for n in range(0,len(t)-1):
        y[n+1] = y[n] + f(t[n],y[n])*(t[n+1] - t[n])
    return y
t = np.linspace(0,1,100)
f = lambda t,y: np.cos(y)
y0 = 0
y = odeEuler(f,t,y0)
print(y[-1])
y1 = spi.odeint(f,y0,t,tfirst=True)
print(y1[-1,0])
0.8671906411097661
0.8657694518428407
Q8¶
Find a value $\alpha$ such that the following numerical method is order 2:
\begin{align*} k_1 &= f(t_n,y_n) \\ k_2 &= f(t_n + \alpha h, y_n + \alpha k_1 h) \\ y_{n+1} &= y_n + (- k_1 + 2 k_2)h \end{align*}
Apply the method to the equation $y' = y$, $y(0) = 1$. We know the exact solution is $y(t) = e^t = 1 + t + t^2/2 + t^3/6 + \cdots$. Plug in the values $t_0 = 0$ and $y_0 = 1$:
\begin{align*} k_1 &= 1 \\ k_2 &= 1 + \alpha h \\ y_1 &= 1 + (- 1 + 2 (1 + \alpha h))h = 1 + h + 2\alpha h^2 \end{align*}
To match the coefficient of $h^2$ in the Taylor series of $e^h$, which is $1/2$, we need $2\alpha = 1/2$, therefore $\alpha = 1/4$.
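A quick numerical check (a sketch, not part of the quiz): one step of the method with $\alpha = 1/4$ applied to $y' = y$, $y(0) = 1$ has local error $O(h^3)$, so halving $h$ should cut the error by about a factor of 8:
f = lambda t,y: y
alpha = 1/4
for h in [0.1,0.05,0.025]:
    k1 = f(0,1)
    k2 = f(alpha*h,1 + alpha*k1*h)
    y1 = 1 + (-k1 + 2*k2)*h
    print(h,np.abs(np.exp(h) - y1))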
Find a value $\alpha$ such that the following numerical method is order 2:
\begin{align*} k_1 &= f(t_n,y_n) \\ k_2 &= f(t_n + \alpha h, y_n + \alpha k_1 h) \\ y_{n+1} &= y_n + (k_1/3 + 2k_2/3)h \end{align*}
Apply the method to the equation $y' = y$, $y(0) = 1$. We know the exact solution is $y(t) = e^t = 1 + t + t^2/2 + t^3/6 + \cdots$. Plug in the values $t_0 = 0$ and $y_0 = 1$:
\begin{align*} k_1 &= 1 \\ k_2 &= 1 + \alpha h \\ y_1 &= 1 + (1/3 + 2 (1 + \alpha h)/3)h = 1 + h + 2\alpha h^2/3 \end{align*}
To match the coefficient of $h^2$ in the Taylor series of $e^h$, we need $2\alpha/3 = 1/2$, therefore $\alpha = 3/4$.
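The same check for this method with $\alpha = 3/4$ (reusing f from the sketch above):
alpha = 3/4
for h in [0.1,0.05,0.025]:
    k1 = f(0,1)
    k2 = f(alpha*h,1 + alpha*k1*h)
    y1 = 1 + (k1/3 + 2*k2/3)*h
    print(h,np.abs(np.exp(h) - y1))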
Q9¶
Suppose we apply one iteration of a numerical method to the equation $y' = -y^2$, $y(0) = 1$ with step size $h=0.01$ and the result is $y_1 = 0.9900996666666667$, and then we apply the same method with $h=0.005$ and the result is $y_1 = 0.9950249583333334$. Determine the order of the method.
The exact solution is $y(t) = 1/(1+t)$, therefore $y(0.01) = 1/1.01$ and $y(0.005) = 1/1.005$. Compute the slope of $\log E$ versus $\log h$:
In [11]:
y11 = 0.9900996666666667
y12 = 0.9950249583333334
(np.log(y11 - 1/1.01) - np.log(y12 - 1/1.005))/(np.log(0.01) - np.log(0.005))
Out[11]:
2.9892198944559185
The slope is 3, and the error after one step of an order $p$ method is $O(h^{p+1})$, therefore the order is 2.
Q10¶
The following code computes an approximation of what exact value?
In [12]:
N = 100
t = np.linspace(0,1,N)
y = np.zeros(N)
dy = np.zeros(N)
y[0] = 1
dy[0] = 1
for n in range(0,len(t)-1):
    h = t[n + 1] - t[n]
    y[n + 1] = y[n] + dy[n]*h
    dy[n + 1] = dy[n] - y[n]*h
print(y[-1])
1.3887795316439246
This is Euler's method for the second order equation $y'' + y = 0$, $y(0)=1$, $y'(0)=1$. The solution is $y(t) = \cos(t) + \sin(t)$ and the output value is $y(1) = \cos(1) + \sin(1)$.
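To see the connection, write $y'' + y = 0$ as a first order system with $u_1 = y$ and $u_2 = y'$:
\begin{align*} u_1' &= u_2 \\ u_2' &= -u_1 \end{align*}
Euler's method applied to this system gives exactly the updates in the code: y[n+1] = y[n] + dy[n]*h and dy[n+1] = dy[n] - y[n]*h.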
In [13]:
np.cos(1) + np.sin(1)
Out[13]:
1.3817732906760363
In [14]:
N = 100
t = np.linspace(0,1,N)
y = np.zeros(N)
dy = np.zeros(N)
y[0] = 1
dy[0] = -1
for n in range(0,len(t)-1):
    h = t[n + 1] - t[n]
    y[n + 1] = y[n] + dy[n]*h
    dy[n + 1] = dy[n] - y[n]*h
print(y[-1])
-0.30264627289613044
This is Euler's method for the second order equation $y'' + y = 0$, $y(0)=1$, $y'(0)=-1$. The solution is $y(t) = \cos(t) - \sin(t)$ and the output value is $y(1) = \cos(1) - \sin(1)$.
In [15]:
np.cos(1) - np.sin(1)
Out[15]:
-0.30116867893975674
In [16]:
N = 100
t = np.linspace(0,1,N)
y = np.zeros(N)
dy = np.zeros(N)
y[0] = 2
dy[0] = 0
for n in range(0,len(t)-1):
    h = t[n + 1] - t[n]
    y[n + 1] = y[n] + dy[n]*h
    dy[n + 1] = dy[n] + y[n]*h
print(y[-1])
3.0706926408729154
This is Euler's method for the second order equation $y'' - y = 0$, $y(0)=2$, $y'(0)=0$. The solution is $y(t) = e^t + e^{-t}$ and the output value is $y(1) = e + e^{-1}$.
In [17]:
np.exp(1) + np.exp(-1)
Out[17]:
3.0861612696304874
In [18]:
N = 100
t = np.linspace(0,1,N)
y = np.zeros(N)
dy = np.zeros(N)
y[0] = 0
dy[0] = 2
for n in range(0,len(t)-1):
    h = t[n + 1] - t[n]
    y[n + 1] = y[n] + dy[n]*h
    dy[n + 1] = dy[n] + y[n]*h
print(y[-1])
2.338665431456556
This is Euler's method for the second order equation $y'' - y = 0$, $y(0)=0$, $y'(0)=2$. The solution is $y(t) = e^t - e^{-t}$ and the output value is $y(1) = e - e^{-1}$.
In [19]:
np.exp(1) - np.exp(-1)
Out[19]:
2.3504023872876028
Q11¶
Put the following lines in the correct order to plot the solution of the system of equations:
\begin{align*} x' &= x^2 - \alpha y^2 \\ y' &= 1 - \beta xy \end{align*}
In [20]:
import scipy.integrate as spi; import matplotlib.pyplot as plt; import numpy as np;
alpha = 2; beta = -0.1; N = 100
def odefun(u,t):
    dudt = np.array([0.,0.])
    dudt[0] = u[0]**2 - alpha*u[1]**2; dudt[1] = 1 - beta*u[0]*u[1];
    return dudt
t = np.linspace(0,1,N); u0 = np.array([0.,0.]); u = spi.odeint(odefun,u0,t);
x = u[:,0]; y = u[:,1];
plt.plot(x,y)
plt.show()

Q12¶
Let $\mathbf{x}(t) = (x_1(t),x_2(t))$ be the unique solution of the system:
\begin{align*} x_1' &= x_1^2 - x_2^2 \\ x_2' &= 1 - x_1x_2 \end{align*}
such that $x_1(0) = 1$ and $x_2(0) = 2$. At time $t = 0.01$ the solution satisfies:
• $x_1(0.01) > 1$ and $x_2(0.01) > 2$
• $x_1(0.01) < 1$ and $x_2(0.01) > 2$
• $x_1(0.01) > 1$ and $x_2(0.01) < 2$
• $x_1(0.01) < 1$ and $x_2(0.01) < 2$
Plug the value $t=0$ into the equation:
\begin{align*} x_1'(0) &= 1^2 - 2^2 < 0 \\ x_2'(0) &= 1 - 1 \cdot 2 < 0 \end{align*}
Therefore the solution is initially moving in the negative $x_1$ direction and the negative $x_2$ direction, so we expect $x_1(0.01) < 1$ and $x_2(0.01) < 2$.
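A quick check with spi.odeint (a sketch, not part of the quiz), integrating the system to $t = 0.01$ from $(1,2)$:
f = lambda t,x: np.array([x[0]**2 - x[1]**2,1 - x[0]*x[1]])
x = spi.odeint(f,[1.,2.],[0,0.01],tfirst=True)
print(x[-1])  # expect x1 < 1 and x2 < 2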
Let $\mathbf{x}(t) = (x_1(t),x_2(t))$ be the unique solution of the system:
\begin{align*} x_1' &= x_1^2 - x_2^2 \\ x_2' &= 1 - x_1x_2 \end{align*}
such that $x_1(0) = 2$ and $x_2(0) = 1$. At time $t = 0.01$ the solution satisfies:
• $x_1(0.01) > 2$ and $x_2(0.01) > 1$
• $x_1(0.01) < 2$ and $x_2(0.01) > 1$
• $x_1(0.01) > 2$ and $x_2(0.01) < 1$
• $x_1(0.01) < 2$ and $x_2(0.01) < 1$
Plug the value $t=0$ into the equation:
\begin{align*} x_1'(0) &= 2^2 - 1^2 > 0 \\ x_2'(0) &= 1 - 2 \cdot 1 < 0 \end{align*}
Therefore the solution is initially moving in the positive $x_1$ direction and the negative $x_2$ direction, so we expect $x_1(0.01) > 2$ and $x_2(0.01) < 1$.