Continuous-time Dynamical Systems
Dr Matthew Science University of Auckland
Continuous-time DS
In the last set of lectures we looked at discrete-time dynamical systems: systems whose state changes in discrete steps. Now let us consider the case where a system’s state changes not in steps, but continuously.
By continuous, we mean that if you were to look at a system’s time-series, it would always show smooth curves (and not discrete jumps), provided you look closely enough.
Discrete-time dynamical systems were defined using difference equations, where the next state of the system is defined as a function of the previous state.
For continuous systems the equations instead describe the rate of change in the variables.
Like with difference equations, it is possible for f to be defined in terms of the current state (x) and/or time (t). And like with difference equations, we can think of x as being a vector.
Or, equivalently, we can define a system as a set of coupled differential equations, with multiple scalar and/or vector state variables.
Some minimal examples..
Each curve shows a trajectory, that starts from a different initial condition.
the horizontal axis is time, and the vertical axis is state (which conveniently is 1D, and thus easy to visualise)
A 2D system might have two such plots, but as systems increase in dimensionality, visualizing them becomes increasingly difficult.
Euler Integration
Integrating differential equations
Given a function, e.g.
it is sometimes possible to algebraically identify the derivative: a function that describes how the original function’s slope changes
It is also sometimes possible to reverse this process; i.e. given Function A, which describes how Function B’s slope changes, identify an antiderivative that describes B.
Integrating differential equations
So, for example, given
We can find that functions of the form y = t² + C describe its antiderivatives.
The ‘+C’ is because adding any constant value to this equation does not change its slope at any point, and so for any constant C, the slope of the curve is described by y′ = 2t.
If we know the state of y for some value of t, then we reduce this set of functions to a single function, similar to how an initial condition specifies a trajectory in the discrete-time systems we’ve already looked at.
Integrating differential equations
In general, it is NOT possible to algebraically integrate systems of equations.
This means that generally, given a system of differential equations, the only way to see what they do is to use numerical methods.
The simplest technique for doing this is known as Euler integration. Let’s take a look at how it works…
Euler integration
Goal. Given a set of differential equations, and an initial condition, calculate the forward trajectory of some duration D.
Note the difference between the differential equation (above) and the difference equations (below) that we have been working with.
One defines “the next state” the other defines “how things are changing”.
Euler integration
The method essentially works by assuming that for very short passages of time, the derivative is unchanging;
i.e. that the way that the system is changing does not itself change (over short passages of time).
For continuous functions, the shorter the section of time that we are looking at, the better this assumption holds…
The slopes of the pink and green lines are different (long time scale), but reasonable approximations of the curve at shorter time scales.
Euler integration
So, if x0=-5, we can calculate the local derivative there…
dx/dt = 4-x = 9
We assume that this does not change for some period of time Δt=0.5, allowing us to estimate the state of the system that amount of time later.
xt = xt−Δt + (dx/dt) · Δt
xt = -5 + 9 · 0.5 = -0.5
Euler integration
I have deliberately picked a Δt=0.5 that is too large, so as to show how this method is an estimate of a trajectory, and often not a very accurate one. In general, its accuracy decreases as Δt increases.
Q. When is Δt too large?
A. It depends upon the system. Essentially the question is: at what temporal scale are the dynamics well approximated by the assumption just described? Trial and error is often used to find a good Δt.
A useful check: does decreasing Δt significantly change the simulated dynamics? If so, you are probably using too large a Δt.
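The assumption above can be sketched in a few lines of Python. The system dx/dt = 4 − x, the initial condition x0 = −5, and the step size Δt = 0.5 come from the worked example; the function name is mine:

```python
# Forward Euler integration: assume the derivative is constant
# over each short interval of length dt.

def euler(f, x0, dt, duration):
    """Return the times and states of an estimated forward trajectory."""
    ts, xs = [0.0], [x0]
    for _ in range(int(duration / dt)):
        x = xs[-1]
        xs.append(x + f(x) * dt)   # x_t = x_{t-dt} + (dx/dt) * dt
        ts.append(ts[-1] + dt)
    return ts, xs

# the system from the worked example: dx/dt = 4 - x, starting at x0 = -5
ts, xs = euler(lambda x: 4 - x, x0=-5.0, dt=0.5, duration=10.0)
# the first step reproduces the estimate above: -5 + 9 * 0.5 = -0.5
```

Re-running with a smaller dt and comparing the two trajectories is one way to apply the rule of thumb about choosing Δt.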
Simulating the Lotka-Volterra System
Let’s take a look at some code for this..
Specifically, we’ll look at a program I wrote to solve the Lotka-Volterra equations, which are used to model predator/prey interactions.
The prey (x) grow exponentially, but are killed by the predators.
The predators (y) die due to old age and/or random events, and reproduce at an exponential rate, but one that is limited by the amount of food (prey) available to them.
More assumptions here:
https://en.wikipedia.org/wiki/Lotka%E2%80%93Volterra_equations
Parameters
⍺: rate of prey growth
β: rate of prey death
ẟ: rate of predator growth
𝛾: rate of predator death
lotkavolterra.py
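The listing isn’t reproduced here, but a minimal Euler-integration sketch of the same model looks like this. The parameter values, initial populations, and step size are illustrative assumptions, not the ones from the lecture code:

```python
# Lotka-Volterra predator/prey model, integrated with the Euler method:
#   dx/dt = alpha*x - beta*x*y    (prey: exponential growth, eaten by predators)
#   dy/dt = delta*x*y - gamma*y   (predators: growth limited by prey, natural death)

def lotka_volterra(x0, y0, alpha, beta, delta, gamma, dt, steps):
    xs, ys = [x0], [y0]
    for _ in range(steps):
        x, y = xs[-1], ys[-1]
        xs.append(x + (alpha * x - beta * x * y) * dt)
        ys.append(y + (delta * x * y - gamma * y) * dt)
    return xs, ys

# illustrative parameters; both populations oscillate
xs, ys = lotka_volterra(x0=6.0, y0=3.0, alpha=1.1, beta=0.4,
                        delta=0.1, gamma=0.4, dt=0.01, steps=2000)
```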
How does such a model (not) relate to real-world systems?
Phase Portraits
Here is a phase portrait for a (frictionless) pendulum.
Vertical axis is velocity
Horizontal axis is position
We can see…
…back-and-forth periodic trajectories
…’loop-the-loop’ (all the way around) periodic trajectories
When we include friction, the (neverending) periodic trajectories disappear.
We instead find a transient toward a fixed point (where the pendulum is hanging straight down and not moving).
This fixed point is a stable equilibrium.
There is also an unstable equilibrium, where the pendulum is balanced perfectly upside down.
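These observations can be checked numerically. A minimal Euler-integration sketch, with units chosen so that g/L = 1; the damping coefficient b and the initial condition are illustrative assumptions:

```python
import math

# Pendulum with optional friction, integrated with the Euler method:
#   d(theta)/dt = v
#   dv/dt       = -sin(theta) - b*v    (b = 0: frictionless; b > 0: damped)

def pendulum(theta0, v0, b, dt=0.001, steps=20000):
    theta, v = theta0, v0
    for _ in range(steps):
        dv = -math.sin(theta) - b * v   # acceleration at the current state
        theta += v * dt
        v += dv * dt
    return theta, v

# with friction, the trajectory is a transient toward the stable
# fixed point (theta, v) = (0, 0): hanging straight down, not moving
theta, v = pendulum(theta0=1.0, v0=0.0, b=0.5)
```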
Fixed points
For each of these systems, x changes as a simple function of x…
Fixed points
In the discrete-time DS, fixed points exist where the LHS of the difference equation equals the RHS,
i.e. where xt = xt-1
For continuous-time DS, fixed points exist where the derivative is zero..
In the simple systems we’ve got on the right, identifying the roots of the derivative (where they equal zero) is trivial. But this is not the typical case, e.g.:
For each of these systems, x changes as a simple function of x…
Finding roots
For quadratic equations
the quadratic formula can be used to factor the equation and find its roots.
For cubic and quartic equations…
there also exist (more complicated) methods for algebraically finding roots.
For polynomials of degree greater than 4, there is no general algebraic method for identifying roots.
Abel’s impossibility theorem
And so we must use numerical (i.e. computational) methods for doing so!
Finding roots
Finding roots is a very common problem (not just in analyzing dynamical systems).
There are several numerical methods for doing so. We will look at two:
Bisection method
Newton’s method
Problem statement
Given a continuous, real-valued function
f : R → R, find x∗ such that f(x∗) = 0
Bisection Method
We define the function sign:
sign( f(x) ) = −1 if f(x) < 0
sign( f(x) ) = +1 if f(x) > 0
Initialise: Choose initial guesses a, b such that sign(f(a)) ≠ sign(f(b)).
Iterate until the absolute difference |a − b| ≈ 0:
c = (a + b) / 2
If sign(f(c)) = sign(f(a)) then a ← c, otherwise b ← c
bisection.py
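bisection.py isn’t reproduced here; below is a minimal sketch of the loop just described. The example function x² − 2 is an illustrative choice (its positive root is √2):

```python
# Bisection: repeatedly halve an interval [a, b] whose endpoints have
# opposite signs, keeping the half that still brackets the root.

def bisect(f, a, b, tol=1e-10):
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while abs(b - a) > tol:
        c = (a + b) / 2
        if f(c) * f(a) > 0:   # same sign at c and a: root lies in [c, b]
            a = c
        else:
            b = c
    return (a + b) / 2

root = bisect(lambda x: x**2 - 2, 0.0, 2.0)   # approximates sqrt(2)
```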
Considerations
How do you pick the initial guesses (a and b)?
How do you know you’ve found all of the roots?
Once you’ve found a root, how do you look for others?
How efficient is this algorithm? How many iterations does it take to converge?
Newton’s Method
The bisection method always converges.
The error is (at least) halved every step, and thus its convergence is described as linear.
There exist faster methods, such as Newton’s method, which uses a Taylor series to approximate the function.
It generally converges faster, but under certain conditions it can fail to converge even when a root exists.
A Taylor series is a way to represent a function as an infinite sum of terms.
This says that the value of a function at a point x that is close to a is equal to the above sum, which can be written in this more compact form
Let’s break this down.
If we look just at the first term this says that f(x) ≅ f(a), i.e. “At point x (which is close to a) f(x) is approximately the same as f(a)”
So if we look, e.g. at the plot on the right, and think of a=0 and x = 0.01, then this is a reasonable approximation, but gets worse quite quickly.
If we include the 2nd term, our approximation improves. Conceptually it takes into account the first derivative of the system (i.e. the slope of the curve) at a, and uses this to improve the approximation.
Now our approximating curve (blue) stays closer to the actual function (black) for longer.
It is of course an imperfect approximation, because the slope of the actual function changes over time.
This changing slope is captured by the 2nd derivative, which is included in the 3rd term of the Taylor series…
The more terms you include, the more accurate an approximation it is.
The returns are diminishing, but the infinite sum is equivalent to the original function.
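A quick numerical check of this: approximate exp(x) at x = 1 by truncating its Taylor series around a = 0 after increasingly many terms (the function and the point are illustrative choices):

```python
import math

# Taylor series of exp(x) around a = 0: sum over n of x**n / n!
# Truncating after n_terms terms gives an approximation that improves
# (with diminishing returns) as more terms are included.

def taylor_exp(x, n_terms):
    return sum(x**n / math.factorial(n) for n in range(n_terms))

errors = [abs(taylor_exp(1.0, n) - math.e) for n in (1, 2, 4, 8)]
# each error is smaller than the one before it
```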
In any case, Newton’s method considers only the first two terms of the Taylor series, and uses them to narrow in on a root of the function. Let’s look at how it works.
Newton’s Method
Initialise: Choose x0 as an initial guess.
Iterate until the absolute difference |xi+1 − xi| ≈ 0:
xi+1 = xi − f(xi) / f′(xi)
What is going on here? Given a current guess (xi) we are approximating the function as a line (using the first two terms of the Taylor series); looking for the root of that approximation, and using that as our next guess.
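A minimal sketch of this iteration (the example function x² − 2 and its derivative 2x are illustrative choices, not the lecture’s code):

```python
# Newton's method: at each guess, approximate f by the line given by
# the first two Taylor terms, and jump to that line's root.

def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x = x - step              # root of the local linear approximation
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton's method did not converge")

root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)   # approximates sqrt(2)
```

Note that each iteration divides by f′(xi): a guess where the derivative is near zero sends the next guess far away, which is one way the method fails to converge.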
Considerations
This method requires an algebraic expression of the first derivative of the function. (It is also possible to estimate this numerically, but it decreases the efficacy of the method).
This method is NOT guaranteed to find a root.
It often converges faster, but not always (and, again, it sometimes does not converge at all!)
e.g. the function,
with an initial guess close to 0 or 1 enters a never-ending cycle between these two guesses.