
Computational Neuroscience – 2: the bucket equation

Introduction

These notes are about first order ordinary differential equations. These equations appear
frequently in neuroscience so it is useful to consider them now before going any further.

Buckets of water

In the simplest model of a neuron, the voltage dynamics are similar to the dynamics of a bucket
with a leak, and the class of equations that applies in this case will also be applied to synapses,
for example.

[Diagram: a bucket with inflow i(t), water height h(t) and leak l(t).]

Consider a bucket with straight sides which is filled to a height h with water. Imagine the
water leaks out of a hole in the bottom. The rate the water leaks out depends on h: the larger
h is, the larger the pressure at the bottom, and hence the faster the water pours out. In other
words

l(t) ∝ h(t) (1)

or
l(t) = Gh(t) (2)

where G is a constant which will depend on the size of the hole and on messier details like the
viscosity of the water. Of course, we are also ignoring complications such as turbulence, but
since we are interested in the equation rather than the actual amount of water in a bucket,
this is fine. Imagine water also pours in the top at a rate i(t). This means the total
rate of change of the amount of water is i(t) − Gh(t).

Now, h(t) is the height of the water, not the volume: the volume is Ch(t), where C is the
cross-sectional area of the bucket. The rate of change of the volume is therefore

dCh(t)/dt = i(t) − Gh(t)   (3)

or
dh/dt = (1/C)(i − Gh)   (4)


Let’s solve this equation for constant i before going on to look at neurons. Probably the
best way to do this is with an integrating factor. Let τ = C/G and ı̃ = i/G, so that

τ dh/dt + h = ı̃   (5)

then we multiply across by e^{t/τ} and divide by τ

e^{t/τ} dh/dt + (1/τ) e^{t/τ} h = ı̃ e^{t/τ}/τ   (6)

Now we can rewrite the left hand side using the product rule

d(uv)/dx = u dv/dx + v du/dx   (7)

to give
d/dt (e^{t/τ} h) = ı̃ e^{t/τ}/τ   (8)

Now integrating both sides gives
e^{t/τ} h = ı̃ e^{t/τ} + A   (9)

where A is an integration constant. This gives

h = A e^{−t/τ} + ı̃   (10)

and putting t = 0 shows A = h(0) − ı̃ so

h(t) = [h(0) − ı̃] e^{−t/τ} + ı̃   (11)

so, basically, the value of h decays exponentially until it equilibrates with ı̃.
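As a quick sanity check, the same solution can be recovered with a computer algebra system. Here is a minimal Python sketch using sympy; the variable names are ours, chosen to mirror the notation above.

import sympy as sp

t, tau, itilde, h0 = sp.symbols('t tau itilde h0', positive=True)
h = sp.Function('h')

# the bucket equation in the form of Eq. (5): tau dh/dt + h = itilde
ode = sp.Eq(tau * h(t).diff(t) + h(t), itilde)
print(sp.dsolve(ode, h(t), ics={h(0): h0}))
# prints Eq(h(t), itilde + (h0 - itilde)*exp(-t/tau)), matching Eq. (11)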
These dynamics make good intuitive sense: the more water there is in the bucket, the higher
the pressure will be at the leak and the quicker the water will pour out. If there is just the
right amount of water, the rate the water pours out of the leak will precisely match the rate it
pours in; this is the equilibrium. If there is more water than required for equilibrium, it will
pour out faster than the flow coming in; if there is less, it will pour out slower. Either way, as
time passes the height of the water will reach the equilibrium. The plot in Fig. 1 illustrates
this.

Variable input

We have only discussed constant inputs. The variable input case, where i depends on time, is
harder; although it can sometimes be solved analytically, it is often easier just to compute the
solution numerically. We will look briefly at how to do this, but first note that the effect of
variable input is that the solution kind of chases the input with a timescale set by τ. For very
small τ it chases it quickly, so it stays close to the input, but for large τ it lags behind and
smooths it out. This is sometimes described by saying that it filters the input. There is an
illustration in Fig. 2.
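Although these notes do not derive it, the same integrating factor also gives a closed form for time-dependent input, and it is worth recording because it makes the filtering language precise:

h(t) = h(0) e^{−t/τ} + (1/τ) ∫_0^t e^{−(t−t′)/τ} ı̃(t′) dt′

so h(t) is a weighted average of the input’s recent history, with an exponential window whose width is set by τ.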


[Plot: h(t) against t for h(0) = 2, h(0) = 3 and h(0) = 0, each relaxing towards 1.]

Figure 1: Exponential relaxation. The dynamics described by the ‘bucket equation’ are very
common. Here h = [h(0) − ı̃] exp(−t/τ) + ı̃ is plotted with ı̃ = 1, τ = 1 and three
different values of h(0). h(t) relaxes towards the equilibrium value ı̃ = 1; the closer
it gets, the slower it approaches.

Numerical solutions

Consider the problem of solving a general first order differential equation

df(t)/dt = F(f, t)   (12)

This includes our bucket case if we replace f with h and F(f, t) with (i − Gh)/C. Now consider
the situation where we want to solve this but can’t do it analytically, as will be the case for
many choices of the time-dependent input i(t) or, indeed, if i(t) itself is only known numerically.

The simplest standard numerical approach is Euler’s method. Say you know the value of
f(t) and want to approximate f(t + δt) where δt is some small increment in time. Now, we
know from the Taylor expansion that

f(t + δt) = f(t) + δt df(t)/dt + (1/2) δt^2 d^2f(t)/dt^2 + O(δt^3)   (13)

Now we know what df(t)/dt is: it is F(f, t) from the equation, so

f(t + δt) = f(t) + δt F(f, t) + O(δt^2)   (14)

so approximating f(t + δt) with f(t) + δt F(f, t) is accurate to first order in the time step δt.
This is the Euler approximation. More formally, if we have an initial condition f(t0) and write

fn = f(t0 + nδt) (15)

tn = t0 + nδt (16)


[Plot: h(t) against t for τ = 0.25, τ = 2 and τ = 4, together with the input sin t.]

Figure 2: Variable input. Here the input is a sine wave ı̃ = sin t and the equation is evolved
from the same initial condition h(0) for three different τ values. For τ = 0.25 we see that h(t)
closely matches the input, whereas for larger τ it is smoother and lags behind.


then the Euler approximation is

fn+1 = fn + δt F(fn, tn)   (17)

Thus, in the case we are interested in here

hn+1 = hn + [i(tn) − Ghn]δt/C   (18)
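To make this concrete, here is a minimal Python sketch of the update (18), using the sine-wave input of Fig. 2; the parameter values are illustrative choices, not taken from the notes.

import math

def euler_bucket(h0, G, C, dt, t_max):
    """Integrate dh/dt = (i(t) - G*h)/C with the Euler update (18)."""
    h, t = h0, 0.0
    trace = [(t, h)]
    while t < t_max:
        h = h + dt * (math.sin(t) - G * h) / C  # here i(t) = sin(t)
        t += dt
        trace.append((t, h))
    return trace

# tau = C/G = 2, corresponding to the middle curve of Fig. 2
trace = euler_bucket(h0=0.0, G=1.0, C=2.0, dt=0.01, t_max=20.0)

Halving δt should roughly halve the error; checking that it does is a quick way to confirm that a first order scheme has been implemented correctly.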

Now, this approximation is often good enough for neuroscience, especially since, as often
happens, the errors are sometimes positive and sometimes negative. However, it can also happen
that they add up, and the O(δt^2) error can be a problem. The way around this is to use Runge-
Kutta. Roughly, for Runge-Kutta a number of different Taylor expansions are considered,
using not just the expansion with δt but also δt/2, and combining them in such a way as to get
an approximation that is accurate to a higher order in δt. In other words, it combines different
quantities so that the coefficients of δt^2 and so on cancel. The Euler method can be thought
of as a first order Runge-Kutta method; the method that is used most often is fourth order
Runge-Kutta, for which the error is O(δt^5). Explicitly, see for example [1], first define k1 through
to k4:

k1 = δt F(fn, tn)   (19)

k2 = δt F(fn + k1/2, tn + δt/2)   (20)

k3 = δt F(fn + k2/2, tn + δt/2)   (21)

k4 = δt F(fn + k3, tn + δt)   (22)

and then

fn+1 = fn + (1/6)k1 + (1/3)k2 + (1/3)k3 + (1/6)k4   (23)
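In code, one step of the scheme is a direct transcription of Eqs. (19)–(23); here is a minimal Python sketch, with a function name of our own choosing.

def rk4_step(F, fn, tn, dt):
    """One fourth order Runge-Kutta step for df/dt = F(f, t)."""
    k1 = dt * F(fn, tn)
    k2 = dt * F(fn + k1 / 2, tn + dt / 2)
    k3 = dt * F(fn + k2 / 2, tn + dt / 2)
    k4 = dt * F(fn + k3, tn + dt)
    return fn + k1 / 6 + k2 / 3 + k3 / 3 + k4 / 6

# for the bucket equation: F = lambda h, t: (i(t) - G * h) / C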

In fact, in neuroscience, we sometimes have to deal with what are called stiff differential
equations; these are equations where change is sometimes very rapid and sometimes much
slower. Obviously the bucket equations discussed so far don’t have this property, in fact they
only have one timescale, τ, but later we will examine the Hodgkin-Huxley equations, which do.
We will not consider the problem of stiff equations in the course, but there are methods, such
as the backwards Euler method, that deal well with stiff equations, and the standard neuron
simulation packages, like NEURON and GENESIS, and BRIAN with Python, are optimized
for the particular properties of the differential equations governing neuronal dynamics.
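As an aside, for the bucket equation, which is linear, the backwards Euler update hn+1 = hn + δt[i(tn+1) − Ghn+1]/C can be rearranged to give hn+1 in closed form, so the implicitness costs nothing extra. A minimal Python sketch under that assumption:

def backward_euler_step(hn, t_next, dt, i, G, C):
    """One backwards Euler step for dh/dt = (i(t) - G*h)/C.

    The implicit update h_new = hn + dt*(i(t_next) - G*h_new)/C is
    linear in h_new, so it can be solved exactly by rearranging.
    """
    return (hn + dt * i(t_next) / C) / (1 + dt * G / C)

Unlike the forward update (18), this one remains stable however large δt is compared to τ = C/G, which is the property that matters for stiff systems.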

References

[1] Press WH, Teukolsky SA, Vetterling WT and Flannery BP (1988). Numerical Recipes in C.
Cambridge University Press.
