
Examples of Dynamical Systems Modelling
Dr Computer Science University of Auckland

The Intelligence of Natural Systems


Diverse natural systems demonstrate remarkable forms of ‘intelligence’.
Bacteria, elephants, humans, ecosystems, immune systems, insect colonies, etc.,
All of these are capable of adapting to changes in their environment, and doing things in such a way as to improve their own chances of survival.
How do they do it?
In many cases, we don’t know.

How do the 302 neurons in Caenorhabditis elegans interact with its body and environment to produce functional movement?

Computational Modelling
Dynamical models are a useful tool in trying to understand these systems.
Empirical work is essential, but also limited by the complexity of natural systems.
Computational models allow us to build systems that are
– simpler (and thus perhaps more understandable)
– easier to manipulate
– easier to measure

What is the question?
This method can be used to ask a variety of scientific research questions.
e.g. How do the 302 neurons in C. elegans interact with its body and environment to produce functional movement? (Izquierdo & Beer, 2015)

Brain, Body & World
One hypothesis that has gained traction over the past 20+ years is that natural forms of intelligence are not the result of a computer-like brain, but instead the result of feedback between brain, body and world.
– the brain is important, but also..
– the environment is important
– the body is important

Look Ma…
… no brain!

Artificial Evolution
Using the techniques we have learned about, we can design equations that describe the movement of a body in an environment, controlled by a dynamical ‘brain’.

Evolutionary Optimization
Often when developing these models, there are parts that we do not understand or have data about.
We can use a genetic algorithm to tune parameters in the equations to optimize the system’s performance in some way.
The assumption is that natural evolution will have optimized these things in a similar way.
For example, we might optimize a system for its ability to orient and move quickly toward food in its environment.

Signal delay
One question that I had was…
How might the delay in signals in the nervous system be useful?
It takes a finite amount of time for a signal to propagate down a neuron, and thus there is always a delay between the start of its firing and the signal arriving at the end of the axon. How might that delay be used by an organism as part of its adaptive, ‘intelligent’ behaviour?

Background
Conduction velocity in nervous tissues varies from 0.5 m/s to 120 m/s
So some signals take as much as ~2 seconds to travel from toe to head!
The primary sources of variation are the degree of myelination and the axon diameter (bigger axons conduct faster).
A common assumption is that evolution has acted to maximize conduction velocity. (e.g. Swadlow and Waxman (2012); and Chklovskii et al. (2002))
“FASTER = BETTER!”
Neuron Conduction Velocity:
0.5 – 120 m/s
Electrical Conduction Velocity:
120,000,000 m/s

Engineering
A common assumption is that evolution has acted to maximize conduction velocity. (e.g. Swadlow and Waxman (2012); and Chklovskii et al. (2002))
“FASTER = BETTER!”
This is also a common approach in engineering… let’s make robots that move quickly and precisely to exactly where we tell them to (as anything else is difficult to predict or control).

Evidence of functional lag
“In Loligo there is a graded series of fibres with the larger in the longer nerves, and this is apparently a further device for ensuring more nearly simultaneous contraction.”
Pumphrey and Young, 1938

Evidence of functional lag ??
Conduction velocity
1. varies within the central and peripheral nervous systems
2. changes over multiple timescales
– bursts increase conduction velocity for seconds afterwards
– conduction velocities have been shown to change slowly (1–2% per day) in slowly conducting callosal axons over periods of months

Ordinary Differential Equations (ODE)
dy/dt = f(y(t))
y(t) is a finite-dimensional state vector representing the state of the system at time t.
f is the evolution function that describes how the system changes given its current state.
Delay Differential Equations (DDE)
dy/dt = f(y(t), y(t−𝜏))
y(t−𝜏) represents the state of the system at time t−𝜏.

The brain, body & world
A robot in a 1D world with periodic boundaries.
The robot has a one-neuron brain (more in a moment)
Motor: It moves left or right as a sigmoidal function of the state of that neuron.
Sensor: The robot senses its “altitude”, which is a function of its position.
x = the position of the robot
y = the state of the neuron
The robot is to navigate the environment, find the narrower of the two peaks and stay there.
In each trial, the peak’s relative position and the starting position of the robot change.

Details of the robot’s “brain”
The robot’s brain consists of a single neuron. The neuron has a recurrent delayed connection.
The first term prevents the system from exploding.
By giving the robot such an incredibly simple ‘brain’, I am essentially forcing it to use its delayed connection to solve the problem.
[Figure: a single neuron with sensor input, a lagged recurrent connection, and motor output y]
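One coupled Euler step of this brain–body–world loop might look as follows. The landscape, the parameter names (alpha, beta), and the exact form of the neuron equation are illustrative assumptions, not the lecture's exact equations; the leak term -y is the "first term" that prevents the state from exploding:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def altitude(x):
    # Illustrative 1D landscape on a periodic [0, 1) world:
    # a narrow target peak at 0.25 and a wide distractor peak at 0.75.
    narrow = math.exp(-((x - 0.25) ** 2) / 0.001)
    wide = math.exp(-((x - 0.75) ** 2) / 0.02)
    return narrow + wide

def step(x, y_history, alpha, beta, lag_steps, dt=0.01):
    """One Euler step of the coupled robot + one-neuron brain.

    dy/dt = -y + alpha * sensor + beta * y(t - tau)   (brain)
    dx/dt = 2 * sigmoid(y) - 1                        (motor: left/right)
    """
    y = y_history[-1]
    y_delayed = y_history[-(lag_steps + 1)]   # lagged recurrent input
    sensor = altitude(x)
    y_new = y + dt * (-y + alpha * sensor + beta * y_delayed)
    x_new = (x + dt * (2.0 * sigmoid(y) - 1.0)) % 1.0  # periodic boundary
    y_history.append(y_new)
    return x_new
```

The motor maps the neuron state through a sigmoid onto a velocity in (−1, 1), so the robot drifts left or right depending on y, and the modulo implements the periodic world.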

The genetic algorithm optimizes the parameters specified by the Greek letters:
– how much the sensory input excites the neuron
– how much the old (delayed) state of the neuron excites the neuron
– the amount of delay in the connection
– the general speed at which the neuron’s state changes
Fitness is the maximum distance of the robot from the narrow peak during the last 10 time units of a ~50-time-unit trial. (The lower the distance, the better the behaviour.)
Each trial, the initial position of the robot and the position of the distractor peak is different.

We evaluate fitness in different conditions. Specifically, we
– put the distractor peak in different locations relative to the target narrow peak, and
– start the robot in different places.
This way, the genetic algorithm cannot ‘cheat’, but has to really solve the problem of finding the small peak.
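A minimal genetic-algorithm loop of the kind described, averaging fitness over several trial conditions so the optimizer cannot 'cheat' on one layout. The genome layout, mutation scale, and the placeholder fitness function are assumptions for illustration (real code would simulate the robot for each trial):

```python
import random

def evaluate(genome):
    """Stub fitness: lower is better, as in the text. The real fitness
    would be the robot's distance from the narrow peak at the end of a
    simulated trial; here a simple stand-in is scored across several
    trial conditions and summed."""
    alpha, beta, tau, speed = genome  # tau and speed unused in this stub
    conditions = [0.1, 0.4, 0.7]      # different start/distractor setups
    return sum(abs((alpha * c + beta) % 1.0 - 0.25) for c in conditions)

def evolve(pop_size=20, n_gens=50, mut_sd=0.1, rng=random.Random(0)):
    pop = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(n_gens):
        pop.sort(key=evaluate)              # lower fitness = better
        parents = pop[: pop_size // 2]      # truncation selection
        children = [
            [g + rng.gauss(0, mut_sd) for g in rng.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children            # parents survive (elitism)
    return min(pop, key=evaluate)

best = evolve()
```

Keeping the unmutated parents in the population (elitism) guarantees the best fitness never gets worse from one generation to the next.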

Is it possible?
Is it possible for such a simple system to solve this task?
This plot shows how well the best evolved agent performs for diverse conditions.
x(t=0) is the initial starting position of the robot
p_w is the position of the distractor (wide) stimulus peak relative to the target stimulus peak
The darker the shading, the better the robot performed. The dashes indicate which conditions were tested during the optimization process.

Bifurcation diagram
Black: stable equilibria
Magenta: unstable equilibria
Blue: stable periodic orbit (max x value)
Red: unstable periodic orbit (max x value)
The equilibria are almost always unstable, so evolution will generally find oscillating solutions.
Blue curves on left correspond to typical successful solutions.
Blue curves up top and down low are typical failure solutions.

Observations
Chaotic solutions were found that were both “successful” and “failures” (in terms of evolved goal).

Conclusions
The single delayed-recurrently connected neuron was capable of solving this minimal task. We could identify the periodic and chaotic attractors that were associated with good and bad behaviours of this system.
But..constructing a conceptual story for how this system works remains difficult (impossible?).
We can say “Here is an attractor associated with successful behaviour,” etc., but how could we explain this to someone who is not an expert in dynamical systems?
The best I can do is something hand-wavy about resonance. The “brain” produces motor activity that produces sensor activity that causes the brain to produce similar motor activity that will produce similar sensor activity, etc. etc. etc. all in a way that results in success in some cases and failure in others.
Q: Can we do better? A: I don’t know.

This is the result of exploring a ‘minimal’ model. It is the simplest system I could conceive of that uses delay to solve a task, yet it remains difficult to reduce to some underlying mechanism.
If I were studying this dynamic in (much more complicated) natural systems, I might think that I was missing some underlying mechanism to explain how the agent finds the narrow peak.
But an advantage of computational models is you know exactly what you are working with. There is nothing in the model but what we put in it, yet it remains hard to simplify. What does this say about natural systems and our ability to study and understand them?

Design by evolution vs.
Design by human

Evolving Technology
Genetic algorithms (and other automated optimization techniques) combined with computational models are allowing us to explore technology that is different from classic by-human design.
– Artificial living tech.
– Satellite antenna
Humans need to understand the technology they are designing; (artificial) evolutionary processes do not, allowing for different designs.
https://www.livescience.com/frogbots-living-robots.html

Lots of systems are hard to understand well enough to predict and use as a medium in an engineering context:
– soft-bodied robots
– asynchronous communication systems with delay
– limited sensors
– limited communication between robots
Other media that can be engineered by humans might also have ‘solutions’ that are not findable by human engineering, but only by artificial evolution:
– unconventional electronics (e.g. seemingly nonsensical configurations of FPGA circuits)

Neuroscience
Evolved controllers for simulated C. elegans are being used to develop hypotheses for how their nervous tissues interact.

How can models be used to understand nature?
Computational modelling allows us to create (simulate) just about any system we can imagine.
This freedom brings challenges. What do you choose to model? How can you use a model to learn about reality?
Let’s look at some examples of computational models being used in recent research.
