SWEN90004
Modelling Complex Software Systems Lecture Cx.07
Agent-Based Models I
Artem Polyvyanyy, Nic Geard
artem.polyvyanyy@unimelb.edu.au;
nicholas.geard@unimelb.edu.au
Semester 1, 2021
SLIDE 1
Introduction to agent-based models
SLIDE 2
What is an “agent-based” model?
Like cellular automata (CA), agent-based models (ABMs) are an approach to modelling complex systems that focuses on the components of the system and the interactions between them.
An ABM typically has three elements:
agents
environment (including other agents)
interactions
ABMs differ from CA in that they allow more flexibility in how they represent agent behaviour and interaction structure.
SLIDE 3
“Agent-based modelling is a computational method that enables a researcher to create, analyze, and experiment with models composed of agents that interact within an environment.” (Gilbert, 2008) The analytical unit in ABMs is the individual “agent”, rather than the aggregate (system- or macro- level) variables. This means that additional modelling insights are required when designing models: now, we are simulating the behaviour and interactions of autonomous entities (agents) with each other and with their local environment, in order to predict higher level emergent patterns.
A goal of designing ABMs is to achieve the same “aggregate dynamics” as are observed in the real world, and potentially in more traditional modelling approaches. However, we now also want to gain insight into how these aggregate dynamics emerge from interactions between the individual agents.
An example: flocking behaviour
Flocking behaviour, as exhibited by large groups of birds and fish, was introduced earlier as a canonical example of a complex system.
Figure: A murmuration of starlings
Craig Reynolds, a computer graphics and animation expert who has worked for Electronic Arts, DreamWorks Animation and Sony, developed an early (1986) agent-based model of flocking behaviour.
SLIDE 4
‘Boids’ model of flocking behaviour
The agent (Reynolds used the term boid)
Agent behaviours include:
forward motion
turning left or right
(perhaps) acceleration and deceleration
SLIDE 5
In the classification scheme introduced later in this lecture, boids are reflexive agents, with some internal state variables (their current speed and heading).
Their future behaviour is based on their current state and a reflexive reaction to their current sensory inputs (percepts about the presence, location and heading of nearby boids).
A boid has no long-term goals that it is seeking to satisfy; it has no notion of utility that it is attempting to maximise; it doesn’t learn from its past experience; it just reacts. In a sense, it is a minimal agent.
The boid’s neighbourhood function
http://www.red3d.com/cwr/steer/gdc99/
The environment consists of the other boids in the flock.
The neighbourhood function is the sensory range of the boid (how far away it can perceive other agents).
Boids ‘interact’ according to the following three rules. . .
SLIDE 6
Rules for flocking behaviour
1. Separation
http://www.red3d.com/cwr/steer/gdc99/
Boids will steer to avoid crowding nearby boids—collision avoidance
SLIDE 7
Rules for flocking behaviour
2. Cohesion
http://www.red3d.com/cwr/steer/gdc99/
Boids will steer towards the average location of nearby boids—safety in numbers
SLIDE 8
Rules for flocking behaviour
3. Alignment
Boids will steer toward the average bearing of nearby boids
http://www.red3d.com/cwr/steer/gdc99/
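The three rules above can be combined into a single steering update. Below is a minimal Python sketch of that combination (not Reynolds' original implementation); the neighbourhood radius, rule weights and vector arithmetic are illustrative assumptions.

```python
import math
import random

class Boid:
    def __init__(self, x, y, vx, vy):
        self.x, self.y = x, y      # position
        self.vx, self.vy = vx, vy  # velocity (speed and heading)

def neighbours(boid, flock, radius=10.0):
    """Boids within this boid's sensory range (its neighbourhood)."""
    return [b for b in flock
            if b is not boid and math.hypot(b.x - boid.x, b.y - boid.y) < radius]

def step(boid, flock, w_sep=0.05, w_coh=0.01, w_ali=0.05):
    near = neighbours(boid, flock)
    if near:
        # 1. Separation: steer away from the average offset to nearby boids
        sep_x = sum(boid.x - b.x for b in near) / len(near)
        sep_y = sum(boid.y - b.y for b in near) / len(near)
        # 2. Cohesion: steer towards the average location of nearby boids
        coh_x = sum(b.x for b in near) / len(near) - boid.x
        coh_y = sum(b.y for b in near) / len(near) - boid.y
        # 3. Alignment: steer towards the average heading of nearby boids
        ali_x = sum(b.vx for b in near) / len(near) - boid.vx
        ali_y = sum(b.vy for b in near) / len(near) - boid.vy
        boid.vx += w_sep * sep_x + w_coh * coh_x + w_ali * ali_x
        boid.vy += w_sep * sep_y + w_coh * coh_y + w_ali * ali_y
    boid.x += boid.vx
    boid.y += boid.vy

# Example: update a small flock for a number of time steps
flock = [Boid(random.uniform(0, 50), random.uniform(0, 50),
              random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]
for _ in range(100):
    for b in flock:
        step(b, flock)
```

Each tick, the three steering contributions are computed over a boid's neighbours and added to its velocity; flock-level behaviour emerges from repeating this purely local update. (A fuller model would also clamp speed and turning rate.)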
SLIDE 9
Characteristics of agents
SLIDE 10
Essential characteristics: self-contained
An agent is a modular component of a system; it has a boundary and can be clearly distinguished from, and recognised by, other agents.
Flocking model: the boids are clearly distinguishable and ‘recognised’ (though not uniquely) by other boids.
SLIDE 11
Essential characteristics: autonomous
An agent can function independently in its environment, and in its interactions with other agents (within the scope of the defined model).
Flocking model: the behaviour of each boid is entirely defined by the information it obtains about its local environment
SLIDE 12
Essential characteristics: dynamic state
An agent has attributes, or variables, that can change over time. An agent’s current state is what determines its future actions and behaviour.
Flocking model: a boid’s state consists of its current heading and speed. This state determines the motion of the boid, and is modified on the basis of information it receives about its local environment.
SLIDE 13
Essential characteristics: social
An agent has dynamic interactions with other agents that influence their behaviour.
Flocking model: boids ‘interact’ by perceiving and reacting to the location and behaviour of other boids.
SLIDE 14
Other characteristics: adaptive
An agent may have the ability to learn and adapt its behaviour on the basis of its past experiences.
For example, imagine if prey boids in a predator-prey model observed that predators more often attacked boids on their right than their left (because of the location of their sensors/eyes); they might be able to ‘learn’ to move to the left of a predator and hence escape detection.
SLIDE 15
eg, imagine if the prey boids in a predator-prey model moved at different speeds… Those that moved faster would live longer and reproduce more often. If the children inherited their parents’ speed, then over time the prey population would evolve to move faster.
Other characteristics: goal-directed
An agent may have goals that it is attempting to achieve via its behaviours.
For example, imagine that, in addition to avoiding predators, prey boids also have the goal of collecting materials to build a nest. . . This would be another factor influencing their behaviour.
SLIDE 16
Other characteristics: heterogeneous
A system may comprise agents of different types: these differences may be by design (eg, predators and prey), or a result of an agent’s past history (eg, the ‘energy level’ of a predator, based on whether it has eaten recently).
More complex ABMs might include a wide array of different types of agents; for example, models of land use may include residents, planners, infrastructure providers, businesses, developers and lobbyists, etc.
SLIDE 17
Environment and interactions
SLIDE 18
Environment
Agents monitor and react to their environment—the physical or virtual space in which the agent ‘functions’.
The one- and two-dimensional grids of CA were very simple environments.
Environments may be static (unchanging over time), or they may change as a result of agent behaviour, or they may be independently dynamic.
Real environments are typically dynamic (and often stochastic). Therefore we may not be able to foresee all possible ‘states’ of the environment and how agents will respond to them.
SLIDE 19
Richer environments may:
• be continuous, in two or three dimensions
• involve multiple layers of detail; eg, in a model of agricultural land use, layers may be used to describe land attributes (soil composition), transportation networks (road and rail), land value and property ownership, etc.
• involve complex feedback loops; eg, incorporating atmospheric and climatic models that take agent behaviours as inputs.
Interactions
A defining characteristic of complex systems is local interactions between agents, as defined by the agent neighbourhood.
Depending on the structure of the system, the composition of an agent’s neighbourhood may be dynamic (ie, change over time as the agent moves through its environment).
Neighbourhoods, and hence patterns of interaction, are not always defined in spatial terms—they may also be networks with a particular topology (more on this next week).
SLIDE 20
Interactions may be direct or indirect (eg, mediated by the environment). For example, two termites building a nest may communicate directly with one another via physical or chemical signals, or they may affect their environment by placing and removing building materials, in a way that influences the behaviour of other termites.
Example: Foraging in ant colony
Ant colonies are able to achieve remarkable feats of organisation—constructing elaborate nests and efficiently foraging for food—“without chief, overseer or ruler”.
Work by evolutionary biologist Edward O. Wilson in the 1960s and 1970s led to new understanding of how ant societies operate.
Individual ants perform a limited range of tasks—picking up and dropping food and building materials—based on relatively simple decision rules.
The inputs to these decision rules are chemical signals (pheromones). These signals can come from both:
direct interactions with nearby ants; and
stigmergic interactions via pheromones deposited by ants in the environment
SLIDE 21
Ant colony foraging
Dorigo M, Stützle T (2004) Ant Colony Optimization, MIT Press
SLIDE 22
This figure shows a classic experiment conducted to demonstrate how ant colonies forage in an efficient fashion, without central coordination.
The experimental set-up is as follows: a food source is placed opposite the nest, separated by a bridge with equal-length branches. Initially, ants choose either branch at random. Ants deposit pheromone as they move. If, by random chance, a few more ants select one branch, a greater amount of pheromone will accumulate on that branch, and future ants will therefore be more likely to choose it.
If the two branches are of unequal length, the shorter branch will accumulate more pheromone because the journey is shorter, and the ants deposit pheromone on both their outward and return journeys. The same amplification of initial fluctuations therefore occurs (positive feedback).
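A minimal Python sketch of this amplification for the equal-branch case, assuming ants choose a branch with a probability that increases nonlinearly with the pheromone already on it (a Deneubourg-style choice rule; the parameter values k and h below are illustrative):

```python
import random

def double_bridge(n_ants=1000, k=20.0, h=2.0, seed=None):
    """Equal-branch double bridge: each ant chooses a branch with probability
    that rises nonlinearly with the pheromone on it, then deposits one unit
    of pheromone on the branch it used."""
    rng = random.Random(seed)
    pheromone = [0.0, 0.0]  # both branches start equally (un)attractive
    for _ in range(n_ants):
        a = (k + pheromone[0]) ** h
        b = (k + pheromone[1]) ** h
        choice = 0 if rng.random() < a / (a + b) else 1
        pheromone[choice] += 1.0  # deposit pheromone on the chosen branch
    return pheromone

print(double_bridge(seed=1))  # one branch typically accumulates most of the pheromone
```

Because the exponent h is greater than 1, small early differences in pheromone are amplified, and most later ants end up using the same branch.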
This behaviour is the basis of a powerful class of nature-inspired optimisation algorithms known as ant colony optimisation.
NetLogo ‘Ants’ model
Ant rules
1. If an ant is not carrying food:
move randomly around its environment
if it encounters food pheromones, move in the direction of the strongest signal
2. If an ant encounters food: pick it up
3. If an ant is carrying food:
follow the nest pheromone gradient back toward the nest
deposit food pheromones into the environment while moving
Environment
a nest pheromone gradient diffuses out from the nest
food pheromones evaporate over time
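The following is a runnable Python paraphrase of these rules, not the NetLogo code itself; the grid size, evaporation rate, and the use of distance-to-nest as a stand-in for the nest pheromone gradient are all illustrative assumptions.

```python
import random

SIZE = 10
NEST = (0, 0)

def neighbours(pos):
    x, y = pos
    return [(nx, ny) for nx, ny in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
            if 0 <= nx < SIZE and 0 <= ny < SIZE]

class World:
    def __init__(self):
        self.food = {(SIZE - 1, SIZE - 1): 50}  # a single food pile in the far corner
        self.food_pheromone = {}                # cell -> deposited food pheromone
        self.delivered = 0

    def nest_gradient(self, pos):
        # stand-in for the nest pheromone: strongest at the nest, falling off with distance
        return -(abs(pos[0] - NEST[0]) + abs(pos[1] - NEST[1]))

    def evaporate(self, rate=0.05):
        for cell in self.food_pheromone:
            self.food_pheromone[cell] *= (1 - rate)

class Ant:
    def __init__(self, rng):
        self.pos = NEST
        self.carrying = False
        self.rng = rng

    def step(self, world):
        if not self.carrying:
            if world.food.get(self.pos, 0) > 0:                  # rule 2: pick up food
                world.food[self.pos] -= 1
                self.carrying = True
            else:                                                # rule 1: follow scent or wander
                options = neighbours(self.pos)
                scents = [world.food_pheromone.get(c, 0.0) for c in options]
                if max(scents) > 0.01:
                    self.pos = options[scents.index(max(scents))]
                else:
                    self.pos = self.rng.choice(options)
        else:                                                    # rule 3: mark trail, head home
            world.food_pheromone[self.pos] = world.food_pheromone.get(self.pos, 0.0) + 1.0
            self.pos = max(neighbours(self.pos), key=world.nest_gradient)
            if self.pos == NEST:
                self.carrying = False
                world.delivered += 1

rng = random.Random(42)
world, ants = World(), [Ant(rng) for _ in range(30)]
for _ in range(3000):
    for ant in ants:
        ant.step(world)
    world.evaporate()
print("food delivered to nest:", world.delivered)
```

Even in this stripped-down version, trails of food pheromone form between the food pile and the nest without any central coordination.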
SLIDE 23
Agent decision making
SLIDE 24
Agent decision making
Agents are often designed to make rational decisions: they will act to maximise the expected value of their behaviour given the information they have received thus far.
Note that:
rational ≠ omniscient: an agent’s percepts may not supply all required information
rational ≠ clairvoyant: an agent’s actions may not produce the expected outcome
therefore, rational ≠ successful (necessarily)
rational → exploration, learning, adaptation, . . .
SLIDE 25
Agent decision making
Agent decision making may be:
probabilistic: representing decisions using distributions
rule-based: modelling the decision making process
adaptive: agents may learn via reinforcement or evolution
The choice will often depend on the purpose of the model (what question is it being used to answer?)
Agent decision making can range from simple to complex. . .
SLIDE 26
There is scope for implementing incredibly elaborate models of agent behaviour, decision making and learning. It is important to keep the purpose of the model—the question that it is being designed to answer—in mind when designing it, as this helps to constrain how much detail needs to be included.
“Everything should be made as simple as possible, but not simpler.” (often attributed to Albert Einstein)
In this context, a model should be as simple as possible, while still retaining sufficient flexibility to reproduce the real world phenomenon of interest.
The KISS principle (Keep It Simple, Stupid) is often invoked in ABM design.
Reflexive agents
Russell and Norvig (2009) Artificial Intelligence: A Modern Approach, Prentice Hall
SLIDE 27
Reflex agents perceive some aspect of their environment via sensors and, acting according to some set of rules, choose some behaviour or action.
This is the simplest type of agent, with little or no internal state, no internal sense of history, no capacity to plan towards a long term goal, and no ability to change its behavioural rules (ie, learn) over time.
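A minimal Python sketch of such a condition-action mapping, using a vacuum-world style percept in the spirit of Russell and Norvig's examples (the locations and action names here are assumptions, not part of the lecture):

```python
def reflex_agent(percept):
    """Simple reflex agent: the action depends only on the current percept.
    There is no memory, no model of the world, and no goal."""
    location, status = percept
    if status == "dirty":
        return "suck"
    return "move right" if location == "A" else "move left"

print(reflex_agent(("A", "dirty")))  # -> suck
print(reflex_agent(("A", "clean")))  # -> move right
```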
Reflexive agents with internal state
Russell and Norvig (2009) Artificial Intelligence: A Modern Approach, Prentice Hall
SLIDE 28
This agent is slightly more complex, in that it holds some internal state that may keep track of its past actions and their consequences, and so may have a richer perception of the current state of the world than that provided by its sensors at the current moment.
eg, such an agent would be able to react to trends (increases or decreases) in some environmental variable, or possibly be able to detect periodic oscillations and react accordingly.
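A sketch of the trend-reacting agent described above, in Python; the readings and action names are illustrative assumptions.

```python
class StatefulReflexAgent:
    """Reflex agent with internal state: it remembers the previous reading,
    so it can react to the trend in an environmental variable rather than
    only to its current value."""

    def __init__(self):
        self.previous = None  # internal state: the last value perceived

    def act(self, reading):
        trend = None if self.previous is None else reading - self.previous
        self.previous = reading  # update internal state
        if trend is None or trend == 0:
            return "wait"
        return "respond to rise" if trend > 0 else "respond to fall"

agent = StatefulReflexAgent()
print([agent.act(r) for r in [10, 12, 11]])  # ['wait', 'respond to rise', 'respond to fall']
```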
Goal-driven agents
Russell and Norvig (2009) Artificial Intelligence: A Modern Approach, Prentice Hall
SLIDE 29
Goal-driven agents select their action based not only on the current and past states of the world, but also in accordance with some desired future state of the world.
Such an agent can be conceived of as building an internal model of its environment that it is then able to use to test the effects of its possible actions, and select the action that brings the state of the environment closer to its desired target, or goal, state.
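A minimal sketch of goal-driven action selection, assuming a numeric state and a hypothetical transition function standing in for the agent's internal model of its environment:

```python
def goal_driven_choice(state, goal, actions, transition):
    """Use an internal model (`transition`) to predict the outcome of each
    candidate action, and pick the action that moves the world closest to
    the goal state."""
    def distance(s):
        return abs(s - goal)  # how far a predicted state is from the goal
    return min(actions, key=lambda a: distance(transition(state, a)))

# Toy example: a thermostat-like agent trying to reach a target temperature.
effects = {"heat": +1.0, "cool": -1.0, "off": 0.0}
print(goal_driven_choice(state=18.0, goal=21.0, actions=effects,
                         transition=lambda s, a: s + effects[a]))  # -> heat
```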
Utilitarian agents
Russell and Norvig (2009) Artificial Intelligence: A Modern Approach, Prentice Hall
SLIDE 30
A utilitarian agent is a refinement of a goal-driven agent, where possible future states are not necessarily assessed against a specific goal state, but rather evaluated according to some utility function; eg, a trading agent in a model of a financial market may seek to maximise its profit, though the specific means used to achieve this (particular shares bought or sold) may not be defined.
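The same sketch, refined so that predicted outcomes are scored by a utility function rather than by distance to a single goal state; the trading example and its numbers are purely illustrative.

```python
def utility_based_choice(actions, outcome_model, utility):
    """Predict the outcome of each action with the agent's internal model,
    score it with a utility function, and pick the highest-utility action."""
    return max(actions, key=lambda a: utility(outcome_model(a)))

# Toy trading example: the agent's utility is simply its expected profit.
expected_profit = {"buy": -1.0, "hold": 0.0, "sell": 1.5}
print(utility_based_choice(expected_profit,
                           outcome_model=lambda a: expected_profit[a],
                           utility=lambda profit: profit))  # -> sell
```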
Learning agents
Russell and Norvig (2009) Artificial Intelligence: A Modern Approach, Prentice Hall
SLIDE 31
A learning agent is able to adapt the rules that it uses to make choices, in response to some feedback on its past performance. There are a wide variety of approaches to implementing learning in ABMs, such as artificial neural networks, a set of statistical learning algorithms based on the way biological neural networks adapt over time.
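As a much simpler illustration than a neural network, the sketch below uses a basic reinforcement-style update: the agent keeps a value estimate for each action and nudges it towards the rewards it receives. All parameter values are illustrative.

```python
import random

class LearningAgent:
    """Minimal learning agent: it adapts its action preferences in response
    to feedback (rewards) on its past choices."""

    def __init__(self, actions, epsilon=0.1, lr=0.2, seed=None):
        self.values = {a: 0.0 for a in actions}  # learned value estimate per action
        self.epsilon, self.lr = epsilon, lr
        self.rng = random.Random(seed)

    def choose(self):
        if self.rng.random() < self.epsilon:          # occasionally explore
            return self.rng.choice(list(self.values))
        return max(self.values, key=self.values.get)  # otherwise exploit what it has learned

    def learn(self, action, reward):
        # nudge the estimate for this action towards the observed reward
        self.values[action] += self.lr * (reward - self.values[action])

agent = LearningAgent(["left", "right"], seed=0)
for _ in range(200):
    action = agent.choose()
    reward = 1.0 if action == "right" else 0.0  # hypothetical environment feedback
    agent.learn(action, reward)
print(agent.values)  # the estimate for "right" approaches 1.0
```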
Summary
Agent-based models consist of agents, an environment, and interactions between agents and (a) other agents and (b) the environment.
ABMs feature decentralised information and decision making, and can reproduce the emergent behaviour and self-organisation found in complex systems.
ABMs are employed in a wide variety of domains: ecology, social science, economics, political science, etc.
ABMs can be much more elaborate than CAs.
SLIDE 32
Application domains:
Economics: stock market, self organising markets, trade networks, consumer behaviour, deregulated electric power markets
Political Sciences: water rights in developing countries, party competition, origins and patterns of political violence, power sharing in multicultural states
Ecology: population dynamics of salmon and trout, land use dynamics, flocking behaviour in fish and birds, rain forest growth
Social Science: insect societies, group dynamics in fights, growth and decline of ancient societies, group learning, spread of epidemics, civil disobedience
Follow-up activities!
Explore the Flocking, Ants and Segregation models in NetLogo. Think about:
What are agents? What state variables do they have? What decisions do they make?
What is the environment? Is it static or dynamic?
What interactions take place? Between agents? Between the agents and the environment?
SLIDE 33
References
Macal CM, North MJ (2010), Tutorial on agent-based modeling and simulation. Journal of Simulation 4:151–162
Reynolds C (1987) Flocks, herds and schools: a distributed behavioral model, SIGGRAPH ’87: Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, Association for Computing Machinery, pp 25–34
Schwarz N, Kahlenberg D, Haase D, Seppelt R (2012) ABMland—a Tool for Agent-Based Model Development on Urban Land Use Change, Journal of Artificial Societies and Social Simulation 15(2):8
Bonabeau E, Theraulaz G, Deneubourg J-L, Aron S, Camazine S (1997) Self-organization in social insects, Trends in Ecology and Evolution 12(5):188–193
SLIDE 34