
PARTICLE SWARM OPTIMISATION
DR H.K. LAM
Department of Engineering
King’s College London
Office S2.14, Strand Building, Strand Campus
Email: hak-keung.lam@kcl.ac.uk
https://nms.kcl.ac.uk/hk.lam
Nature-Inspired Learning Algorithms (7CCSMBIM)
DrH.K.Lam (KCL) ParticleSwarmOptimisation NILAs2020-21 1/57

Outline
1 Introduction
2 Basic Particle Swarm Optimisation
  Global Best PSO
  Local Best PSO
  Velocity Components
  Particle Initialisation
  Stopping Criteria
  Social Network Structure
  Velocity Clamping
  Inertia Weight
3 Examples

Learning Objectives
To get an idea of swarm intelligence.
To understand the concept of particle swarm optimisation and know how it works.
To apply particle swarm optimisation to optimisation problems.

Introduction

Introduction
Swarm Intelligence
Swarm intelligence is an artificial intelligence technique based on the study of collective behaviour in decentralised, self-organised systems.
Swarm intelligence systems are typically made up of a population of simple agents interacting locally with one another and with their environment.
Although there is no centralised control structure, local interactions between such agents often lead to the emergence of global behaviour.
Examples in nature include social insect colonies, bird flocking, fish schooling and animal herding.

Introduction
Self-organisation
A set of dynamical mechanisms whereby structures appear at the global level of a system from interactions of its lower-level components.
Four basic ingredients:
Positive feedback (amplification): shows the right direction to the food source (optimal solution); reinforces those portions of good solutions that contribute to the quality of these solutions.
Negative feedback: introduces a time scale into the algorithm through pheromone evaporation; prevents premature convergence (stagnation); provides counter-balance and stabilisation.
Amplification of fluctuations: randomness or errors, e.g., lost ant foragers can find new food sources. An element moves randomly to search for a solution, and a good find is then amplified by a positive feedback loop.
Multiple interactions: direct or indirect communication (e.g., modification of the environment).

Introduction
Why do Animals Swarm?
Defence against predators.
Enhance the detection of predators.
Minimise the chance of being captured.
Enhance success in foraging.
Better chances to find a mate.
Decrease of energy consumption.

Introduction
Coordinated Collective Behaviour
Reynolds (1987) proposed a behaviour model to interpret bird flocking, fish schooling and animal herds.
Biologically and physically sound assumptions:
1. Individual has only local knowledge.
2. Has certain cognitive capabilities.
3. Bound by laws of physics.
Each individual complies with only three simple rules.

Introduction
Coordinated Collective Behaviour Reynolds (1987) Behaviour Model
1. Separation: each agent tries to move away from its neighbours if they are too close.
2. Alignment: each agent steers towards the average heading of its neighbours.
3. Cohesion: each agent tries to move towards the average position of its neighbours.
(Figures: (a) Separation, (b) Alignment, (c) Cohesion)

Particle Swarm Optimisation (PSO)
Building a Metaheuristic
Kennedy and Eberhart (1995) extended the swarm behavioural model to an n-dimensional psychosocial space without the constraints of physical laws.
A swarm consists of a set of particles, where each particle represents a potential solution for an n-dimensional optimisation problem.
Particles are flown through the hyperspace, where position of each particle is changed according to its own experience and that of its neighbours.

Basic Particle Swarm Optimisation

Basic Particle Swarm Optimisation
Particle Swarm Optimisation (PSO): a numerical, population-based optimisation technique that discovers optimal regions of a high-dimensional search space through the collective behaviour of individuals.
Ingredients:
A particle: an individual (a potential solution).
A swarm: a population.
Update rules.


Basic Particle Swarm Optimisation
Notation
xi(t) = [xi1, ..., xinx]: the ith particle (individual).
nx: number of elements in each particle.
ns: size of swarm (number of particles in the swarm).
nt: maximum number of iterations.
ŷ(t) = [ŷ1(t), ..., ŷnx(t)]: the global best position since the first generation.
ŷi(t) = [ŷi1(t), ..., ŷinx(t)]: the local best position since the first generation.
yi(t) = [yi1(t), ..., yinx(t)]: the personal best position since the first generation.
Ni: the set of neighbours of particle i.
xmin = [x1min, ..., xnxmin]: a vector of constants denoting the lower bound of xi(t).
xmax = [x1max, ..., xnxmax]: a vector of constants denoting the upper bound of xi(t).
vi = [vi1, ..., vinx]: a velocity vector.
r1j(t), r2j(t) ∈ [0, 1]: random numbers.

Basic Particle Swarm Optimisation
Two basic PSO algorithms:
The main difference is the size of their neighbourhoods.
Global best PSO (gbest PSO):
Social network: star topology.
Neighbours of a particle: the whole swarm.
Particles are updated based on the social information from all particles in the swarm.
Social information: the best position (solution) found by the swarm.
Local best PSO (lbest PSO):
Social network: ring topology.
Neighbours of a particle: a small number of particles in the swarm.
Particles are updated based on the social information exchanged within the neighbourhood of the particle.
Social information: the local best position (solution) within the neighbourhood.

Global Best PSO

Global Best PSO
Size of Swarm: ns particles.
The (global) best position found in the swarm:
ŷ(t) = [ŷ1(t), ..., ŷnx(t)]
The personal best position since the first generation:
yi(t) = [yi1(t), ..., yinx(t)]
Velocity update rule:
vij(t+1) = vij(t) + c1 r1j(t)[yij(t) - xij(t)] + c2 r2j(t)[ŷj(t) - xij(t)]   (1)
where c1 ≥ 0 and c2 ≥ 0 are acceleration constants; r1j(t), r2j(t) ∈ [0, 1] are random numbers.
Particle update rule:
xi(t+1) = xi(t) + vi(t+1), i = 1, ..., ns
where xi(t) = [xi1, ..., xinx] and vi = [vi1, ..., vinx].

Global Best PSO
Update of personal best position:
yi(t+1) = yi(t)      if f(xi(t+1)) ≥ f(yi(t))
        = xi(t+1)    otherwise,    i = 1, ..., ns
The global best position:
ŷ(t+1) ∈ {y1(t+1), ..., yns(t+1)} | f(ŷ(t+1)) = min{f(y1(t+1)), ..., f(yns(t+1))}
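The update rules above can be sketched end to end in Python. This is a minimal illustration of gbest PSO, not the lecturer's reference implementation: the sphere objective, swarm size, iteration count and acceleration constants are example values only.

```python
import random

def gbest_pso(f, n_x, x_min, x_max, n_s=20, n_t=200, c1=2.0, c2=2.0, seed=0):
    """Minimise f over the box [x_min, x_max]^n_x with the basic gbest PSO."""
    rng = random.Random(seed)
    # Initialise positions uniformly in the box; velocities at zero.
    x = [[x_min + rng.random() * (x_max - x_min) for _ in range(n_x)]
         for _ in range(n_s)]
    v = [[0.0] * n_x for _ in range(n_s)]
    y = [row[:] for row in x]            # personal bests: y_i(0) = x_i(0)
    y_hat = min(y, key=f)[:]             # global best over personal bests
    for _ in range(n_t):
        for i in range(n_s):
            for j in range(n_x):
                r1, r2 = rng.random(), rng.random()
                # Velocity update rule (1): previous velocity + cognitive + social.
                v[i][j] = (v[i][j]
                           + c1 * r1 * (y[i][j] - x[i][j])
                           + c2 * r2 * (y_hat[j] - x[i][j]))
                x[i][j] += v[i][j]       # particle update rule
        for i in range(n_s):
            if f(x[i]) < f(y[i]):        # update personal best
                y[i] = x[i][:]
            if f(y[i]) < f(y_hat):       # update global best
                y_hat = y[i][:]
    return y_hat

sphere = lambda p: sum(pj * pj for pj in p)
best = gbest_pso(sphere, n_x=2, x_min=-5.0, x_max=5.0)
```

Note that the global best f(ŷ(t)) can never worsen, since it is always the minimum over the personal bests.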

Global Best PSO
Questions:
What is the swarm size?
How many elements does each particle have?
How many elements does the velocity vector have?
What is the initial velocity, vi(0)?

Two possible outcomes of a position update relative to the global best:
1. The new position is better than the current global best. The new particle position becomes the new global best position, and all particles are drawn towards it.
2. The new position is still worse than the current global best. In subsequent time steps the cognitive and social components cause the particle to change direction back towards the global best.
The cumulative effect of all the position updates of a particle is that each particle converges to a point on the line that connects the global best position and the personal best position of the particle.

Figure 1: Multi-particle gbest PSO illustration at (a) time t = 0 and (b) time t = 1. '×' indicates the optimum.

Global Best PSO
gbest PSO algorithm
Create and initialise an nx-dimensional swarm; set t = 0, vi(0) = 0, yi(0) = xi(0); choose values for c1 and c2;
while stopping criterion not met do
    for each particle i = 1, ..., ns do
        Update the velocity: vij(t+1) = vij(t) + c1 r1j(t)[yij(t) - xij(t)] + c2 r2j(t)[ŷj(t) - xij(t)];
        Update the position: xi(t+1) = xi(t) + vi(t+1);
    end
    for each particle i = 1, ..., ns do
        if f(xi(t+1)) < f(yi(t)) then yi(t+1) = xi(t+1); else yi(t+1) = yi(t); end
        if f(yi(t+1)) < f(ŷ(t)) then ŷ(t+1) = yi(t+1); end
    end
    t ← t + 1;
end
Table 1: Pseudocode of gbest PSO algorithm.

Figure 2: Flowchart of gbest PSO algorithm: initialise parameters and counter → update the velocity vij(t) → update the position xi(t) → update the personal best yi(t) → update the global best ŷ(t) → if the stopping criteria are met, return the best x(t); otherwise repeat.

Local Best PSO

Size of swarm: ns particles.
The (local) best position found by the neighbourhood of particle i:
ŷi(t) = [ŷi1(t), ..., ŷinx(t)]
The personal best position since the first generation:
yi(t) = [yi1(t), ..., yinx(t)]
Velocity update rule:
vij(t+1) = vij(t) + c1 r1j(t)[yij(t) - xij(t)] + c2 r2j(t)[ŷij(t) - xij(t)]   (2)
where c1 ≥ 0 and c2 ≥ 0 are acceleration constants; r1j(t), r2j(t) ∈ [0, 1] are random numbers.
Particle update rule:
xi(t+1) = xi(t) + vi(t+1), i = 1, ..., ns

Local Best PSO

The (local) best position found by the neighbourhood of particle i:
ŷi(t+1) ∈ Ni | f(ŷi(t+1)) = min{f(x(t))}, ∀x(t) ∈ Ni
Ni: the set of neighbours of particle i. The local best position is the neighbourhood best position.
The personal best position since the first generation:
yi(t) = [yi1(t), ..., yinx(t)]
Particle update rule:
xi(t+1) = xi(t) + vi(t+1), i = 1, ..., ns
Note: when the neighbourhood of every particle is the entire swarm, the lbest PSO becomes the gbest PSO.

Figure 3: Illustration of lbest PSO, showing (a) the initial swarm and (b) the second swarm; '×' indicates the optimum; vij(0) = 0 except for particle f.

lbest PSO algorithm:
Create and initialise an nx-dimensional swarm; set t = 0, vi(0) = 0, yi(0) = xi(0); choose values for c1 and c2;
while stopping criterion not met do
    for each particle i = 1, ..., ns do
        Update the velocity: vij(t+1) = vij(t) + c1 r1j(t)[yij(t) - xij(t)] + c2 r2j(t)[ŷij(t) - xij(t)];
        Update the position: xi(t+1) = xi(t) + vi(t+1);
    end
    for each particle i = 1, ..., ns do
        if f(xi(t+1)) < f(yi(t)) then yi(t+1) = xi(t+1); else yi(t+1) = yi(t); end
    end
    for each particle i = 1, ..., ns do
        ŷi(t+1) ∈ Ni | f(ŷi(t+1)) = min{f(x(t))}, ∀x(t) ∈ Ni;
    end
    t ← t + 1;
end
Table 2: Pseudocode of lbest PSO algorithm.
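The neighbourhood-best step that distinguishes lbest PSO can be sketched for a ring topology as follows. This is an illustrative helper, not from the slides: the swarm, objective and `radius` parameter are example values, and neighbourhoods are taken over the members passed in (positions or personal bests).

```python
def ring_neighbourhood_best(members, f, radius=1):
    """Return the neighbourhood best ŷ_i for each particle under a ring
    topology: neighbours of particle i are the members whose indices lie
    within `radius` of i (wrapping around), including i itself."""
    n_s = len(members)
    bests = []
    for i in range(n_s):
        neigh = [members[(i + k) % n_s] for k in range(-radius, radius + 1)]
        bests.append(min(neigh, key=f))  # min, since we minimise f
    return bests

sphere = lambda p: sum(pj * pj for pj in p)
swarm = [[3.0], [1.0], [2.0], [0.5]]
bests = ring_neighbourhood_best(swarm, sphere, radius=1)
```

With `radius >= n_s // 2` every neighbourhood is the whole swarm, which is exactly the note above: lbest PSO reduces to gbest PSO.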
Figure 4: Flowchart of lbest PSO algorithm: initialise parameters and counter → update the velocity vij(t) → update the position xi(t) → update the personal best yi(t) → update the local best ŷi(t) → if the stopping criteria are met, return the best x(t); otherwise repeat.

Selection of Neighbourhoods:
Particle indices: computationally inexpensive; promotes the spread of information irrespective of the positions of the particles.
Spatial similarity: computationally expensive; information of similar particles can be used for a local search.
Overlapping of neighbourhoods: a particle can be a member of a number of neighbourhoods; the interconnection of neighbourhoods promotes information sharing such that all particles come to a consensus faster (converge faster to a single solution).

lbest vs gbest PSO:

                          lbest PSO     gbest PSO
computational demand      higher        lower
diversity                 larger        smaller
convergence speed         slower        faster
trapped in local minima   less likely   more likely

Velocity Components

Velocity components: previous velocity, cognitive component and social component.
gbest PSO:
vij(t+1) = vij(t) + c1 r1j(t)[yij(t) - xij(t)] + c2 r2j(t)[ŷj(t) - xij(t)]
           (previous velocity) (cognitive component) (social component)
lbest PSO:
vij(t+1) = vij(t) + c1 r1j(t)[yij(t) - xij(t)] + c2 r2j(t)[ŷij(t) - xij(t)]
           (previous velocity) (cognitive component) (social component)
Previous velocity: an inertia term making the particle move in the same direction as in the tth generation.
Cognitive component: a personal influence which attempts to improve the individual by making the particle return to a previous good position.
Social component: a social influence which makes the particle follow the best neighbour's direction.
The previous velocity serves exploration: it searches new regions (new solutions) for potentially better solutions.
The cognitive and social components serve exploitation: they search previous regions (previous solutions) for better solutions.

Figure 5: Geometrical illustration of velocity and position updates for a single two-dimensional particle in gbest PSO, at time steps t and t+1.

Particle Initialisation

Particle position:
xij(0) = xmin,j + rj (xmax,j - xmin,j), i = 1, ..., ns; j = 1, ..., nx.
rj ∈ [0, 1]: a random number.
[xmin,j, xmax,j]: boundary of the jth element of a particle, for all j.
Initial velocity: vi(0) = 0, i = 1, ..., ns.
Personal best position: yi(0) = xi(0), i = 1, ..., ns.
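The initialisation above can be sketched directly. This is a minimal helper assuming per-dimension bounds given as lists; the function name and default seed are illustrative.

```python
import random

def initialise_swarm(n_s, x_min, x_max, rng=None):
    """Initialise positions uniformly in the box [x_min, x_max], velocities
    at zero, and personal bests at the initial positions, as on the slide."""
    rng = rng or random.Random(0)
    n_x = len(x_min)
    # x_ij(0) = x_min,j + r_j (x_max,j - x_min,j)
    x = [[x_min[j] + rng.random() * (x_max[j] - x_min[j]) for j in range(n_x)]
         for _ in range(n_s)]
    v = [[0.0] * n_x for _ in range(n_s)]   # v_i(0) = 0
    y = [row[:] for row in x]               # y_i(0) = x_i(0), as a copy
    return x, v, y

x, v, y = initialise_swarm(5, [-1.0, 0.0], [1.0, 2.0])
```

Copying the positions into the personal bests (rather than aliasing them) matters: the personal bests must not change when the positions are later updated in place.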
Stopping Criteria

Maximum number of iterations has been reached.
An acceptable solution has been found.
No improvement has been observed over a number of iterations.
The swarm radius is close to zero.
The objective function slope is approximately zero. Slope:
f'(t) = (f(ŷ(t)) - f(ŷ(t-1))) / f(ŷ(t))
The algorithm terminates if |f'(t)| ≤ ε for a number of consecutive iterations.
ε > 0: a user-specified parameter.

Stopping Criteria
Swarm Radius:
1. Maximum swarm radius: the algorithm terminates if Rmax(t) ≤ ε, where
   Rmax(t) = max over m ∈ {1, ..., ns} of ||xm(t) - ŷ(t)||
   ε > 0: a user-specified parameter.
   ŷ(t) is the global best position of the gbest or lbest PSO.
2. Particle clustering algorithm: the algorithm terminates if |C|/ns ≥ δ.
   0 < δ ≤ 1: a user-specified parameter. C: a cluster; |C|: number of elements in cluster C.

Particle Clustering Algorithm:
Initialise cluster C = {ŷ(t)};
for about 5 times do
    Calculate the centroid of cluster C: x̄(t) = (Σ over i with xi(t) ∈ C of xi(t)) / |C|;
    for each particle i = 1, ..., ns do
        if ||xi(t) - x̄(t)|| ≤ ε then C ← C ∪ {xi(t)}; end
    end
end
ε > 0: a user-specified parameter.
Table 3: Pseudocode of Particle Clustering Algorithm.
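The clustering criterion can be sketched as follows. This is an illustrative reading of the pseudocode above: the cluster is grown around ŷ(t) for about five passes, and the returned fraction |C|/ns would then be compared against a user-chosen δ.

```python
import math

def cluster_fraction(positions, y_hat, eps):
    """Grow a cluster C seeded at ŷ(t) by repeatedly adding particles within
    eps of the cluster centroid; return the captured fraction |C| / ns."""
    cluster = [y_hat]
    for _ in range(5):                    # "about 5 times", as on the slide
        centroid = [sum(p[j] for p in cluster) / len(cluster)
                    for j in range(len(y_hat))]
        for p in positions:
            if p not in cluster and math.dist(p, centroid) <= eps:
                cluster.append(p)
    return len(cluster) / len(positions)

swarm = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [0.0, 0.1]]
frac = cluster_fraction(swarm, [0.0, 0.0], eps=0.5)
```

Here three of the four particles sit within 0.5 of the centroid, so the criterion would fire for any δ ≤ 0.75.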

Social Network Structure

Social Network Structure
Figure 6: Examples of social network structures: (a) star, (b) ring, (c) wheel, (d) pyramid, (e) four clusters, (f) Von Neumann.

Velocity Clamping

Velocity Clamping
Exploration: the ability to explore different regions to locate good optima.
Exploitation: the ability to concentrate the search around a region to refine a solution.
Potential problem of gbest and lbest PSO: velocity explosion – velocities are quickly updated to large values, which pushes particles to the boundaries of the search space.

Velocity Clamping
Velocity Clamping: Particle velocity is adjusted before updating the particles’ positions.
v'ij(t+1) = vij(t+1)    if vij(t+1) < Vmax,j
          = Vmax,j       otherwise
Vmax,j > 0: a user-specified maximum velocity of the jth element of the particle.
Vmax,j controls 1) the particle moving speed and 2) the balance between exploration and exploitation.
When vij(t+1) is negative, the above condition is applied symmetrically with -Vmax,j.
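The clamping rule, including the symmetric negative case, can be sketched in one line per component. The helper name is illustrative.

```python
def clamp(v, v_max):
    """Clamp each velocity component to [-V_max,j, V_max,j]; the symmetric
    lower bound handles the negative case mentioned on the slide."""
    return [max(-vm, min(vm, vj)) for vj, vm in zip(v, v_max)]

clamped = clamp([3.0, -7.0, 0.5], [2.0, 2.0, 2.0])
```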

Velocity Clamping
Advantage: explosion of velocity is controlled.
Disadvantages: 1) the search direction of a particle may be changed; 2) when velocities reach the maximum, particles search on a hypercube defined by [xij(t) - Vmax,j, xij(t) + Vmax,j] for all i.

Figure 7: Effect of velocity clamping: clamping changes not only the step size but also the direction in which a particle moves.

Velocity Clamping
Dynamic Velocity Approaches:
Change the maximum velocity if the global best position does not improve over Δt consecutive iterations:
Vmax,j(t+1) = γ Vmax,j(t)   if f(ŷ(t)) ≥ f(ŷ(t - t')) for all t' = 1, ..., Δt
            = Vmax,j(t)      otherwise
γ decreases linearly or exponentially from 1 to 0.01.
Exponentially decay the maximum velocity:
Vmax,j(t+1) = (1 - (t/nt)^α) Vmax,j(t)
α ≥ 0: a user-specified parameter; nt is the maximum number of iterations.
Velocities are then updated as:
vij(t+1) = Vmax,j(t+1) tanh( v'ij(t+1) / Vmax,j(t+1) )
where v'ij(t+1) is calculated from the gbest velocity update rule (1) or the lbest velocity update rule (2).
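The tanh-based update can be sketched per component. This illustrates its key property: the result is approximately linear for small raw velocities and saturates smoothly at ±Vmax for large ones, instead of being hard-clamped.

```python
import math

def tanh_scaled_velocity(v_raw, v_max):
    """Apply v_ij(t+1) = Vmax,j * tanh(v'_ij(t+1) / Vmax,j) componentwise."""
    return [vm * math.tanh(vj / vm) for vj, vm in zip(v_raw, v_max)]

v = tanh_scaled_velocity([100.0, 0.1, -100.0], [2.0, 2.0, 2.0])
```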

Inertia Weight

Inertia Weight
Inertia Weight – a mechanism 1) to control the exploration and exploitation abilities of the swarm, and 2) to eliminate the need for velocity clamping.
gbest PSO:
vij(t+1) = w vij(t) + c1 r1j(t)[yij(t) - xij(t)] + c2 r2j(t)[ŷj(t) - xij(t)]
lbest PSO:
vij(t+1) = w vij(t) + c1 r1j(t)[yij(t) - xij(t)] + c2 r2j(t)[ŷij(t) - xij(t)]
w > 0.
w ≥ 1: velocities increase over time.
w < 1: velocities decrease over time.

Inertia Weight
Random adjustments: w ∈ [0, 1], a random number.
Linear decreasing:
w(t) = (w(0) - w(nt)) (nt - t)/nt + w(nt)
w(0): initial inertia weight; w(nt): final inertia weight; w(0) > w(nt); nt: maximum number of iterations.
Nonlinear decreasing:
w(t+1) = (w(t) - 0.4)(nt - t) / (nt + 0.4), with w(0) = 0.9.
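The linear decreasing schedule can be sketched as follows. The default endpoint values 0.9 and 0.4 are commonly used examples, not values prescribed by the slides.

```python
def linear_w(t, n_t, w0=0.9, w_end=0.4):
    """Linearly decreasing inertia weight: w(0) = w0, w(n_t) = w_end,
    following w(t) = (w(0) - w(nt)) (nt - t)/nt + w(nt)."""
    return (w0 - w_end) * (n_t - t) / n_t + w_end

mid = linear_w(50, 100)   # halfway between w0 and w_end
```

A large early w favours exploration (velocities persist), while the smaller late w damps velocities and favours exploitation around the best positions found.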