What is particle swarm optimization?

Key takeaways:

  • Particle swarm optimization (PSO) is a nature-inspired optimization algorithm based on the flocking behavior of birds and the schooling behavior of fish.

  • PSO models the search for optimal solutions by simulating particles navigating the search space.

  • Each particle represents a potential solution and adjusts its position based on personal and global best solutions.

    • Particles update their velocities and positions iteratively using equations that factor in inertia, personal best, and global best positions.

  • PSO terminates when a set number of iterations is reached or a predefined global best value is achieved.

  • PSO is applied in domains like function optimization, data clustering, resource allocation, and vehicle routing.

Particle swarm optimization (PSO) is an algorithm inspired by nature. In 1995, Kennedy and Eberhart proposed this evolutionary optimization algorithm after observing the behavior of flocks of birds and schools of fish.
PSO models the search for optimal solutions as a swarm of particles navigating the solution space. Each particle in PSO represents a candidate solution in the search space, dynamically adjusting its position and velocity based on its own experience and on information exchanged with other particles in the swarm. By iteratively updating their positions, the particles explore the search space and converge towards optimal or near-optimal solutions.

Particle swarm optimization

Pros of particle swarm optimization

  • Simple implementation: PSO is simpler to implement and understand than many other optimization algorithms, making it accessible to many users.

  • Efficient search: PSO's population-based approach allows for efficient search space exploration, enabling the algorithm to quickly converge towards promising regions where optimal solutions may exist.

  • Global and local search: PSO balances global exploration (searching the entire solution space to discover diverse and potentially promising solutions) and local exploitation (focusing on specific regions of the solution space to refine existing solutions) through the collective movement of particles, which helps avoid getting stuck in local optima.

  • Versatility: PSO can be easily adapted and extended to tackle a wide range of optimization problems, including continuous, discrete, and multi-objective optimization.

The mathematics of PSO

Let's understand the mathematical logic behind PSO using an example. We are tasked with finding the minimum value of the function f(x) = x^2, where x is a real number.

  1. Initialize random particles: Let's initialize 5 particles with random positions between -10 and +10 and random velocities.

    1. Particle 1: x_1 = -5, v_1 = 2

    2. Particle 2: x_2 = 3, v_2 = -1

    3. Particle 3: x_3 = -8, v_3 = 0

    4. Particle 4: x_4 = 6, v_4 = 1.5

    5. Particle 5: x_5 = 9, v_5 = -2

  2. Calculate the fitness function for every particle: Compute the fitness function f(x) for each particle with its respective x value.

    1. Fitness of particle 1: f(-5) = (-5)^2 = 25

    2. Fitness of particle 2: f(3) = (3)^2 = 9

    3. Fitness of particle 3: f(-8) = (-8)^2 = 64

    4. Fitness of particle 4: f(6) = (6)^2 = 36

    5. Fitness of particle 5: f(9) = (9)^2 = 81
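
These fitness values can be verified with a couple of lines of Python, using the positions from the initialization step above:

```python
# Positions of the five particles from the initialization step
positions = [-5, 3, -8, 6, 9]

# Fitness function: f(x) = x^2
def fitness(x):
    return x ** 2

print([fitness(x) for x in positions])  # [25, 9, 64, 36, 81]
```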

  3. Update the personal best (pbest) for every particle: Update a particle's pbest if its current fitness value is better than the fitness of its previous pbest. Since this is the first iteration, each particle's pbest is simply its current position.

  4. Update global best (gbest): Determine the particle with the best fitness (lowest value of f(x)) among all particles. This particle's position is the global best solution (gbest) encountered by the swarm. In our example, gbest is x = 3 (particle 2), whose fitness value of 9 is the lowest.

  5. Update velocity and position of each particle: Update the velocity v_i and position x_i of each particle using the following equations:

    v_i(t+1) = w * v_i(t) + c_1 * r_1 * (pbest_i - x_i(t)) + c_2 * r_2 * (gbest - x_i(t))

    x_i(t+1) = x_i(t) + v_i(t+1)

    where:

  • v_i(t) is the velocity of particle i at time t

  • x_i(t) is the particle's position

  • pbest_i is its personal best position

  • gbest is the global best position

  • w is the inertia weight

  • c_1 and c_2 are acceleration coefficients

  • r_1 and r_2 are random numbers in the range [0, 1].
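
As a sanity check, the update can be applied by hand to particle 1 (x_1 = -5, v_1 = 2, pbest_1 = -5, gbest = 3). The coefficients below (w = 0.5, c_1 = c_2 = 1.5) match the implementation later in this article; r_1 and r_2 are fixed at 0.5 here purely for a reproducible illustration, whereas in practice they are drawn at random each update:

```python
# One velocity-and-position update for a single particle, with fixed values
w, c1, c2 = 0.5, 1.5, 1.5
r1, r2 = 0.5, 0.5          # fixed instead of random, for reproducibility

x, v = -5, 2               # particle 1's position and velocity
pbest, gbest = -5, 3       # its personal best and the swarm's global best

v_new = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)  # 1.0 + 0.0 + 6.0 = 7.0
x_new = x + v_new                                    # -5 + 7.0 = 2.0
print(v_new, x_new)  # 7.0 2.0
```

Note how the social term pulls the particle towards gbest = 3, moving it from -5 to 2.0 in a single step.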

  6. Repeat steps 2-5 until termination criteria are met: Different termination criteria can be used, such as completing a set number of iterations or achieving a predefined global best value.

  7. Output: Once the algorithm terminates, the particle position with the best fitness (the global best) represents the optimized solution to the problem.

To make the process clear, let us implement it in Python.

Implementing PSO in Python

We will implement the previous example of finding the minimum of the function f(x) = x^2.

import random

# Initialize parameters
num_particles = 5
max_iterations = 10
particles = [random.randint(-10, 10) for _ in range(num_particles)]  # positions
velocities = [random.randint(-1, 1) for _ in range(num_particles)]   # velocities
personal_best_positions = particles[:]  # pbest of each particle
global_best_position = float('inf')     # global best position (placeholder)
w = 0.5    # inertia weight
c1 = 1.5   # cognitive (personal) acceleration coefficient
c2 = 1.5   # social (global) acceleration coefficient
r1 = random.uniform(0, 1)  # random factor for the cognitive term
r2 = random.uniform(0, 1)  # random factor for the social term

# Define the fitness function
def fitness_function(x):
    return x ** 2

# Define the function for updating the velocities
def update_velocity(v, x, pbest, gbest):
    return w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)

def pso(num_particles, max_iterations):
    # PSO loop
    for j in range(max_iterations):
        for i in range(num_particles):
            # Update the personal best of each particle
            if fitness_function(particles[i]) < fitness_function(personal_best_positions[i]):
                personal_best_positions[i] = round(particles[i], 3)
        # The global best is the personal best with the lowest fitness
        global_best_position = min(personal_best_positions, key=fitness_function)
        # Update velocity and position of each particle
        for i in range(num_particles):
            # Update velocity
            new_velocity = update_velocity(velocities[i], particles[i],
                                           personal_best_positions[i], global_best_position)
            velocities[i] = round(new_velocity, 3)
            # Update particle position
            particles[i] = round(particles[i] + new_velocity, 3)
        # Print the state of the swarm for every other iteration
        if j % 2 != 0:
            print("Global best position at iteration ", j+1, " : ", global_best_position)
            print("Personal best positions at iteration ", j+1, " : ", personal_best_positions)
            print("Updated velocities at iteration ", j+1, " : ", velocities)
            print("Updated positions at iteration ", j+1, " : ", particles)
    return global_best_position

print("Initialization of swarm: \nPositions: ", particles, "\nVelocities: ", velocities)
result = pso(num_particles, max_iterations)
print("Minimum value found at x = ", result)
print("Minimum value of f(x) = ", round(fitness_function(result), 3))

Explanation

Let's go through the code line by line.

  • Lines 4-14: We are initializing the parameters used in the algorithm and the particles of the swarm.

  • Lines 17-18: We are defining the fitness function, which is f(x) = x^2.

  • Lines 21-22: We are defining the function used for updating the velocities of the particles.

  • Lines 24-48: This is the main PSO loop. The outer loop runs max_iterations times, and the inner loop iterates over the num_particles particles. In our example, it runs 10 iterations over 5 particles.

  • Lines 29-32: We are updating personal_best_positions[i] for each particle if the current particle's fitness is lower than the fitness of its stored personal best. Next, the personal best position with the lowest fitness is assigned as the global_best_position.

  • Lines 35-40: We are updating the velocity and position of each particle using the predefined functions.

  • Lines 44-47: We are printing the updated velocities, particles, global_best_position, and personal_best_positions for every other iteration. Lastly, we return the global_best_position.

Conclusion

Particle swarm optimization is a simple, effective approach applicable to various domains, including function optimization, data clustering, optimal resource allocation, vehicle routing problems, and more.

Frequently asked questions



What is the function of the PSO algorithm?

The function of the PSO algorithm is to find optimal or near-optimal solutions to optimization problems by simulating the movement of particles in a search space, updating their positions based on personal and global best solutions.


What are the objective functions of particle swarm optimization?

The objective functions of PSO are the mathematical functions that the algorithm aims to optimize (minimize or maximize) by finding the best solution in the search space. These functions depend on the specific problem being solved, such as minimizing a cost function or maximizing efficiency.
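
For instance, a maximization problem can be handled by a minimizing PSO simply by negating the objective. The `efficiency` function below is a hypothetical example used only for illustration:

```python
# PSO implementations typically minimize; to maximize f, minimize -f instead
def efficiency(x):   # hypothetical objective to maximize, peak of 4 at x = 2
    return -(x - 2) ** 2 + 4

def objective(x):    # what the optimizer actually minimizes
    return -efficiency(x)

# The minimizer of `objective` is the maximizer of `efficiency`
print(objective(2))  # -4
```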



Copyright ©2025 Educative, Inc. All rights reserved