The PSO algorithm finds optimal or near-optimal solutions to optimization problems by simulating the movement of particles in a search space, updating their positions based on personal and global best solutions.
Key takeaways:
Particle swarm optimization (PSO) is a nature-inspired optimization algorithm based on the behavior of birds and fish.
PSO models the search for optimal solutions by simulating particles navigating the search space.
Each particle represents a potential solution and adjusts its position based on personal and global best solutions.
Particles update their velocities and positions iteratively using equations that factor in inertia, personal best, and global best positions.
PSO terminates when a set number of iterations is reached or a predefined global best value is achieved.
PSO is applied in domains like function optimization, data clustering, resource allocation, and vehicle routing.
Particle swarm optimization (PSO) is an algorithm inspired by nature. In 1995, Kennedy and Eberhart proposed this evolutionary optimization algorithm after observing the behavior of flocks of birds and schools of fish.
PSO models the search for optimal solutions as a swarm of particles navigating the solution space. Each particle represents a candidate solution and dynamically adjusts its position and velocity based on its own experience and on information exchanged with other particles in the swarm. By iteratively updating their positions, the particles explore the search space and converge toward optimal or near-optimal solutions.
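In the standard formulation (consistent with the update function used in the implementation later in this article), each particle $i$ updates its velocity and position every iteration as:

$$v_i \leftarrow w\,v_i + c_1 r_1\,(\text{pbest}_i - x_i) + c_2 r_2\,(\text{gbest} - x_i)$$
$$x_i \leftarrow x_i + v_i$$

where $w$ is the inertia weight, $c_1$ and $c_2$ control the pull toward the particle's personal best and the swarm's global best, and $r_1$ and $r_2$ are random numbers drawn from $[0, 1]$.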
PSO offers several advantages:
Simple implementation: PSO is simpler to implement and understand than many other optimization algorithms, making it accessible to a wide range of users.
Efficient search: PSO's population-based approach allows for efficient search space exploration, enabling the algorithm to quickly converge towards promising regions where optimal solutions may exist.
Global and local search: PSO balances global exploration of the search space with local exploitation around promising solutions, which helps it avoid getting trapped in local optima.
Versatility: PSO can be easily adapted and extended to tackle optimization problems, including continuous, discrete, and multi-objective optimization.
Let's understand the mathematical logic behind PSO using an example. We are tasked with finding the minimum value of the function $f(x) = x^2$.
Initialize random particles: Let's initialize 5 particles with random positions between $-10$ and $+10$ and random velocities between $-1$ and $+1$ (matching the code below). Each particle $i$ $(i = 1, \dots, 5)$ starts at a randomly drawn position $x_i$ with velocity $v_i$.
Calculate the fitness function for every particle: Compute the fitness function $f(x) = x^2$ at each particle's position, so the fitness of particle $i$ is $f(x_i) = x_i^2$.
Update the personal best (pbest) for every particle: If the particle's current fitness value is better (lower) than the fitness of its personal best, replace the personal best with the current position.
Update global best (gbest): Determine the particle with the best fitness (lowest value of $f(x)$) among all personal bests, and set its position as the global best.
Update velocity and position of each particle: Update each particle's velocity with the update equation given earlier, $v_i \leftarrow w\,v_i + c_1 r_1\,(\text{pbest}_i - x_i) + c_2 r_2\,(\text{gbest} - x_i)$, and then its position with $x_i \leftarrow x_i + v_i$ (a worked numeric example follows this list).
Repeat steps 2-5 until the termination criteria are met: Different termination criteria can be used, such as completing a set number of iterations or achieving a predefined global best value.
Output: Once the algorithm terminates, the particle's position with the best fitness (global best) represents the optimized solution to the problem.
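To see the update rule in action, here is a worked step with hypothetical values (not drawn from an actual run): suppose a particle has $x = 2$, $v = 1$, $\text{pbest} = 2$, and the swarm's $\text{gbest} = 0$, with $w = 0.5$, $c_1 = c_2 = 1.5$, and sampled $r_1 = r_2 = 0.5$. Then

$$v' = 0.5 \cdot 1 + 1.5 \cdot 0.5 \cdot (2 - 2) + 1.5 \cdot 0.5 \cdot (0 - 2) = 0.5 + 0 - 1.5 = -1$$
$$x' = 2 + (-1) = 1$$

so the particle moves from $x = 2$ toward the global best at $x = 0$.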
To make the process clear, let us implement it in Python.
We will implement the previous example of finding the minimum of the function $f(x) = x^2$.
```python
import random

# Initialize parameters
num_particles = 5
max_iterations = 10
particles = [random.randint(-10, 10) for _ in range(num_particles)]  # positions
velocities = [random.randint(-1, 1) for _ in range(num_particles)]  # velocities
personal_best_positions = particles[:]  # pbest of each particle
global_best_position = float('inf')  # global best position
w = 0.5   # inertia weight
c1 = 1.5  # cognitive coefficient (pull toward pbest)
c2 = 1.5  # social coefficient (pull toward gbest)
r1 = random.uniform(0, 1)
r2 = random.uniform(0, 1)

# Define the fitness function
def fitness_function(x):
    return x ** 2

# Define the function for updating the velocities
def update_velocity(v1, x1, pbest, gbest):
    return w * v1 + (c1 * r1 * (pbest - x1)) + (c2 * r2 * (gbest - x1))

def pso(num_particles, max_iterations):
    # PSO loop
    for j in range(max_iterations):
        for i in range(num_particles):
            # Update the personal best of each particle
            if fitness_function(particles[i]) < fitness_function(personal_best_positions[i]):
                personal_best_positions[i] = round(particles[i], 3)
        # The global best is the personal best with the lowest fitness
        global_best_position = min(personal_best_positions, key=fitness_function)

        # Update velocity and position of each particle
        for i in range(num_particles):
            # Update velocity
            new_velocity = update_velocity(velocities[i], particles[i], personal_best_positions[i], global_best_position)
            velocities[i] = round(new_velocity, 3)
            # Update particle position
            particles[i] = round(particles[i] + velocities[i], 3)

        # Print the swarm state for every other iteration
        if j % 2 != 0:
            print("Global best position at iteration ", j + 1, " : ", global_best_position)
            print("Personal best positions at iteration ", j + 1, " : ", personal_best_positions)
            print("Updated velocities at iteration ", j + 1, " : ", velocities)
            print("Updated positions at iteration ", j + 1, " : ", particles)
    return global_best_position

print("Initialization of swarm:\nPositions: ", particles, "\nVelocities: ", velocities)
result = pso(num_particles, max_iterations)
print("Minimum value found at x = ", result)
print("Minimum value of f(x) = ", round(fitness_function(result), 3))
```
Let's go through the code line by line.
Lines 4-14: We initialize the parameters used in the algorithm and the particles for the swarm.
Lines 17-18: We define the fitness function, which is $f(x) = x^2$.
Lines 21-22: We define the function used for updating the velocities of the particles.
Lines 24-48: This is the main PSO loop. It contains a nested loop that runs for max_iterations iterations over num_particles particles. In our example, it runs 10 times for 5 particles.
Lines 29-32: We update personal_best_positions[i] for each particle if the current particle's fitness is lower than that of its current personal best. Next, the personal best position with the lowest fitness is assigned as the global_best_position.
Lines 35-40: We update the velocity and position of each particle using the predefined functions.
Lines 44-47: We print the updated velocities, particles, global_best_position, and personal_best_positions for every other iteration. Lastly, we return the global_best_position.
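One design note: the code above samples r1 and r2 once, during initialization, so every velocity update reuses the same random weights. Canonical PSO formulations resample them for every update. A minimal variant of update_velocity that does this (a sketch, not part of the original code) could look like:

```python
import random

def update_velocity(v1, x1, pbest, gbest, w=0.5, c1=1.5, c2=1.5):
    # Resample the random coefficients on every update,
    # as in canonical PSO formulations.
    r1 = random.uniform(0, 1)  # fresh weight for the pull toward pbest
    r2 = random.uniform(0, 1)  # fresh weight for the pull toward gbest
    return w * v1 + c1 * r1 * (pbest - x1) + c2 * r2 * (gbest - x1)
```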
Particle swarm optimization is a simple, effective approach applicable to various domains, including function optimization, data clustering, optimal resource allocation, vehicle routing problems, and more.
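As a small illustration of that versatility, the same swarm loop can minimize a different one-dimensional objective just by swapping out the fitness function. The shifted quadratic below is a hypothetical example (not from the walkthrough above); with it, the swarm should converge near $x = 3$:

```python
# Hypothetical alternative objective: minimum value 2 at x = 3.
# Swapping this in for the original fitness_function reuses the
# rest of the PSO code unchanged.
def fitness_function(x):
    return (x - 3) ** 2 + 2
```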