What is a local search heuristic algorithm?

Local search is a heuristic method (heuristic methods are approximation algorithms with a focus on average empirical behavior) that is widely used to find approximate solutions to a variety of hard optimization problems. The quality of a solution is evaluated based on a cost function or objective function, and the goal is to find a solution that minimizes or maximizes this function.

The basic idea behind a local search algorithm is to start from an arbitrary initial solution and then iteratively apply local changes to get a better solution, moving from one solution to another until a stopping criterion is met. Typically, this criterion is when there are no further improvements or a predefined number of iterations is achieved. The solution obtained in the last iteration is an approximation of the optimal solution to the problem.

At each iteration, local search explores the neighboring solutions obtained by making small changes to the current solution and searches for a favorable transition to make. There are various methods for choosing a favorable solution as the next one from the list of neighbor solutions. Here are some well-known examples of such methods: hill climbing (a simple local search algorithm that starts with a random solution and iteratively moves to better neighboring solutions), simulated annealing (a stochastic local search algorithm inspired by the process of annealing in metallurgy; it uses a temperature parameter to control the probability of accepting worse solutions), and tabu search (a local search algorithm that uses a tabu list to keep track of recently visited solutions and restricts the search to unexplored solutions).
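To make the difference between these strategies concrete, here is a small illustrative sketch (not taken from any specific library) of how hill climbing and simulated annealing decide whether to accept a neighbor when minimizing a cost function. The temperature parameter and the random number generator are assumed inputs, and tabu search is omitted because it additionally maintains a memory of recently visited solutions.

#include <cmath>
#include <random>

// Illustrative sketch: acceptance rules for a minimization problem.
// Hill climbing only accepts strict improvements.
bool HillClimbingAccepts(double currentCost, double neighborCost)
{
    return neighborCost < currentCost;
}

// Simulated annealing also accepts worse neighbors with probability
// exp(-(neighborCost - currentCost) / temperature), so it can escape
// local optima while the temperature is still high.
bool SimulatedAnnealingAccepts(double currentCost, double neighborCost,
                               double temperature, std::mt19937& rng)
{
    if (neighborCost < currentCost)
    {
        return true;
    }
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    return uniform(rng) < std::exp((currentCost - neighborCost) / temperature);
}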

In what follows, we will focus on the hill climbing local search algorithm.

Hill climbing local search

The hill climbing local search algorithm generates a sequence of improving solutions by making small modifications to the current solution. The algorithm evaluates each solution using an objective (cost) function and selects a neighboring solution that improves it. The steepest-ascent variant of hill climbing iterates over all neighbors and accepts the solution that gives the best improvement of the cost function. The basic idea is to always move in the direction of the steepest ascent of the objective function, hence the name "hill climbing."

To apply this technique, the following steps should be taken:

  1. Define the solution space of the problem, describing all possible solutions.

  2. Define the objective/cost function to be optimized (minimized or maximized) over all possible solutions.

  3. Define the neighborhood of the solution, which are transitions from one solution to other solutions that are close in terms of some metric.

  4. Define the stop criteria for iterations.

The basic steps are as follows:

  1. Start with an initial solution to the optimization problem.

  2. Generate a set of neighboring solutions by making small changes to the current solution.

  3. Evaluate the quality of each neighboring solution using an objective function.

  4. Select the best neighboring solution as the current solution.

  5. Repeat the last three steps until a stopping criterion is met.
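
These steps can be written down as a minimal, generic steepest-ascent loop. The sketch below is only illustrative: the Solution type, the neighbor generator, and the cost function are placeholders that a concrete problem would have to supply (a complete TSP-specific implementation is given in the code implementation section below), and the loop minimizes the cost function.

#include <functional>
#include <vector>

// Generic sketch of steepest-ascent local search for a minimization problem.
// Solution, the neighbor generator, and the cost function are placeholders.
template <typename Solution>
Solution LocalSearch(Solution current,
                     const std::function<std::vector<Solution>(const Solution&)>& neighbors,
                     const std::function<double(const Solution&)>& cost,
                     int maxIterations)
{
    for (int iteration = 0; iteration < maxIterations; ++iteration)
    {
        bool improved = false;
        // Evaluate every neighbor and keep the best improving one
        for (const Solution& candidate : neighbors(current))
        {
            if (cost(candidate) < cost(current))
            {
                current = candidate;
                improved = true;
            }
        }
        // Stop when no neighbor improves the current solution
        if (!improved)
        {
            break;
        }
    }
    return current;
}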

Applications

Local search algorithms have a wide range of applications, including optimization problems in fields such as scheduling, planning, routing, machine learning, and more. We will focus on applying the hill climbing algorithm to the traveling salesman problem (TSP):

"Given a list of cities and the distances between each pair, what is the shortest possible route that visits each city exactly once and returns to the origin city?"

For the traveling salesman problem (TSP), we have:

  1. All possible solutions are routes that pass through all cities and return to the origin. If $n$ is the number of cities, the solution space can be described as the $(n-1)!$ circular permutations of the list of cities. Therefore, a solution $S = (S_1, \dots, S_n)$, where $S_i \in \{1, \dots, n\}$, is a circular permutation of $\{1, \dots, n\}$ that corresponds to one route.

  2. Let $S = (S_1, \dots, S_n)$ be a solution. The objective/cost function $f(S)$ is given by $f(S) = \sum_{i=1}^{n} d(S_i, S_{i+1})$, where $d(S_i, S_{i+1})$ is the distance between cities $S_i$ and $S_{i+1}$, and $S_{n+1} = S_1$.

  3. Neighbor solutions can be obtained by swapping any pair of vertices in the route. In terms of the Hamming distance, which measures the number of positions in which two solutions differ, neighbor solutions $S' = (S'_1, \dots, S'_n)$ and $S'' = (S''_1, \dots, S''_n)$ satisfy $\rho_{H}(S', S'') = |\{\, i : S'_i \neq S''_i \,\}| = 2$ (see the example after this list).

  4. A possible stopping criterion is reaching a state where there are no further improvements in the neighborhood of the current solution, or reaching a predefined maximum number of iterations.
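
For example, with $n = 4$ cities, swapping the second and third cities of the route $S' = (1, 2, 3, 4)$ gives the neighbor $S'' = (1, 3, 2, 4)$; the two routes differ in exactly two positions, so $\rho_{H}(S', S'') = 2$.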

TSP example: lowest cost circular route is 1-2-4-3-1 with cost=56
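
As a concrete check of the cost function, take the route in the example above, $S = (1, 2, 4, 3)$, and the distance matrix used in the code implementation below (assuming both refer to the same distances): $f(S) = d(1,2) + d(2,4) + d(4,3) + d(3,1) = 10 + 20 + 18 + 8 = 56$, which matches the stated lowest cost.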

Code implementation

Let's see the code below:

#include <iostream>
#include <vector>
#include <algorithm>
#include <random>
#include <cmath>

using namespace std;

class TSPSolution {
private:
    vector<size_t> cities_;             // Order in which cities are visited
    double cost_;                       // Total cost of the solution
    vector<vector<double>> distances_;  // Matrix of distances between cities

public:
    // Construct a random solution to the TSP problem
    TSPSolution(const vector<vector<double>>& distances) :
        cost_(0.0),
        distances_(distances)
    {
        cities_.resize(distances_.size());
        for(size_t i = 0; i < distances_.size(); ++i)
        {
            cities_[i] = i;
        }
        // Shuffle indices except the first one, as we assume that routes start from the city with index 0
        shuffle(cities_.begin() + 1, cities_.end(), mt19937(random_device{}()));
        CalculateCost();
    }

    // Get the cost of the solution
    double GetCost() const
    {
        return cost_;
    }

    // Get the order in which cities are visited
    const vector<size_t>& GetCities() const
    {
        return cities_;
    }

    // Calculate the cost of the solution given a matrix of distances between cities
    void CalculateCost()
    {
        cost_ = 0.0;
        for(size_t i = 0; i < cities_.size() - 1; ++i)
        {
            cost_ += distances_[cities_[i]][cities_[i + 1]];
        }
        cost_ += distances_[cities_[cities_.size() - 1]][cities_[0]];
    }

    // Swap two cities in the route and evaluate the cost
    void SwapCities(size_t i, size_t j)
    {
        size_t size = cities_.size();
        size_t prevI = (i + size - 1) % size;
        size_t nextI = (i + size + 1) % size;
        size_t prevJ = (j + size - 1) % size;
        size_t nextJ = (j + size + 1) % size;
        cost_ -= distances_[cities_[prevI]][cities_[i]];
        cost_ -= distances_[cities_[i]][cities_[nextI]];
        cost_ -= distances_[cities_[prevJ]][cities_[j]];
        cost_ -= distances_[cities_[j]][cities_[nextJ]];
        swap(cities_[i], cities_[j]);
        cost_ += distances_[cities_[prevI]][cities_[i]];
        cost_ += distances_[cities_[i]][cities_[nextI]];
        cost_ += distances_[cities_[prevJ]][cities_[j]];
        cost_ += distances_[cities_[j]][cities_[nextJ]];
    }
};

TSPSolution HillClimbingSearch(const vector<vector<double>>& distances)
{
    // Create a random initial solution
    TSPSolution solution(distances);
    bool stuck = false;
    int iterationCount = 0;
    const int maxIterations = 1000;
    while(!stuck && iterationCount <= maxIterations)
    {
        stuck = true;
        // Generate neighbor solutions by swapping pairs of cities
        vector<TSPSolution> neighbors;
        for(size_t i = 1; i < distances.size(); ++i)
        {
            for(size_t j = i + 1; j < distances.size(); ++j)
            {
                TSPSolution neighbor(solution);
                neighbor.SwapCities(i, j);
                neighbors.emplace_back(neighbor);
            }
        }
        // Find the best neighbor solution and update the current solution
        for(const TSPSolution& neighbor : neighbors)
        {
            if(neighbor.GetCost() < solution.GetCost())
            {
                solution = neighbor;
                stuck = false;
            }
        }
        ++iterationCount;
    }
    return solution;
}

int main()
{
    vector<vector<double>> distances =
    {
        {0, 10, 8, 25},
        {10, 0, 15, 20},
        {8, 15, 0, 18},
        {25, 20, 18, 0}
    };
    TSPSolution solution = HillClimbingSearch(distances);
    cout << "Shortest circular route: ";
    auto route = solution.GetCities();
    for(size_t city : route)
    {
        cout << city + 1 << " ";
    }
    cout << route[0] + 1 << endl;
    cout << "Total cost: " << solution.GetCost() << endl;
    return 0;
}

The TSPSolution class represents a sequence of cities in a route, which is described as a circular permutation of the indices $\{0, \dots, n-1\}$ starting from index $0$. Initially, a TSPSolution object is initialized with a random route chosen uniformly from the set of $(n-1)!$ possible routes. Neighboring solutions/routes are generated by swapping the $i$-th and $j$-th cities for all $i, j = 1, \dots, n-1$, $i \neq j$. The cost of each neighbor is evaluated in $O(1)$ time in the SwapCities() function, as only two positions are swapped and there is no need to recalculate the cost of the entire route. The algorithm moves from one solution to a neighbor solution until either there is no improvement in the current solution's neighborhood (stuck equals true) or the maximum number of iterations has been reached. The final solution, which is an approximation of the optimal solution, is returned by the algorithm.
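
Note that the instance in main() has only four cities, so the search space is tiny: with the start fixed at city 1, there are only $(4-1)! = 6$ possible routes. From any random starting route, hill climbing should reach the optimal cost of 56 and print a route such as 1 2 4 3 1 or its reverse, 1 3 4 2 1; the exact order depends on the random initial route.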

As hill climbing local search always moves from one solution to another that improves the value of the objective function, it may get stuck in a local optimum where no further improvement is possible. Therefore, this algorithm is mainly used in situations where there is only one hill in the solution space, and the local optimum is also the global optimum. To address the issue of getting stuck in local optima, other optimization methods such as tabu search and simulated annealing should be used.
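
A simpler mitigation, which keeps the hill climbing algorithm itself unchanged, is to restart it from several random initial solutions and keep the best result. The sketch below is a hypothetical helper (not part of the original example) that reuses the HillClimbingSearch() function defined above; the number of restarts is an assumed parameter.

// Hypothetical helper: run hill climbing from several random initial routes
// and keep the cheapest solution found across all restarts.
TSPSolution RestartedHillClimbing(const vector<vector<double>>& distances, int restarts)
{
    TSPSolution best = HillClimbingSearch(distances);
    for (int r = 1; r < restarts; ++r)
    {
        TSPSolution candidate = HillClimbingSearch(distances);
        if (candidate.GetCost() < best.GetCost())
        {
            best = candidate;
        }
    }
    return best;
}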
