Heterogeneous comprehensive learning particle swarm optimization (HCLPSO)

A module for parallel heterogeneous comprehensive learning particle swarm optimization with support for both constriction and inertia-weight velocity updates. HCLPSO maintains two subpopulations: one focused on exploration and one focused on exploitation.
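
Conceptually, the two groups use different velocity updates. The following is a simplified sketch of those rules as described in the paper; the function names and signatures are illustrative only and are not part of the NEORL API:

import numpy as np

rng=np.random.default_rng(1)

def explore_velocity(v, x, exemplar_pbest, w, c):
    #exploration group (comprehensive learning): each dimension follows
    #a personal best selected as its exemplar; no global-best term,
    #which helps preserve swarm diversity
    r=rng.random(x.shape)
    return w*v + c*r*(exemplar_pbest - x)

def exploit_velocity(v, x, exemplar_pbest, gbest, w, c1, c2):
    #exploitation group: also learns from personal-best exemplars but
    #adds a global-best term, which accelerates convergence
    r1, r2=rng.random(x.shape), rng.random(x.shape)
    return w*v + c1*r1*(exemplar_pbest - x) + c2*r2*(gbest - x)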

Original paper: Lynn, N., Suganthan, P. N. (2015). Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm and Evolutionary Computation, 24, 11-24.


What can you use?

  • Multi processing: ✔️

  • Discrete spaces: ✔️

  • Continuous spaces: ✔️

  • Mixed Discrete/Continuous spaces: ✔️

Parameters

class neorl.evolu.hclpso.HCLPSO(mode, bounds, fit, g1=15, g2=25, int_transform='nearest_int', ncores=1, seed=None)[source]

Heterogeneous comprehensive learning particle swarm optimization (HCLPSO)

Parameters
  • mode – (str) problem type, either min for a minimization problem or max for a maximization problem

  • bounds – (dict) input parameter type and lower/upper bounds in dictionary form. Example: bounds={'x1': ['int', 1, 4], 'x2': ['float', 0.1, 0.8], 'x3': ['float', 2.2, 6.2]}

  • fit – (function) the fitness function

  • g1 – (int) number of particles in the exploration group

  • g2 – (int) number of particles in the exploitation group (total swarm size is g1 + g2)

  • int_transform – (str) method of handling int/discrete variables, choose from: nearest_int, sigmoid, minmax (a mixed-space sketch follows this parameter list)

  • ncores – (int) number of parallel processors (must be <= g1+g2)

  • seed – (int) random seed for sampling
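
For instance, a mixed discrete/continuous setup could look like the sketch below. This is a hypothetical illustration reusing the bounds dictionary from the bounds entry above; sigmoid is picked only to demonstrate int_transform:

from neorl import HCLPSO

def fit(individual):
    #toy fitness: sum of squares over the mixed variables
    return sum(x**2 for x in individual)

#x1 is discrete, x2 and x3 are continuous
bounds={'x1': ['int', 1, 4], 'x2': ['float', 0.1, 0.8], 'x3': ['float', 2.2, 6.2]}
hclpso=HCLPSO(mode='min', bounds=bounds, fit=fit, g1=15, g2=25,
              int_transform='sigmoid', ncores=1, seed=1)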

evolute(ngen, x0=None, verbose=False)[source]

This function evolutes the HCLPSO algorithm for a number of generations.

Parameters
  • ngen – (int) number of generations to evolute

  • x0 – (list of lists) initial positions of the particles (the outer list length must equal g1 + g2); see the warm-start sketch after the Example

  • verbose – (bool) print statistics to screen

Returns

(tuple) (best individual, best fitness, and dictionary containing major search results)

Example

from neorl import HCLPSO

#Define the fitness function
def FIT(individual):
    """Sphere test objective function.
    F(x) = sum_{i=1}^d xi^2
    d = 1,2,3,...
    Range: [-100, 100]
    Minima: 0
    """
    y=sum(x**2 for x in individual)
    return y

#Setup the parameter space (d=5)
nx=5
BOUNDS={}
for i in range(1,nx+1):
    BOUNDS['x'+str(i)]=['float', -100, 100]

#Setup and evolute HCLPSO
hclpso=HCLPSO(mode='min', bounds=BOUNDS, g1=15, g2=25, fit=FIT, ncores=1, seed=1)
x_best, y_best, hclpso_hist=hclpso.evolute(ngen=120, verbose=True)
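
If a good starting population is available, it can be supplied through x0. A minimal warm-start sketch, reusing nx, BOUNDS, and hclpso from the example above (the uniform sampling is purely illustrative):

import random
random.seed(1)
#x0 must contain exactly g1 + g2 = 40 particles, each of dimension nx
x0=[[random.uniform(-100, 100) for _ in range(nx)] for _ in range(40)]
x_best, y_best, hclpso_hist=hclpso.evolute(ngen=120, x0=x0, verbose=True)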

Notes

  • The sizes of the exploration subgroup (g1) and the exploitation subgroup (g2) are the only hyperparameters the user needs to set for HCLPSO. In the original algorithm, g1 tends to be smaller than g2.

  • HCLPSO provides time-dependent (annealing) behavior for all major PSO hyperparameters over the number of search generations (ngen). The cognitive speed constant (c1) is linearly annealed from 2.5 to 0.5, the social speed constant (c2) from 0.5 to 2.5, the inertia weight (w) from 0.99 to 0.2, and the constriction coefficient (K) from 3 to 1.5. Therefore, the HCLPSO user does not need to tune these values; a sketch of these schedules follows this list.

  • Look for an optimal balance between g1, g2, and ngen; it is recommended to keep the swarm size small to allow for more generations.

  • The total number of fitness evaluations for HCLPSO is (g1 + g2) * (ngen + 1); a worked example follows below.
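
As a rough illustration of the linear schedules described in the annealing note above (the exact interpolation NEORL uses internally may differ; this is only a sketch):

import numpy as np

ngen=120
t=np.linspace(0, 1, ngen)   #normalized generation index in [0, 1]

c1=2.5 - 2.0*t     #cognitive constant: 2.5 -> 0.5
c2=0.5 + 2.0*t     #social constant: 0.5 -> 2.5
w=0.99 - 0.79*t    #inertia weight: 0.99 -> 0.2
K=3.0 - 1.5*t      #constriction coefficient: 3.0 -> 1.5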
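
For the example above (g1=15, g2=25, ngen=120), the evaluation budget works out to:

g1, g2, ngen=15, 25, 120
total_evals=(g1 + g2)*(ngen + 1)   #40 * 121 = 4840 fitness evaluations
print(total_evals)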