Ensemble Particle Swarm Optimization (EPSO)

A powerful hybrid ensemble of five particle swarm optimization variants: classical inertia-weight particle swarm optimization (PSO), self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients (HPSO-TVAC), fitness-distance-ratio-based PSO (FDR-PSO), distance-based locally informed PSO (LIPS), and comprehensive learning PSO (CLPSO).

Original paper: Lynn, N., & Suganthan, P. N. (2017). Ensemble particle swarm optimizer. Applied Soft Computing, 55, 533-548.

What can you use?

  • Multiprocessing: ✔️

  • Discrete spaces: ✔️

  • Continuous spaces: ✔️

  • Mixed Discrete/Continuous spaces: ✔️

Parameters

class neorl.hybrid.epso.EPSO(mode, bounds, fit, g1=15, g2=25, int_transform='nearest_int', ncores=1, seed=None)[source]

Ensemble Particle Swarm Optimization (EPSO)

Parameters
  • mode – (str) problem type, either min for a minimization problem or max for a maximization problem

  • bounds – (dict) input parameter type and lower/upper bounds in dictionary form. Example: bounds={'x1': ['int', 1, 4], 'x2': ['float', 0.1, 0.8], 'x3': ['float', 2.2, 6.2]} (see the mixed-space sketch after this parameter list)

  • fit – (function) the fitness function

  • g1 – (int) number of particles in the exploration group

  • g2 – (int) number of particles in the exploitation group (total swarm size is g1 + g2)

  • int_transform – (str) method of handling int/discrete variables, choose from: nearest_int, sigmoid, minmax

  • ncores – (int) number of parallel processors (must be <= g1 + g2)

  • seed – (int) random seed for sampling
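As a quick, hedged illustration of the bounds format above, the sketch below builds a small mixed discrete/continuous space and constructs the optimizer. The variable names, objective, and seed are arbitrary choices for illustration; only the documented arguments are used.

from neorl import EPSO

# Illustrative mixed discrete/continuous space (variable names are arbitrary)
def FIT(individual):
    return sum(x**2 for x in individual)  # simple sphere-style objective

BOUNDS = {'x1': ['int', 1, 4],        # discrete variable, handled via int_transform
          'x2': ['float', 0.1, 0.8],  # continuous variable
          'x3': ['float', 2.2, 6.2]}  # continuous variable

epso = EPSO(mode='min', bounds=BOUNDS, fit=FIT, g1=15, g2=25,
            int_transform='nearest_int', ncores=1, seed=1)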

evolute(ngen, LP=3, x0=None, verbose=False)[source]

This function evolutes the EPSO algorithm for a number of generations.

Parameters
  • ngen – (int) number of generations to evolute

  • LP – (int) number of generations before updating the success and failure memories for the ensemble variants (i.e. learning period)

  • x0 – (list of lists) initial positions of the particles (the number of particles must equal g1 + g2; see the warm-start sketch after the Returns entry)

  • verbose – (bool) print statistics to screen

Returns

(tuple) (best individual, best fitness, and dictionary containing major search results)
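A minimal sketch of warm-starting the search with x0, assuming an epso object built as in the example below: x0 must hold g1 + g2 particle positions, one list per particle with one value per variable in bounds. The random initialization here is purely illustrative.

import random

random.seed(1)
g1, g2, d = 15, 25, 5   # swarm sizes and number of variables (d matches bounds)
x0 = [[random.uniform(-100, 100) for _ in range(d)] for _ in range(g1 + g2)]
#x_best, y_best, epso_hist = epso.evolute(ngen=100, LP=3, x0=x0, verbose=True)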

Example

from neorl import EPSO

#Define the fitness function
def FIT(individual):
    """Sphere test objective function.
    F(x) = sum_{i=1}^d xi^2
    d=1,2,3,...
    Range: [-100,100]
    Minima: 0
    """
    y=sum(x**2 for x in individual)
    return y

#Setup the parameter space (d=5)
nx=5
BOUNDS={}
for i in range(1,nx+1):
    BOUNDS['x'+str(i)]=['float', -100, 100]

#setup and evolute EPSO
epso=EPSO(mode='min', bounds=BOUNDS, g1=15, g2=25, fit=FIT, ncores=1, seed=None)
x_best, y_best, epso_hist=epso.evolute(ngen=100, LP=3, verbose=True)

Notes

  • Both the number of particles in the exploration subgroup (g1) and in the exploitation subgroup (g2) must be specified for EPSO. In the original algorithm, g1 tends to be smaller than g2.

  • For EPSO, in the first 90% of the generations, both the exploration and exploitation subgroups are involved, where g1 is controlled by CLPSO and g2 is controlled by all five variants. In the last 10% of the generations, the search focuses on exploitation only, where all g1 + g2 particles are controlled by the five variants (see the schedule sketch after these notes).

  • The value of LP is the learning period: the number of generations after which the success and failure memories are updated to compute a success rate for each PSO variant. The success rate is the probability that a given variant is selected to update the position and velocity of the next particle in the group. LP=3 means the memories are updated every 3 generations (see the success-rate sketch after these notes).

  • Look for an optimal balance between g1, g2, and ngen; it is recommended to keep the swarm size (g1 + g2) small to allow for more generations within a fixed evaluation budget.

  • Total number of cost evaluations for EPSO is (g1 + g2) * (ngen + 1).
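The two-phase schedule described in the second note can be pictured with this conceptual sketch; the 0.9 cutoff follows the note above, but the code is an illustration, not the library's internals.

# Conceptual two-phase schedule (illustration only, not neorl internals)
ngen = 100
switch = int(0.9 * ngen)   # generation at which exploration is switched off

for gen in range(ngen):
    if gen < switch:
        phase = 'g1 via CLPSO, g2 via all five variants'       # explore + exploit
    else:
        phase = 'all g1 + g2 particles via the five variants'  # exploit only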
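The learning-period bookkeeping from the third note can be sketched as follows; the memory arrays, the eps smoothing, and the function name are illustrative assumptions, not neorl's actual implementation.

import numpy as np

LP = 3          # learning period (generations between memory updates)
n_variants = 5  # PSO, HPSO-TVAC, FDR-PSO, LIPS, CLPSO

# Hypothetical success/failure memories over the last LP generations
success = np.zeros((LP, n_variants))
failure = np.zeros((LP, n_variants))

def success_rates(success, failure, eps=0.01):
    """Turn the memories into selection probabilities for the variants."""
    s, f = success.sum(axis=0), failure.sum(axis=0)
    rate = s / (s + f + 1e-12) + eps   # eps keeps every variant selectable
    return rate / rate.sum()

probs = success_rates(success, failure)
variant = np.random.choice(n_variants, p=probs)  # variant for the next particle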
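Plugging the example's settings into the evaluation-count formula from the final note:

# Evaluation budget: (g1 + g2) * (ngen + 1)
g1, g2, ngen = 15, 25, 100
total_evals = (g1 + g2) * (ngen + 1)   # 40 * 101 = 4040 fitness evaluations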