Actor-Critic with Experience Replay (ACER)
Sample Efficient Actor-Critic with Experience Replay (ACER) combines the parallel-agents concept of A2C with a replay memory as in DQN. ACER also includes truncated importance sampling with bias correction, stochastic dueling network architectures, and a new trust region policy optimization method.
Original paper: https://arxiv.org/abs/1611.01224
What can you use?
Multi processing: ✔️
Discrete spaces: ✔️
Continuous spaces: ❌
Mixed Discrete/Continuous spaces: ❌
Parameters
class neorl.rl.baselines.acer.ACER(policy, env, gamma=0.99, n_steps=20, q_coef=0.5, ent_coef=0.01, max_grad_norm=10, learning_rate=0.0007, lr_schedule='linear', buffer_size=5000, replay_ratio=4, replay_start=1000, verbose=0, seed=None, _init_setup_model=True)
The ACER (Actor-Critic with Experience Replay) model class
- Parameters
policy – (ActorCriticPolicy or str) The policy model to use (MlpPolicy, CnnPolicy, CnnLstmPolicy, …)
env – (NEORL environment or Gym environment) The environment to learn with ACER, either use NEORL method CreateEnvironment (see below) or construct your custom Gym environment
gamma – (float) The discount value
n_steps – (int) The number of steps to run for each environment per update (i.e. batch size is n_steps * n_env where n_env is number of environment copies running in parallel)
q_coef – (float) The weight for the loss on the Q value
ent_coef – (float) The weight for the entropy loss
max_grad_norm – (float) The clipping value for the maximum gradient
learning_rate – (float) The initial learning rate for the RMS prop optimizer
lr_schedule – (str) The type of scheduler for the learning rate update (‘linear’, ‘constant’, ‘double_linear_con’, ‘middle_drop’ or ‘double_middle_drop’)
buffer_size – (int) The buffer size in number of steps
replay_ratio – (float) The number of replay learning per on policy learning on average, using a poisson distribution
replay_start – (int) The minimum number of steps in the buffer, before experience replay starts
verbose – (int) the verbosity level: 0 none, 1 training information, 2 tensorflow debug
seed – (int) Seed for the pseudo-random generators (python, numpy, tensorflow). If None (default), use random seed.
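The replay_ratio parameter sets the average number of off-policy (replay) updates performed after each on-policy update, with the actual count drawn from a Poisson distribution. A stdlib-only sketch of that sampling logic, for intuition (illustrative only; poisson_draw is not a NEORL function and this is not NEORL's internal code):

```python
import math
import random

def poisson_draw(lam, rng):
    """Sample from a Poisson distribution with mean lam (Knuth's method).
    Illustrates how replay_ratio=4 yields, on average, 4 replay
    updates per on-policy update."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(1)
replay_ratio = 4
# number of replay (off-policy) updates after each of 1000 on-policy updates
counts = [poisson_draw(replay_ratio, rng) for _ in range(1000)]
print(sum(counts) / len(counts))  # close to 4 on average
```

With replay_start, no replay updates occur until the buffer holds at least that many steps, regardless of the Poisson draw.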
learn(total_timesteps, callback=None, log_interval=100, tb_log_name='ACER', reset_num_timesteps=True)
Return a trained model.
- Parameters
total_timesteps – (int) The total number of samples to train on
callback – (Union[callable, [callable], BaseCallback]) function called at every step with the state of the algorithm. It takes the local and global variables. If it returns False, training is aborted. When the callback inherits from BaseCallback, you will have access to additional stages of the training (training start/end), please read the documentation for more details.
log_interval – (int) The number of timesteps before logging.
tb_log_name – (str) the name of the run for tensorboard log
reset_num_timesteps – (bool) whether or not to reset the current timestep number (used in logging)
- Returns
(BaseRLModel) the trained model
classmethod load(load_path, env=None, custom_objects=None, **kwargs)
Load the model from file
- Parameters
load_path – (str or file-like) the saved parameter location
env – (Gym Environment) the new environment to run the loaded model on (can be None if you only need prediction from a trained model)
custom_objects – (dict) Dictionary of objects to replace upon loading. If a variable is present in this dictionary as a key, it will not be deserialized and the corresponding item will be used instead. Similar to custom_objects in keras.models.load_model. Useful when you have an object in the file that cannot be deserialized.
kwargs – extra arguments to change the model when loading
predict(observation, state=None, mask=None, deterministic=False)
Get the model's action from an observation
- Parameters
observation – (np.ndarray) the input observation
state – (np.ndarray) The last states (can be None, used in recurrent policies)
mask – (np.ndarray) The last masks (can be None, used in recurrent policies)
deterministic – (bool) Whether or not to return deterministic actions.
- Returns
(np.ndarray, np.ndarray) the model’s action and the next state (used in recurrent policies)
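A short sketch of the load/predict workflow, assuming a model was previously trained and saved (the filename 'acer_sphere.pkl' and the 5-D observation shape are hypothetical, chosen to match the example below):

```python
import numpy as np
from neorl import ACER

# load a previously saved model (hypothetical path)
model = ACER.load('acer_sphere.pkl')

# the observation shape must match the environment the model
# was trained on (assumed 5-D here)
obs = np.zeros(5)
action, _state = model.predict(obs, deterministic=True)
# _state is only meaningful for recurrent policies; None otherwise
```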
class neorl.rl.make_env.CreateEnvironment(method, fit, bounds, ncores=1, mode='max', episode_length=50)
A class to construct a fitness environment for algorithms that follow the reinforcement learning approach to optimization
- Parameters
method – (str) the supported algorithms, choose either: dqn, ppo, acktr, acer, a2c.
fit – (function) the fitness function
bounds – (dict) input parameter type and lower/upper bounds in dictionary form. Example:
bounds={'x1': ['int', 1, 4], 'x2': ['float', 0.1, 0.8], 'x3': ['float', 2.2, 6.2]}
ncores – (int) number of parallel processors
mode – (str) problem type, either min for minimization problem or max for maximization (RL default is max)
episode_length – (int) number of individuals to evaluate before resetting the environment to a random initial guess.
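The bounds dictionary maps each variable name to a [type, lower, upper] triple. A stdlib-only sketch of drawing one candidate that respects such a spec, to make the encoding concrete (sample_individual is illustrative, not a NEORL function or NEORL internals):

```python
import random

# mixed-type bounds in the NEORL dictionary form: [type, lower, upper]
bounds = {'x1': ['int', 1, 4],
          'x2': ['float', 0.1, 0.8],
          'x3': ['float', 2.2, 6.2]}

def sample_individual(bounds, rng):
    """Draw one candidate respecting each variable's type and range."""
    individual = {}
    for name, (vtype, low, high) in bounds.items():
        if vtype == 'int':
            individual[name] = rng.randint(low, high)   # inclusive int draw
        else:
            individual[name] = rng.uniform(low, high)   # continuous draw
    return individual

x = sample_individual(bounds, random.Random(42))
```

Note that for ACER specifically, only discrete ('int') variables are supported; the mixed example above applies to CreateEnvironment in general.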
class neorl.utils.neorlcalls.RLLogger(check_freq=1, plot_freq=None, n_avg_steps=10, pngname='history', save_model=False, model_name='bestmodel.pkl', save_best_only=True, verbose=False)
Callback for logging RL algorithm data (x, y), compatible with: A2C, ACER, ACKTR, DQN, PPO
- Parameters
check_freq – (int) logging frequency, e.g. 1 will record every time step
plot_freq – (int) frequency of plotting the fitness progress (if None, plotter is deactivated)
n_avg_steps – (int) if plot_freq is NOT None, then this is the number of timesteps to group to draw statistics for the plotter (e.g. 10 will group every 10 time steps to estimate min, max, mean, and std).
pngname – (str) name of the plot that will be saved if plot_freq is NOT None.
save_model – (bool) whether or not to save the RL neural network model (model is saved every check_freq)
model_name – (str) name of the model to be saved if save_model=True
save_best_only – (bool) if save_model=True, then this flag only saves the model if the fitness value improves.
verbose – (bool) print updates to the screen
Example
Train an ACER agent to optimize the 5-D discrete sphere function
from neorl import ACER
from neorl import MlpPolicy
from neorl import RLLogger
from neorl import CreateEnvironment

def Sphere(individual):
    """Sphere test objective function.
       F(x) = sum_{i=1}^d xi^2
       d=1,2,3,...
       Range: [-100,100]
       Minima: 0
    """
    return sum(x**2 for x in individual)

nx=5
bounds={}
for i in range(1,nx+1):
    bounds['x'+str(i)]=['int', -100, 100]

if __name__=='__main__':  #use this "if" block for parallel ACER!
    #create an environment class
    env=CreateEnvironment(method='acer', fit=Sphere,
                          bounds=bounds, mode='min', episode_length=50)
    #create a callback function to log data
    cb=RLLogger(check_freq=1)
    #create an ACER object based on the env object
    acer = ACER(MlpPolicy, env=env, n_steps=25, q_coef=0.55, ent_coef=0.02, seed=1)
    #optimise the environment class
    acer.learn(total_timesteps=2000, callback=cb)
    #print the best results
    print('--------------- ACER results ---------------')
    print('The best value of x found:', cb.xbest)
    print('The best value of y found:', cb.rbest)
Notes
ACER can be seen as a parallel version of DQN with additional enhancements. Like DQN, ACER is restricted to discrete spaces.
ACER shows sensitivity to n_steps, q_coef, and ent_coef. It is always good to consider tuning these hyperparameters before using them for optimization. In particular, n_steps is considered the most important parameter to tune.
The cost of ACER equals the total_timesteps in the learn function, where the original fitness function will be accessed total_timesteps times.
See how ACER is used to solve two common combinatorial problems in TSP and KP.
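Because the fitness function is evaluated once per timestep, a simple counting wrapper can be used to track the evaluation budget. A minimal sketch (make_counted is a hypothetical helper, not part of NEORL; the loop stands in for what learn(total_timesteps=2000) would drive through the environment):

```python
def make_counted(fit):
    """Wrap a fitness function to count how many times it is evaluated."""
    calls = {'n': 0}
    def counted(individual):
        calls['n'] += 1
        return fit(individual)
    return counted, calls

def Sphere(individual):
    return sum(x**2 for x in individual)

counted_sphere, calls = make_counted(Sphere)

# If counted_sphere were passed as fit= to CreateEnvironment and trained
# with learn(total_timesteps=2000), calls['n'] would end at 2000.
# Simulated here with direct evaluations:
for _ in range(2000):
    counted_sphere([1, 2, 3, 4, 5])
print(calls['n'])  # 2000
```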
Acknowledgment
Thanks to our fellows in stable-baselines, as we used their standalone RL implementations as a baseline upon which advanced neuroevolution algorithms are built.
Hill, Ashley, et al. “Stable baselines.” (2018).