Welcome to the NEORL Website!

Latest News:

  • November 24, 2021: Stable release 1.7 is out.

  • September 10, 2021: First NEORL stable release 1.6 is out.

Primary contact for reporting bugs/issues: Majdi I. Radaideh (radaideh@mit.edu)


NEORL (NeuroEvolution Optimization with Reinforcement Learning) is a set of implementations of hybrid algorithms that combine neural networks and evolutionary computation, built on a wide range of machine learning and evolutionary intelligence architectures. NEORL aims to solve large-scale optimization problems relevant to operations and optimization research, engineering, business, and other disciplines.
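The kind of evolutionary search that frameworks like NEORL build on can be illustrated with a minimal, dependency-free sketch. Note this is NOT NEORL's API, just a hypothetical toy (1+1) evolution strategy minimizing the classic sphere test function:

```python
import random

def sphere(x):
    # Classic test objective: global minimum of 0 at the origin.
    return sum(v * v for v in x)

def one_plus_one_es(fit, dim=5, sigma=0.5, ngen=3000, seed=1):
    """Toy (1+1) evolution strategy: mutate the single parent with
    Gaussian noise and keep the child only if it improves the fitness.
    The step size sigma decays slowly to refine the search over time."""
    rng = random.Random(seed)
    parent = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    best = fit(parent)
    for _ in range(ngen):
        child = [v + rng.gauss(0.0, sigma) for v in parent]
        score = fit(child)
        if score < best:  # greedy (1+1) selection
            parent, best = child, score
        sigma *= 0.999    # gradual step-size decay
    return parent, best

x_best, y_best = one_plus_one_es(sphere)
print(x_best, y_best)  # best candidate found and its objective value
```

Real hybrid neuroevolution algorithms layer neural-network guidance and reinforcement learning on top of this kind of population-based loop; the sketch only shows the underlying mutate-evaluate-select cycle.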

Github repository: https://github.com/mradaideh/neorl

Projects/Papers Using NEORL

1- Radaideh, M. I., Wolverton, I., Joseph, J., Tusar, J. J., Otgonbaatar, U., Roy, N., Forget, B., Shirvan, K. (2021). Physics-informed reinforcement learning optimization of nuclear assembly design. Nuclear Engineering and Design, 372, p. 110966 [LINK1].

2- Radaideh, M. I., Shirvan, K. (2021). Rule-based reinforcement learning methodology to inform evolutionary algorithms for constrained optimization of engineering applications. Knowledge-Based Systems, 217, p. 106836 [LINK2].

3- Radaideh, M. I., Forget, B., Shirvan, K. (2021). Large-scale design optimisation of boiling water reactor bundles with neuroevolution. Annals of Nuclear Energy, 160, 108355 [LINK3].

Citing the Project

To cite this repository in publications:

  @article{radaideh2021neorl,
    title={NEORL: NeuroEvolution Optimization with Reinforcement Learning},
    author={Radaideh, Majdi I and Du, Katelin and Seurin, Paul and Seyler, Devin and Gu, Xubo and Wang, Haijia and Shirvan, Koroush},
    journal={arXiv preprint arXiv:2112.07057},
    year={2021}
  }


NEORL was established at MIT in 2020 with the feedback, validation, and usage of several colleagues: Issac Wolverton (MIT Quest for Intelligence), Joshua Joseph (MIT Quest for Intelligence), Benoit Forget (MIT Nuclear Science and Engineering), and Ugi Otgonbaatar (Exelon Corporation).