skrl: Modular and Flexible Library for Reinforcement Learning

  1. Antonio Serrano-Muñoz (1)
  2. Dimitrios Chrysostomou (2)
  3. Simon Bøgh (2)
  4. Nestor Arana-Arexolaleiba (1, 2)

Affiliations:

  1. Universidad de Mondragón/Mondragon Unibertsitatea, Mondragón, Spain. ROR: https://ror.org/00wvqgd19
  2. Aalborg University, Aalborg, Denmark. ROR: https://ror.org/04m5j1k67

Journal:
Journal of Machine Learning Research

ISSN: 1532-4435

Year of publication: 2022

Volume: 23

Type: Article

DOI: 10.48550/ARXIV.2202.03825 (open access, publisher's version)

Abstract

skrl is an open-source modular library for reinforcement learning written in Python and designed with a focus on readability, simplicity, and transparency of algorithm implementations. In addition to supporting environments that use the traditional interfaces from OpenAI Gym / Farama Gymnasium, DeepMind, and others, it provides the facility to load, configure, and operate NVIDIA Isaac Gym, Isaac Orbit, and Omniverse Isaac Gym environments. Furthermore, it enables the simultaneous training of several agents with customizable scopes (subsets of environments among all available ones), which may or may not share resources, in the same run. The library's documentation can be found at https://skrl.readthedocs.io and its source code is available on GitHub at https://github.com/Toni-SM/skrl.
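To make the workflow described in the abstract concrete, the following is a minimal sketch (not an example from the paper itself) that trains a PPO agent on a Gymnasium environment using skrl's PyTorch API. The module paths (e.g. skrl.envs.wrappers.torch) follow recent versions of the library's documentation and may differ in older releases; the network sizes and hyperparameter values are illustrative placeholders.

```python
import gymnasium as gym
import torch
import torch.nn as nn

from skrl.agents.torch.ppo import PPO, PPO_DEFAULT_CONFIG
from skrl.envs.wrappers.torch import wrap_env
from skrl.memories.torch import RandomMemory
from skrl.models.torch import DeterministicMixin, GaussianMixin, Model
from skrl.trainers.torch import SequentialTrainer


class Policy(GaussianMixin, Model):
    """Stochastic (Gaussian) policy network."""
    def __init__(self, observation_space, action_space, device):
        Model.__init__(self, observation_space, action_space, device)
        GaussianMixin.__init__(self)
        self.net = nn.Sequential(nn.Linear(self.num_observations, 64), nn.Tanh(),
                                 nn.Linear(64, self.num_actions))
        self.log_std_parameter = nn.Parameter(torch.zeros(self.num_actions))

    def compute(self, inputs, role):
        # return mean actions, log standard deviation, and extra outputs
        return self.net(inputs["states"]), self.log_std_parameter, {}


class Value(DeterministicMixin, Model):
    """Deterministic state-value network."""
    def __init__(self, observation_space, action_space, device):
        Model.__init__(self, observation_space, action_space, device)
        DeterministicMixin.__init__(self)
        self.net = nn.Sequential(nn.Linear(self.num_observations, 64), nn.Tanh(),
                                 nn.Linear(64, 1))

    def compute(self, inputs, role):
        return self.net(inputs["states"]), {}


# wrap a standard Gymnasium environment in skrl's common environment interface
env = wrap_env(gym.make("Pendulum-v1"))
device = env.device

# rollout memory and the models required by PPO
memory = RandomMemory(memory_size=1024, num_envs=env.num_envs, device=device)
models = {"policy": Policy(env.observation_space, env.action_space, device),
          "value": Value(env.observation_space, env.action_space, device)}

# start from the default hyperparameters and override as needed
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 1024  # placeholder value

agent = PPO(models=models, memory=memory, cfg=cfg,
            observation_space=env.observation_space,
            action_space=env.action_space, device=device)

# the trainer orchestrates the environment-agent interaction loop
trainer = SequentialTrainer(cfg={"timesteps": 10_000}, env=env, agents=agent)
trainer.train()
```

For the simultaneous training of several agents mentioned in the abstract, the same trainer accepts a list of agents together with an agents_scope argument that partitions the available environments among them, as described in the library's documentation.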

Funding information

We would like to express our gratitude for the funding and support received from NVIDIA under a collaboration agreement with Mondragon Unibertsitatea.
