
apex


Apex is a small, modular library containing implementations of continuous-control reinforcement learning algorithms. It is fully compatible with OpenAI Gym.

Running experiments

Basics

Any algorithm can be run from the apex.py entry point.

To run DDPG on Walker2d-v2,

python apex.py ddpg --env_name Walker2d-v2 --batch_size 64
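
For illustration only, an entry point like apex.py typically parses the algorithm name as a subcommand and dispatches to the corresponding training routine. The sketch below is a hypothetical reconstruction; the function names (run_ddpg, run_ars) and default flag values are assumptions, not apex's actual API.

# Hypothetical sketch of an apex.py-style entry point; the dispatch
# functions and flags below are illustrative, not the library's own.
import argparse

def run_ddpg(args):
    print(f"Training DDPG on {args.env_name} (batch_size={args.batch_size})")

def run_ars(args):
    print(f"Training ARS, logging to {args.logdir} (seed={args.seed})")

def main():
    parser = argparse.ArgumentParser(description="apex entry point (sketch)")
    subparsers = parser.add_subparsers(dest="algo", required=True)

    ddpg = subparsers.add_parser("ddpg")
    ddpg.add_argument("--env_name", default="Walker2d-v2")
    ddpg.add_argument("--batch_size", type=int, default=64)
    ddpg.set_defaults(func=run_ddpg)

    ars = subparsers.add_parser("ars")
    ars.add_argument("--logdir", default="logs/ars")
    ars.add_argument("--seed", type=int, default=1337)
    ars.set_defaults(func=run_ars)

    args = parser.parse_args()
    args.func(args)

if __name__ == "__main__":
    main()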

Logging details / Monitoring live training progress

TensorBoard logging is enabled by default for all algorithms. The logger expects you to supply an argument named logdir, specifying the root directory you want to store your log files in, and an argument named seed, which is used to seed the pseudorandom number generators.

A basic command illustrating this is:

python apex.py ars --logdir logs/ars --seed 1337

The resulting directory tree would look something like this:

logs/
├── ars
│   └── experiments
│       └── [New Experiment Logdir]
├── ppo
├── synctd3
└── ddpg

Using TensorBoard makes it easy to compare experiments and resume training later on.
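
As a rough sketch of what this layout implies (only the logdir and seed arguments are documented above; the experiment-directory naming below is an assumption), a per-experiment TensorBoard writer could be set up like this:

# Hypothetical sketch: build a per-experiment log directory under
# <logdir>/experiments/ and attach a TensorBoard SummaryWriter to it.
# The seed/timestamp naming is illustrative, not apex's actual scheme.
import os
import time
from torch.utils.tensorboard import SummaryWriter

def make_writer(logdir="logs/ars", seed=1337):
    experiment = f"seed{seed}-{time.strftime('%Y%m%d-%H%M%S')}"
    path = os.path.join(logdir, "experiments", experiment)
    os.makedirs(path, exist_ok=True)
    return SummaryWriter(log_dir=path)

writer = make_writer()
writer.add_scalar("reward/episode", 0.0, global_step=0)  # example scalar
writer.close()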

To see live training progress

Run $ tensorboard --logdir logs/ --port=8097, then navigate to http://localhost:8097/ in your browser.

Unit tests

You can run the unit tests using pytest.
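
For example, from the repository root (assuming default test discovery):

python -m pytest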

To Do

  • Sphinx documentation and GitHub wiki
  • Make the logger as robust and Pythonic as possible
  • Fix some hacks having to do with support for parallelism (namely Vectorize, Normalize and Monitor)
  • Improve/Tune implementations of TD3

Notes

Troubleshooting: module not found? Make sure PYTHONPATH is configured and that you run examples from the root directory.
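
For example, one common way to configure PYTHONPATH is to export the repository root before running anything (run this from the root directory; adjust for your shell):

export PYTHONPATH=$(pwd)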

Features:

  • Parallelism with Ray
  • GAE / TD(λ) advantage estimators (see the sketch after this list)
  • PPO and VPG, each with both a ratio objective and a log-likelihood objective
  • TD3
  • DDPG
  • RDPG
  • ARS
  • Parameter noise exploration (TD3 only)
  • Entropy-based exploration bonus
  • Advantage centering (observation normalization WIP)
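
As a reference for the GAE / TD(λ) item above, here is a minimal, generic sketch of generalized advantage estimation in NumPy. It shows the standard recurrence, not apex's internal implementation.

# Minimal, generic sketch of Generalized Advantage Estimation (GAE).
# This is the textbook recurrence, not apex's own estimator code.
import numpy as np

def compute_gae(rewards, values, last_value, dones, gamma=0.99, lam=0.95):
    """Return advantages and TD(lambda) returns for one trajectory.

    rewards, values, dones are arrays of length T; last_value bootstraps
    the value of the state after the final step (use 0 if the episode ended).
    """
    T = len(rewards)
    advantages = np.zeros(T, dtype=np.float64)
    gae = 0.0
    next_value = last_value
    for t in reversed(range(T)):
        nonterminal = 1.0 - float(dones[t])
        delta = rewards[t] + gamma * next_value * nonterminal - values[t]
        gae = delta + gamma * lam * nonterminal * gae
        advantages[t] = gae
        next_value = values[t]
    returns = advantages + np.asarray(values, dtype=np.float64)
    return advantages, returns

# Example: a 4-step rollout bootstrapped with a final value estimate of 0.5
adv, ret = compute_gae(rewards=[1.0, 0.0, 1.0, 1.0],
                       values=[0.4, 0.3, 0.6, 0.5],
                       last_value=0.5,
                       dones=[0, 0, 0, 0])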

To be implemented long term / maybe in the future:

  • DXNN
  • ACER and other off-policy methods
  • Model-based methods

Acknowledgements

Thanks to @ikostrikov, whose great implementations were used for debugging. Also thanks to @rll for rllab, which inspired a lot of the high-level interface and logging for this library, and to @OpenAI for the original PPO TensorFlow implementation. Thanks to @sfujim for the clean implementations of TD3 and DDPG in PyTorch, and to @modestyachts for the easy-to-understand ARS implementation.
