Automatic, Readable, Reusable, Extendable
Machin is a reinforcement learning library designed for PyTorch.
Machin works with any model architecture, including recurrent networks.
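For instance, a recurrent Q-network is just an ordinary PyTorch module. The sketch below is purely illustrative (the class, the shapes, and the idea of returning the hidden state alongside the Q-values are assumptions, not a required interface):

```python
import torch as t
import torch.nn as nn

class RecurrentQNet(nn.Module):
    # A plain PyTorch recurrent Q-network; nothing Machin-specific is needed here.
    def __init__(self, observe_dim, action_num, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(observe_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, action_num)

    def forward(self, state, hidden=None):
        # state: [batch, seq_len, observe_dim]
        out, hidden = self.gru(state, hidden)
        # Q-values for the last time step, plus the updated hidden state
        return self.head(out[:, -1]), hidden
```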
Machin currently implements the following algorithms, and the list is still growing (a short import sketch follows the list):
- Deep Q-Network (DQN)
- Double DQN
- Dueling DQN
- RAINBOW
- Deep Deterministic Policy Gradient (DDPG)
- Twin Delayed DDPG (TD3)
- Hysteretic DDPG (modified from Hys-DQN)
- Advantage Actor-Critic (A2C)
- Proximal Policy Optimization (PPO)
- Trust Region Policy Optimization (TRPO)
- Soft Actor Critic (SAC)
- Prioritized Experience Replay (PER)
- Generalized Advantage Estimation (GAE)
- Recurrent networks in DQN, etc.
- Evolution Strategies
- QMIX (multi-agent)
- Model-based methods
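All of the above are exposed as framework classes that you import directly. As a hedged example (the module path `machin.frame.algorithms` follows the library's tutorials and may change between versions):

```python
# Assumed import path, per the library's tutorials; check the docs of your installed version.
from machin.frame.algorithms import DQN, DDPG, PPO, SAC
```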
Starting from version 0.4.0, Machin supports automatic config generation. You can generate a configuration with:
python -m machin.auto generate --algo DQN --env openai_gym --output config.json
And automatically launch the experiment with PyTorch Lightning:
python -m machin.auto launch --config config.json
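Between these two steps you can inspect and edit the generated config.json. The snippet below is purely illustrative: the actual keys are whatever `machin.auto generate` emits, and "gpus" here is only a hypothetical example.

```python
# Illustrative only: tweak the generated config before launching.
# The real keys depend on the output of "machin.auto generate";
# "gpus" below is a hypothetical example key.
import json

with open("config.json") as f:
    config = json.load(f)

config["gpus"] = [0]  # hypothetical: select GPU 0 for training

with open("config.json", "w") as f:
    json.dump(config, f, indent=4)
```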
Compared to other reinforcement learning libraries such as rlpyt, ray, and baselines, Machin tries to provide a simple, clear implementation of RL algorithms.
All algorithms in Machin are designed with minimal abstractions and come with detailed documentation, as well as various helpful tutorials.
Machin takes a similar approach to that of PyTorch, encapsulating algorithms and data structures in their own classes. To use them, you do not need to set up a series of data collectors, trainers, runners, samplers, and so on; just import them.
The only restrictions placed on your models concern their input/output format; these restrictions are minimal, making it easy to adapt the algorithms to your custom environments.
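To make this concrete, here is a rough sketch of what using DQN on CartPole might look like. It is based on the library's tutorials rather than on this document, so treat the names (`machin.frame.algorithms.DQN`, `act_discrete_with_noise`, `store_transition`, `update`) and the exact transition dictionary layout as assumptions that may differ between versions; it also assumes the classic Gym step/reset API:

```python
# A rough usage sketch, not an authoritative reference.
import gym
import torch as t
import torch.nn as nn
from machin.frame.algorithms import DQN  # assumed import path

observe_dim = 4
action_num = 2

class QNet(nn.Module):
    # The only real constraint is the input/output format:
    # forward() takes the keys you feed in (here "state") and returns Q-values.
    def __init__(self, observe_dim, action_num):
        super().__init__()
        self.fc1 = nn.Linear(observe_dim, 16)
        self.fc2 = nn.Linear(16, 16)
        self.fc3 = nn.Linear(16, action_num)

    def forward(self, state):
        a = t.relu(self.fc1(state))
        a = t.relu(self.fc2(a))
        return self.fc3(a)

# Online network, target network, optimizer class, and loss; no extra runners needed.
dqn = DQN(QNet(observe_dim, action_num),
          QNet(observe_dim, action_num),
          t.optim.Adam,
          nn.MSELoss(reduction="sum"))

env = gym.make("CartPole-v0")
state = t.tensor(env.reset(), dtype=t.float32).view(1, observe_dim)

for step in range(1000):
    with t.no_grad():
        old_state = state
        action = dqn.act_discrete_with_noise({"state": old_state})
        state, reward, terminal, _ = env.step(action.item())
        state = t.tensor(state, dtype=t.float32).view(1, observe_dim)
        dqn.store_transition({
            "state": {"state": old_state},
            "action": {"action": action},
            "next_state": {"state": state},
            "reward": float(reward),
            "terminal": terminal,
        })
    if terminal:
        state = t.tensor(env.reset(), dtype=t.float32).view(1, observe_dim)
    if step > 100:
        dqn.update()
```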
Machin is built upon PyTorch, and thanks to its powerful RPC API, complex distributed programs can be constructed. Machin provides implementations of enhanced parallel execution pools, automatic model assignment, role-based RPC scaling, and RPC service discovery and registration, among other things.
On top of these core functions, Machin provides tested, high-performance implementations of distributed training algorithms, such as A3C, APEX, and IMPALA, to ease your design.
Machin is weakly reproducible: for each release, our test framework directly trains every RL algorithm, and if any of them cannot reach its target score, the test fails.
However, these tests are currently not guaranteed to be identical to the ones in the original papers, due to the large variety of environments used in the original research.
See here. Examples are located in the examples directory.
Machin is hosted on PyPI. Python >= 3.6 and PyTorch >= 1.6.0 are required. You may install the Machin library by simply typing:
pip install machin
If you use conda to manage your environments, it is recommended to create a virtual environment first, to prevent pip from changing your packages without conda knowing:
conda create -n some_env pip
conda activate some_env
pip install machin
Note: Currently, only a fraction of all functions is supported on platforms other than Linux (mainly the distributed algorithms). To check whether the code runs correctly, you can run the corresponding test script for your platform in the root directory:
run_win_test.bat
run_linux_test.sh
run_macos_test.sh
Some errors may occur due to an incorrect setup of libraries; make sure you have installed graphviz, etc.
Any contribution would be great; don't hesitate to submit a PR! Please follow the instructions in this file.
If you have any issues, please use the template markdown files in the .github/ISSUE_TEMPLATE folder to format your issue before opening a new one. We will try our best to respond to your feature requests and problems.
We would be very grateful if you could cite our work in your publications:
@misc{machin,
author = {Muhan Li},
title = {Machin},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/iffiX/machin}},
}
Please see Roadmap for the exciting new features we are currently working on!