AgileRL curriculum learning and self-play tutorial (#1124)
Co-authored-by: Elliot Tower <[email protected]>
nicku-a and elliottower authored Nov 14, 2023
1 parent e9b4001 commit f27e84b
Showing 21 changed files with 2,715 additions and 42 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/docs-test.yml
@@ -35,4 +35,4 @@ jobs:
run: python docs/_scripts/gen_envs_mds.py
- name: Documentation test
run: |
-xvfb-run -s "-screen 0 1024x768x24" pytest docs --markdown-docs -m markdown-docs --splits 10 --group ${{ matrix.group }}
+xvfb-run -s "-screen 0 1024x768x24" pytest docs --markdown-docs -m markdown-docs --ignore=tutorials --ignore=docs/tutorials --splits 10 --group ${{ matrix.group }}
2 changes: 1 addition & 1 deletion .github/workflows/linux-tutorials-test.yml
@@ -17,7 +17,7 @@ jobs:
fail-fast: false
matrix:
python-version: ['3.8', '3.9', '3.10', '3.11']
-tutorial: [Tianshou, CustomEnvironment, CleanRL, SB3/kaz, SB3/waterworld, SB3/connect_four, SB3/test] # TODO: add back AgileRL once issue is fixed on their end
+tutorial: [Tianshou, CustomEnvironment, CleanRL, SB3/kaz, SB3/waterworld, SB3/connect_four, SB3/test, AgileRL] # TODO: add back Ray once next release after 2.6.2
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
1,253 changes: 1,253 additions & 0 deletions docs/tutorials/agilerl/DQN.md

Large diffs are not rendered by default.
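The DQN.md diff is not shown here, but the tutorial it adds trains a DQN agent on Connect Four through curriculum learning and self-play. As a rough, non-authoritative sketch of the self-play half only, the snippet below keeps a pool of frozen policy snapshots and pits the current learner against a randomly sampled past self using PettingZoo's `connect_four_v3` AEC environment. The `Policy` class, `play_episode` helper, pool size, and snapshot schedule are hypothetical placeholders rather than the tutorial's actual AgileRL code, and the curriculum-learning half is omitted entirely.

```python
import copy
import random

from pettingzoo.classic import connect_four_v3


class Policy:
    """Hypothetical stand-in for a trained agent; replace with the tutorial's DQN."""

    def act(self, observation, action_mask):
        # Placeholder behaviour: pick a random legal column.
        legal_moves = [i for i, legal in enumerate(action_mask) if legal]
        return random.choice(legal_moves)


def play_episode(env, learner, opponent, learner_id="player_0"):
    """Play one Connect Four game between the learner and a sampled opponent."""
    opponent_id = "player_1" if learner_id == "player_0" else "player_0"
    policies = {learner_id: learner, opponent_id: opponent}
    learner_return = 0.0

    env.reset()
    for agent in env.agent_iter():
        observation, reward, termination, truncation, _ = env.last()
        if agent == learner_id:
            learner_return += reward
        if termination or truncation:
            action = None  # PettingZoo expects None once the game is over
        else:
            action = policies[agent].act(
                observation["observation"], observation["action_mask"]
            )
        env.step(action)
    return learner_return


if __name__ == "__main__":
    env = connect_four_v3.env()
    learner = Policy()
    opponent_pool = [copy.deepcopy(learner)]  # start by playing a frozen copy of itself

    for episode in range(10):
        opponent = random.choice(opponent_pool)  # sample a past self as the opponent
        play_episode(env, learner, opponent)
        if (episode + 1) % 5 == 0:
            opponent_pool.append(copy.deepcopy(learner))  # periodically snapshot the learner
    env.close()
```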

Binary file added docs/tutorials/agilerl/connect_four_self_opp.gif
2 changes: 2 additions & 0 deletions docs/tutorials/agilerl/index.md
@@ -2,6 +2,7 @@

These tutorials provide an introductory guide to using [AgileRL](https://github.com/AgileRL/AgileRL) with PettingZoo. AgileRL's multi-agent algorithms make use of the PettingZoo parallel API and allow users to train multiple agents in parallel in both competitive and co-operative environments. These tutorials include the following:

+* [DQN](DQN.md): _Train a DQN agent to play Connect Four through curriculum learning and self-play_
* [MADDPG](MADDPG.md): _Train an MADDPG agent to play multi-agent atari games_
* [MATD3](MATD3.md): _Train an MATD3 agent to play multi-particle-environment games_

@@ -28,6 +29,7 @@ For more information about AgileRL and what else the library has to offer, check
:hidden:
:caption: AgileRL
+DQN
MADDPG
MATD3
```
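As a companion to the index.md description above, which notes that AgileRL's multi-agent algorithms consume the PettingZoo parallel API, here is a minimal sketch of that API's interaction loop. The choice of `simple_speaker_listener_v4` and the random actions are illustrative assumptions only; a trained MADDPG or MATD3 agent would supply the `actions` dict instead.

```python
from pettingzoo.mpe import simple_speaker_listener_v4

# Create a parallel-API environment: every live agent acts on every step.
env = simple_speaker_listener_v4.parallel_env(continuous_actions=True)
observations, infos = env.reset(seed=42)

while env.agents:
    # One action per live agent; random sampling stands in for trained policies.
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
env.close()
```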