Commit
Fix/mujoco (#183)
* fix: mujoco

* fix

* update: howto/learn_in_dmc

* update: howto/learn_in_dmc

* update: howto/learn_in_dmc

* fix: dmc default configs
michele-milesi authored Jan 8, 2024
1 parent e3bddd8 commit af40078
Showing 3 changed files with 21 additions and 8 deletions.
20 changes: 13 additions & 7 deletions howto/learn_in_dmc.md
@@ -1,7 +1,7 @@
## Install Gymnasium MuJoCo/DMC environments
First, you should install the proper environments:

-- MuJoCo (Gymnasium): you do not need to install extra packages, the `pip install -e .` command is enough to have available all the MuJoCo environments provided by Gymnasium (https://gymnasium.farama.org/environments/mujoco/)
+- MuJoCo (Gymnasium): you need to install extra packages: use the `pip install -e .[mujoco]` command to make all the MuJoCo environments provided by Gymnasium available (https://gymnasium.farama.org/environments/mujoco/).
- DMC: you have to install extra packages with the following command: `pip install -e .[dmc]` (https://github.com/deepmind/dm_control).

## Install OpenGL rendering backend packages
@@ -13,22 +13,28 @@ For each of them, you need to install some packages:
- OSMesa: `sudo apt-get install libgl1-mesa-glx libosmesa6`
In order to use one of these rendering backends, you need to set the `MUJOCO_GL` environment variable to `"glfw"`, `"egl"`, `"osmesa"`, respectively.

-> **Note**
+> [!NOTE]
>
> The `libglew2.2` package may have a different name depending on your OS (e.g., `libglew2.2` is the name on Ubuntu 22.04.2 LTS).
>
> It may also be necessary to install the `PyOpenGL-accelerate` package with the `pip install PyOpenGL-accelerate` command and the `mesalib` package with the `conda install conda-forge::mesalib` command.
-For more information: [https://github.com/deepmind/dm_control](https://github.com/deepmind/dm_control) and [https://mujoco.readthedocs.io/en/stable/programming/index.html#using-opengl](https://mujoco.readthedocs.io/en/stable/programming/index.html#using-opengl)
+For more information: [https://github.com/deepmind/dm_control](https://github.com/deepmind/dm_control) and [https://mujoco.readthedocs.io/en/stable/programming/index.html#using-opengl](https://mujoco.readthedocs.io/en/stable/programming/index.html#using-opengl).
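To make the backend choice concrete, here is a minimal sketch (standard library only; `osmesa` is just an example value) of selecting the renderer via the environment variable before MuJoCo is imported:

```python
import os

# MuJoCo picks up MUJOCO_GL when the library is first imported/initialized,
# so set it before importing mujoco or dm_control.
# Valid values: "glfw", "egl", "osmesa".
os.environ["MUJOCO_GL"] = "osmesa"  # software rendering, no display required

print(os.environ["MUJOCO_GL"])  # → osmesa
```

Equivalently, export the variable in the shell (e.g., `export MUJOCO_GL=egl`) before launching training.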

## MuJoCo Gymnasium
-In order to train your agents on the [MuJoCo environments](https://gymnasium.farama.org/environments/mujoco/) provided by Gymnasium, it is sufficient to select the *GYM* environment (`env=gym`) and set the `env.id` to the name of the environment you want to use. For instance, `"Walker2d-v4"` if you want to train your agent in the *walker walk* environment.
+In order to train your agents on the [MuJoCo environments](https://gymnasium.farama.org/environments/mujoco/) provided by Gymnasium, it is sufficient to select the *MuJoCo* environment (`env=mujoco`) and set the `env.id` to the name of the environment you want to use. For instance, `"Walker2d-v4"` if you want to train your agent in the *walker walk* environment.

```bash
-python sheeprl.py exp=dreamer_v3 env=gym env.id=Walker2d-v4 algo.cnn_keys.encoder=[rgb]
+python sheeprl.py exp=dreamer_v3 env=mujoco env.id=Walker2d-v4 algo.cnn_keys.encoder=[rgb]
```

## DeepMind Control
-In order to train your agents on the [DeepMind control suite](https://github.com/deepmind/dm_control/blob/main/dm_control/suite/README.md), you have to select the *DMC* environment (`env=dmc`) and to set the id of the environment you want to use. A list of the available environments can be found [here](https://arxiv.org/abs/1801.00690). For instance, if you want to train your agent on the *walker walk* environment, you need to set the `env.id` to `"walker_walk"`.
+In order to train your agents on the [DeepMind control suite](https://github.com/deepmind/dm_control/blob/main/dm_control/suite/README.md), you have to select the *DMC* environment (`env=dmc`) and set the id of the environment you want to use. A list of the available environments can be found [here](https://arxiv.org/abs/1801.00690). For instance, if you want to train your agent on the *walker walk* environment, you need to set the `env.id` to `"walker_walk"`.

```bash
python sheeprl.py exp=dreamer_v3 env=dmc env.id=walker_walk algo.cnn_keys.encoder=[rgb]
```

> [!NOTE]
>
> By default, the `env.sync_env` parameter is set to `True`. We recommend keeping this value so that the MuJoCo environments work properly.
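For intuition about this flag: `sync_env=True` means the vectorized sub-environments are stepped sequentially in the main process (in the spirit of Gymnasium's `SyncVectorEnv`) rather than in parallel worker processes. Below is a toy, pure-Python sketch of the idea; `ToyEnv` and this `SyncVectorEnv` are illustrative stand-ins, not sheeprl's actual classes:

```python
class ToyEnv:
    """Hypothetical stand-in for a real (e.g., MuJoCo) environment."""
    def __init__(self):
        self.t = 0

    def step(self, action):
        self.t += 1
        return self.t  # placeholder "observation"


class SyncVectorEnv:
    """Step every sub-environment one after another in the calling process."""
    def __init__(self, envs):
        self.envs = envs

    def step(self, actions):
        # No subprocesses involved: sequential, synchronous stepping.
        return [env.step(a) for env, a in zip(self.envs, actions)]


envs = SyncVectorEnv([ToyEnv(), ToyEnv()])
obs = envs.step([0, 1])
print(obs)  # → [1, 1]
```

Asynchronous vectorization spawns one worker process per environment, which can clash with MuJoCo's rendering context; keeping stepping synchronous avoids that class of problem.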
3 changes: 2 additions & 1 deletion sheeprl/configs/env/dmc.yaml
@@ -6,6 +6,7 @@ defaults:
id: walker_walk
action_repeat: 1
max_episode_steps: 1000
+sync_env: True

# Wrapper to be instantiated
wrapper:
@@ -15,4 +16,4 @@ wrapper:
height: ${env.screen_size}
seed: null
from_pixels: True
-from_vectors: False
+from_vectors: True
6 changes: 6 additions & 0 deletions sheeprl/configs/env/mujoco.yaml
@@ -0,0 +1,6 @@
+defaults:
+  - gym
+  - _self_
+
+id: Walker2d-v4
+sync_env: True
