We introduce myGym, a toolkit suitable for fast prototyping of neural networks in the area of robotic manipulation and navigation. Our toolbox is fully modular, so you can train your network with different robots, in several environments, and on various tasks. You can also create a curriculum of tasks with increasing complexity and test your network on them. An automatic evaluation and benchmark tool for your developed model is also included. We have pretrained the Yolact network for visual recognition of all objects in the simulator, so you can reward your networks based on visual sensors only.
We keep training the current state-of-the-art algorithms to provide baselines for the tasks in the toolbox. There is also a leaderboard showing the algorithms with the best generalization capability, tested on the tasks in our basic curriculum. From version 2.0 it is possible to train multiple networks within one task and switch between them based on reward or adaptively. The number of networks is specified in the config file.
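For illustration, a multi-network setup might be declared in the config along these lines; the key names below are hypothetical, so check a shipped config such as ./configs/train_pnp_3n_multitask2.json for the exact schema:

```json
{
  "algo": "ppo2",
  "num_networks": 3,
  "robot": "kuka",
  "task_type": "pnp"
}
```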
Learn more about the toolbox in our documentation
The latest version introduces new features:
- new robots (Nico, Tiago, HSR)
- new workspaces (human collaborative, Tiago table, Nico table)
- new algorithms for multi-step training
- visualization of multiple trainings in one graph
- sim2real for Nico robot
- new compositional rewards
This is the last version of myGym compatible with Stable Baselines and Python 3.7. The next version will be based on TF2, PyTorch, and Python 3.10.
- Separate modules for fast prototyping (task.py, reward.py, env.py)
- Pretrained vision for instance-wise semantic segmentation
- Human-robot collaboration environments
From version 2.1:
- Multi-step tasks defined inside the config file with customizable observations (see the sketch after this list)
- Multi-goal rewards for training long-horizon tasks
- REAL robotic gripping based on friction or containment
- Multi-network training - three networks switching in the Pick and rotate task
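As a purely illustrative sketch of a multi-step task definition (all key names are hypothetical; the exact schema is in the shipped configs, e.g. ./configs/train_pnp_3n_multitask4.json):

```json
{
  "task_type": "multipnp",
  "task_objects": ["cube", "wrench"],
  "observation": ["joints", "object_centroid"],
  "num_networks": 3
}
```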
Clone the repository:

```
git clone https://github.com/incognite-lab/mygym.git
cd mygym
```
Create a Python 3.7 conda environment as follows (later Python versions do not support TF 1.15.5, which is necessary for Stable Baselines):

```
conda env create -f environment.yml
conda activate mygym
```
Install myGym:

```
python setup.py develop
```
If you face trouble with the mpi4py dependency, install the library:

```
sudo apt install libopenmpi-dev
```
If you want to use the pretrained visual modules, please download them first:

```
cd myGym
sh download_vision.sh
```
If you want to use the pretrained baseline models, download them first:

```
cd myGym
sh download_baselines.sh
```
Check whether the toolbox works:

```
sh ./speed_checker.sh
```

If everything is correct, the toolbox will train for two minutes without the GUI and then show the test results (at least a 30% success rate).
Environment | Gym-v0 is suitable for both single-step and multi-step manipulation and navigation |
---|---|
Workspaces | Table, Collaborative table, Maze, Vertical maze, Drawer, Darts, Football, Fridge, Stairs, Baskets |
Vision | Cartesians, RGB, Depth, Class, Centroid, Bounding Box, Semantic Mask, Latent Vector |
Robots | 9 robotic arms, 2 dual arms, humanoid |
Robot actions | Absolute, Relative, Joints |
Objects | 54 objects in 5 categories |
Tasks | Reach, Press, Switch, Turn, Push, Pick, Place, PicknPlace, Poke, MultiReach, MultiPNP |
Randomizers | Light, Texture, Size, Camera position |
Baselines | TensorFlow, PyTorch |
Physics | Bullet (MuJoCo deprecated since version 2.0) |
You can visualize the virtual gym environment prior to training:

```
python test.py
```

The default workspace will be activated.

EXPERIMENTAL - You can control the robot and gripper from the keyboard (arrow keys, plus A and Z for the third Cartesian axis) and spawn an object to test the task (WIP).
There are also visual outputs from the active cameras (both RGB and depth). Find more details about this function in the documentation.
Run the default training without specifying parameters:

```
python train.py
```

The training will start with the GUI window and a standstill visualization. Wait until the first evaluation to check the progress.
There are more training tutorials in the documentation. Each predefined task has its own config file:

```
python train.py --config ./configs/train_reach.json
python train.py --config ./configs/train_press.json
python train.py --config ./configs/train_switch.json
python train.py --config ./configs/train_turn.json
python train.py --config ./configs/train_push.json
python train.py --config ./configs/train_poke.json
python train.py --config ./configs/train_pnp_1n.json
python train.py --config ./configs/train_reach_multitask.json
python train.py --config ./configs/train_pnp_3n_multitask2.json
python train.py --config ./configs/train_pnp_3n_multitask4.json
```

For details see the documentation.
As myGym is modular, you can easily train with different robots:

```
python train.py --robot jaco
```
You can also change the workspace within the gym, the task, or the goal object. If you want to store an output video, just add the record parameter:

```
python train.py --workspace collabtable --robot panda --task push --task_objects wrench --record 1
```
Learn more about the simulation parameters in the documentation
We have developed a parallel training script to speed up this process. You can edit the desired parameters in train_parallel.py and run it:

```
python train_parallel.py
```
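If you prefer to write your own launcher, here is a minimal sketch using only the standard library; it assumes nothing beyond the train.py flags shown above, and the robot list and process count are illustrative:

```python
# parallel_launch.py - illustrative sketch, not part of myGym itself.
# Runs several independent train.py processes, one per parameter set.
import subprocess
from multiprocessing import Pool

# Hypothetical parameter grid; any train.py flag could be varied here.
ROBOTS = ["kuka", "panda", "jaco"]

def run_training(robot):
    # Each worker spawns a standalone training run.
    subprocess.run(["python", "train.py", "--robot", robot], check=True)

if __name__ == "__main__":
    with Pool(processes=len(ROBOTS)) as pool:
        pool.map(run_training, ROBOTS)
```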
You can use the test script for the visualization of pretrained models:

```
python test.py --config ./trained_models/yourmodel/train.json
```

It will load the pretrained model and test it in the task and workspace defined in the config file.
Automatic evaluation and logging are included in the train script, controlled by the parameters --eval_freq and --eval_episodes. The log files are stored in the folder with the trained model, so you can easily visualize the learning progress after training. GIFs are also stored for each evaluation period so you can compare the robot's performance during training. We have also implemented evaluation in TensorBoard:

```
tensorboard --logdir ./trained_models/yourmodel
```
If you want to interactively compare different parameters, just run TensorBoard without specifying a particular model directory.
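For example, pointing TensorBoard at the parent folder (assuming your trained models live under ./trained_models) compares all runs at once:

```
tensorboard --logdir ./trained_models
```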
There are also other visualization scripts (documentation in preparation).
As myGym allows curriculum learning, the workspaces and tasks are all contained in a single gym, so you can easily transfer the robot between them. The basic environment is called Gym-v0. More gyms for navigation and multi-agent collaboration are in preparation.
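Because Gym-v0 follows the standard Gym interface, you can also drive it directly from Python. The snippet below is a minimal sketch: the exact keyword arguments accepted by the environment are an assumption here, so check the documentation or the config files for the real parameter names:

```python
# Minimal sketch: stepping Gym-v0 with a random policy.
# The make() kwargs are assumptions, not a confirmed myGym API.
import gym
import myGym  # assumed to register Gym-v0 on import

env = gym.make("Gym-v0", robot="kuka", task_type="reach", workspace="table")
obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()  # placeholder for a trained policy
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```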
Robot | Type | Gripper | DOF | Parameter value |
---|---|---|---|---|
UR-3 | arm | no gripper | 6 | ur3 |
UR-5 | arm | no gripper | 6 | ur5 |
UR-10 | arm | no gripper | 6 | ur10 |
Kuka IIWA | arm | magnetic, gripper | 6 | kuka |
Reachy | arm | passive palm | 7 | reachy |
Leachy | arm | passive palm | 7 | leachy |
Franka Emika | arm | gripper | 7 | panda |
Jaco arm | arm | two finger | 13 | jaco |
Gummiarm | arm | passive palm | 13 | gummi |
Human Support Robot (HSR) | arm | gripper | 7 | hsr |
ABB Yumi | dualarm | two finger | 12 | yumi |
ReachyLeachy | dualarm | passive palms | 14 | reachy_and_leachy |
Pepper | humanoid | -- | 20 | WIP |
Tiago | humanoid | -- | 19 | WIP |
Atlas | humanoid | -- | 28 | WIP |
Name | Type | Suitable tasks | Parameter value |
---|---|---|---|
Tabledesk | manipulation | Reach, Press, Switch, Turn, PicknPlace | table |
Drawer | manipulation | Pick, Place, PicknPlace | drawer |
Fridge | manipulation | Push, Pick | fridge |
Baskets | manipulation | Throw, Hit | baskets |
Darts | manipulation | Throw, Hit | darts |
Football | manipulation | Throw, Hit | football |
Collaborative table | collaboration | Give, Hold, Move together | collabtable |
Vertical maze | planning | -- | veticalmaze |
Maze | navigation | -- | maze |
Stairs | navigation | -- | stairs |
Workspace | Reach | Pick | Place | PicknPlace | Push | Press | Switch | Turn | Poke | Multistep PNP | Multistep Reach |
---|---|---|---|---|---|---|---|---|---|---|---|
Tabledesk | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
Drawer | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
Collaborative table | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
The new global evaluation metric, which we call *learnability*, allows the user to evaluate and compare algorithms in a more systematic fashion. Learnability is defined as the general ability to learn irrespective of environmental conditions. The goal is to test an algorithm with respect to the complexity of the environment. We have decomposed the environment complexity into independent scales: the first scale covers the complexity of the task, the second the complexity of the robotic body controlled by the neural network, and the third the temporal complexity of the environment.
Learnability is represented as a single-value metric that evaluates algorithms under various conditions, allowing us to compare different RL algorithms. The number of conditions is limited for practical reasons: the number of training configurations grows exponentially with each new condition, and each configuration requires standalone training and evaluation. Therefore, we limited the total number of combinations to keep training and evaluation tractable.
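One way to make this concrete (our notation; the exact formula may differ in the paper): if $S(a, t, r, h)$ denotes the evaluation score of algorithm $a$ under task complexity $t$, body complexity $r$, and temporal complexity $h$, learnability can be taken as the mean score over the tested grid:

$$
L(a) = \frac{1}{|T|\,|R|\,|H|} \sum_{t \in T} \sum_{r \in R} \sum_{h \in H} S(a, t, r, h)
$$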
Pos. | Algorithm | Score |
---|---|---|
1. | PPO2 | 30.11 |
2. | TRPO | 28.75 |
3. | ACKTR | 27.5 |
4. | SAC | 27.43 |
5. | PPO | 27.21 |
6. | myAlgo | 15.00 |
Core team:
Contributors:
Radoslav Skoviera, Peter Basar, Michael Tesar, Vojtech Pospisil, Jiri Kulisek, Anastasia Ostapenko, Sara Thu Nguyen
```
@INPROCEEDINGS{9643210,
  author={Vavrecka, Michal and Sokovnin, Nikita and Mejdrechova, Megi and Sejnova, Gabriela},
  booktitle={2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI)},
  title={MyGym: Modular Toolkit for Visuomotor Robotic Tasks},
  year={2021},
  pages={279-283},
  doi={10.1109/ICTAI52525.2021.00046}
}
```