![](https://private-user-images.githubusercontent.com/115161827/249783080-a2a022a9-b1b1-4231-9a8d-37e4d3898acf.jpg?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzk0NjY0ODYsIm5iZiI6MTczOTQ2NjE4NiwicGF0aCI6Ii8xMTUxNjE4MjcvMjQ5NzgzMDgwLWEyYTAyMmE5LWIxYjEtNDIzMS05YThkLTM3ZTRkMzg5OGFjZi5qcGc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjEzJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxM1QxNzAzMDZaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT00ZmQ5OGNjNjZhZGE0MzRmOGMzYzBjYTA2OTM5NzI3ZTg0ZjQyZDAzZDE2NjNmOWI4MGI4YzhmOTlmMDZlNDIwJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.pwb7ZWGnnvBN1uHZwzjLFKT5wkTtP8Kc2ieo0QboeN8)
Overview • How To Run • Obtain saved checkpoints • How To Use Custom Model Outside The Platform • Acknowledgment
Train MMDetection 3.0 models in Supervisely.
Application key points:
- The app supports Object Detection and Instance Segmentation tasks
- Almost all Object Detection and Instance Segmentation models from MMDetection 3.0 are available
- You can compare the performance and metrics of all models in the Model Leaderboard table
- Fine-tune pretrained models or train them from scratch
- Define Train / Validation splits
- Select classes for training
- Define augmentations
- Tune hyperparameters
- Preview LR schedulers before starting the training
- Watch the training progress, losses, and metrics in charts
- Save training checkpoints to Team Files
The app supports only the models for the Object Detection and Instance Segmentation tasks from the original model zoo. The supported components fall into four groups: Backbones, Necks, Losses, and Common modules.
Step 1. Run the app from the context menu of a project with annotations or from the Ecosystem
Step 2. Select the MMDetection task you need to solve
![](https://private-user-images.githubusercontent.com/115161827/250084650-136a8a5e-4066-4a1f-86f8-b14e266527b7.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzk0NjY0ODYsIm5iZiI6MTczOTQ2NjE4NiwicGF0aCI6Ii8xMTUxNjE4MjcvMjUwMDg0NjUwLTEzNmE4YTVlLTQwNjYtNGExZi04NmY4LWIxNGUyNjY1MjdiNy5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjEzJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxM1QxNzAzMDZaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT00NmYyZDlkMjVjNTcyZTBhYmZhYzViZTVmMzRiYjgyZDJjODhlOTgzNWMyZThkMjk3OTRiMWY4YTQ3MGZlOTNmJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.puQ8u5Vj11qHn1GPkBqKQFRIinsxxSUSYVZpxJli0BA)
Step 3. Choose a pretrained or custom model
![](https://private-user-images.githubusercontent.com/115161827/250084664-b07114db-d620-469b-893f-202d3ce356c6.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzk0NjY0ODYsIm5iZiI6MTczOTQ2NjE4NiwicGF0aCI6Ii8xMTUxNjE4MjcvMjUwMDg0NjY0LWIwNzExNGRiLWQ2MjAtNDY5Yi04OTNmLTIwMmQzY2UzNTZjNi5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjEzJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxM1QxNzAzMDZaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1hYTE2N2RlZjE2YTI3NzQwYWE4OTgyNzMwYjUyODNjYjA3M2MzNGUyOWQ0MGE3ZTE5OGI5OGI1ZWFiMDU2Mjg5JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.S110VuuBqOygR0z8laGvGvNL7zRDW8bJzZwFlH_RqL8)
Step 4. Select the classes you want to train MMDetection on
![](https://private-user-images.githubusercontent.com/115161827/250084669-29b0b4ab-44a5-4d1f-92f6-d5f0aee54b77.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzk0NjY0ODYsIm5iZiI6MTczOTQ2NjE4NiwicGF0aCI6Ii8xMTUxNjE4MjcvMjUwMDg0NjY5LTI5YjBiNGFiLTQ0YTUtNGQxZi05MmY2LWQ1ZjBhZWU1NGI3Ny5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjEzJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxM1QxNzAzMDZaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1lYzBjYWZjMjNlZDgwZTU0Zjg1ODlmYTc1NTI5ZjViODBlZWNmOTc3YjIyYjkxZTdhOTU4OTY1MDA4MDhkZDk5JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.XuTY4HqVuHKMjNwNT7D7NplrbaiNKmSLVo-rQF3bcmw)
Step 5. Define the train/val splits
![](https://private-user-images.githubusercontent.com/115161827/250084679-ea58fe7d-c592-43b0-8492-8535117d5a06.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzk0NjY0ODYsIm5iZiI6MTczOTQ2NjE4NiwicGF0aCI6Ii8xMTUxNjE4MjcvMjUwMDg0Njc5LWVhNThmZTdkLWM1OTItNDNiMC04NDkyLTg1MzUxMTdkNWEwNi5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjEzJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxM1QxNzAzMDZaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1hZDViYzE1Mzg4YzYzZDdhNTUzOWM4ZmJmOTdiYTFhODVjZjVjZDQyZTM3ZGM3ZjM5NDgzNzZjMTQzM2U2ZmExJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.qDcu7DNRMtbqgxWe_pFUffKCzs8ByACRnJOacY51lVA)
Step 6. Choose a ready-to-use augmentation template or provide a custom pipeline (see the sketch below)
![](https://private-user-images.githubusercontent.com/115161827/250084690-18664fa4-398c-4848-b252-9f22b93055d5.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzk0NjY0ODYsIm5iZiI6MTczOTQ2NjE4NiwicGF0aCI6Ii8xMTUxNjE4MjcvMjUwMDg0NjkwLTE4NjY0ZmE0LTM5OGMtNDg0OC1iMjUyLTlmMjJiOTMwNTVkNS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjEzJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxM1QxNzAzMDZaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1mZTUxNjRiYTBmOGM0MDFhZGIxY2VhNTM5MDI0ZjUwMmFkZGJjYzZiMWZkMGVhZTMzNGNhMjQzMGQ0YmU0YWE1JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.gT24Lf_6bIC5SCfbCo3WsoZsMpOpgrI1BGtB1X5Jp94)
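Augmentation templates in Supervisely apps are typically imgaug pipelines, so a custom pipeline can be sketched with imgaug directly. A minimal sketch — the augmenters and their parameters below are illustrative, not the app's defaults:

```python
import imgaug.augmenters as iaa

# Illustrative pipeline: horizontal flip, slight rotation, light blur
seq = iaa.Sequential([
    iaa.Fliplr(0.5),                     # horizontal flip with probability 0.5
    iaa.Affine(rotate=(-10, 10)),        # random rotation in degrees
    iaa.GaussianBlur(sigma=(0.0, 1.0)),  # light Gaussian blur
])
```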
Step 7. Configure the training parameters
![](https://private-user-images.githubusercontent.com/115161827/250084699-a7c9e642-0488-4175-967a-e1d1f2727efb.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzk0NjY0ODYsIm5iZiI6MTczOTQ2NjE4NiwicGF0aCI6Ii8xMTUxNjE4MjcvMjUwMDg0Njk5LWE3YzllNjQyLTA0ODgtNDE3NS05NjdhLWUxZDFmMjcyN2VmYi5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjEzJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxM1QxNzAzMDZaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT0wZWI4YzlmOTZkMWUzM2ZjYjhkMzI0ZGJlMDkxYzljZWMzMDZlZTllZjZmNGM4N2QxODZlNTQyMzk1NzY3ODQ0JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.3vr2_mCS96vCeXNflwD_MTlSWyRUpci5Rp5ZZ7Z3X5o)
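For reference, MMDetection 3.0 expresses learning-rate schedules through MMEngine's `param_scheduler` list, which is the kind of schedule the LR preview visualizes. A sketch of a common warmup-plus-step-decay setup, assuming the standard MMEngine config format (all values are illustrative, not the app's defaults):

```python
# Illustrative MMEngine schedule: linear warmup, then step decay
param_scheduler = [
    dict(type="LinearLR", start_factor=0.001, by_epoch=False, begin=0, end=500),
    dict(type="MultiStepLR", by_epoch=True, begin=0, end=12, milestones=[8, 11], gamma=0.1),
]
optim_wrapper = dict(
    optimizer=dict(type="SGD", lr=0.02, momentum=0.9, weight_decay=0.0001)
)
```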
Step 8. Click the Train button and observe the training progress, metrics charts, and visualizations
![](https://private-user-images.githubusercontent.com/115161827/250085648-6354d252-a1ee-4046-9d66-1881ad64c17f.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzk0NjY0ODYsIm5iZiI6MTczOTQ2NjE4NiwicGF0aCI6Ii8xMTUxNjE4MjcvMjUwMDg1NjQ4LTYzNTRkMjUyLWExZWUtNDA0Ni05ZDY2LTE4ODFhZDY0YzE3Zi5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjEzJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxM1QxNzAzMDZaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1lY2I5MTg5ZjJjM2M5N2RkYjE1ZTIyMWEzNjljYTNhMTI5NzBiNzVhMDAwY2UxZDhmNzJiOTI5MDA4ZjVhOGFiJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.RhuVGVF3gXeQD7aZJmcchtSmk-Udq5qFE7dSBvQOsGc)
All checkpoints generated during training are stored in Team Files in the mmdetection-3 folder.
To navigate to Team Files, go to the Start menu and press the Team files button.
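If you prefer to fetch checkpoints programmatically rather than through the UI, the Supervisely SDK can download files from Team Files. A minimal sketch — the team id and remote path below are hypothetical placeholders; check Team Files for your actual paths:

```python
import supervisely as sly

api = sly.Api.from_env()  # expects SERVER_ADDRESS and API_TOKEN environment variables
team_id = 123             # hypothetical team id
remote_path = "/mmdetection-3/12345/checkpoints/epoch_8.pth"  # hypothetical path
api.file.download(team_id, remote_path, "epoch_8.pth")
```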
You can use your trained models outside the Supervisely platform without any dependency on the Supervisely SDK. You just need to download the config file and model weights (.pth) from Team Files; then you can build and use the model like any other MMDetection 3.0 model. See this Jupyter Notebook for details.
A basic code example:
```python
import mmcv
from mmengine import Config
from mmdet.apis import inference_detector, init_detector
from mmdet.registry import VISUALIZERS
from mmdet.visualization.local_visualizer import DetLocalVisualizer
from PIL import Image

# Put your paths here:
img_path = "demo_data/image_01.jpg"
config_path = "app_data/work_dir/config.py"
weights_path = "app_data/work_dir/epoch_8.pth"
device = "cuda:0"

# Build the model
cfg = Config.fromfile(config_path)
model = init_detector(cfg, weights_path, device=device, palette="random")

# Predict
result = inference_detector(model, img_path)
print(result)

# Visualize the predictions
img = mmcv.imread(img_path, channel_order="rgb")
visualizer: DetLocalVisualizer = VISUALIZERS.build(model.cfg.visualizer)
visualizer.dataset_meta = model.dataset_meta
visualizer.add_datasample("result", img, data_sample=result, draw_gt=False, wait_time=0, show=False)
res_img = visualizer.get_image()
Image.fromarray(res_img)
```
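In MMDetection 3.0, `inference_detector` returns a `DetDataSample` whose predictions live in `result.pred_instances`. A minimal sketch of extracting boxes above a confidence threshold (the threshold value is an arbitrary example):

```python
# Keep only confident predictions (0.5 is an illustrative threshold)
pred = result.pred_instances
keep = pred.scores > 0.5
bboxes = pred.bboxes[keep].cpu().numpy()   # (N, 4) boxes in xyxy format
labels = pred.labels[keep].cpu().numpy()   # indices into model.dataset_meta["classes"]
scores = pred.scores[keep].cpu().numpy()
print(list(zip(labels.tolist(), scores.tolist())))
```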
This app is based on the great work MMDetection ([GitHub](https://github.com/open-mmlab/mmdetection)).