
Overview

Train MMDetection 3.0 models in Supervisely.

Application key points:

  • The app supports Object Detection and Instance Segmentation tasks
  • Almost all Object Detection and Instance Segmentation models from MMDetection 3.0 are available
  • You can compare the performance and metrics of all models in the Model Leaderboard table
  • Fine-tune pretrained models or train them from scratch
  • Define Train / Validation splits
  • Select classes for training
  • Define augmentations
  • Tune hyperparameters
  • Preview LR schedulers before starting the training
  • Watch the training progress, losses, and metrics in charts
  • Save training checkpoints to Team Files

The app supports only models for the Object Detection and Instance Segmentation tasks (original model zoo):

Architectures: Object Detection, Instance Segmentation
Components: Backbones, Necks, Loss, Common

How to Run

Step 1. Run the app from the context menu of a project with annotations or from the Ecosystem

Step 2. Select the MMDetection task you need to solve

Step 3. Choose a pretrained or custom model

Step 4. Select the classes you want to train MMDetection on

Step 5. Define the train/val splits

Step 6. Choose either a ready-to-use augmentation template or provide a custom pipeline
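
Supervisely's augmentation tooling is built around imgaug, so a custom pipeline conceptually corresponds to composing imgaug operators. A minimal sketch of the idea (the operators and parameters below are purely illustrative, not a template shipped with the app):

import imgaug.augmenters as iaa

# Purely illustrative pipeline: flips, small rotations, mild blur.
# The app's own template format may differ.
pipeline = iaa.Sequential([
    iaa.Fliplr(0.5),                     # horizontal flip with 50% probability
    iaa.Affine(rotate=(-10, 10)),        # random rotation in degrees
    iaa.GaussianBlur(sigma=(0.0, 1.5)),  # random Gaussian blur
])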

Step 7. Configure the training parameters
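
Under the hood, these parameters end up in a standard MMEngine config that the app writes out for you. For orientation, the optimizer and LR-scheduler section of an MMDetection 3.0 config typically looks like the sketch below (values are illustrative, not the app's defaults); the LR scheduler preview visualizes this kind of param_scheduler setup:

# Illustrative MMDetection 3.0 (MMEngine) config fragment; the actual
# values are set in the app UI and saved into config.py for you.
optim_wrapper = dict(
    type="OptimWrapper",
    optimizer=dict(type="SGD", lr=0.02, momentum=0.9, weight_decay=0.0001),
    clip_grad=dict(max_norm=35, norm_type=2),
)
param_scheduler = [
    # linear warmup over the first 500 iterations
    dict(type="LinearLR", start_factor=0.001, by_epoch=False, begin=0, end=500),
    # step decay at epochs 8 and 11
    dict(type="MultiStepLR", by_epoch=True, milestones=[8, 11], gamma=0.1),
]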

Step 8. Click the Train button and observe the training progress, metrics charts, and visualizations

Obtain saved checkpoints

All checkpoints generated during training are stored in Team Files in the mmdetection-3 folder.

To navigate to Team Files, go to the Start menu and press the Team Files button.
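
Besides the web UI, you can fetch checkpoints programmatically with the Supervisely SDK. A minimal sketch, assuming your server address and API token are set in the environment; the team id and remote paths below are made up for illustration:

import supervisely as sly

api = sly.Api.from_env()  # reads SERVER_ADDRESS and API_TOKEN from the environment

team_id = 8  # your team id
# Hypothetical paths inside the mmdetection-3 folder in Team Files:
remote_config = "/mmdetection-3/12345_my-project/config.py"
remote_weights = "/mmdetection-3/12345_my-project/epoch_8.pth"

api.file.download(team_id, remote_config, "app_data/work_dir/config.py")
api.file.download(team_id, remote_weights, "app_data/work_dir/epoch_8.pth")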


How To Use Custom Model Outside The Platform

You can use your trained models outside the Supervisely platform without any dependency on the Supervisely SDK. You just need to download the config file and model weights (.pth) from Team Files, and then you can build and use the model as a normal model in MMDetection 3.0. See this Jupyter Notebook for details.

A basic code example:

import mmcv
from mmengine import Config
from mmdet.apis import inference_detector, init_detector
from mmdet.registry import VISUALIZERS
from mmdet.visualization.local_visualizer import DetLocalVisualizer
from PIL import Image

# Put your paths here:
img_path = "demo_data/image_01.jpg"
config_path = "app_data/work_dir/config.py"
weights_path = "app_data/work_dir/epoch_8.pth"

device = "cuda:0"

# build the model from the downloaded config and checkpoint
cfg = Config.fromfile(config_path)
model = init_detector(cfg, weights_path, device=device, palette="random")

# predict on a single image
result = inference_detector(model, img_path)
print(result)

# visualize the predictions
img = mmcv.imread(img_path, channel_order="rgb")
visualizer: DetLocalVisualizer = VISUALIZERS.build(model.cfg.visualizer)
visualizer.dataset_meta = model.dataset_meta
visualizer.add_datasample("result", img, data_sample=result, draw_gt=False, wait_time=0, show=False)
res_img = visualizer.get_image()
Image.fromarray(res_img).save("result.png")  # or display it directly in a notebook
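
If you need the raw predictions rather than a rendered image, the result returned by inference_detector is a DetDataSample. A minimal sketch of unpacking it (these field names are standard in MMDetection 3.0):

# result.pred_instances holds the detections as tensors
pred = result.pred_instances
boxes = pred.bboxes.cpu().numpy()    # (N, 4) boxes in xyxy format
scores = pred.scores.cpu().numpy()   # (N,) confidence scores
labels = pred.labels.cpu().numpy()   # (N,) indices into model.dataset_meta["classes"]
# Instance segmentation models additionally expose a `masks` field.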

Acknowledgment

This app is based on the great work of MMDetection (github).