[Community] Add IDOL, VITA to project #835

Open
wants to merge 1 commit into dev-1.x
120 changes: 120 additions & 0 deletions projects/VIS_SOTA/IDOL/README.md
@@ -0,0 +1,120 @@
# IDOL: In Defense of Online Models for Video Instance Segmentation

## Description

This is an implementation of [IDOL](https://github.com/wjf5203/VNext.git) based on [MMTracking](https://github.com/open-mmlab/mmtracking/tree/1.x), [MMDetection](https://github.com/open-mmlab/mmdetection/tree/3.x), [MMCV](https://github.com/open-mmlab/mmcv), and [MMEngine](https://github.com/open-mmlab/mmengine).

In recent years, video instance segmentation (VIS) has been largely advanced by offline models, while online models have typically trailed contemporaneous offline models by over 10 AP, a sizeable gap. By dissecting current online and offline models, we demonstrate that the main cause of this gap is error-prone association, and we propose IDOL, which outperforms all online and offline methods on three benchmarks. IDOL won first place in the video instance segmentation track of the 4th Large-scale Video Object Segmentation Challenge (CVPR 2022).
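
The association step that IDOL strengthens can be pictured with a tiny embedding-matching sketch (purely illustrative: the function name, shapes, and greedy matching below are assumptions, not IDOL's actual pipeline):

```python
# Illustrative sketch only (not IDOL's code): an online VIS model links
# instances across frames by matching per-instance embeddings; IDOL learns
# these embeddings contrastively so the matching step is less error-prone.
import torch
import torch.nn.functional as F


def associate_by_embedding(prev_embeds: torch.Tensor,
                           cur_embeds: torch.Tensor) -> torch.Tensor:
    """Greedily match each current instance to its most similar previous one.

    Args:
        prev_embeds: (N_prev, C) embeddings from the previous frame.
        cur_embeds: (N_cur, C) embeddings from the current frame.

    Returns:
        Tensor of shape (N_cur,) with indices into the previous-frame instances.
    """
    sim = F.normalize(cur_embeds, dim=1) @ F.normalize(prev_embeds, dim=1).t()
    return sim.argmax(dim=1)


# Toy usage with random 256-D embeddings (5 previous vs. 3 current instances).
print(associate_by_embedding(torch.randn(5, 256), torch.randn(3, 256)))
```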

<center>
<img src="https://github.com/wjf5203/VNext/blob/main/assets/IDOL/arch.png">
</center>

## Usage

<!-- For a typical model, this section should contain the commands for training and testing. You are also suggested to dump your environment specification to env.yml by `conda env export > env.yml`. -->

### Training commands

In MMTracking's root directory, run the following command to train the model:

```bash
python tools/train.py projects/VIS_SOTA/IDOL/configs/idol_r50_8xb4-12k_youtubevis2019.py
```

For multi-GPU training on a single node (e.g. `NUM_GPUS=8`, matching the `8xb4` batch setting in the config name: 8 GPUs with a batch size of 4 per GPU), run:

```bash
python -m torch.distributed.launch --nnodes=1 --node_rank=0 --nproc_per_node=${NUM_GPUS} --master_port=29506 --master_addr="127.0.0.1" tools/train.py projects/VIS_SOTA/IDOL/configs/idol_r50_8xb4-12k_youtubevis2019.py
```

### Testing commands

In MMTracking's root directory, run the following command to test the model:

```bash
python tools/test.py projects/VIS_SOTA/IDOL/configs/idol_r50_8xb4-12k_youtubevis2019.py ${CHECKPOINT_PATH}
```
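
Before launching a long run, the project configs can also be loaded and inspected with MMEngine as a quick sanity check (a minimal sketch, assuming it is executed from MMTracking's root directory with this project checked out):

```python
# Minimal sketch: parse one of the project's configs and print a few fields.
from mmengine.config import Config

cfg = Config.fromfile(
    'projects/VIS_SOTA/IDOL/configs/idol_r50_8xb2-16e_coco-seq.py')
print(cfg.model.type)            # 'IDOL'
print(cfg.custom_imports)        # modules imported from projects/VIS_SOTA/IDOL
print(cfg.train_cfg.max_iters)   # 6000 for the COCO-sequence schedule
```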

## Results

### YouTubeVIS-2019

| Method | Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | AP | Config | Download |
| :----: | :------: | :-----: | :-----: | :------: | :------------: | :--: | :--------------------------------------------------------------------------: | :----------------------: |
| IDOL | R-50 | pytorch | 12k | 27.0 | - | 49.3 | [config](configs/idol_r50_8xb4-12k_youtubevis2019.py) | [model](<>) \| [log](<>) |

### OVIS

| Method | Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | AP | Config | Download |
| :----: | :------: | :-----: | :-----: | :------: | :------------: | :--: | :----------: | :----------------------: |
| IDOL | R-50 | pytorch | 12k | 27.0 | - | 29.7 | [config](<>) | [model](<>) \| [log](<>) |

## Citation

If you find IDOL useful in your research or applications, please consider giving a star 🌟 to the [official repository](https://github.com/wjf5203/VNext) and citing it with the following BibTeX entry.

```BibTeX
@inproceedings{IDOL,
title={In Defense of Online Models for Video Instance Segmentation},
author={Wu, Junfeng and Liu, Qihao and Jiang, Yi and Bai, Song and Yuille, Alan and Bai, Xiang},
booktitle={ECCV},
year={2022},
}

```

## Checklist

<!-- Here is a checklist illustrating a usual development workflow of a successful project, and also serves as an overview of this project's progress. The PIC (person in charge) or contributors of this project should check all the items that they believe have been finished, which will further be verified by codebase maintainers via a PR.
OpenMMLab's maintainer will review the code to ensure the project's quality. Reaching the first milestone means that this project suffices the minimum requirement of being merged into 'projects/'. But this project is only eligible to become a part of the core package upon attaining the last milestone.
Note that keeping this section up-to-date is crucial not only for this project's developers but the entire community, since there might be some other contributors joining this project and deciding their starting point from this list. It also helps maintainers accurately estimate time and effort on further code polishing, if needed.
A project does not necessarily have to be finished in a single PR, but it's essential for the project to at least reach the first milestone in its very first PR. -->

- [x] Milestone 1: PR-ready, and acceptable to be one of the `projects/`.

- [x] Finish the code

<!-- The code's design shall follow existing interfaces and convention. For example, each model component should be registered into `mmdet.registry.MODELS` and configurable via a config file. -->

- [x] Basic docstrings & proper citation

<!-- Each major object should contain a docstring, describing its functionality and arguments. If you have adapted the code from other open-source projects, don't forget to cite the source project in docstring and make sure your behavior is not against its license. Typically, we do not accept any code snippet under GPL license. [A Short Guide to Open Source Licenses](https://medium.com/nationwide-technology/a-short-guide-to-open-source-licenses-cf5b1c329edd) -->

- [x] Test-time correctness

<!-- If you are reproducing the result from a paper, make sure your model's inference-time performance matches that in the original paper. The weights usually could be obtained by simply renaming the keys in the official pre-trained weights. This test could be skipped though, if you are able to prove the training-time correctness and check the second milestone. -->

- [x] A full README

<!-- As this template does. -->

- [x] Milestone 2: Indicates a successful model implementation.

- [x] Training-time correctness

<!-- If you are reproducing the result from a paper, checking this item means that you should have trained your model from scratch based on the original paper's specification and verified that the final result matches the report within a minor error range. -->

- [ ] Milestone 3: Good to be a part of our core package!

- [ ] Type hints and docstrings

<!-- Ideally *all* the methods should have [type hints](https://www.pythontutorial.net/python-basics/python-type-hints/) and [docstrings](https://google.github.io/styleguide/pyguide.html#381-docstrings). [Example](https://github.com/open-mmlab/mmdetection/blob/5b0d5b40d5c6cfda906db7464ca22cbd4396728a/mmdet/datasets/transforms/transforms.py#L41-L169) -->

- [ ] Unit tests

<!-- Unit tests for each module are required. [Example](https://github.com/open-mmlab/mmdetection/blob/5b0d5b40d5c6cfda906db7464ca22cbd4396728a/tests/test_datasets/test_transforms/test_transforms.py#L35-L88) -->

- [ ] Code polishing

<!-- Refactor your code according to reviewer's comment. -->

- [ ] Metafile.yml

<!-- It will be parsed by MIM and Inferencer. [Example](https://github.com/open-mmlab/mmdetection/blob/3.x/configs/faster_rcnn/metafile.yml) -->

- [ ] Refactor and move your modules into the core package following the codebase's file hierarchy structure.

<!-- In particular, you may have to refactor this README into a standard one. [Example](https://github.com/open-mmlab/mmdetection/blob/3.x/configs/faster_rcnn/README.md) -->

64 changes: 64 additions & 0 deletions projects/VIS_SOTA/IDOL/configs/coco_instance.py
@@ -0,0 +1,64 @@
# dataset settings
dataset_type = 'mmdet.CocoDataset'
data_root = 'data/coco/'

# file_client_args = dict(
# backend='petrel',
# path_mapping=dict({
# './data/': 's3://openmmlab/datasets/detection/',
# 'data/': 's3://openmmlab/datasets/detection/'
# }))
file_client_args = dict(backend='disk')

# Training pipeline: load images and per-instance track annotations (boxes and
# masks), resize keeping the aspect ratio, apply random horizontal flips, and
# pack the results into track inputs.
train_pipeline = [
dict(type='LoadImageFromFile', file_client_args=file_client_args),
dict(type='LoadTrackAnnotations', with_bbox=True, with_mask=True),
dict(type='mmdet.Resize', scale=(1333, 800), keep_ratio=True),
dict(type='mmdet.RandomFlip', prob=0.5),
dict(type='PackTrackInputs')
]
test_pipeline = [
dict(type='LoadImageFromFile', file_client_args=file_client_args),
dict(type='mmdet.Resize', scale=(2000, 640), keep_ratio=True),
dict(
type='LoadTrackAnnotations',
with_instance_id=False,
with_bbox=True,
with_mask=True),
dict(type='PackTrackInputs', pack_single_img=True)
]
train_dataloader = dict(
batch_size=2,
num_workers=2,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=True),
batch_sampler=dict(type='mmdet.AspectRatioBatchSampler'),
dataset=dict(
type=dataset_type,
data_root=data_root,
ann_file='annotations/instances_train2017.json',
data_prefix=dict(img='train2017/'),
filter_cfg=dict(filter_empty_gt=True, min_size=32),
pipeline=train_pipeline))
val_dataloader = dict(
batch_size=1,
num_workers=2,
persistent_workers=True,
drop_last=False,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type=dataset_type,
data_root=data_root,
ann_file='annotations/instances_val2017.json',
data_prefix=dict(img='val2017/'),
test_mode=True,
pipeline=test_pipeline))
test_dataloader = val_dataloader

val_evaluator = dict(
_scope_='mmdet',
type='CocoMetric',
ann_file=data_root + 'annotations/instances_val2017.json',
metric=['bbox', 'segm'],
format_only=False)
test_evaluator = val_evaluator
136 changes: 136 additions & 0 deletions projects/VIS_SOTA/IDOL/configs/idol_r50_8xb2-16e_coco-seq.py
@@ -0,0 +1,136 @@
_base_ = [
'./coco_instance.py', # noqa: E501
'../../../../configs/_base_/default_runtime.py'
]

custom_imports = dict(imports=['projects.VIS_SOTA.IDOL.idol_src'])

# IDOL model: ResNet-50 backbone, ChannelMapper neck, and a Deformable-DETR
# style track head with 300 queries; trained here in COCO_Video mode.
model = dict(
type='IDOL',
data_preprocessor=dict(
type='TrackDataPreprocessor',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
bgr_to_rgb=True,
pad_mask=True,
pad_size_divisor=32),
backbone=dict(
type='mmdet.ResNet',
depth=50,
num_stages=4,
out_indices=(1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=False),
norm_eval=True,
style='pytorch',
init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
neck=dict(
type='mmdet.ChannelMapper',
in_channels=[512, 1024, 2048],
kernel_size=1,
out_channels=256,
act_cfg=None,
norm_cfg=dict(type='GN', num_groups=32),
num_outs=4),
track_head=dict(
_scope_='mmdet',
type='mmtrack.IDOLTrackHead',
num_query=300,
num_classes=80,
in_channels=2048,
with_box_refine=True,
sync_cls_avg_factor=True,
as_two_stage=False,
transformer=dict(
type='mmtrack.DeformableDetrTransformer',
encoder=dict(
type='DetrTransformerEncoder',
num_layers=6,
transformerlayers=dict(
type='BaseTransformerLayer',
attn_cfgs=dict(
type='MultiScaleDeformableAttention', embed_dims=256),
feedforward_channels=1024,
ffn_dropout=0.1,
operation_order=('self_attn', 'norm', 'ffn', 'norm'))),
decoder=dict(
type='DeformableDetrTransformerDecoder',
num_layers=6,
return_intermediate=True,
transformerlayers=dict(
type='DetrTransformerDecoderLayer',
attn_cfgs=[
dict(
type='MultiheadAttention',
embed_dims=256,
num_heads=8,
dropout=0.1),
dict(
type='MultiScaleDeformableAttention',
embed_dims=256)
],
feedforward_channels=1024,
ffn_dropout=0.1,
operation_order=('self_attn', 'norm', 'cross_attn', 'norm',
'ffn', 'norm')))),
positional_encoding=dict(
type='SinePositionalEncoding',
num_feats=128,
normalize=True,
offset=-0.5),
loss_cls=dict(
type='FocalLoss',
use_sigmoid=True,
gamma=2.0,
alpha=0.25,
loss_weight=2.0),
loss_bbox=dict(type='L1Loss', loss_weight=5.0),
loss_iou=dict(type='GIoULoss', loss_weight=2.0)),
# training and testing settings
    # note: the 'mmtrack.' prefix in the assigner type below must be kept
train_cfg=dict(
assigner=dict(type='mmtrack.SimOTAAssigner', center_radius=2.5),
cur_train_mode='COCO_Video'),
)

# optimizer
embed_multi = dict(lr_mult=1.0, decay_mult=0.0)
optim_wrapper = dict(
type='OptimWrapper',
optimizer=dict(
type='AdamW',
lr=0.0001,
weight_decay=0.05,
eps=1e-8,
betas=(0.9, 0.999)),
paramwise_cfg=dict(
custom_keys={
'backbone': dict(lr_mult=0.1, decay_mult=1.0),
'query_embed': embed_multi,
'query_feat': embed_multi,
'level_embed': embed_multi,
},
norm_decay_mult=0.0),
clip_grad=dict(max_norm=0.01, norm_type=2))

# learning policy
max_iters = 6000
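# Step decay: the learning rate is multiplied by 0.1 (gamma) at iteration 4000
# of the 6000-iteration schedule.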
param_scheduler = dict(
type='MultiStepLR',
begin=0,
end=max_iters,
by_epoch=False,
milestones=[
4000,
],
gamma=0.1)
# runtime settings
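# val_interval is larger than max_iters, so no validation runs during this
# COCO-sequence training stage.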
train_cfg = dict(
type='IterBasedTrainLoop', max_iters=max_iters, val_interval=6001)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')

default_hooks = dict(
checkpoint=dict(
type='CheckpointHook', by_epoch=False, save_last=True, interval=2000))
log_processor = dict(type='LogProcessor', window_size=50, by_epoch=False)