
Add SSD in models #440

Closed
lemairecarl opened this issue Mar 6, 2018 · 21 comments · Fixed by #3403

Comments

@lemairecarl

I've been working with PyTorch for several months, and with SSD for a few months. I'd like to add SSD to torchvision's "model zoo".

I will combine the good parts of https://github.com/amdegroot/ssd.pytorch and https://github.com/kuangliu/torchcv. Both implementations have some problems, and refactoring will be needed to reach the level of refinement expected from torchvision.

I will begin in the coming weeks if there's no opposition.
@fmassa

@fmassa
Member

fmassa commented Mar 6, 2018

Hi,

Thanks!
Yes, having training / evaluation code is definitely necessary for object detection, as the models alone are not enough.

I'm still figuring out the right balance between having things in torchvision and in external repos. I think everything that is quite generic and reusable should come here.
If the training code doesn't live here, I'm unsure whether the models should live here either, or in the repo where the training code lives. For ImageNet classification, all the models we have in torchvision (with only a few exceptions) were trained using examples/imagenet.

One thing that needs to be improved is support for other data types than images (like bounding boxes). We've addressed that with the functional interface in some way, but we are still missing a good story on how to tie things together.
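
For example, the functional interface already lets you sample the random parameters once and reuse them; the missing piece is the box counterpart (crop_boxes below is hypothetical):

import torchvision.transforms as T
import torchvision.transforms.functional as F

# sample the crop parameters once...
i, j, h, w = T.RandomCrop.get_params(img, output_size=(300, 300))
# ...then apply them to the image and, hypothetically, to its boxes
img = F.crop(img, i, j, h, w)
boxes = crop_boxes(boxes, i, j, h, w)  # hypothetical helper, not in torchvision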

What do you think? If you find time to start working on SSD, it might be good to list here your proposed action points so we can discuss.

@lemairecarl
Author

lemairecarl commented Mar 6, 2018

  1. I would add a COCO/VOC example containing the training procedure, just like for the classification nets. I think that's good practice. The code will be more modular than the imagenet example.
  2. Could you tell me a bit more about how you added support for bounding boxes? And how far would you like to go with bounding box support? Do you mean adding utils like Jaccard overlap and that kind of stuff (see the IoU sketch below)? Or maybe you're thinking about transforms? We could start by keeping the bounding box utils with the example code, and then decide which parts to integrate into pytorch or torchvision.
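
As a concrete example of the kind of util I mean, a minimal IoU (Jaccard overlap) sketch; this assumes axis-aligned boxes as (x1, y1, x2, y2) tensors and is not tied to any existing torchvision API:

import torch

def box_iou(boxes1, boxes2):
    # boxes1: (N, 4), boxes2: (M, 4), both in (x1, y1, x2, y2) format
    area1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1])
    area2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1])
    lt = torch.max(boxes1[:, None, :2], boxes2[:, :2])  # (N, M, 2) intersection top-left
    rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:])  # (N, M, 2) intersection bottom-right
    wh = (rb - lt).clamp(min=0)                         # zero where boxes don't overlap
    inter = wh[..., 0] * wh[..., 1]                     # (N, M) intersection areas
    return inter / (area1[:, None] + area2 - inter)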

TODO

  • Add COCO/VOC example (in pytorch/examples). Things will be modular, e.g. a VisdomHelper module. I already have good training/evaluation scripts
  • Make sure that the results are satisfying
  • Move the model to torchvision/models
  • Move the dataset code to torchvision/datasets
  • Move the data augmentation code to torchvision/transforms
  • Upload weights trained on COCO/VOC

@fmassa
Member

fmassa commented Mar 6, 2018

The thing I noticed while writing an initial version of Faster R-CNN in 2016 was that it requires a lot of code. This made me unsure whether the examples repo was the right place to put it.

About bounding boxes, I was mostly thinking about basic transforms, but other functionality (like intersection over union) might be worth considering.

@lemairecarl
Author

There is clearly code that is not part of the model but is needed to use it. With SSD, there is the box encoding/decoding procedure, which is specific to this model and quite heavy. There is also the prior box management, the non-max suppression in the decoding... Do you think we could have an ssd folder in torchvision/models instead of a single script?

Something like:

ssd/
    ssd.py              model, layers
    box_coder.py        bounding box encoding/decoding using prior boxes
    multibox_loss.py
    utils.py            some parts may later be integrated in the library

I think that's reasonable.
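
To give an idea of what box_coder.py would contain, here is a minimal sketch of the usual SSD center-offset encoding (a sketch, assuming boxes and priors as (N, 4) tensors in (cx, cy, w, h) format, with the conventional 0.1/0.2 variances):

import torch

def encode(matched, priors, variances=(0.1, 0.2)):
    # offsets of the matched ground-truth boxes relative to the prior boxes
    g_cxcy = (matched[:, :2] - priors[:, :2]) / (variances[0] * priors[:, 2:])
    g_wh = torch.log(matched[:, 2:] / priors[:, 2:]) / variances[1]
    return torch.cat([g_cxcy, g_wh], dim=1)

def decode(loc, priors, variances=(0.1, 0.2)):
    # inverse of encode(); non-max suppression runs on the decoded boxes
    cxcy = priors[:, :2] + loc[:, :2] * variances[0] * priors[:, 2:]
    wh = priors[:, 2:] * torch.exp(loc[:, 2:] * variances[1])
    return torch.cat([cxcy, wh], dim=1)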

@amdegroot

Hi guys,
Just read #425. I think it would be awesome if segmentation and detection could be included in torchvision.

In ssd.pytorch I've tried to avoid modifying the current torchvision datasets (COCO, Pascal) and transformations as much as possible, but I think there would still need to be some slight modifications made to the current torchvision code to support detection, which I would be more than happy to help with, if we end up deciding on this route.

That being said, I agree with Francisco that most detection implementations out there are a little heavier and would potentially require a lot more to support (e.g. yolo, faster-rcnn), so I think it makes sense to consider whether a torchvision "detection module" could be made extensible to more than just SSD before we jump into it.

@vfdev-5
Collaborator

vfdev-5 commented Mar 14, 2018

@lemairecarl what are your thoughts on how to

  • Move the data augmentation code to torchvision/transforms

Do you want to merge the current transforms.py with the ssd.pytorch augmentations, or create a separate file with similar classes like Compose, RandomCrop, etc.?

@lemairecarl
Author

I think we'll see in the process if we need to break things into multiple files. I'll rely on the pull request comments. I want to start working on this next week.

@fmassa
Member

fmassa commented Mar 14, 2018

Yes. I'm still thinking about how to integrate the transforms in a way that fits nicely with torchvision, while not requiring much boilerplate and staying generic.

@vfdev-5
Collaborator

vfdev-5 commented Mar 15, 2018

We can take inspiration from tensorpack.

For example, there could be a proxy dataset that applies transformations to the entries of a dataset according to their indices:

from torch.utils.data import Dataset


class XYTransformedDataset(Dataset):
    """Proxy dataset that applies one set of random transformation
    parameters to both the image-like and the coordinate-like entries
    of each data point."""

    def __init__(self, dataset, transformations, img_index=(0, 1), coords_index=(2,)):
        self.ds = dataset
        self.transformations = transformations
        self.img_index = img_index
        self.coords_index = coords_index

    def __len__(self):
        return len(self.ds)

    def __getitem__(self, index):
        dp = self.ds[index]  # for example, dp = (im, mask, polygons, labels)
        output_dp = list(dp)
        # the random parameters are generated from the input image
        params = self.transformations.get_params(dp[0])

        # transform images:
        for idx in self.img_index:
            output_dp[idx] = self.transformations(dp[idx], params)

        # transform coords:
        for idx in self.coords_index:
            output_dp[idx] = self.transformations.transform_coords(dp[idx], params)
        return output_dp

A base class for transformations:

class BaseRandomTransformation:

    def get_params(self, img):
        # random parameters may depend on the input image (e.g. its size)
        return None

    def __call__(self, img, params=None):
        raise NotImplementedError()

    def transform_coords(self, coords, params):
        raise NotImplementedError()

such that all the other transform classes in torchvision derive from it. For example, Compose:

class Compose(BaseRandomTransformation):

    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, img, params=None):
        if params is None:
            params = self.get_params(img)
        # each transform receives its own pre-generated parameters
        for t, p in zip(self.transforms, params):
            img = t(img, params=p)
        return img

    def transform_coords(self, coords, params):
        for t, p in zip(self.transforms, params):
            coords = t.transform_coords(coords, params=p)
        return coords

    def get_params(self, img):
        return [t.get_params(img) for t in self.transforms]

and another example:

import random

import torchvision.transforms.functional as F


class RandomCrop(BaseRandomTransformation):

    def __init__(self, size, padding=0):
        # same as in torchvision.RandomCrop
        self.size = size
        self.padding = padding

    def get_params(self, img):
        return self._get_params(img, self.size)

    @staticmethod
    def _get_params(img, output_size):
        # same as in torchvision.RandomCrop
        w, h = img.size
        th, tw = output_size
        if w == tw and h == th:
            return 0, 0, h, w

        i = random.randint(0, h - th)
        j = random.randint(0, w - tw)
        return i, j, th, tw

    def __call__(self, img, params=None):
        if self.padding > 0:
            img = F.pad(img, self.padding)

        if params is None:
            params = self._get_params(img, self.size)

        i, j, h, w = params
        return F.crop(img, i, j, h, w)

    def transform_coords(self, coords, params):
        # F.crop_coords is part of the proposal, not in torchvision yet
        i, j, h, w = params
        return F.crop_coords(coords, i, j, h, w)

Here, the problem is that some transformation parameters cannot be generated without the input image (for RandomCrop, the offsets depend on the image size).
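
For illustration, the pieces above could be tied together like this (a sketch; voc_dataset is a stand-in for any dataset yielding (image, boxes) tuples):

transforms = Compose([RandomCrop((300, 300))])
ds = XYTransformedDataset(voc_dataset, transforms,
                          img_index=(0,), coords_index=(1,))
img, boxes = ds[0]  # the image and its boxes receive the same random crop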

@vfdev-5
Collaborator

vfdev-5 commented Apr 24, 2018

fmassa commented on 14 Mar
Yes. I'm still thinking on how to integrate the transforms in a way that it fits nicely with torchvision, while not requiring much boilerplate and being generic.

@fmassa any updates on this?

@fmassa
Member

fmassa commented Apr 24, 2018

@vfdev-5 Yes, I have some proof-of-concept implementations.

I'm holding off on making a PR because I want to see how well they fit the object detection framework I'm writing (I have Fast R-CNN, Faster R-CNN, and FPN working for training and evaluation, and I'm now implementing Mask R-CNN).

If I'm happy with how they mesh with the framework, I'll push them as-is to torchvision.

@vfdev-5
Collaborator

vfdev-5 commented Apr 24, 2018

@fmassa that's super!
For impatient people like me, could we take a look at your work to get an idea, if it is open-sourced somewhere?

@fmassa
Member

fmassa commented Apr 24, 2018

It's not yet open-source, but it will be open-sourced. Stay tuned!

@devforfu

devforfu commented Sep 19, 2018

@vfdev-5 One can also find an SSD implementation in the fast.ai course lectures. However, it is somewhat hidden under the author's wrappers.

@fmassa Is your library something similar to Detectron?

@fmassa
Member

fmassa commented Sep 21, 2018

@devforfu yes, it is going to be similar to Detectron.

@varunagrawal
Contributor

@fmassa I've built a generic bounding box library for both 2D and 3D bounding boxes. I need to get some legal stuff taken care of before I can release it, but I believe it has everything you would need for object detection in general (including IoU computation).

@fmassa
Member

fmassa commented Oct 16, 2018

@varunagrawal we will be releasing a library for object detection in one week; it will contain bounding box abstractions, and once it gets a bit more mature we might move it to torchvision.

@mattans

mattans commented Dec 4, 2018

Hi @fmassa, was it released?

@fmassa
Member

fmassa commented Dec 6, 2018

@lemairecarl
Author

I'm closing this for now, since I have moved on to other projects. I might come back to it later.

@tczhangzhi
Contributor

I look forward to this project moving along, as I have always loved torchvision's elegant implementations.
But for a newbie, I think torch.hub works well:

import torch

# load NVIDIA's pretrained SSD from Torch Hub
precision = 'fp32'
ssd_model = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_ssd',
                           model_math=precision)
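
The same hub repo also exposes companion pre/post-processing utilities (entry-point name as given in NVIDIA's docs; worth double-checking on the hub page):

ssd_model.eval()
utils = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub',
                       'nvidia_ssd_processing_utils')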

datumbox reopened this Apr 22, 2021