Data preprocessing and object size #20

Open
hoqqanen opened this issue Feb 15, 2017 · 0 comments
@hoqqanen

It looks like only masks whose larger dimension is exactly 128 pixels, as they originally exist in COCO, are taken as canonical positive examples: https://github.com/abbypa/NNProject_DeepMask/blob/master/ExamplesGenerator.py#L157
When I run it, this yields under 30K positive examples. Given 80K COCO images, each with many segments, that seems like less data than I'd expect.
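For concreteness, this is roughly how I read that check (a minimal sketch; the function and variable names are mine, not the repo's):

```python
import numpy as np

def is_canonical_positive(mask, target=128):
    """Keep a segment only if its larger bounding-box dimension is exactly `target` px."""
    ys, xs = np.nonzero(mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return max(height, width) == target
```

With that test, any object that doesn't already happen to be ~128 px in the original image is simply discarded, which would explain the low count.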

Looking at the original DeepMask data sampler, https://github.com/facebookresearch/deepmask/blob/master/DataSampler.lua#L80, it looks like they instead build canonicalized versions of objects, i.e. objects rescaled to the canonical size, rather than keeping only objects that already happen to be that size.
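Something like the following is what I would have expected on the Python side. This is only a rough sketch using OpenCV; the 128 px target and the idea of a centered crop afterwards are my assumptions about what "canonical" means here, not code from either repo:

```python
import cv2
import numpy as np

def canonicalize(image, mask, target=128):
    """Rescale image and mask so the object's larger bounding-box dimension
    becomes exactly `target` pixels; a patch crop centered on the object
    would then follow."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    scale = target / float(max(h, w))
    image = cv2.resize(image, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_LINEAR)
    mask = cv2.resize(mask.astype(np.uint8), None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_NEAREST)
    return image, mask
```

Doing the rescale rather than the exact-size filter would let essentially every COCO segment contribute a positive example.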

(PS: I realize the paper reads "During training, an input patch x_k is considered to contain a ‘canonical’ positive example if an object is precisely centered in the patch and has maximal dimension equal to exactly 128 pixels", but it doesn't say whether objects of other original sizes are rescaled to that canonical size. Given that at inference they appear to pass many scales of the same image, https://github.com/facebookresearch/deepmask/blob/master/InferDeepMask.lua#L59, it seems likely the canonical size exists so that, e.g., a 64 px object is recognized in its canonical form once the image is upsampled.)
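In other words, inference runs over a pyramid of rescaled copies of the input, roughly like this (the scale values below are just a guess on my part, not the ones used in InferDeepMask.lua):

```python
import cv2

def image_pyramid(image, scales=(0.5, 0.707, 1.0, 1.414, 2.0)):
    """Yield rescaled copies of the image; an object that is 64 px in the
    original reaches the canonical 128 px size in the 2x copy."""
    for s in scales:
        yield s, cv2.resize(image, None, fx=s, fy=s,
                            interpolation=cv2.INTER_LINEAR)
```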
