
Port test_quantized_models.py to pytest #4034

Merged: 7 commits, Jun 10, 2021

Conversation

NicolasHug (Member):

I'm taking care of this one myself because it's a bit more subtle than the rest

datumbox (Contributor) left a comment:

@NicolasHug Thanks for the PR.

I think you caught an issue on our previous test, see below.

Review thread on test/test_quantized_models.py (outdated, resolved):
def test_classification_model(name):
    # First check if quantize=True provides models that can run with input data
    input_shape = (1, 3, 224, 224)
datumbox (Contributor):

I think it would be best, both for consistency and functionality reasons, to use the same approach as in:

vision/test/test_models.py, lines 397 to 404 in 44fefe6:

defaults = {
    'num_classes': 50,
    'input_shape': (1, 3, 224, 224),
}
kwargs = {**defaults, **_model_params.get(model_name, {})}
input_shape = kwargs.pop('input_shape')
model = models.__dict__[model_name](**kwargs)

Then InceptionV3 can override its parameters as follows:

vision/test/test_models.py, lines 179 to 182 in 44fefe6:

_model_params = {
    'inception_v3': {
        'input_shape': (1, 3, 299, 299)
    },
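For context, the pattern relies on Python's dict-unpacking precedence: entries unpacked later win, so a per-model entry overrides the shared default. A minimal self-contained illustration (the model names here are just examples, not the actual test list):

defaults = {'num_classes': 50, 'input_shape': (1, 3, 224, 224)}
_model_params = {'inception_v3': {'input_shape': (1, 3, 299, 299)}}

for model_name in ('resnet18', 'inception_v3'):
    # Entries unpacked later override earlier ones
    kwargs = {**defaults, **_model_params.get(model_name, {})}
    print(model_name, kwargs['input_shape'])
    # resnet18 (1, 3, 224, 224)
    # inception_v3 (1, 3, 299, 299)

Model-specific tweaks then live in a single dictionary rather than in scattered if branches.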

NicolasHug (Member, Author):

Looks like we can use input_shape = (1, 3, 224, 224) for all models, including inception, with no impact on computation time. So let's leave it as is, unless we must use (1, 3, 299, 299) for a reason I'm not aware of?

datumbox (Contributor), Jun 10, 2021:

It is true that most of the models accept smaller input sizes than the canonical ones. But in the case of classification, where speed is not a problem, we always use the correct size (hence why the previous code defined 299). Given that we are investigating how we can use the actual pre-trained weights (provided we won't introduce flakiness from downloading data over the internet), I would recommend following the same logic as above.

The change should be very easy to make, and it will follow the example of all the other models, building a stronger case for why this test can be part of test_models.py.

NicolasHug (Member, Author):

OK, I'll use (1, 3, 299, 299) and move to test_models.py, no need to do that in another PR.

> Given that we are investigating how we can use the actual pre-trained weights (provided we won't introduce flakiness from downloading data over the internet)

Could you detail what you mean by that?

datumbox (Contributor):

> OK, I'll use (1, 3, 299, 299) and move to test_models.py, no need to do that in another PR.

That works.

> Could you detail what you mean by that?

Currently we use random weight initialization for the models in the tests, which means that a) we don't validate against real images and real data, and b) we often see flakiness (as with the detection models). If we had a way to use actual weights without downloading them (maybe by exploiting CircleCI caching), we could improve the quality of our tests. Happy to chat offline about that, as it's a big topic unrelated to this PR.

def do_test(self, model_name=model_name):
    input_shape = (1, 3, 224, 224)
    if model_name in ['inception_v3']:
        input_shape = (1, 3, 299, 299)
datumbox (Contributor):

I understand that you removed this because we previously filtered out inception incorrectly. Let's restore it using the testing approach of the normal models, as described above (see the sketch below).
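A minimal sketch of what the restored check might look like in the pytest port, assuming the _model_params override table suggested above (the parametrized model list and test name are illustrative, not the exact code merged in this PR):

import pytest
import torch
from torchvision import models

# Hypothetical override table, mirroring test_models.py
_model_params = {
    'inception_v3': {'input_shape': (1, 3, 299, 299)},
}

@pytest.mark.parametrize('model_name', ['resnet18', 'inception_v3'])
def test_quantized_classification_model(model_name):
    defaults = {'input_shape': (1, 3, 224, 224)}
    kwargs = {**defaults, **_model_params.get(model_name, {})}
    input_shape = kwargs.pop('input_shape')

    # First check if quantize=True provides models that can run with input data
    model = models.quantization.__dict__[model_name](quantize=True, **kwargs)
    model.eval()
    model(torch.rand(input_shape))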

Another review thread on test/test_quantized_models.py (outdated, resolved).
NicolasHug (Member, Author):

Thanks for the review @datumbox. BTW, do you know why this test is in a file of its own? We might as well move it to test_models.py, right?

datumbox (Contributor):

@NicolasHug I would be 100% OK with moving it along with the other models. Given that other submodules such as detection and segmentation are in the main models file, I don't see why quantized should be any different.

Perhaps we can move them to test_models.py in a separate follow-up PR?

datumbox (Contributor) left a comment:

LGTM, thx!

@NicolasHug NicolasHug merged commit 7326623 into pytorch:master Jun 10, 2021
facebook-github-bot pushed a commit that referenced this pull request Jun 14, 2021
Reviewed By: fmassa

Differential Revision: D29097741

fbshipit-source-id: 6979c9432b3f64b8d464c2b7bfc633509c3a28ec