Add support for Caffe2 ONNX export #4295
Conversation
@ppwwyyxx @FrancescoMandru splitting up Detectron2 Caffe2 ONNX export in a separate (and clean) PR |
It seems the test failures are not caused by this PR, but they are indeed related to Caffe2 export. From the error message, this is caused by the ONNX graph code, which is shared by the Caffe2 and ONNX-only export paths.
Force-pushed from 6dcd01e to 4ba9841
Force-pushed from 3a4fa1f to 1fde678
We are looking for the right POCs to review. Stay tuned.
Also, can you please rebase? There seem to be conflicts. Thanks.
@zhanghang1989 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Force-pushed from 1fde678 to eeff5f2
@thiagocrepaldi has updated the pull request. You must reimport the pull request before landing.
Thanks @orionr. Just updated the PR. Let me know if I can help.
@thiagocrepaldi has updated the pull request. You must reimport the pull request before landing.
Force-pushed from 389796a to ba9497a
@thiagocrepaldi has updated the pull request. You must reimport the pull request before landing.
I have pushed a commit with automatic changes from
`TracingAdapter` creates extra outputs (through `flatten_to_tuple`) to hold metadata used to rebuild the original data format during deserialization. This is unnecessary during ONNX export, as the original data will never be reconstructed to its original format through the `Schema.__call__` API. This PR suppresses such extra output constants during `torch.onnx.export()` execution. Outside this API, the behavior is not changed, ensuring backward compatibility. Although not strictly necessary to achieve the same numerical results as PyTorch, when an ONNX model schema is compared to PyTorch's, the different number of outputs (the ONNX model would have more outputs than the PyTorch one) may not only confuse users, but also result in false negatives when coding model-comparison helpers.
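To illustrate the idea, here is a minimal, hypothetical sketch of the flatten/rebuild pattern described above. The names `flatten_to_tuple`, `rebuild`, and the schema encoding are illustrative only and are not detectron2's actual implementation; the point is that the schema is extra output metadata that an ONNX consumer never uses, so it can be dropped during export.

```python
# Hypothetical sketch of a TracingAdapter-style flatten/rebuild pattern.
# Structured outputs are flattened into a plain tuple plus a "schema"
# describing how to reassemble them. ONNX consumers only see the tuple,
# so the schema outputs are dead weight in the exported graph.

def flatten_to_tuple(obj):
    """Flatten a dict output into (values, schema); pass leaves through."""
    if isinstance(obj, dict):
        keys = sorted(obj)
        return tuple(obj[k] for k in keys), ("dict", keys)
    return (obj,), ("leaf", None)

def rebuild(values, schema):
    """Reconstruct the original structure from flat values and a schema."""
    kind, keys = schema
    if kind == "dict":
        return dict(zip(keys, values))
    return values[0]

# A model output with named fields, as a detection model might produce.
out = {"boxes": [1, 2, 3], "scores": [0.9]}
flat, schema = flatten_to_tuple(out)
# The round trip only matters for Python callers; an ONNX runtime
# consumes `flat` directly and never calls rebuild().
assert rebuild(flat, schema) == out
```

This is why suppressing the schema constants during `torch.onnx.export()` is safe: only the flat value tuple carries numerical results.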
Force-pushed from ba9497a to 7930b25
@thiagocrepaldi has updated the pull request. You must reimport the pull request before landing.
LGTM. Thanks a lot for your patience!
@zhanghang1989 @wat3rBro can you help merge this?
@orionr This one is also ready to go.
Checking with the team. Thanks.
Thanks @orionr @zhanghang1989 @wat3rBro. Any update on this merge?
@zhanghang1989 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Thanks @zhanghang1989 @orionr @wat3rBro. The internal test failed after 6h. Could you help me with a backtrace or a relevant, non-proprietary log so that I can fix it? I would guess this is a false positive, as Caffe2 export is not currently supported by Detectron2 and the files I changed are only pertinent to Caffe2 export.
gentle ping |
@mcimpoi has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
@thiagocrepaldi has updated the pull request. You must reimport the pull request before landing.
@mcimpoi the fix wasn't pushed to the repo, just to my local branch. I have pushed it now (after your reimport).
@mcimpoi has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
@mcimpoi Thank you very much for helping with this PR, really appreciate it.
Currently all Caffe2 export tests (under `tests/test_export_caffe2.py`) fail because the latest `onnx` releases no longer have the `onnx.optimizer` submodule (instead, a new module `onnxoptimizer` was created from it). However, the `fuse_bn_into_conv` optimization previously implemented within `onnx.optimizer` is already performed by `torch.onnx.export` during ONNX export. Therefore, the `onnx.optimizer` dependency can be safely removed from the detectron2 code.
Depends on pytorch/pytorch#75718
Fixes #3488
Fixes pytorch/pytorch#69674 (PyTorch repo)
P.S.: Although Caffe2 support is/will be deprecated, this PR relies on the fact that contributions are welcome, as stated in docs/tutorials/deployment.md.