Fixes ConvNd usability and shape inference for qonnx export on Torch 2.0.1 #692
Conversation
Hey @bwintermann, thanks for the PR, this is definitely something that needs fixing. Do you have an example of why you are adding the FINNQuantConvNdHandler to the handlers of QONNX? I don't think it's necessary, but I would like to understand your use case. Thanks
With the big Torch update to 2.0, the export of certain layers in the QONNX format no longer works, seemingly because of missing shape information that apparently could be inferred before but is now required explicitly. Bringing the FINN handler and function into scope was the easiest way to provide the required information. However, it is entirely possible that I've misunderstood the usage of the export manager and that this is not the proper way to fix it. I would also agree that importing the handler and function from another export subdirectory is not the cleanest solution, so if this change were to stay, the handlers and functions could perhaps be moved into a location shared by both the FINN and QONNX exports.
@@ -42,14 +44,16 @@ class QONNXManager(ONNXBaseManager):
             BrevitasDecoupledWeightQuantProxyHandler,
             BrevitasDecoupledWeightQuantWithInputProxyHandler,
             BrevitasTruncQuantProxyHandler,
-            BrevitasQuantLSTMLayerHandler]
+            BrevitasQuantLSTMLayerHandler,
+            FINNQuantConvNdHandler]
Remove this (FINN ONNX is being deprecated).
     custom_fns = [
         DebugMarkerFunction,
         BrevitasQuantFn,
         BrevitasBinaryQuantFn,
         BrevitasTruncFn,
-        BrevitasQuantLSTMCellFn]
+        BrevitasQuantLSTMCellFn,
+        QuantizedConvNdFn]
Remove this (FINN ONNX is being deprecated).
Included in #760, closing this.
As mentioned in issue #449, when using Torch >= 2.0.1, shape inference sometimes does not work properly when exporting to QONNX. In some cases this only produces warnings, but with Conv2d layers specifically the export fails entirely. This PR fixes both the warnings and the errors, at least for the ConvNd layer types. I tested the export on multiple example networks afterwards.
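For context, a minimal reproduction sketch of the kind of export described above. This assumes the export_qonnx helper in brevitas.export and a QuantConv2d layer from brevitas.nn; the exact entry point and arguments may differ between Brevitas versions, and the bit width and shapes are arbitrary choices for the example.

import torch
from brevitas.nn import QuantConv2d
from brevitas.export import export_qonnx  # assumed entry point; may vary by Brevitas version

# Small quantized conv model; 4-bit weights are an arbitrary choice for the sketch.
model = QuantConv2d(3, 8, kernel_size=3, weight_bit_width=4)
model.eval()

# With Torch >= 2.0.1, exports like this emitted shape-inference warnings or
# failed outright for Conv2d layers before the fix discussed in this PR.
export_qonnx(model, args=torch.randn(1, 3, 32, 32), export_path='quant_conv2d.onnx')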