Support conversion of recognize_digits models #19
Conversation
Aha! Yes, tf2onnx had to do the same. We can't just do a passthrough of things. But I think this was inevitable, so I am glad we have gotten to it this way.
Really enjoying this.
convert.py (outdated)

```python
# Append some customized arguments of the node maker here
if op.type == 'conv2d':
    kernel_shape = fluid.executor.fetch_var(
```
Since this is all static, we could make fetch_var available inside the ops module, so we don't need such a conditional here.
Agreed. We should make convert.py cleaner, so I changed it to pass the operator and scope as arguments.
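A minimal sketch of the resulting dispatch pattern (the names `node_makers`, `convert`, and the dict-based operator records here are illustrative assumptions, not the project's real API): every node maker takes the same `(operator, scope)` pair, so the conversion loop needs no per-op conditionals.

```python
# Hypothetical sketch: uniform (operator, scope) signatures let the
# conversion loop dispatch without op-specific branches. Fluid
# operators are modeled as plain dicts for illustration.

def mul_op(operator, scope):
    # Any op-specific lookup (e.g. fetching a static shape from the
    # scope) happens inside the maker, not in convert.py.
    return ('MatMul', operator['inputs'], operator['outputs'])

def relu_op(operator, scope):
    return ('Relu', operator['inputs'], operator['outputs'])

node_makers = {'mul': mul_op, 'relu': relu_op}

def convert(program, scope):
    # convert.py stays a plain lookup-and-call loop.
    return [node_makers[op['type']](op, scope) for op in program]

program = [{'type': 'mul', 'inputs': ['X', 'Y'], 'outputs': ['Out']},
           {'type': 'relu', 'inputs': ['Out'], 'outputs': ['Y2']}]
nodes = convert(program, scope=None)
# nodes[0] → ('MatMul', ['X', 'Y'], ['Out'])
```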
fluid_onnx/ops.py (outdated)

```diff
@@ -281,6 +301,25 @@ def pad_op():
     pass


 def pool2d_op(inputs, attrs, outputs):
     if attrs['global_pooling'] is False:
```
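A hedged sketch of what this branch distinguishes (dicts stand in for `onnx.helper.make_node`, and the attribute handling is an assumption for illustration): when `global_pooling` is set, Fluid's pool2d maps to the ONNX Global* pooling ops, which take no kernel-size attributes; otherwise the windowed variants are emitted with the kernel, stride, and padding attributes.

```python
# Hypothetical sketch of the pool2d conversion branch. Node dicts
# stand in for real ONNX node construction.

POOL_TYPES = {'max': 'MaxPool', 'avg': 'AveragePool'}

def pool2d_op(inputs, attrs, outputs):
    if attrs['global_pooling'] is False:
        return {'op_type': POOL_TYPES[attrs['pooling_type']],
                'inputs': inputs, 'outputs': outputs,
                'kernel_shape': attrs['ksize'],
                'strides': attrs['strides'],
                # ONNX expects begin and end pads; Fluid stores one pair.
                'pads': attrs['paddings'] + attrs['paddings']}
    # Global pooling: kernel/stride/padding attributes are dropped.
    return {'op_type': 'Global' + POOL_TYPES[attrs['pooling_type']],
            'inputs': inputs, 'outputs': outputs}
```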
👍
Verified the conversion's correctness
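A sketch of the kind of check this verification implies (the helper name and inference callables are assumptions, not the project's real test code): run the same input through the original and the converted model and compare the outputs within a tolerance.

```python
# Hypothetical verification sketch. run_original/run_converted stand
# in for the real Fluid and ONNX-runtime inference calls; in practice
# they would each return a flat list of output values.

def outputs_match(run_original, run_converted, batch, atol=1e-5):
    a, b = run_original(batch), run_converted(batch)
    # Same length, and every pair of values within the tolerance.
    return len(a) == len(b) and all(abs(x - y) <= atol for x, y in zip(a, b))

# Example usage with stand-in models that should agree:
ok = outputs_match(lambda xs: [2 * x for x in xs],
                   lambda xs: [2.0 * x for x in xs],
                   [1, 2, 3])
# ok → True
```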
Updated, and this PR is ready for review.
convert.py (outdated)

```python
# Append some customized arguments of the node maker here
if op.type == 'conv2d':
    kernel_shape = fluid.executor.fetch_var(
```
Some thoughts on why I made this change:
```python
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
I added a new directory, fluid, to hold modules that relate to Fluid but are independent of ONNX.
```diff
@@ -224,8 +252,63 @@ def lppool_op():
     pass


-def matmul_op(inputs, attrs, outputs):
-    return make_node('MatMul', inputs=inputs, outputs=outputs)
+def mul_op(operator, scope):
```
As we can see, sometimes a Fluid operator can't be implemented with a single ONNX operator. So it's better to make the function name (mul_op here) consistent with the op's name in Fluid ('mul').
```python
inputs, attrs, outputs = get_op_io_info(operator)

# Flatten input(X) and input(Y) into 2-D matrices
x_flat_out = [inputs['X'][0] + '@flatten_0']
```
Please pay attention to the intermediate tensors between nodes. They should be named uniquely in the ONNX graph.
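A sketch of the one-to-many expansion this thread discusses (the helper name `expand_mul`, the node dicts, and the exact suffixes are illustrative assumptions): Fluid's mul becomes Flatten + Flatten + MatMul in ONNX, and each intermediate tensor gets a suffixed name so it cannot collide with other tensors in the graph.

```python
# Hypothetical sketch: expanding one Fluid op into several ONNX nodes.
# Node dicts stand in for real ONNX node construction. In the real
# converter, a per-node counter would also be folded into the suffix
# to guarantee uniqueness across repeated mul ops.

def expand_mul(inputs, outputs):
    x, y, out = inputs['X'][0], inputs['Y'][0], outputs['Out'][0]
    # Suffix the intermediate tensors so their names stay unique.
    x_flat = x + '@flatten_0'
    y_flat = y + '@flatten_1'
    return [
        {'op_type': 'Flatten', 'inputs': [x], 'outputs': [x_flat], 'axis': 1},
        {'op_type': 'Flatten', 'inputs': [y], 'outputs': [y_flat], 'axis': 1},
        {'op_type': 'MatMul', 'inputs': [x_flat, y_flat], 'outputs': [out]},
    ]
```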
- [fit_a_line](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/tests/book/test_fit_a_line.py)
- [machine_translation](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/tests/book/test_machine_translation.py)
- [recognize_digits](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/tests/book/test_recognize_digits.py)
Another model, recognize_digits_mlp, has also been verified.
Thanks, this looks pretty good to me.
@sidgoyal78 Thanks!
Resolves #18.

I opened this pull request right away because I found that the fit_a_line model is too simple: the parameters we passed to the node maker are not enough to handle complex operators such as conv2d and batch_norm, which are covered by the recognize_digits_conv model. So I redesigned the interface, and I hope it makes implementing complex models convenient for everyone.