Issues: microsoft/onnxconverter-common
#256: convert_float_to_float16() produces a model that causes ValidationError with onnx.checker.check_model() (opened Apr 19, 2023 by SergeySandler)
#259: Error: two nodes with same node name error occurred during auto_convert_mixed_precision (opened Jun 16, 2023 by KunMengcode)
#267: Performance degrades after sess_options.enable_profiling = True (opened Oct 2, 2023 by Jay19751103)
#249: Converting model from fp32 to fp16 with auto_mixed_precision_model_path gets NaN (opened Nov 22, 2022 by taoisu)
#301: Meaning of warning: The maximum opset needed by this model is only (opened Sep 7, 2024 by ogencoglu)
#289: convert float32 model to float16, but memory usage has not decreased (opened May 16, 2024 by kukugpt)