1.8.0
API
- Initialize container node_domain_version_pair_sets (#123)
- Fix handling onnx model opset after creating Graph (#125)
- Add CumSum to blacklist and fix duplicated node name issue (#127)
- Add Softsign activation function (#135)
- Enforce model to graph opset (#145)
- Upgrade op_version to pass onnx initializer checker (#146)
- Add support for complex numbers and unsigned integers (#147)
- Add method to detect whether hummingbird is installed (#134)
- Set keepdims=1 as default for ReduceSum (#157)
- Fix rank shift in apply_reducesum and _apply_squeeze_unsqueeze (#158)
Opset 13
- Update default values for opset 13 (#160)
- Update to opset 13 (#156)
- Bump DEFAULT_OPSET_NUMBER = 13 (#159)
Optimizer
- (Optimizer) Remove Matmul from broadcast op (#129)
- (Optimizer) Refine the onnx_fx and optimizer code. (#130)
- Handle len(pred_0.tensors) == 0 in is_same_node_merge (#133)
- Handle Split op in is_same_node_merge (#136)
- Fix next.precedence range(1, 5) case in ConvBatchNormOptimizer (#137)
- Add a matmul optimization (#138)
- Pass Max/Min for PushTransposeSolution (#139)
- Support subgraphs and constants in const folding (#122)
PushTranspose
- Combine TransposeOptimizer and PushTransposeOptimizer into one (#131)
- PushTranspose optimizer for LSTM - Squeeze (#128)
- Fix PushTransposeSolution for a node_transpose_no_pass case (#140)
- Fix MergeOptimizer for the case Transpose + xxx + Transpose (#142)
- Handle multiple end.precedences for SwapOpSolution (#143)
- Skip PushTranspose when broadcast has two uninitialized inputs (#144)
float16
- Update float16 conversion script to maintain sign and finiteness of converted constants (#153)
- Support >2GB ONNX models for fp16 conversion (#167)
- Fix the minimum onnx version that supports infer_shapes_path (#168)
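
The sign-and-finiteness behavior described in #153 can be illustrated with a minimal numpy sketch (an assumption about the approach, not the library's actual implementation): float32 constants outside the finite float16 range are clamped to the largest finite float16 value of matching sign, instead of overflowing to +/-inf during the cast.

```python
import numpy as np

FP16_MAX = float(np.finfo(np.float16).max)  # 65504.0

def clamp_to_fp16(x):
    # Hypothetical helper: clip out-of-range float32 values into the finite
    # float16 range (preserving sign) before casting, so no constant becomes
    # inf/-inf after conversion.
    x = np.asarray(x, dtype=np.float32)
    return np.clip(x, -FP16_MAX, FP16_MAX).astype(np.float16)

vals = np.array([1e5, -1e5, 3.14], dtype=np.float32)
out = clamp_to_fp16(vals)
# A naive cast would overflow: np.float32(1e5).astype(np.float16) -> inf.
# The clamped result stays finite and keeps each value's sign.
```
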
onnx2py
onnx2py is a tool that converts an ONNX graph into a Python script (#161, #162, #164, #165, #166)