Releases: microsoft/onnxconverter-common

1.14

22 Aug 04:29
8d2f85f
  • Update the pipeline per security issue #258; the protobuf version must be 3.20.2.
  • Fix a subgraph bug in fp16 conversion: the subgraph inside a Loop operator used to be treated as part of the main graph, so the converted fp16 model was invalid. #260
  • Re-sort all ops after Cast ops are inserted so that ONNX Runtime can run inference on the model correctly. Before this PR the re-sort had to be done with other tools, which was not friendly to end users (see the usage sketch after this list). #260
  • Add pyproject.toml: direct setup.py invocation is deprecated and can no longer be used, so the package must now be built with python -m build instead of python setup.py .... #260
  • Add a new manually-publish.yml, because projects under github/microsoft must use the ESRP process to upload packages to PyPI. #263
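
For reference, a minimal fp16 conversion sketch using the float16 module this release touches; the file names are illustrative:

    # Minimal sketch: convert an fp32 ONNX model to fp16.
    # As of 1.14 the converter also handles subgraphs inside Loop and
    # re-sorts nodes after inserting Cast ops, so no extra re-sorting
    # step is needed before running the result in ONNX Runtime.
    import onnx
    from onnxconverter_common import float16

    model = onnx.load("model.onnx")            # illustrative path
    model_fp16 = float16.convert_float_to_float16(model)
    onnx.save(model_fp16, "model_fp16.onnx")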

1.13.0

03 Nov 12:14
b7bcbc9

Improvements and new features:

  • Upgrade version number to 1.13
  • Add a warning message when fp32 values are truncated to fp16 #246
  • Update tasoptions to align with security review #244
  • Add new test cases #243
  • Add RandomUniformLike op to the fp16 converter block list #239
  • Shorten the CI pipeline by reducing the matrix size #237
  • Delay importing onnxruntime to avoid ImportError when onnxruntime is not needed #235
  • Create and update OneBranchPipeline-Official.yml for security review #232 #226 #223
  • Add three description files for OSS #230
  • Add auto_mixed_precision_model_path function for large models (larger than 2 GB); see the sketch after this list #217 #230
  • Fix Resize op fp16 conversion issue #212
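
A hedged sketch of the new model-path based mixed-precision helper for models over 2 GB; the function and keyword names below are assumptions about the auto_mixed_precision_model_path module and may differ between versions:

    # Convert a >2 GB model by file path so its external data does not have
    # to fit into a single in-memory protobuf. Paths and the input name/shape
    # in the feed are illustrative; argument names are assumptions.
    import numpy as np
    from onnxconverter_common import auto_mixed_precision_model_path as ammp

    input_feed = {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}

    ammp.auto_convert_mixed_precision_model_path(
        "big_model.onnx",        # source model on disk
        input_feed,              # sample inputs used to validate accuracy
        "big_model_fp16.onnx",   # where to write the converted model
        rtol=0.01,               # accepted relative tolerance vs. fp32 outputs
    )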

Closed issues:
#245 #242 #241 #238 #229 #227 #220 #200 #218 #215 #213 #211 #208 #207 #196 #195 #171 #150

1.9.0

02 Dec 19:24

Opset 14

  • Upgrade to opset 14 (#183)
  • Fix RNN version in opset 14 change (#186)

Opset 15

  • Update max supported opset to 15 (#198)

float16

  • Temporarily disable fp16 test (#185)
  • Add op_block_list arg to float16 converter (#190)
  • Add node_block_list to fp16 conversion script (#191); see the sketch after this list
  • Added script for automatic conversion to float16, excluding the minim… (#193)
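
A short sketch of the new block-list arguments; the op types and node name in the lists are illustrative, not requirements:

    # Keep selected op types and named nodes in fp32 while converting the
    # rest of the model to fp16 (op_block_list from #190, node_block_list
    # from #191).
    import onnx
    from onnxconverter_common import float16

    model = onnx.load("model.onnx")
    model_fp16 = float16.convert_float_to_float16(
        model,
        op_block_list=["Resize", "RandomUniformLike"],  # op types left in fp32
        node_block_list=["final_softmax"],              # named node left in fp32
    )
    onnx.save(model_fp16, "model_fp16.onnx")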

onnx2py

  • Fix onnx2py for new onnx package (#177)
  • Fix onnx2py to avoid making long paths (#192)
  • Fix onnx2py for seq types (#194)

Bug fixes and CI

  • Replace 'output' with 'input' in RuntimeError (#182)
  • Create nightly CI (#184)
  • Fix nightly CI names (#187)
  • Fix nightly CI (#188)
  • Try to fix nightly CI again (#189)
  • Add InstanceNormalization op to DEFAULT_OP_BLOCK_LIST (#197)

1.8.1

13 Apr 19:35
688dec0

Increment version number to 1.8.1 (#180)

1.8.0

19 Feb 16:01
09827b5

API

  • Initialize container node_domain_version_pair_sets (#123)
  • Fix handling onnx model opset after creating Graph (#125)
  • Add CumSum to black list and fix duplicated node name issue (#127)
  • Add Softsign activation function. (#135)
  • Enforce model to graph opset (#145)
  • Upgrade op_version to pass onnx initializer checker (#146)
  • Add support for complex number, unsigned integers (#147)
  • Add method to detect whether hummingbird is installed (#134)
  • Set keepdims=1 as default for ReduceSum (#157)
  • Fix rank shift in apply_reducesum and _apply_squeeze_unsqueeze (#158)

Opset 13

  • Update default values for opset 13 (#160)
  • Update to opset 13 (#156)
  • Bump DEFAULT_OPSET_NUMBER = 13 (#159)

Optimizer

  • (Optimizer) Remove Matmul from broadcast op (#129)
  • (Optimizer) Refine the onnx_fx and optimizer code. (#130)
  • Handle len(pred_0.tensors) == 0 in is_same_node_merge (#133)
  • Handle Split op in is_same_node_merge (#136)
  • Fix next.precedence range(1, 5) case in ConvBatchNormOptimizer (#137)
  • Add a matmul optimization (#138)
  • Pass Max/Min for PushTransposeSolution (#139)
  • Support the sub graph and constant in const folding (#122)

PushTranspose

  • Combine TransposeOptimizer and PushTransposeOptimizer into one (#131)
  • PushTranspose optimizer for LSTM - Squeeze (#128)
  • Fix PushTransposeSolution for a node_transpose_no_pass case (#140)
  • Fix MergeOptimizer for the case Transpose + xxx + Transpose (#142)
  • Handle multiple end.precedences for SwapOpSolution (#143)
  • Skip PushTranspose when broadcast has two un-init inputs (#144)

float16

  • Updated float16 conversion script to maintain sign and finiteness of converted constants (#153)
  • Support >2GB ONNX models for fp16 conversion (#167); see the sketch after this list
  • Fix the onnx version from which infer_shapes_path is supported (#168)
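
A hedged sketch of fp16 conversion for models over 2 GB via the model-path API; the exact signature of convert_float_to_float16_model_path is an assumption based on these notes, and the paths are illustrative:

    # Convert a >2 GB model by path so shape inference can run against the
    # file on disk (infer_shapes_path) instead of an in-memory proto. The
    # converter clamps constants so they keep their sign and stay finite (#153).
    import onnx
    from onnxconverter_common import float16

    model_fp16 = float16.convert_float_to_float16_model_path(
        "big_model.onnx", keep_io_types=True
    )
    # Large outputs may need to be saved with external data.
    onnx.save(model_fp16, "big_model_fp16.onnx", save_as_external_data=True)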

onnx2py

onnx2py is a tool that converts an ONNX graph into a Python script (#161, #162, #164, #165, #166)
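
A hedged usage sketch; the module-level invocation below is an assumption, and the paths are illustrative:

    # Assumed invocation: generate model.py, a Python script that rebuilds
    # model.onnx with onnx.helper calls (useful for inspecting or hand-editing
    # a model).
    #   python -m onnxconverter_common.onnx2py model.onnx model.py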

v1.7.0

08 Jun 06:44
0030d9f

Major updates:

  1. Support ONNX 1.7
  2. Improve the ONNX optimizer
  3. Add a new tool to create an ONNX model from a Python function

v1.6.1

03 Apr 16:16

A patch release for an issue that occurs when a new onnx release is published.

v1.6.5

30 Jan 19:51
e42677e
Pre-release

e87c572 port the topology bug fixing. (#49)
2ffbcad Support dynamic inputs for apply_slice (#48)
7185620 A better operator builder for ONNXConverter-Common (#46)
c452903 Support dynamic pads for Pad opset 11 (#45)
9bc1744 Use axes for Squeeze/Unsqueeze (#44)
66f8887 Allow repeats to be string type for apply_tile (#43)
47a6c35 Add apply_conv (#42)
98ee169 Add apply_squeeze (#40)
82f08ed Add apply_gather, apply_greater, apply_gru, etc (#39)
672c5f7 Reformat and clean up some code.
f684aeb Use coordinate_transformation_mode as default parameter (tf 2.0 will change it) (#38)
85e9bcf Add support for detecting if h2o is installed. (#37)
b5f216b Handle dynamic shape in apply_reshape (#36)
b9700d1 Handle case input_name is list for apply_reshape (#35)
511a071 Generate graph after optimizing onnx model (#34)

v1.6.0

22 Oct 18:41
c0c226e

Major updates:

  1. Support opset 11
  2. Add fp16 conversion on attributes

v1.5.5

04 Oct 17:58
b6124d7

Major updates:

  1. Fix pad computation for auto_pad=NOTSET in MergePadConvSolution
  2. Support min.shape=0 case for apply_clip