
Detect LayerNorm in presence of reciprocal and div of 1 #2609

Merged
2 commits merged into onnx:main on Nov 9, 2023

Conversation

AlexandreEichenberger
Collaborator

The T5 model apparently uses an x * (1 / y) pattern instead of x / y. The new recompose pattern now also recognizes Reciprocal ops as well as Div ops whose numerator is the constant 1.

Lit tests added.
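
For illustration only (a hand-written sketch rather than one of the added lit tests; the shapes and value names here are assumptions), the two equivalent ways of writing x / y that the recomposition now has to recognize look roughly like this in ONNX-dialect MLIR:

// x * (1 / y) written with an explicit Reciprocal op (illustrative shapes).
%inv1 = "onnx.Reciprocal"(%y) : (tensor<4x8xf32>) -> tensor<4x8xf32>
%r1 = "onnx.Mul"(%x, %inv1) : (tensor<4x8xf32>, tensor<4x8xf32>) -> tensor<4x8xf32>

// x * (1 / y) written as a Div whose numerator is the constant 1.
%one = onnx.Constant dense<1.0> : tensor<f32>
%inv2 = "onnx.Div"(%one, %y) : (tensor<f32>, tensor<4x8xf32>) -> tensor<4x8xf32>
%r2 = "onnx.Mul"(%x, %inv2) : (tensor<4x8xf32>, tensor<4x8xf32>) -> tensor<4x8xf32>

// Either form should be treated like the plain division used by the original LayerNorm pattern.
%r3 = "onnx.Div"(%x, %y) : (tensor<4x8xf32>, tensor<4x8xf32>) -> tensor<4x8xf32>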

@tungld (Collaborator) left a comment

LGTM. Thanks!

@tungld tungld merged commit ad2eb8c into onnx:main Nov 9, 2023
5 checks passed
@jenkins-droid

Jenkins Linux ppc64le Build #12310 [push] detect LayerNorm in pres... started at 21:23

@jenkins-droid

Jenkins Linux s390x Build #13318 [push] detect LayerNorm in pres... started at 21:16

@jenkins-droid

Jenkins Linux amd64 Build #13293 [push] detect LayerNorm in pres... started at 20:16

@jenkins-droid

Jenkins Linux amd64 Build #13293 [push] detect LayerNorm in pres... passed after 1 hr 4 min

@jenkins-droid

Jenkins Linux s390x Build #13318 [push] detect LayerNorm in pres... passed after 1 hr 31 min

@jenkins-droid

Jenkins Linux ppc64le Build #12310 [push] detect LayerNorm in pres... passed after 1 hr 54 min

cjvolzka pushed a commit to cjvolzka/onnx-mlir that referenced this pull request Nov 15, 2023
* detect LayerNorm in presence of reciprocal and div of 1 (onnx#2609)

Signed-off-by: Alexandre Eichenberger <[email protected]>

* [NNPA] Use F16 as element type for zTensor (onnx#2611)

* Use f16 as element type for zTensor

Signed-off-by: Tung D. Le <[email protected]>

---------

Signed-off-by: Tung D. Le <[email protected]>

* Layernorm: convert instance norm and group norm to layer norm. (onnx#2595)

Signed-off-by: Alexandre Eichenberger <[email protected]>
Co-authored-by: Tung D. Le <[email protected]>

* Parse and set --mcpu in onnx-mlir-opt command (onnx#2614)

Signed-off-by: Tung D. Le <[email protected]>

* Update sqrt.mlir

* Update sqrt.mlir

* Update invsqrt.mlir

* Update invsqrt.mlir

* Update invsqrt.mlir

* Update invsqrt.mlir

Co-authored-by: Alexandre Eichenberger <[email protected]>
Co-authored-by: Tung D. Le <[email protected]>
Co-authored-by: C-P2PN897 <[email protected]>
cjvolzka added a commit to cjvolzka/onnx-mlir that referenced this pull request Nov 15, 2023
* detect LayerNorm in presence of reciprocal and div of 1 (onnx#2609)

Signed-off-by: Alexandre Eichenberger <[email protected]>

* [NNPA] Use F16 as element type for zTensor (onnx#2611)

* Use f16 as element type for zTensor

Signed-off-by: Tung D. Le <[email protected]>

---------

Signed-off-by: Tung D. Le <[email protected]>

* Layernorm: convert instance norm and group norm to layer norm. (onnx#2595)

Signed-off-by: Alexandre Eichenberger <[email protected]>
Co-authored-by: Tung D. Le <[email protected]>

* Parse and set --mcpu in onnx-mlir-opt command (onnx#2614)

Signed-off-by: Tung D. Le <[email protected]>

* Import dim_param for model inputs and outputs (onnx#2616)

* Import dim_param for model inputs and outputs
* use argument attributes

Signed-off-by: Tung D. Le <[email protected]>

---------

Signed-off-by: Tung D. Le <[email protected]>
Co-authored-by: Alexandre Eichenberger <[email protected]>

* [DialectBuilder] add builder functions for ONNXSumOp and ONNXConvOp (onnx#2572)

The DialectBuilder class seems to be missing the functions to create the
ONNXSumOp and ONNXConvOp nodes and check their shape.  This patch adds
the necessary functions.

Signed-off-by: Ashay Rane <[email protected]>
Signed-off-by: Alexandre Eichenberger <[email protected]>
Co-authored-by: Alexandre Eichenberger <[email protected]>

* [StableHLO] Lowers PadOp (constant mode) & GatherElements Op to StableHLO (onnx#2602)

* [Stablehlo] Pad constant mode & GatherElements to Stablehlo

Signed-off-by: chongsong.chen <[email protected]>
Signed-off-by: Yan Xu <[email protected]>
Co-authored-by: chongsong.chen <[email protected]>
Co-authored-by: Alexandre Eichenberger <[email protected]>

* [build] Add cmake option to enable/disable Java components build (onnx#2613)

* Add ONNX_MLIR_ENABLE_JAVA cmake option (default TRUE)

Signed-off-by: Boyana Norris <[email protected]>
Co-authored-by: Alexandre Eichenberger <[email protected]>

Co-authored-by: Alexandre Eichenberger <[email protected]>
Co-authored-by: Tung D. Le <[email protected]>
Co-authored-by: Ashay Rane <[email protected]>
Co-authored-by: Yan Xu <[email protected]>
Co-authored-by: chongsong.chen <[email protected]>
Co-authored-by: Boyana Norris <[email protected]>
cjvolzka added a commit to cjvolzka/onnx-mlir that referenced this pull request Nov 15, 2023
* 'main' of github.ibm.com:zosdev/onnx-mlir:
  Use dim_params in dynamic dimension analysis (onnx#2620)
  Update rapidcheck to include the fix for missing <cstdint> include (onnx#2623)
  Initial changes for llvm uplift (onnx#2568)
  [build] Add cmake option to enable/disable Java components build (onnx#2613)
  [StableHLO] Lowers PadOp (constant mode) & GatherElements Op to StableHLO (onnx#2602)
  [DialectBuilder] add builder functions for ONNXSumOp and ONNXConvOp (onnx#2572)
  Import dim_param for model inputs and outputs (onnx#2616)
  Parse and set --mcpu in onnx-mlir-opt command (onnx#2614)
  Layernorm: convert instance norm and group norm to layer norm. (onnx#2595)
  [NNPA] Use F16 as element type for zTensor (onnx#2611)
  detect LayerNorm in presence of reciprocal and div of 1 (onnx#2609)

# Conflicts:
#	test/mlir/conversion/onnx_to_krnl/NN/Normalization_O3_SIMD_canonicalize.mlir