
[js/webgpu] Enable conv+clip fuse on mobilenetv2-12-f16 #21234

Merged · 2 commits into microsoft:main on Aug 29, 2024

Conversation

@axinging (Contributor) commented on Jul 3, 2024:

Description

Enable the Conv+Clip fusion on the mobilenetv2-12-f16 model.

Motivation and Context

There are failures for some inputs.
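
As background, a minimal sketch of what a Conv+Clip fusion computes, with illustrative names that are not the onnxruntime-web API (the actual WebGPU EP emits the equivalent clamp into its generated shader code rather than running TypeScript like this):

```typescript
// Minimal sketch of the Conv+Clip fusion idea; names and types are illustrative.
// The fused Clip simply clamps each Conv output value to the Clip node's
// [min, max] range instead of running Clip as a separate node.

interface FusedClip {
  min: number; // Clip min (for an f16 model this must stay within f16 range)
  max: number; // Clip max
}

// Apply the fused activation to one convolution output element.
function applyFusedClip(convOutput: number, clip: FusedClip): number {
  return Math.min(Math.max(convOutput, clip.min), clip.max);
}

// MobileNetV2 uses ReLU6, which is expressed as Clip(min=0, max=6),
// so a Conv output of 7.5 is clamped to 6 after fusion.
console.log(applyFusedClip(7.5, { min: 0, max: 6 })); // 6
```
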
@axinging changed the title from "[js/webgpu] Enable conv+clip fuse on mobilenetv2-f16" to "[js/webgpu] Enable conv+clip fuse on mobilenetv2-12-f16" on Jul 4, 2024
@guschmue added the ep:WebGPU (ort-web webgpu provider) label on Jul 8, 2024
@axinging marked this pull request as ready for review on July 10, 2024 05:27
@axinging (Contributor, Author) commented:

@fs-eire @guschmue, PTAL

@qjia7 @xhcao @hujiajie @jzm-intel @gyagp

@guschmue (Contributor) commented:

/azp run ONNX Runtime Web CI Pipeline,Windows GPU CI Pipeline,Linux Android Emulator QNN CI Pipeline

@guschmue (Contributor) commented:

/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline

@guschmue (Contributor) commented:

/azp run Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline,Big Models

Azure Pipelines successfully started running 3 pipeline(s).

Azure Pipelines successfully started running 7 pipeline(s).

Azure Pipelines successfully started running 9 pipeline(s).

@guschmue requested a review from @skottmckay on July 31, 2024 04:10
@guschmue (Contributor) commented:

I think this was always wrong, fyi Scott for another pair of eyes.

@skottmckay (Contributor) commented:

> I think this was always wrong, fyi Scott for another pair of eyes.

We initially only had opset 13+ for the internal NHWC operators but that has now been expanded.
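
As a hedged illustration of what the opset-range point means in practice (not code from this PR, and not the actual ORT registration mechanism): kernel registrations cover an opset version range, so a model exported at a lower opset, such as mobilenetv2-12 whose "-12" suffix presumably denotes opset 12, is only matched once that range is widened. Names and version numbers below are illustrative.

```typescript
// Illustrative sketch of opset-range matching for a kernel registration.
// Names are hypothetical; version numbers are only examples.

interface KernelRegistration {
  opType: string;
  sinceVersion: number;  // first opset version the registration covers
  endVersion?: number;   // last covered version; undefined means open-ended
}

function coversOpset(reg: KernelRegistration, modelOpset: number): boolean {
  return modelOpset >= reg.sinceVersion &&
    (reg.endVersion === undefined || modelOpset <= reg.endVersion);
}

// A registration that only starts at opset 13 misses an opset-12 model:
console.log(coversOpset({ opType: 'Conv', sinceVersion: 13 }, 12)); // false

// After expanding the range to start at an earlier opset, the model is covered:
console.log(coversOpset({ opType: 'Conv', sinceVersion: 11 }, 12)); // true
```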

@fs-eire (Contributor) commented on Aug 16, 2024:

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline

@fs-eire (Contributor) commented on Aug 16, 2024:

/azp run Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-binary-size-checks-ci-pipeline

@fs-eire (Contributor) commented on Aug 16, 2024:

/azp run Big Models,Linux Android Emulator QNN CI Pipeline,Android CI Pipeline,iOS CI Pipeline,ONNX Runtime React Native CI Pipeline

Azure Pipelines successfully started running 5 pipeline(s).

Azure Pipelines successfully started running 10 pipeline(s).

Azure Pipelines successfully started running 10 pipeline(s).

@guschmue merged commit 0167338 into microsoft:main on Aug 29, 2024 (83 of 84 checks passed).