
[WebNN EP] Support WebNN async API with Asyncify #19145

Merged: 9 commits into microsoft:main, Jan 24, 2024

Conversation

Honry (Contributor) commented Jan 15, 2024

No description provided.

Honry (Contributor, Author) commented Jan 15, 2024

@guschmue, @fs-eire, cc/ @fdwr, @huningxin,

This is a draft PR intent to test WebNN async API using Emscripten's Asyncify, I tested only two WebNN async API, CreateContext and build, these two methods are both called during ORT session creation. I can successfully call the createContext method and get expected ML context result. But for build method, seems there's race condition in _OrtCreateSession, it fails early before the build completed. See detailed log in following picture:

[screenshot: detailed console log]

Any thoughts? Looking forward to your feedback, thanks!
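
(For context, a minimal sketch of the two async WebNN calls exercised here, as they look from plain JavaScript; the deviceType option and the graph contents are illustrative, not taken from this PR:)

```js
// Sketch: the two async WebNN entry points called during session creation.
// The deviceType value and the output operand name are placeholders.
const context = await navigator.ml.createContext({ deviceType: 'gpu' });
const builder = new MLGraphBuilder(context);
// ... define the graph with builder ops, yielding an output operand `out` ...
const graph = await builder.build({ out });
```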

fs-eire (Contributor) commented Jan 17, 2024

This is something that has bothered me for a long time.

async/await is "infectious" and eventually reaches the outermost caller. Emscripten already takes care of a lot, but we still need to do something at the C++/JS boundary.

Please take a look at js_internal_api.js. Currently there are 3 Wasm APIs that may be async: _OrtRun, _OrtRunWithBinding and _OrtBindInput. To allow building the graph asynchronously in WebNN, we need to add _OrtCreateSession to that list as well. The current simple async wrapper should work for your requirement.

You need to use the build flag --use_jsep to pick up js_internal_api.js.

BTW, since _OrtRun is already exported with the JSEP async wrapper, you should be able to use MLContext.compute() instead of MLContext.computeSync() as well.
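
(A minimal sketch of the wrapper pattern described above, assuming an Asyncify build where a suspending Wasm export returns a Promise to JS; the real js_internal_api.js differs in detail:)

```js
// Sketch of an async wrapper around a Wasm export, assuming -sASYNCIFY.
// Under Asyncify, an export that suspends (e.g. while awaiting an
// MLGraphBuilder.build() promise) returns a Promise to JS instead of its
// normal return value, so callers must be prepared to await it.
const wrapAsync = (original) => async (...args) => {
  const result = original(...args);
  // The call may or may not suspend; await only if it returned a Promise.
  return result instanceof Promise ? await result : result;
};

// Exports that may suspend get wrapped; _OrtCreateSession joins the list so
// that the async graph build can complete before session creation returns.
Module['_OrtCreateSession'] = wrapAsync(Module['_OrtCreateSession']);
```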

Honry (Contributor, Author) commented Jan 17, 2024

@fs-eire, thanks so much for the information; that's really helpful.

> You need to use the build flag --use_jsep to pick up js_internal_api.js.

Does that mean the WebNN EP should be used together with JSEP, e.g. use webgpu.min.js rather than ort.min.js?

fs-eire (Contributor) commented Jan 17, 2024

> Does that mean the WebNN EP should be used together with JSEP, e.g. use webgpu.min.js rather than ort.min.js?

Yes. You can also use ort.all[.min].js, which includes all the JS code.
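
(Sketch of the consumer side under that setup; the model path is a placeholder and the 'webnn' EP name follows ort-web conventions, not anything prescribed in this PR:)

```js
// Sketch, assuming the JSEP-enabled bundle (e.g. ort.all.min.js) is loaded
// so the global `ort` is available; 'model.onnx' is a placeholder path.
const session = await ort.InferenceSession.create('model.onnx', {
  executionProviders: ['webnn'],
});
```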

Honry requested a review from a team as a code owner, January 17, 2024 12:52
Honry (Contributor, Author) commented Jan 17, 2024

@fs-eire, I just tried your method, but I still get the same error. Could you help check my latest commit to see if anything is missing? Thanks!

(Review thread on js/web/script/build.ts: outdated, resolved)
Honry changed the title from "[Draft][WebNN EP] Test WebNN async API with Asyncify" to "[WebNN EP] Test WebNN async API with Asyncify", Jan 19, 2024
Honry (Contributor, Author) commented Jan 19, 2024

@fs-eire, I'm now able to run the WebNN async API behind the JSEP, thank you for your great support!

Please help review the remaining commits. We are working on a performance comparison with the sync API; if the results look good, I think we can drop the sync API.

@huningxin, please also help review the model.cc and model_builder.cc files. Since the async API transfers the ArrayBuffer after compute completes, we now have to allocate the input and output TypedArrays on every compute call.
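
(A minimal sketch of why per-call allocation is needed, based on the WebNN spec's MLContext.compute() behavior; the 'input'/'output' operand names and outputSize are placeholders:)

```js
// Sketch: MLContext.compute() transfers (detaches) the ArrayBuffers behind
// the TypedArrays it is given and resolves with fresh buffers, so the input
// and output TypedArrays must be allocated anew on every call.
async function computeOnce(mlContext, graph, inputData, outputSize) {
  const inputs = { input: new Float32Array(inputData) };    // fresh each call
  const outputs = { output: new Float32Array(outputSize) }; // fresh each call
  const results = await mlContext.compute(graph, inputs, outputs);
  // `inputs.input` and `outputs.output` are now detached; read the result
  // from the buffers returned by compute().
  return results.outputs.output;
}
```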

fs-eire previously approved these changes on Jan 23, 2024

fs-eire (Contributor) left a comment

I took a look at all the code except for core/providers/webnn/** and it looks good to me.

fs-eire changed the title from "[WebNN EP] Test WebNN async API with Asyncify" to "[WebNN EP] Support WebNN async API with Asyncify", Jan 23, 2024
fs-eire (Contributor) commented Jan 23, 2024

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline

fs-eire (Contributor) commented Jan 23, 2024

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-python-checks-ci-pipeline,onnxruntime-binary-size-checks-ci-pipeline,Android CI Pipeline

fs-eire (Contributor) commented Jan 23, 2024

/azp run iOS CI Pipeline,ONNX Runtime React Native CI Pipeline

Azure Pipelines successfully started running 2 pipeline(s).

Azure Pipelines successfully started running 9 pipeline(s).

Azure Pipelines successfully started running 10 pipeline(s).

huningxin left a comment

lgtm, thanks!

huningxin commented Jan 23, 2024

We tested the inference performance of async (with this PR) and sync (without this PR) for the following 38 models with the WebNN EP on the GPU and CPU of a laptop. According to the results:

  • WebNN GPU: async is even slightly faster than sync, at 103% async/sync on average.
  • WebNN CPU: async is slightly slower than sync, at 93% async/sync on average (updated with Chromium Canary 123.0.6265.0). Because the WebNN EP on CPU has more op fallbacks, the performance impact of async is larger there.

@RafaelCintron

| Model | webnn-cpu async / sync (%) | webnn-gpu async / sync (%) |
| --- | --- | --- |
| albert-base-v2 | 100.43 | 101.54 |
| bart-large-cnn-encoder | 103.80 | 100.35 |
| bert-base-cased | 74.63 | 98.65 |
| bert-base-uncased | 98.16 | 104.29 |
| clip-vit-base-patch16 | NA | 103.82 |
| densenet-9 | 93.42 | 105.23 |
| detr-resnet-50 | 85.07 | 104.36 |
| dino-vitb16 | 98.83 | 101.86 |
| distilbart-cnn-6-6-encoder | 93.35 | 91.92 |
| distilbert-base-uncased | 82.73 | 98.06 |
| distilgpt2-decoder | 92.34 | 101.25 |
| efficientnet-lite4-11 | 103.58 | 110.08 |
| flan-t5-small-encoder | 95.98 | 99.89 |
| emotion-ferplus-8 | 77.90 | 103.83 |
| gpt2-decoder | 90.04 | 97.89 |
| m2m100-encoder | 93.67 | 94.61 |
| mobilenetv2-12 | 118.22 | 121.99 |
| mobilenetv2-12-f16 | 103.93 | 110.40 |
| mobilevit-small | 92.39 | 113.43 |
| msmarco-distilbert-base-v4 | 79.50 | 99.52 |
| mt5-small-encoder | 80.41 | 103.84 |
| realesrgan-t256 | NA | 89.44 |
| realesrgan-t256-f16 | NA | 99.52 |
| resnet50-v2-7 | 95.05 | 113.06 |
| sam-b-decoder | 90.19 | 109.65 |
| sam-h-decoder-f16 | NA | 113.85 |
| sd15-vae-decoder | NA | 101.80 |
| sd15-vae-encoder | NA | 102.29 |
| sd21-vae-decoder-f16 | NA | 101.36 |
| sd21-vae-encoder | NA | 102.51 |
| squeezebert-uncased | NA | 96.94 |
| t5-small-decoder | 91.44 | 103.87 |
| t5-small-encoder | 95.69 | 99.95 |
| tinyyolov2-8 | 99.05 | 132.03 |
| vit-base-patch16-224 | 98.14 | 100.90 |
| vit-gpt2-image-captioning-encoder | 101.56 | 102.10 |
| whisper-tiny-decoder | 86.53 | 109.54 |
| whisper-tiny-encoder | 93.90 | 100.15 |
| **Average async / sync** | **93.45** | **103.84** |

fdwr (Contributor) left a comment

Yulong can review this better, so I am deferring to him (but nothing looks dubious to my naive eyes).

fs-eire (Contributor) commented Jan 23, 2024

It is expected that the async API is slightly slower than the sync API on CPU, but:

> WebNN EP on CPU has more ops fallback.

Changing between sync and async alone should not make the fallback behavior different. This is weird.

fdwr previously approved these changes on Jan 23, 2024
guschmue (Contributor) commented:

lint is nagging:

[screenshot: lint error]

Honry dismissed stale reviews from fdwr and fs-eire via d39d258, January 23, 2024 01:52
huningxin commented

> Changing between sync and async alone should not make the fallback behavior different. This is weird.

@fs-eire Sorry for the confusion. I didn't mean that async leads to more op fallbacks. I meant that because the WebNN CPU EP has more op fallbacks, the performance impact of using async on the WebNN CPU EP is larger.

Honry (Contributor, Author) commented Jan 23, 2024

@guschmue, lint error fixed.

guschmue (Contributor) commented

/azp run ONNX Runtime Web CI Pipeline

guschmue (Contributor) commented

/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline

guschmue (Contributor) commented

/azp run Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline

Azure Pipelines successfully started running 1 pipeline(s).

Azure Pipelines successfully started running 7 pipeline(s).

Azure Pipelines successfully started running 9 pipeline(s).

fs-eire merged commit 7252c6e into microsoft:main, Jan 24, 2024
62 of 64 checks passed
guschmue added the ep:WebNN (WebNN execution provider) label, Jan 25, 2024
YUNQIUGUO pushed a commit that referenced this pull request Mar 29, 2024
### Description
This PR is a preview of cherry-picks for ort-web to `rel-1.17.3` based
on `rel-1.17.2`.

<details>

<summary>Changes of ort-web to cherry-pick</summary>

The following commits are from main branch.

`o` stands for pick, and `x` stands for skip.
```
o   2e0a388 [js/webgpu] Add HardSigmoid support (#19215)
o   d226e40 [js/webgpu] set query type in onRunStart (#19202)
o   61610ff [js/webgpu] Add FusedConv clip test case (#18900)
o   a33b5bd [JS/WebGPU] Added Uniforms to SkipLayerNorm. (#18788)
o   591f90c [js/webgpu] Fix issue of timestamp query (#19258)
o   7252c6e [WebNN EP] Support WebNN async API with Asyncify (#19145)
o   5b06505 [js/webgpu] Fix Tanh explosion (#19201)
o   656ca66 [js/webgpu] Support uniforms for conv, conv transpose, conv grouped (#18753)
o   a3f0e24 [js/webgpu] Support f16 uniform (#19098)
o   9e69606 fix f16 for attention, enable slice and flatten for more types (#19262)
o   624b4e2 [js/webgpu] Remove enableShapesUniforms (#19279)
o   90883a3 [js/webgpu] Add hardSigmoid activation for fusedConv (#19233)
o   85cef0a [js/webgpu] Support capture and replay for jsep (#18989)
o   d73131c [js/webgpu] Use DataType as uniform cpu type (#19281)
o   dd1f6cc [js/webgpu] resolve codescan alert (#19343)
o   3a2ab19 [js/webgpu] Refactor createTensorShapeVariables (#18883)
o   efc17e7 [js/webgpu] Fix the undefined push error (#19366)
 x  50806a7 [js/web] support external data in npm test (#19377)
o   ccbe264 [js/webgpu] Add LeakyRelu activation for fusedConv (#19369)
o   5ff27ef [js/webgpu] support customop FastGelu (#19392)
 x  03be65e [js/web] fix types exports in package.json (#19458)
o   06269a3 [js/webgpu] allow uint8 tensors for webgpu (#19545)
o   dfeda90 [JS/WebGPU] Add MatMulNBits (#19446)
o   1b48054 [js/webgpu] Create Split indices helpers by rank, not by shape (#19554)
o   3fe2c13 [js] small fix to workaround formatter (#19400)
 x  70567a4 [js/web] use ApiTensor insteadof onnxjs Tensor in TensorResultValidator (#19358)
o   6e04e36 [js/common] upgrade tsc in common from 4.9.5 to 5.2.2 (#19317)
o   58f4921 [js] changes to allow Float16Array if any polyfill is available (#19305)
o   57d6819 [js/web] Fix fused-conv is not included in npm test (#19581)
o   ebd220b Misspelling in README.md (#19433)
o   38c3432 Bump ip from 1.1.8 to 1.1.9 in /js/react_native (#19582)
o   fe82fcc [js/webgpu] Fix Conv2DTransposeMatMul f16 compilation failure (#19596)
o   76a2a48 Bump ip from 1.1.8 to 1.1.9 in /js/react_native/e2e (#19583)
o   29b1106 [node] Switch to setImmediate to avoid starving the Node.js event loop (#19610)
o   ae3d73c [JS/WebGPU] Fix Split and Where to handle corner cases. (#19613)
o   aec2389 [js/webgpu] allows a ProgramInfo's RunData to use zero sized output (#19614)
o   bb43a0f [js/webgpu] minor fixes to make tinyllama work (#19564)
o   0edb035 [js/web] fix suite test list for zero sized tensor (#19638)
o   3cb81cd [js/common] move 'env.wasm.trace' to 'env.trace' (#19617)
o   e30618d [js/webgpu] use Headless for webgpu test by default (#19702)
o   f06164e [js/web] transfer input buffer back to caller thread (#19677)
 x  a788514 [js/web] dump debug logs for karma for diagnose purpose (#19785)
o   24b72d2 [JS/WebGPU] Preserve zero size input tensor dims. (#19737)
o   4538d31 [js/webgpu] expose a few properties in WebGPU API (#19857)
o   53de2d8 [js/webgpu] Enable GroupedConvVectorize path (#19791)
o   ed250b8 [JS/WebGPU] Optimize MatMulNBits (#19852)
 x  e771a76 [js/test] align web test runner flags with ort.env (#19790)
o   79e50ae [js/web] rewrite backend resolve to allow multiple EPs (#19735)
o   acb0df2 Fix #19931 broken Get Started link of "ONNX Runtime JavaScript API" page (#19932)
o   b29849a [js/common] fix typedoc warnings (#19933)
o   afdab62 Bump follow-redirects from 1.15.4 to 1.15.6 in /js/web (#19949)
o   28ad6c3 Bump follow-redirects from 1.15.4 to 1.15.6 in /js/node (#19951)
o   7e0d424 accumulate in fp32 for Reduce* (#19868)
o   4c6a6a3 [js/webgpu] Fix NAN caused by un-initialized buffer in instance-norm (#19387)
o   01c7aaf [js/webgpu] allow setting env.webgpu.adapter (#19940)
o   c45cff6 [js/webgpu] fix maxpool / fp16 (#19981)
```

</details>

<details>
<summary>Cherry-pick commandlines</summary>

```sh
git cherry-pick 2e0a388
git cherry-pick d226e40
git cherry-pick 61610ff
git cherry-pick a33b5bd
git cherry-pick 591f90c
git cherry-pick 7252c6e
git cherry-pick 5b06505
git cherry-pick 656ca66
git cherry-pick a3f0e24
git cherry-pick 9e69606
git cherry-pick 624b4e2
git cherry-pick 90883a3
git cherry-pick 85cef0a  #<<<<< Note: conflicts
git cherry-pick d73131c
git cherry-pick dd1f6cc
git cherry-pick 3a2ab19
git cherry-pick efc17e7
git cherry-pick ccbe264
git cherry-pick 5ff27ef
git cherry-pick 06269a3
git cherry-pick dfeda90
git cherry-pick 1b48054
git cherry-pick 3fe2c13
git cherry-pick 6e04e36
git cherry-pick 58f4921
git cherry-pick 57d6819
git cherry-pick ebd220b
git cherry-pick 38c3432
git cherry-pick fe82fcc
git cherry-pick 76a2a48
git cherry-pick 29b1106
git cherry-pick ae3d73c
git cherry-pick aec2389
git cherry-pick bb43a0f
git cherry-pick 0edb035
git cherry-pick 3cb81cd
git cherry-pick e30618d
git cherry-pick f06164e
git cherry-pick 24b72d2
git cherry-pick 4538d31
git cherry-pick 53de2d8
git cherry-pick ed250b8
git cherry-pick 79e50ae
git cherry-pick acb0df2
git cherry-pick b29849a
git cherry-pick afdab62
git cherry-pick 28ad6c3
git cherry-pick 7e0d424
git cherry-pick 4c6a6a3
git cherry-pick 01c7aaf
git cherry-pick c45cff6
```
</details>

<details>
<summary>Cherry-pick conflicts</summary>

- 85cef0a #18989
  This change enables the graph capture feature for JSEP, and it was done after the ROCm EP enabled its graph capture feature. However, the ROCm EP graph capture feature is not cherry-picked into rel-1.17.2.
</details>

---------

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Jiajia Qin <[email protected]>
Co-authored-by: Xu Xing <[email protected]>
Co-authored-by: satyajandhyala <[email protected]>
Co-authored-by: Yang Gu <[email protected]>
Co-authored-by: Wanming Lin <[email protected]>
Co-authored-by: Jiajie Hu <[email protected]>
Co-authored-by: Guenther Schmuelling <[email protected]>
Co-authored-by: Matttttt <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Segev Finer <[email protected]>
Co-authored-by: Belem Zhang <[email protected]>
siweic0 pushed a commit to siweic0/onnxruntime-web that referenced this pull request May 9, 2024