[MLBuffer] Support for MLBuffer in graph execution #544

Open
bbernhar opened this issue Jan 31, 2024 · 4 comments

bbernhar commented Jan 31, 2024

Purpose/Motivation

Provides a means to execute an MLGraph using MLBuffer. This is a sub-issue of #482.

Proposed API

dictionary MLBufferView {
  required MLBuffer buffer;
  MLSize64 offset = 0;
  MLSize64 size;
};

typedef record<DOMString, MLBufferView> MLNamedMLBufferViews;

[Exposed=(Window, DedicatedWorker), SecureContext]
partial interface MLContext {
  undefined dispatch(
        MLGraph graph, MLNamedMLBufferViews inputs, MLNamedMLBufferViews outputs);
};

Example JS

const bufferA = new MLBuffer({size: 4});
const bufferB = new MLBuffer({size: 4});
const inputs = {'A': {buffer: bufferA, offset: 0, size: bufferA.size}};
const outputs = {'B': {buffer: bufferB, offset: 0, size: bufferB.size}};
context.dispatch(graph, inputs, outputs);
  • Enqueues a request to compute the graph onto some WebNN timeline
  • Execution cannot start until all input and output MLBuffers are available
  • All input and output MLBuffers are unavailable while execution is in progress
  • All work submitted after this dispatch() call which relies on an input or output MLBuffer will be queued behind this execution (see the sketch below)
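
A minimal sketch of how these rules compose across two dispatches, following the example above. graphA, graphB, and bufferA are assumed to exist already, and readBuffer() is an assumption borrowed from the companion MLBuffer proposal rather than settled API:

const bufferT = new MLBuffer({size: 4}); // intermediate buffer shared by both dispatches
const bufferZ = new MLBuffer({size: 4}); // final output

// First dispatch writes into bufferT.
context.dispatch(graphA,
    {'A': {buffer: bufferA, offset: 0, size: bufferA.size}},
    {'T': {buffer: bufferT, offset: 0, size: bufferT.size}});

// Second dispatch reads bufferT, so it is queued behind the first dispatch:
// bufferT is unavailable while the first execution is in progress.
context.dispatch(graphB,
    {'T': {buffer: bufferT, offset: 0, size: bufferT.size}},
    {'Z': {buffer: bufferZ, offset: 0, size: bufferZ.size}});

// Read-back (assumed API) resolves only after both dispatches have finished.
const result = await context.readBuffer(bufferZ);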

Alternative API proposals

N/A

Opens

  1. Should this method be on the MLGraph (related to API simplification: context owns builder, graph becomes internal slot #303)? @a-sully
  2. Is it valid to pass the same MLBuffer as both an input and output of the same dispatch() call? @a-sully
  3. If the approach is flexible enough to allow graph execution on all backends, do we need a separate compute() method? @a-sully
  4. Can dispatch be exclusive to MLBuffer bindings? @bbernhar
bbernhar changed the title from "[MLBuffer] Support bind to and execute MLBuffer in graph execution" to "[MLBuffer] Support for MLBuffer in graph execution" on Jan 31, 2024
bbernhar (Author) commented

@anssiko FYI, this issue should be tagged as non-interop

inexorabletash (Member) commented

I was interpreting webgpu interop to apply to anything easing interop between WebNN and WebGPU, which would implicitly cover anything related to MLBuffer.

Is there another interpretation we should be using? And whatever we decide, can @anssiko apply it to the label so it shows at https://github.com/webmachinelearning/webnn/labels ? (I don't have permission to do so.)

bbernhar (Author) commented

Is there another interpretation we should be using?

Good question. Early on, MLBuffer was just for interop. But now MLBuffer will also be used without interop (no WebGPU involved), just like any other WebNN primitive (e.g. MLGraph). Since we now have a dedicated issue for the interop part, I don't think the webgpu interop label is as useful.

inexorabletash (Member) commented

Thanks @bbernhar that helps! I'll drop the label, we can always re-add it later.
