op metadata that helps avoid implementation mistakes #243
Comments
The group had this issue on its agenda, but we did not record anything in the minutes. All: please share your feedback on the proposal in this issue.
We discussed this issue again on our call. It was noted that it may not be possible to extract very useful metadata for all ops, given the complexity of some operations. @quidity, we'd like to hear which of these options would work for you:
Option 1: add metadata into each op definition (using relu as an example):

> 7.7.21. relu
> Compute the rectified linear function of the input tensor.
> Arguments:
> Returns: an MLOperand. The output tensor of the same shape as x.
> Metadata:

Option 2: add a single table combining metadata across all ops:
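Either option ultimately captures the same machine-readable data. A minimal sketch of what it could look like, with entirely hypothetical field names (none of these appear in the WebNN spec):

```python
# Sketch of machine-readable per-op metadata. Field names are invented
# for illustration; they are not part of the WebNN specification.
OP_METADATA = {
    "relu": {
        "changes_tensor_contents": True,   # output values differ from input
        "changes_tensor_shape": False,     # output shape equals input shape
        "in_place_safe": True,             # input buffer may be reused as output
        "output_shape": "same_as_input",   # a tiny shape-rule "language"
    },
    "matMul": {
        "changes_tensor_contents": True,
        "changes_tensor_shape": True,
        "in_place_safe": False,
        "output_shape": "[a.shape[0], b.shape[1]]",  # [M,K] x [K,N] -> [M,N]
    },
}

def shape_preserving_ops():
    """Ops a fuzzer or hardened debug build can assume keep shapes unchanged."""
    return [name for name, meta in OP_METADATA.items()
            if not meta["changes_tensor_shape"]]
```

A per-op table (Option 1) or a combined appendix table (Option 2) are just two renderings of this same structure.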
Having something machine-readable (an appendix, or metadata directly on the ops) would be more useful when constructing fuzzers or running a hardened debug implementation designed to detect implementation errors. A table is useful to people, but less so to machines.
I would like to point out that once the graph is fully constructed and compiled, the input shapes into each of the operations in the graph are inferred and finalized. This is part of what the implementation of the graph compilation step already does.
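To illustrate the point, here is a toy sketch of shape inference happening at graph-construction time, so that mismatches fail before any execution. The builder functions are made up for illustration; this is not the WebNN API:

```python
# Toy shape inference at graph-construction time (hypothetical helpers,
# not the WebNN API). Shapes are inferred and finalized before execution.

class ShapeError(ValueError):
    """Raised when an op's input shapes are incompatible."""

def infer_relu(x_shape):
    # relu is elementwise: the output shape equals the input shape.
    return list(x_shape)

def infer_matmul(a_shape, b_shape):
    # [M, K] x [K, N] -> [M, N]; the inner dimensions must agree.
    if a_shape[1] != b_shape[0]:
        raise ShapeError(f"matMul: {a_shape} x {b_shape} is invalid")
    return [a_shape[0], b_shape[1]]
```

For example, `infer_matmul([2, 3], [3, 4])` yields `[2, 4]`, while `infer_matmul([2, 3], [5, 4])` raises a `ShapeError` at build time rather than producing a corrupted graph.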
As @wchao1115 mentioned, the input shapes into each operation are inferred and finalized when the graph is compiled.
In #251 we landed an update to https://www.w3.org/TR/webnn/#security that describes how the chosen API design (graph definition API) minimizes the attack surface for the compiled computational graph. We believe this update makes the benefits of this design choice clearer and addresses this issue.
Some way to help implementors validate that data is being passed correctly between nodes in the computation graph and has the expected sizes, etc.
Perhaps a little language which describes the required sizes/shapes and can be used to create checks that can be enforced as a graph is constructed, and as a graph is executed.
e.g. fields for ops which indicate:
- Does it change input/output tensor contents?
- Does it change input/output tensor sizes?
- Can the input tensor be the output tensor?
- Shape of the input, shape of the output?
- Constraints on other input parameters that can be turned into automatic checks.
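Fields like the ones above could drive automatic checks at graph construction and execution. A hedged sketch, with the field names and checker invented for illustration:

```python
# Sketch: enforcing proposed metadata fields as automatic checks.
# Field names and the checker are hypothetical, not from the spec.

RELU_META = {
    # Shape rule: output shape is identical to the input shape.
    "output_shape_rule": lambda in_shape: list(in_shape),
    # The input tensor may double as the output tensor for this op.
    "in_place_safe": True,
}

def check_node(meta, in_shape, out_shape, shares_buffer):
    """Return a list of violations for one node; empty means the node is valid."""
    errors = []
    expected = meta["output_shape_rule"](in_shape)
    if list(out_shape) != expected:
        errors.append(f"expected output shape {expected}, got {list(out_shape)}")
    if shares_buffer and not meta["in_place_safe"]:
        errors.append("op cannot write its output into its input buffer")
    return errors
```

A hardened debug implementation could run `check_node` on every node as the graph is constructed, and a fuzzer could use the same metadata to generate inputs that deliberately violate each constraint.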