Editorial: improve titles with canonical form using interface/method/dictionary etc.

Signed-off-by: Zoltan Kis <[email protected]>
zolkis committed Dec 1, 2022
1 parent 29c0323 commit 982fcd2
Showing 1 changed file with 41 additions and 41 deletions.
82 changes: 41 additions & 41 deletions index.bs
@@ -539,7 +539,7 @@ The following table summarizes the types of resource supported by the context cr
API {#api}
=====================

-## navigator.ml ## {#api-navigator-ml}
+## The navigator.ml interface ## {#api-navigator-ml}

A {{ML}} object is available in the {{Window}} and {{DedicatedWorkerGlobalScope}} contexts through the {{Navigator}}
and {{WorkerNavigator}} interfaces respectively and is exposed via `navigator.ml`:
@@ -552,7 +552,7 @@ Navigator includes NavigatorML;
WorkerNavigator includes NavigatorML;
</script>

-## ML ## {#api-ml}
+## The ML interface ## {#api-ml}
<script type=idl>
enum MLDeviceType {
"cpu",
@@ -612,7 +612,7 @@ This specification defines a <a>policy-controlled feature</a> identified by the
string "<code><dfn data-lt="webnn-feature">webnn</dfn></code>".
Its <a>default allowlist</a> is <code>'self'</code>.

-## MLContext ## {#api-mlcontext}
+## The MLContext interface ## {#api-mlcontext}
The {{MLContext}} interface represents a global state of neural network compute workload and execution processes. Each {{MLContext}} object has associated [=context type=], [=device type=] and [=power preference=].

The <dfn>context type</dfn> is the type of the execution context that manages the resources and facilitates the compilation and execution of the neural network graph:
@@ -830,7 +830,7 @@ partial interface MLContext {
**Returns:** {{MLCommandEncoder}}. The command encoder used to record ML workload on the GPU.
</div>

-## MLOperandDescriptor ## {#api-mloperanddescriptor}
+## The MLOperandDescriptor dictionary ## {#api-mloperanddescriptor}
<script type=idl>
enum MLInputOperandLayout {
"nchw",
@@ -865,7 +865,7 @@ dictionary MLOperandDescriptor {
1. Return |elementLength| × |elementSize|.
</div>

-## MLOperand ## {#api-mloperand}
+## The MLOperand interface ## {#api-mloperand}

An {{MLOperand}} represents an intermediary graph being constructed as a result of compositing parts of an operation into a fully composed operation.

@@ -878,7 +878,7 @@ interface MLOperand {};

See also [[#security-new-ops]]

-## MLOperator ## {#api-mloperator}
+## The MLOperator interface ## {#api-mloperator}

Objects implementing the {{MLOperator}} interface represent activation function types. As a generic construct, this interface may be reused for other types in a future version of this specification.

@@ -895,7 +895,7 @@ These activation function types are used to create other operations. One such u
The implementation of the {{MLOperator}} interface can simply be a struct that holds a string type of the activation function along with other properties needed. The actual creation of the activation function e.g. a [[#api-mlgraphbuilder-sigmoid]] or [[#api-mlgraphbuilder-relu]] can then be deferred until when the rest of the graph is ready to connect with it such as during the construction of [[#api-mlgraphbuilder-conv2d]] for example.
</div>

-## MLGraphBuilder ## {#api-mlgraphbuilder}
+## The MLGraphBuilder interface ## {#api-mlgraphbuilder}

The {{MLGraphBuilder}} interface defines a set of operations as identified by the [[#usecases]] that can be composed into a computational graph. It also represents the intermediate state of a graph building session.

@@ -937,7 +937,7 @@ interface MLGraphBuilder {
Both {{MLGraphBuilder}}.{{MLGraphBuilder/build()}} and {{MLGraphBuilder}}.{{MLGraphBuilder/buildSync()}} methods compile the graph builder state up to the specified output operands into a compiled graph according to the type of {{MLContext}} that creates it. Since this operation can be costly in some machine configurations, the calling thread of the {{MLGraphBuilder}}.{{MLGraphBuilder/buildSync()}} method must only be a worker thread to avoid potential disruption of the user experience. When the {{[[contextType]]}} of the {{MLContext}} is set to [=default-context|default=], the compiled graph is initialized right before the {{MLGraph}} is returned. This graph initialization stage is important for optimal performance of the subsequent graph executions. See [[#api-mlcommandencoder-graph-initialization]] for more detail.
</div>

-### batchNormalization ### {#api-mlgraphbuilder-batchnorm}
+### The batchNormalization() method ### {#api-mlgraphbuilder-batchnorm}
Normalize the tensor values of input features across the batch dimension using [[Batch-Normalization]]. For each input feature, the mean and variance values supplied as parameters to this calculation were previously computed across the batch dimension of the input during the model training phase of this operation.
<script type=idl>
dictionary MLBatchNormalizationOptions {
@@ -988,7 +988,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### clamp ### {#api-mlgraphbuilder-clamp}
+### The clamp() method ### {#api-mlgraphbuilder-clamp}
Clamp the input tensor element-wise within a range specified by the minimum and maximum values.
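As a non-normative sketch of the expression above, a plain-Python helper (the `clamp` name and the flat-list stand-in for a tensor are illustrative, not part of the API) might look like:

```python
def clamp(values, min_value=float("-inf"), max_value=float("inf")):
    """Clamp each element into [min_value, max_value]; both bounds optional."""
    return [min(max(x, min_value), max_value) for x in values]
```

When a bound is omitted, that side of the range is left unconstrained, matching the optional minimum/maximum options.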
<script type=idl>
dictionary MLClampOptions {
@@ -1037,7 +1037,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### concat ### {#api-mlgraphbuilder-concat}
+### The concat() method ### {#api-mlgraphbuilder-concat}
Concatenates the input tensors along a given axis.
<script type=idl>
partial interface MLGraphBuilder {
@@ -1058,7 +1058,7 @@ partial interface MLGraphBuilder {
computed as the sum of all the input sizes of the same dimension.
</div>

-### conv2d ### {#api-mlgraphbuilder-conv2d}
+### The conv2d() method ### {#api-mlgraphbuilder-conv2d}
Compute a 2-D convolution given 4-D input and filter tensors.
<script type=idl>
enum MLConv2dFilterOperandLayout {
@@ -1139,7 +1139,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### convTranspose2d ### {#api-mlgraphbuilder-convtranspose2d}
+### The convTranspose2d() method ### {#api-mlgraphbuilder-convtranspose2d}
Compute a 2-D transposed convolution given 4-D input and filter tensors.
<script type=idl>

@@ -1211,7 +1211,7 @@ partial interface MLGraphBuilder {
output size = (input size - 1) * stride + filter size + (filter size - 1) * (dilation - 1) - beginning padding - ending padding + output padding
</div>
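The per-dimension output-size formula above can be checked with a small, non-normative Python helper (the function name and scalar arguments are illustrative only):

```python
def convtranspose2d_output_size(input_size, stride, filter_size, dilation,
                                pad_begin, pad_end, output_padding):
    """Output size of one spatial dimension of a transposed 2-D convolution:
    (input - 1) * stride + filter + (filter - 1) * (dilation - 1)
    - beginning padding - ending padding + output padding."""
    return ((input_size - 1) * stride + filter_size
            + (filter_size - 1) * (dilation - 1)
            - pad_begin - pad_end + output_padding)
```

For example, a size-3 input upsampled with stride 2 and a 3-wide filter (no dilation or padding) yields a size-7 output.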

-### element-wise binary operations ### {#api-mlgraphbuilder-binary}
+### Element-wise binary operations ### {#api-mlgraphbuilder-binary}
Compute the element-wise binary addition, subtraction, multiplication, division,
maximum and minimum of the two input tensors.
<script type=idl>
@@ -1248,7 +1248,7 @@ partial interface MLGraphBuilder {
- *pow*: Compute the values of the first input tensor raised to the power of the values of the second input tensor, element-wise.
</div>

-### element-wise unary operations ### {#api-mlgraphbuilder-unary}
+### Element-wise unary operations ### {#api-mlgraphbuilder-unary}
Compute the element-wise unary operation for input tensor.
<script type=idl>
partial interface MLGraphBuilder {
@@ -1283,7 +1283,7 @@ partial interface MLGraphBuilder {
- *tan*: Compute the tangent of the input tensor, element-wise.
</div>

-### elu ### {#api-mlgraphbuilder-elu}
+### The elu() method ### {#api-mlgraphbuilder-elu}
Calculate the <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#ELU"> exponential linear unit function</a> on the input tensor element-wise. The calculation follows the expression `max(0, x) + alpha * (exp(min(0, x)) - 1)`.
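A non-normative scalar sketch of the expression `max(0, x) + alpha * (exp(min(0, x)) - 1)` in plain Python (the `elu` helper operates on a single float here, whereas the operation applies element-wise to a tensor):

```python
import math

def elu(x, alpha=1.0):
    """ELU: identity for x >= 0, alpha * (exp(x) - 1) for x < 0."""
    return max(0.0, x) + alpha * (math.exp(min(0.0, x)) - 1.0)
```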
<script type=idl>
dictionary MLEluOptions {
@@ -1322,7 +1322,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### gemm ### {#api-mlgraphbuilder-gemm}
+### The gemm() method ### {#api-mlgraphbuilder-gemm}
Calculate the [general matrix multiplication of the Basic Linear Algebra Subprograms](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3). The calculation follows the expression `alpha * A * B + beta * C`, where `A` is a 2-D tensor with shape [M, K] or [K, M], `B` is a 2-D tensor with shape [K, N] or [N, K], and `C` is broadcastable to the shape [M, N]. `A` and `B` may optionally be transposed prior to the calculation.
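The expression `alpha * A * B + beta * C` can be sketched non-normatively with 2-D Python lists (for simplicity this hypothetical helper assumes `C`, when given, already has the output shape [M, N], whereas the spec allows `C` to be broadcastable to it):

```python
def gemm(A, B, C=None, alpha=1.0, beta=1.0,
         a_transpose=False, b_transpose=False):
    """alpha * op(A) * op(B) + beta * C over 2-D lists, where op() is an
    optional transpose of the corresponding input."""
    if a_transpose:
        A = [list(col) for col in zip(*A)]
    if b_transpose:
        B = [list(col) for col in zip(*B)]
    M, K, N = len(A), len(A[0]), len(B[0])
    out = [[alpha * sum(A[m][k] * B[k][n] for k in range(K))
            for n in range(N)] for m in range(M)]
    if C is not None:
        out = [[out[m][n] + beta * C[m][n] for n in range(N)]
               for m in range(M)]
    return out
```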
<script type=idl>
dictionary MLGemmOptions {
@@ -1365,7 +1365,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### gru ### {#api-mlgraphbuilder-gru}
+### The gru() method ### {#api-mlgraphbuilder-gru}
Gated Recurrent Unit [[GRU]] recurrent network using an update gate and a reset gate to compute the hidden state that rolls into the output across the temporal sequence of the network.
<script type=idl>
enum MLRecurrentNetworkWeightLayout {
@@ -1476,7 +1476,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### gruCell ### {#api-mlgraphbuilder-grucell}
+### The gruCell() method ### {#api-mlgraphbuilder-grucell}
A single time step of the Gated Recurrent Unit [[GRU]] recurrent network using an update gate and a reset gate to compute the hidden state that rolls into the output across the temporal sequence of a recurrent network.
<script type=idl>
dictionary MLGruCellOptions {
@@ -1606,7 +1606,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### hardSigmoid ### {#api-mlgraphbuilder-hard-sigmoid}
+### The hardSigmoid() method ### {#api-mlgraphbuilder-hard-sigmoid}
Calculate the <a href="https://en.wikipedia.org/wiki/Hard_sigmoid"> non-smooth function</a> used in place of a sigmoid function on the input tensor.
<script type=idl>
dictionary MLHardSigmoidOptions {
@@ -1647,7 +1647,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### hardSwish ### {#api-mlgraphbuilder-hard-swish}
+### The hardSwish() method ### {#api-mlgraphbuilder-hard-swish}
Computes the nonlinear function `y = x * max(0, min(6, (x + 3))) / 6` that is introduced by [[MobileNetV3]] on the input tensor element-wise.
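A non-normative scalar sketch of `y = x * max(0, min(6, (x + 3))) / 6` in plain Python (the helper name is illustrative; the operation itself is element-wise over a tensor):

```python
def hard_swish(x):
    """Hard-swish: 0 for x <= -3, x for x >= 3, x * (x + 3) / 6 in between."""
    return x * max(0.0, min(6.0, x + 3.0)) / 6.0
```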
<script type=idl>
partial interface MLGraphBuilder {
@@ -1682,7 +1682,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### instanceNormalization ### {#api-mlgraphbuilder-instancenorm}
+### The instanceNormalization() method ### {#api-mlgraphbuilder-instancenorm}
Normalize the input features using [[Instance-Normalization]]. Unlike [[#api-mlgraphbuilder-batchnorm]] where the mean and variance values used in the calculation are previously computed across the batch dimension during the model training phase, the mean and variance values used in the calculation of an instance normalization are computed internally on the fly per input feature.
<script type=idl>
dictionary MLInstanceNormalizationOptions {
@@ -1743,7 +1743,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### leakyRelu ### {#api-mlgraphbuilder-leakyrelu}
+### The leakyRelu() method ### {#api-mlgraphbuilder-leakyrelu}
Calculate the <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#Leaky_ReLU"> leaky version of rectified linear function</a> on the input tensor element-wise. The calculation follows the expression `max(0, x) + alpha ∗ min(0, x)`.
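A non-normative scalar sketch of `max(0, x) + alpha * min(0, x)` (the helper name and default `alpha` shown here are illustrative; the operation applies element-wise):

```python
def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: identity for x >= 0, alpha * x for x < 0."""
    return max(0.0, x) + alpha * min(0.0, x)
```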
<script type=idl>
dictionary MLLeakyReluOptions {
@@ -1777,7 +1777,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### matmul ### {#api-mlgraphbuilder-matmul}
+### The matmul() method ### {#api-mlgraphbuilder-matmul}
Compute the matrix product of two input tensors.
<script type=idl>
partial interface MLGraphBuilder {
@@ -1810,7 +1810,7 @@ partial interface MLGraphBuilder {
which produces a scalar output.
</div>

-### linear ### {#api-mlgraphbuilder-linear}
+### The linear() method ### {#api-mlgraphbuilder-linear}
Calculate a linear function `y = alpha * x + beta` on the input tensor.
<script type=idl>
dictionary MLLinearOptions {
@@ -1847,7 +1847,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### pad ### {#api-mlgraphbuilder-pad}
+### The pad() method ### {#api-mlgraphbuilder-pad}
Inflate the tensor with constant or mirrored values on the edges.
<script type=idl>
enum MLPaddingMode {
@@ -1916,7 +1916,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### pooling operations ### {#api-mlgraphbuilder-pool2d}
+### Pooling operations ### {#api-mlgraphbuilder-pool2d}
Compute a *mean*, *L2 norm*, or *max* reduction operation across all the elements within the moving window over the input tensor. See the description of each type of reduction in [[#api-mlgraphbuilder-reduce]].
<script type=idl>
enum MLRoundingType {
@@ -1989,7 +1989,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### reduction operations ### {#api-mlgraphbuilder-reduce}
+### Reduction operations ### {#api-mlgraphbuilder-reduce}
Reduce the input along the dimensions given in *axes*.
<script type=idl>
dictionary MLReduceOptions {
@@ -2034,7 +2034,7 @@ partial interface MLGraphBuilder {
- *SumSquare*: Compute the sum of the square of all the input values along the axes.
</div>

-### relu ### {#api-mlgraphbuilder-relu}
+### The relu() method ### {#api-mlgraphbuilder-relu}
Compute the <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)">rectified linear function</a> of the input tensor.
<script type=idl>
partial interface MLGraphBuilder {
@@ -2061,7 +2061,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### resample2d ### {#api-mlgraphbuilder-resample2d}
+### The resample2d() method ### {#api-mlgraphbuilder-resample2d}
Resample the tensor values from the source to the destination spatial dimensions according to the scaling factors.
<script type=idl>
enum MLInterpolationMode {
@@ -2093,7 +2093,7 @@ partial interface MLGraphBuilder {
**Returns:** an {{MLOperand}}. The output 4-D tensor.
</div>

-### reshape ### {#api-mlgraphbuilder-reshape}
+### The reshape() method ### {#api-mlgraphbuilder-reshape}
Alter the shape of a tensor to a new shape. Reshape does not copy or change the content of the tensor. It just changes the tensor's logical dimensions for the subsequent operations.
<script type=idl>
partial interface MLGraphBuilder {
@@ -2115,7 +2115,7 @@ partial interface MLGraphBuilder {
tensor is specified by the *newShape* argument.
</div>

-### sigmoid ### {#api-mlgraphbuilder-sigmoid}
+### The sigmoid() method ### {#api-mlgraphbuilder-sigmoid}
Compute the <a href="https://en.wikipedia.org/wiki/Sigmoid_function">sigmoid function</a> of the input tensor. The calculation follows the expression `1 / (exp(-x) + 1)`.
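A non-normative scalar sketch of the expression `1 / (exp(-x) + 1)` (the helper name is illustrative; the operation applies element-wise over a tensor):

```python
import math

def sigmoid(x):
    """Logistic sigmoid: maps any real x into (0, 1), with sigmoid(0) = 0.5."""
    return 1.0 / (math.exp(-x) + 1.0)
```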
<script type=idl>
partial interface MLGraphBuilder {
@@ -2146,7 +2146,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### slice ### {#api-mlgraphbuilder-slice}
+### The slice() method ### {#api-mlgraphbuilder-slice}
Produce a slice of the input tensor.
<script type=idl>
dictionary MLSliceOptions {
@@ -2170,7 +2170,7 @@ partial interface MLGraphBuilder {
**Returns:** an {{MLOperand}}. The output tensor of the same rank as the input tensor with tensor values stripped to the specified starting and ending indices in each dimension.
</div>

-### softmax ### {#api-mlgraphbuilder-softmax}
+### The softmax() method ### {#api-mlgraphbuilder-softmax}
Compute the [softmax](https://en.wikipedia.org/wiki/Softmax_function) values of
the 2-D input tensor along axis 1.
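A non-normative sketch over 2-D Python lists, normalizing each row (axis 1); the max-subtraction is a common numerical-stability choice in implementations, not something this text mandates:

```python
import math

def softmax(x):
    """Row-wise softmax of a 2-D list: exp(v) / sum of exp over the row."""
    out = []
    for row in x:
        m = max(row)                        # subtract row max for stability
        exps = [math.exp(v - m) for v in row]
        s = sum(exps)
        out.append([e / s for e in exps])
    return out
```

Each output row sums to 1, and larger inputs map to larger probabilities within the row.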
<script type=idl>
@@ -2203,7 +2203,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### softplus ### {#api-mlgraphbuilder-softplus}
+### The softplus() method ### {#api-mlgraphbuilder-softplus}
Compute the <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#Softplus">softplus function</a> of the input tensor. The calculation follows the expression `ln(1 + exp(steepness * x)) / steepness`.
<script type=idl>
dictionary MLSoftplusOptions {
@@ -2239,7 +2239,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### softsign ### {#api-mlgraphbuilder-softsign}
+### The softsign() method ### {#api-mlgraphbuilder-softsign}
Compute the <a href="https://pytorch.org/docs/stable/generated/torch.nn.Softsign.html">softsign function</a> of the input tensor. The calculation follows the expression `x / (1 + |x|)`.
<script type=idl>
partial interface MLGraphBuilder {
@@ -2266,7 +2266,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### split ### {#api-mlgraphbuilder-split}
+### The split() method ### {#api-mlgraphbuilder-split}
Split the input tensor into a number of sub tensors along the given axis.
<script type=idl>
dictionary MLSplitOptions {
@@ -2306,7 +2306,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### squeeze ### {#api-mlgraphbuilder-squeeze}
+### The squeeze() method ### {#api-mlgraphbuilder-squeeze}
Reduce the rank of a tensor by eliminating dimensions with size 1 of the tensor shape. Squeeze only affects the tensor's logical dimensions. It does not copy or change the content in the tensor.
<script type=idl>
dictionary MLSqueezeOptions {
@@ -2326,7 +2326,7 @@ partial interface MLGraphBuilder {
**Returns:** an {{MLOperand}}. The output tensor of the same or reduced rank with the shape dimensions of size 1 eliminated.
</div>

-### tanh ### {#api-mlgraphbuilder-tanh}
+### The tanh() method ### {#api-mlgraphbuilder-tanh}
Compute the <a href="https://en.wikipedia.org/wiki/Hyperbolic_functions">hyperbolic tangent function</a> of the input tensor. The calculation follows the expression `(exp(2 * x) - 1) / (exp(2 * x) + 1)`.
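A non-normative scalar sketch of the expression `(exp(2 * x) - 1) / (exp(2 * x) + 1)` (the helper shadows `math.tanh` deliberately to show the formula; real implementations would use a library routine):

```python
import math

def tanh(x):
    """Hyperbolic tangent via the exponential form; maps reals into (-1, 1)."""
    e2x = math.exp(2.0 * x)
    return (e2x - 1.0) / (e2x + 1.0)
```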
<script type=idl>
partial interface MLGraphBuilder {
@@ -2355,7 +2355,7 @@ partial interface MLGraphBuilder {
</div>
</div>

-### transpose ### {#api-mlgraphbuilder-transpose}
+### The transpose() method ### {#api-mlgraphbuilder-transpose}
Permute the dimensions of the input tensor according to the *permutation* argument.
<script type=idl>
dictionary MLTransposeOptions {
@@ -2375,7 +2375,7 @@ partial interface MLGraphBuilder {
**Returns:** an {{MLOperand}}. The permuted or transposed N-D tensor.
</div>

-## MLGraph ## {#api-mlgraph}
+## The MLGraph interface ## {#api-mlgraph}
The {{MLGraph}} interface represents a compiled computational graph. A compiled graph once constructed is immutable and cannot be subsequently changed.

<script type=idl>
@@ -2403,7 +2403,7 @@ interface MLGraph {};
The underlying implementation provided by the User Agent.
</dl>

-## MLCommandEncoder ## {#api-mlcommandencoder}
+## The MLCommandEncoder interface ## {#api-mlcommandencoder}
The {{MLCommandEncoder}} interface represents a method of execution that synchronously records the computational workload of a compiled {{MLGraph}} to a {{GPUCommandBuffer}} on the calling thread. Since the workload is not immediately executed, just recorded, this method allows more flexibility for the caller to determine how and when the recorded commands will be submitted for execution on the GPU relative to other GPU workload on the same or different queue.

<script type=idl>
