Correct Spelling and Proper Capitalization in Documentation (#21790)
This pull request addresses several spelling errors and inconsistencies
in the capitalization of proper nouns within the documentation.

### Motivation and Context
To improve the quality of the documentation, spelling errors and
capitalization mistakes have been corrected. This ensures that the
content is more accurate and easier to read.
emergent authored Aug 19, 2024
1 parent 0321041 commit 99bfaa3
Showing 17 changed files with 27 additions and 27 deletions.
2 changes: 1 addition & 1 deletion docs/extensions/build.md
@@ -62,7 +62,7 @@ check this link https://docs.opensource.microsoft.com/releasing/general-guidance
(v6.9.2)

## Commands
-Launch **Developer PowerShell for VS 2022** in Windows Tereminal
+Launch **Developer PowerShell for VS 2022** in Windows Terminal
```
. $home\miniconda3\shell\condabin\conda-hook.ps1
conda activate base
2 changes: 1 addition & 1 deletion docs/extensions/index.md
@@ -33,7 +33,7 @@ pip install --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_pa

The onnxruntime-extensions package depends on onnx and onnxruntime.

-##### on Linux/MacOS
+##### on Linux/macOS

Please make sure the compiler toolkit like gcc(later than g++ 8.0) or clang are installed before the following command

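For context, once the `onnxruntime-extensions` package this hunk documents is installed, it is typically wired into a session by registering its custom-ops library — a minimal sketch, where the model filename is a placeholder for any model that uses extensions operators:

```python
import onnxruntime as ort
from onnxruntime_extensions import get_library_path

# Register the extensions' custom operators with the session options.
so = ort.SessionOptions()
so.register_custom_ops_library(get_library_path())

# "model_with_ext_ops.onnx" is a placeholder, not a file from this repo.
session = ort.InferenceSession("model_with_ext_ops.onnx", so,
                               providers=["CPUExecutionProvider"])
```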
2 changes: 1 addition & 1 deletion docs/get-started/with-java.md
@@ -20,7 +20,7 @@ The ONNX runtime provides a Java binding for running inference on ONNX models on
Java 8 or newer

## Builds
-Release artifacts are published to **Maven Central** for use as a dependency in most Java build tools. The artifacts are built with support for some popular plaforms.
+Release artifacts are published to **Maven Central** for use as a dependency in most Java build tools. The artifacts are built with support for some popular platforms.

![Version Shield](https://img.shields.io/maven-central/v/com.microsoft.onnxruntime/onnxruntime)

6 changes: 3 additions & 3 deletions docs/get-started/with-javascript/web.md
@@ -72,7 +72,7 @@ See [ONNX Runtime JavaScript API](../../api/js/index.html){:target="_blank"} for
- [SessionOptions](https://github.com/microsoft/onnxruntime-inference-examples/blob/main/js/api-usage_session-options) - a demonstration of how to configure creation of an InferenceSession instance.
- [ort.env flags](https://github.com/microsoft/onnxruntime-inference-examples/blob/main/js/api-usage_ort-env-flags) - a demonstration of how to configure a set of global flags.

-- See also: Typescript declarations for [Inference Session](https://github.com/microsoft/onnxruntime/blob/main/js/common/lib/inference-session.ts), [Tensor](https://github.com/microsoft/onnxruntime/blob/main/js/common/lib/tensor.ts), and [Environment Flags](https://github.com/microsoft/onnxruntime/blob/main/js/common/lib/env.ts) for reference.
+- See also: TypeScript declarations for [Inference Session](https://github.com/microsoft/onnxruntime/blob/main/js/common/lib/inference-session.ts), [Tensor](https://github.com/microsoft/onnxruntime/blob/main/js/common/lib/tensor.ts), and [Environment Flags](https://github.com/microsoft/onnxruntime/blob/main/js/common/lib/env.ts) for reference.

See [Tutorial: Web](../../tutorials/web/index.md) for tutorials.

@@ -98,7 +98,7 @@ The following are video tutorials that use ONNX Runtime Web in web applications:

## Supported Versions

-| EPs/Browsers | Chrome/Edge (Windows) | Chrome/Edge (Android) | Chrome/Edge (MacOS) | Chrome/Edge (iOS) | Safari (MacOS) | Safari (iOS) | Firefox (Windows) | Node.js |
+| EPs/Browsers | Chrome/Edge (Windows) | Chrome/Edge (Android) | Chrome/Edge (macOS) | Chrome/Edge (iOS) | Safari (macOS) | Safari (iOS) | Firefox (Windows) | Node.js |
|--------------|--------|---------|--------|------|---|----|------|-----|
| WebAssembly (CPU) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️<sup>\[1]</sup> |
| WebGPU | ✔️<sup>\[2]</sup> | ✔️<sup>\[3]</sup> | ✔️ ||||||
@@ -109,4 +109,4 @@ The following are video tutorials that use ONNX Runtime Web in web applications:
- \[2]: WebGPU requires Chromium v113 or later on Windows. Float16 support requires Chrome v121 or later, and Edge v122 or later.
- \[3]: WebGPU requires Chromium v121 or later on Windows.
- \[4]: WebGL support is in maintenance mode. It is recommended to use WebGPU for better performance.
-- \[5]: Requires to launch browser with commandline flag `--enable-features=WebMachineLearningNeuralNetwork`.
\ No newline at end of file
+- \[5]: Requires to launch browser with commandline flag `--enable-features=WebMachineLearningNeuralNetwork`.
6 changes: 3 additions & 3 deletions docs/get-started/with-python.md
@@ -7,7 +7,7 @@ nav_order: 1
# Get started with ONNX Runtime in Python
{: .no_toc }

-Below is a quick guide to get the packages installed to use ONNX for model serialization and infernece with ORT.
+Below is a quick guide to get the packages installed to use ONNX for model serialization and inference with ORT.

## Contents
{: .no_toc }
@@ -128,7 +128,7 @@ onnx_model = onnx.load("ag_news_model.onnx")
onnx.checker.check_model(onnx_model)
```

-- Create inference session with `ort.infernnce`
+- Create inference session with `ort.InferenceSession`
```python
import onnxruntime as ort
import numpy as np
@@ -170,7 +170,7 @@ output_path = model.name + ".onnx"
model_proto, _ = tf2onnx.convert.from_keras(model, input_signature=spec, opset=13, output_path=output_path)
output_names = [n.name for n in model_proto.graph.output]
```
-- Create inference session with `rt.infernnce`
+- Create inference session with `rt.InferenceSession`

```python
providers = ['CPUExecutionProvider']
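For context on the `ort.InferenceSession` fixes above, the session-creation pattern this tutorial describes looks roughly like the following — a minimal sketch, reusing the `ag_news_model.onnx` file named in the hunk, with the input shape and dtype as assumptions:

```python
import numpy as np
import onnxruntime as ort

# Create the session with an explicit provider list, as the tutorial does.
session = ort.InferenceSession("ag_news_model.onnx",
                               providers=["CPUExecutionProvider"])

# Read the graph's declared input name rather than hard-coding it.
input_name = session.get_inputs()[0].name

# Dummy token ids; the (1, 64) shape and int64 dtype are assumptions.
dummy_ids = np.zeros((1, 64), dtype=np.int64)

# run(None, feeds) returns every model output as a list of numpy arrays.
outputs = session.run(None, {input_name: dummy_ids})
print(outputs[0].shape)
```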
4 changes: 2 additions & 2 deletions docs/performance/device-tensor.md
@@ -109,7 +109,7 @@ Ort::Value ort_value(Ort::Value::CreateTensor(memory_info_dml, dml_resource,
ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT));
```
-A [single file sample](https://github.com/ankan-ban/HelloOrtDml/blob/main/Main.cpp) can be found on github which shows how to manage and create copy and execution command queues.
+A [single file sample](https://github.com/ankan-ban/HelloOrtDml/blob/main/Main.cpp) can be found on GitHub which shows how to manage and create copy and execution command queues.
### Python API
@@ -132,4 +132,4 @@ binding.bind_output("out", "dml")
# binding.bind_ortvalue_output("out", dml_array_out)
session.run_with_iobinding(binding)
-```
\ No newline at end of file
+```
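The Python hunk above ends at `session.run_with_iobinding(binding)`; a fuller sketch of the same I/O-binding flow, with the model path and input name as placeholders, might look like this:

```python
import numpy as np
import onnxruntime as ort

# Assumed model path; DirectML provider per the page's Python API section.
session = ort.InferenceSession("model.onnx",
                               providers=["DmlExecutionProvider"])
binding = session.io_binding()

# Copy the input to DML device memory once, then bind it ("x" is assumed).
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
x_ort = ort.OrtValue.ortvalue_from_numpy(x, "dml")
binding.bind_ortvalue_input("x", x_ort)

# Let ORT allocate the output on the same device, as the hunk shows.
binding.bind_output("out", "dml")
session.run_with_iobinding(binding)

# Bring results back to host memory only when actually needed.
result = binding.copy_outputs_to_cpu()[0]
```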
2 changes: 1 addition & 1 deletion docs/performance/transformers-optimization.md
@@ -153,7 +153,7 @@ The first command will generate ONNX models (both before and after optimizations

If you remove -o parameter, optimizer script is not used in benchmark.

-If your GPU (like V100 or T4) has TensorCore, you can append `-p fp16` to the above commands to enable mixed precision. In some decoder-only(e.g GPT2) based generative models, you can enable [strict mode](../execution-providers/CUDA-ExecutionProvider.md#enable_skip_layer_norm_strict_mode) for SkipLayerNormalization Op on CUDA EP to achieve better accuray. However, the performance will drop a bit.
+If your GPU (like V100 or T4) has TensorCore, you can append `-p fp16` to the above commands to enable mixed precision. In some decoder-only(e.g GPT2) based generative models, you can enable [strict mode](../execution-providers/CUDA-ExecutionProvider.md#enable_skip_layer_norm_strict_mode) for SkipLayerNormalization Op on CUDA EP to achieve better accuracy. However, the performance will drop a bit.

If you want to benchmark on CPU, you can remove -g option in the commands.

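The `-p fp16` flag discussed in this hunk drives an optimize-then-convert step; outside the benchmark script, a rough sketch of the equivalent Python API usage follows — the model path, head count, and hidden size are assumptions:

```python
from onnxruntime.transformers import optimizer

# Fuse attention/LayerNorm subgraphs for a GPT-2 style model (path assumed).
opt_model = optimizer.optimize_model(
    "gpt2.onnx",
    model_type="gpt2",
    num_heads=12,
    hidden_size=768,
)

# Mixed precision: convert float32 weights/ops to float16 where safe.
opt_model.convert_float_to_float16()
opt_model.save_model_to_file("gpt2_fp16.onnx")
```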
2 changes: 1 addition & 1 deletion docs/reference/compatibility.md
@@ -18,7 +18,7 @@ nav_order: 2
Newer versions of ONNX Runtime support all models that worked with prior versions, so updates should not break integrations.

## Environment compatibility
-ONNX Runtime is not explicitly tested with every variation/combination of environments and dependencies, so this list is not comprehensive. Please use this as starting reference. For specific questions or requests, please [file an issue](https://github.com/microsoft/onnxruntime/issues) on Github.
+ONNX Runtime is not explicitly tested with every variation/combination of environments and dependencies, so this list is not comprehensive. Please use this as starting reference. For specific questions or requests, please [file an issue](https://github.com/microsoft/onnxruntime/issues) on GitHub.


### Platforms
4 changes: 2 additions & 2 deletions docs/tutorials/azureml.md
@@ -96,7 +96,7 @@ model = BertForQuestionAnswering.from_pretrained(model_name)
# behave differently in inference and training mode.
model.eval()

-# Generate dummy inputs to the model. Adjust if neccessary
+# Generate dummy inputs to the model. Adjust if necessary
inputs = {
'input_ids': torch.randint(32, [1, 32], dtype=torch.long), # list of numerical ids for the tokenized text
'attention_mask': torch.ones([1, 32], dtype=torch.long), # dummy list of ones
@@ -263,7 +263,7 @@ print("ONNX Runtime version: ", onnxruntime.__version__)

We begin by instantiating a workspace object from the existing workspace created earlier in the configuration notebook.

-Note that, the following code assumes you have a config.json file containing the subscription information in the same directory as the notebook, or in a sub-directory called .azureml. You can also supply the workspace name, subscription name, and resource group explicity using the Workspace.get() method.
+Note that, the following code assumes you have a config.json file containing the subscription information in the same directory as the notebook, or in a sub-directory called .azureml. You can also supply the workspace name, subscription name, and resource group explicitly using the Workspace.get() method.

```python
import os
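The corrected sentence above describes two ways to obtain the workspace object; both are sketched below with placeholder identifiers (the real names come from your config.json or subscription):

```python
from azureml.core import Workspace

# Option 1: read config.json from this directory or a .azureml sub-directory.
ws = Workspace.from_config()

# Option 2: pass workspace name, subscription, and resource group explicitly.
ws = Workspace.get(
    name="my-workspace",                   # placeholder
    subscription_id="<subscription-id>",   # placeholder
    resource_group="my-resource-group",    # placeholder
)
print(ws.name)
```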
4 changes: 2 additions & 2 deletions docs/tutorials/csharp/bert-nlp-csharp-console-app.md
@@ -37,7 +37,7 @@ To run locally:

- [Visual Studio](https://visualstudio.microsoft.com/downloads/)
- [VS Code](https://code.visualstudio.com/Download) with the [Jupyter notebook extension](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.jupyter).
-- [Anacaonda](https://www.anaconda.com/)
+- [Anaconda](https://www.anaconda.com/)

To run in the cloud with Azure Machine Learning:

@@ -78,7 +78,7 @@ Now that we have downloaded the model we need to export it to an `ONNX` format.
- Set the `dynamic_axes` for the dynamic length input because the `sentence` and `context` variables will be of different lengths for each question inferenced.

```python
-# Generate dummy inputs to the model. Adjust if neccessary.
+# Generate dummy inputs to the model. Adjust if necessary.
inputs = {
# list of numerical ids for the tokenized text
'input_ids': torch.randint(32, [1, 32], dtype=torch.long),
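This hunk sits in the model-export walkthrough that sets `dynamic_axes`; a hedged sketch of the export call it leads up to, reusing the dummy inputs shown and assuming the checkpoint and axis/output names, is:

```python
import torch
from transformers import BertForQuestionAnswering

# Checkpoint name is an assumption; the tutorial loads a QA-finetuned BERT.
model = BertForQuestionAnswering.from_pretrained(
    "bert-large-uncased-whole-word-masking-finetuned-squad")
model.eval()

# Dummy inputs mirroring the snippet in this hunk.
input_ids = torch.randint(32, [1, 32], dtype=torch.long)
attention_mask = torch.ones([1, 32], dtype=torch.long)
token_type_ids = torch.ones([1, 32], dtype=torch.long)

# dynamic_axes marks batch and sequence dims as variable, so one ONNX file
# serves sentences and contexts of different tokenized lengths.
torch.onnx.export(
    model,
    (input_ids, attention_mask, token_type_ids),
    "bert-qa.onnx",
    opset_version=14,
    input_names=["input_ids", "attention_mask", "token_type_ids"],
    output_names=["start_logits", "end_logits"],
    dynamic_axes={name: {0: "batch", 1: "sequence"}
                  for name in ["input_ids", "attention_mask",
                               "token_type_ids", "start_logits", "end_logits"]},
)
```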
4 changes: 2 additions & 2 deletions docs/tutorials/csharp/fasterrcnn_csharp.md
@@ -93,7 +93,7 @@ image.ProcessPixelRows(accessor =>
});
```

-Here, we're creating a Tensor of the required size `(channels, paddedHeight, paddedWidth)`, accessing the pixel values, preprocessing them and finally assigning them to the tensor at the appropriate indicies.
+Here, we're creating a Tensor of the required size `(channels, paddedHeight, paddedWidth)`, accessing the pixel values, preprocessing them and finally assigning them to the tensor at the appropriate indices.


### Setup inputs
@@ -117,7 +117,7 @@ var inputs = new Dictionary<string, OrtValue>

```

-To check the input node names for an ONNX model, you can use [Netron](https://github.com/lutzroeder/netron) to visualise the model and see input/output names. In this case, this model has `image` as the input node name.
+To check the input node names for an ONNX model, you can use [Netron](https://github.com/lutzroeder/netron) to visualize the model and see input/output names. In this case, this model has `image` as the input node name.

### Run inference

2 changes: 1 addition & 1 deletion docs/tutorials/mobile/superres.md
@@ -58,7 +58,7 @@ After the script runs, you should see two ONNX files in the folder in the locati

```bash
pytorch_superresolution.onnx
-pytorch_superresolution_with_pre_and_post_proceessing.onnx
+pytorch_superresolution_with_pre_and_post_processing.onnx
```

If you load the two models into [netron](https://netron.app/) you can see the difference in inputs and outputs between the two. The first two images below show the original model with its inputs being batches of channel data, and the second two show the inputs and outputs being the image bytes.
2 changes: 1 addition & 1 deletion docs/tutorials/tensorflow.md
@@ -28,4 +28,4 @@ These examples use the [TensorFlow-ONNX converter](https://github.com/onnx/tenso

### TFLite
{: .no_toc }
-* [TFLite: Image classifciation (mobiledet)](https://github.com/onnx/tensorflow-onnx/blob/master/tutorials/mobiledet-tflite.ipynb)
+* [TFLite: Image classification (mobiledet)](https://github.com/onnx/tensorflow-onnx/blob/master/tutorials/mobiledet-tflite.ipynb)
4 changes: 2 additions & 2 deletions docs/tutorials/web/build-web-app.md
@@ -111,8 +111,8 @@ Raw input is usually a string (for NLP model) or an image (for image model). The

### Outputs

-The output of a model vary, and most need their own post-processing code. Refer to the above tutorial as an example of Javascript post processing.
+The output of a model vary, and most need their own post-processing code. Refer to the above tutorial as an example of JavaScript post processing.

## Bundlers

-_[This section is coming soon]_
\ No newline at end of file
+_[This section is coming soon]_
2 changes: 1 addition & 1 deletion docs/tutorials/web/ep-webgpu.md
@@ -53,7 +53,7 @@ To use WebGPU EP, you just need to make 2 small changes:
const session = await ort.InferenceSession.create(modelPath, { ..., executionProviders: ['webgpu'] });
```

-You might also consider installing the latest nightly build version of ONNX Runtime Web (onnxruntime-web@dev) to benefit from the latest features and improvments.
+You might also consider installing the latest nightly build version of ONNX Runtime Web (onnxruntime-web@dev) to benefit from the latest features and improvements.

## WebGPU EP features

2 changes: 1 addition & 1 deletion docs/tutorials/web/ep-webnn.md
@@ -72,7 +72,7 @@ To use WebNN EP, you just need to make 3 small changes:
```
3. If it is dynamic shape model, ONNX Runtime Web offers `freeDimensionOverrides` session option to override the free dimensions of the model. See [freeDimensionOverrides introduction](https://onnxruntime.ai/docs/tutorials/web/env-flags-and-session-options.html#freedimensionoverrides) for more details.

-WebNN API and WebNN EP are in actively development, you might consider installing the latest nightly build version of ONNX Runtime Web (onnxruntime-web@dev) to benefit from the latest features and improvments.
+WebNN API and WebNN EP are in actively development, you might consider installing the latest nightly build version of ONNX Runtime Web (onnxruntime-web@dev) to benefit from the latest features and improvements.

## Keep tensor data on WebNN MLBuffer (IO binding)

4 changes: 2 additions & 2 deletions docs/tutorials/web/excel-addin-bert-js.md
@@ -77,14 +77,14 @@ Now we are ready to jump into the code!

## The `manifest.xml` file

-The `manifest.xml` file specifies that all custom functions belong to the `ORT` namespace. You'll use the namespace to access the custom functions in Excel. Update the values in the `mainfest.xml` to `ORT`.
+The `manifest.xml` file specifies that all custom functions belong to the `ORT` namespace. You'll use the namespace to access the custom functions in Excel. Update the values in the `manifest.xml` to `ORT`.

```xml
<bt:String id="Functions.Namespace" DefaultValue="ORT"/>
<ProviderName>ORT</ProviderName>
```

-Learn more about the configuration of the [mainfest file here](https://learn.microsoft.com/office/dev/add-ins/develop/configure-your-add-in-to-use-a-shared-runtime#configure-the-manifest).
+Learn more about the configuration of the [manifest file here](https://learn.microsoft.com/office/dev/add-ins/develop/configure-your-add-in-to-use-a-shared-runtime#configure-the-manifest).

## The `functions.ts` file

