Fix links #4
natke committed Oct 26, 2021
1 parent 47aa7a6 commit 7be3bb0
Showing 11 changed files with 22 additions and 22 deletions.
2 changes: 1 addition & 1 deletion docs/build/android-ios.md
@@ -7,7 +7,7 @@ nav_order: 5
# Build ONNX Runtime for Android and iOS
{: .no_toc }

-Below are general build instructions for Android and iOS. For examples of deploying ONNX Runtime on mobile platforms (includes overall smaller package size and other configurations), see [Mobile Tutorials](../tutorials/mobile.md).
+Below are general build instructions for Android and iOS. For examples of deploying ONNX Runtime on mobile platforms (includes overall smaller package size and other configurations), see [Mobile Tutorials](../tutorials/mobile).

## Contents
{: .no_toc }
2 changes: 1 addition & 1 deletion docs/build/training.md
@@ -50,7 +50,7 @@ The default NVIDIA GPU build requires CUDA runtime libraries installed on the sy
* [OpenMPI](https://www.open-mpi.org/) 4.0.4
* See [install_openmpi.sh](https://github.com/microsoft/onnxruntime/blob/master/tools/ci_build/github/linux/docker/scripts/install_openmpi.sh)

-These dependency versions should reflect what is in [Dockerfile.training](https://github.com/microsoft/onnxruntime/blob/master/dockerfiles/Dockerfile.training).
+These dependency versions should reflect what is in the [Dockerfiles](https://github.com/pytorch/ort/tree/main/docker).

### Build instructions
{: .no_toc }
2 changes: 1 addition & 1 deletion docs/execution-providers/ACL-ExecutionProvider.md
@@ -18,7 +18,7 @@ nav_order: 4


## Build
-For build instructions, please see the [BUILD page](./build/eps.md#arm-compute-library).
+For build instructions, please see the [build page](../build/eps.md#arm-compute-library).

## Usage
### C/C++
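The usage snippet under this heading is collapsed in the diff view. As a rough sketch only (not part of this commit), enabling the ACL execution provider from C++ typically follows the same pattern as the other provider factory functions; the exact function name and `use_arena` argument below are assumptions based on `onnxruntime/core/providers/acl/acl_provider_factory.h`, and `model.onnx` is a placeholder path:

```cpp
#include <onnxruntime_cxx_api.h>
// The factory declaration is assumed to come from
// onnxruntime/core/providers/acl/acl_provider_factory.h in an ACL-enabled build.

Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "acl-example");
Ort::SessionOptions sf;
// Append the ACL execution provider; use_arena = 1 keeps the default memory arena enabled.
Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_ACL(sf, 1));
Ort::Session session(env, ORT_TSTR("model.onnx"), sf);  // placeholder model path
```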
2 changes: 1 addition & 1 deletion docs/execution-providers/DirectML-ExecutionProvider.md
@@ -52,7 +52,7 @@ Note that building onnxruntime with the DirectML execution provider enabled caus

## Usage

-When using the [C API](../get-started/with-c.html.md) with a DML-enabled build of onnxruntime, the DirectML execution provider can be enabled using one of the two factory functions included in `include/onnxruntime/core/providers/dml/dml_provider_factory.h`.
+When using the [C API](../get-started/with-c.md) with a DML-enabled build of onnxruntime, the DirectML execution provider can be enabled using one of the two factory functions included in `include/onnxruntime/core/providers/dml/dml_provider_factory.h`.
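As a sketch only (not part of this commit), the simpler of those two factory functions can be called from C++ roughly as follows; the model path is a placeholder, and the memory-pattern and execution-mode settings reflect the usual guidance for DML-enabled builds:

```cpp
#include <onnxruntime_cxx_api.h>
#include "dml_provider_factory.h"  // from include/onnxruntime/core/providers/dml/dml_provider_factory.h

Ort::SessionOptions so;
// DirectML is typically used with memory patterns disabled and sequential execution.
so.DisableMemPattern();
so.SetExecutionMode(ORT_SEQUENTIAL);
// Append the DirectML execution provider on adapter 0 (the system default GPU).
Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_DML(so, 0));

Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "dml-example");
Ort::Session session(env, ORT_TSTR("model.onnx"), so);  // placeholder model path
```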

### `OrtSessionOptionsAppendExecutionProvider_DML` function
{: .no_toc }
2 changes: 1 addition & 1 deletion docs/execution-providers/RKNPU-ExecutionProvider.md
@@ -29,7 +29,7 @@ Ort::SessionOptions sf;
Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_RKNPU(sf));
Ort::Session session(env, model_path, sf);
```
-The C API details are [here](../get-started/with-c.html.md).
+The C API details are [here](../get-started/with-c.md).


## Support Coverage
4 changes: 2 additions & 2 deletions docs/execution-providers/Vitis-AI-ExecutionProvider.md
@@ -39,7 +39,7 @@ The following table lists system requirements for running docker containers as w
## Build
See [Build instructions](../build/eps.md#vitis-ai).

-**Hardware setup**
+### Hardware setup

1. Clone the Vitis AI repository:
```
@@ -92,7 +92,7 @@ A couple of environment variables can be used to customize the Vitis-AI executio
| PX_QUANT_SIZE | 128 | The number of inputs that will be used for quantization (necessary for Vitis-AI acceleration) |
| PX_BUILD_DIR | Use the on-the-fly quantization flow | Loads the quantization and compilation information from the provided build directory and immediately starts Vitis-AI hardware acceleration. This configuration can be used if the model has been executed before using on-the-fly quantization, during which the quantization and compilation information was cached in a build directory. |
-### Samples
+## Samples
When using Python, you can base your code on the following example:
2 changes: 1 addition & 1 deletion docs/get-started/with-obj-c.md
@@ -26,7 +26,7 @@ The artifacts are published to CocoaPods.
|-|-|-|
| onnxruntime-mobile-objc | CPU and CoreML | iOS |

-Refer to the [installation instructions](../tutorials/mobile/initial-setup.md#iOS).
+Refer to the [installation instructions](../tutorials/mobile/initial-setup.md#ios).

## Swift Usage

10 changes: 5 additions & 5 deletions docs/install/index.md
@@ -146,12 +146,12 @@ by running `locale-gen en_US.UTF-8` and `update-locale LANG=en_US.UTF-8`
||GPU - DirectML: [**Microsoft.ML.OnnxRuntime.DirectML**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.DirectML)|[ort-nightly (dev)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly)|[View](../execution-providers/DirectML-ExecutionProvider)|
|WinML|[**Microsoft.AI.MachineLearning**](https://www.nuget.org/packages/Microsoft.AI.MachineLearning)||[View](https://docs.microsoft.com/en-us/windows/ai/windows-ml/port-app-to-nuget#prerequisites)|
|Java|CPU: [**com.microsoft.onnxruntime:onnxruntime**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime)||[View](../api/java)|
-||GPU - CUDA: [**com.microsoft.onnxruntime:onnxruntime_gpu**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime_gpu)||[View](../api/java-api.md)|
-|Android|[**com.microsoft.onnxruntime:onnxruntime-mobile**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime-mobile) ||[View](../tutorials/mobile/mobile/initial-setup)|
-|iOS (C/C++)|CocoaPods: **onnxruntime-mobile-c**||[View](../tutorials/mobile/mobile/initial-setup)|
-|Objective-C|CocoaPods: **onnxruntime-mobile-objc**||[View](../tutorials/mobile/mobile/initial-setup)|
+||GPU - CUDA: [**com.microsoft.onnxruntime:onnxruntime_gpu**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime_gpu)||[View](../api/java)|
+|Android|[**com.microsoft.onnxruntime:onnxruntime-mobile**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime-mobile) ||[View](../tutorials/mobile/initial-setup)|
+|iOS (C/C++)|CocoaPods: **onnxruntime-mobile-c**||[View](../tutorials/mobile/initial-setup)|
+|Objective-C|CocoaPods: **onnxruntime-mobile-objc**||[View](../tutorials/mobile/initial-setup)|
|React Native|[**onnxruntime-react-native**](https://www.npmjs.com/package/onnxruntime-react-native)||[View](../api/js)|
-|Node.js|[**onnxruntime-node**](https://www.npmjs.com/package/onnxruntime-node)||[View](../api/js-api.md)|
+|Node.js|[**onnxruntime-node**](https://www.npmjs.com/package/onnxruntime-node)||[View](../api/js.md)|
|Web|[**onnxruntime-web**](https://www.npmjs.com/package/onnxruntime-web)||[View](../api/js)|


2 changes: 1 addition & 1 deletion docs/performance/quantization.md
@@ -151,7 +151,7 @@ Hardware support is required to achieve better performance with quantization on

ORT now leverages the TensorRT EP for quantization on GPU. Unlike the CPU EP, TensorRT takes in the full-precision model and the calibration result for the inputs, and decides how to quantize with its own logic. The overall procedure to leverage TRT EP quantization is:
- Implement a [CalibrationDataReader](https://github.com/microsoft/onnxruntime/blob/07788e082ef2c78c3f4e72f49e7e7c3db6f09cb0/onnxruntime/python/tools/quantization/calibrate.py).
-- Compute quantization parameters with the calibration data set. Our quantization tool supports two calibration methods: MinMax and Entropy. Note: in order to include all tensors from the model for better calibration, please run symbolic_shape_infer.py first. Please refer to [here](../reference/execution-providers/TensorRT-ExecutionProvider.md#sample) for details.
+- Compute quantization parameters with the calibration data set. Our quantization tool supports two calibration methods: MinMax and Entropy. Note: in order to include all tensors from the model for better calibration, please run symbolic_shape_infer.py first. Please refer to [here](../execution-providers/TensorRT-ExecutionProvider.md#sample) for details.
- Save the quantization parameters into a flatbuffer file.
- Load the model and the quantization parameter file, and run with the TRT EP.
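As an illustrative sketch only (not part of this commit, with placeholder file names), that last step can look roughly like the following from C++, assuming a TensorRT-enabled build where `Ort::SessionOptions::AppendExecutionProvider_TensorRT` accepts an `OrtTensorRTProviderOptions` struct:

```cpp
#include <onnxruntime_cxx_api.h>

// Configure TensorRT to consume the calibration table produced by the steps above.
OrtTensorRTProviderOptions trt_options{};
trt_options.device_id = 0;
trt_options.trt_int8_enable = 1;                                           // run in INT8 mode
trt_options.trt_int8_calibration_table_name = "calibration.flatbuffers";   // placeholder file name
trt_options.trt_int8_use_native_calibration_table = 0;                     // use the ORT-generated table

Ort::SessionOptions so;
so.AppendExecutionProvider_TensorRT(trt_options);

Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "trt-int8");
Ort::Session session(env, ORT_TSTR("model.onnx"), so);  // placeholder model path
```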

@@ -26,11 +26,11 @@ The source code for this sample is available [here](https://github.com/microsoft
## Install ONNX Runtime for OpenVINO Execution Provider

## Build steps
-[build instructions](../../reference/execution-providers/OpenVINO-ExecutionProvider.md#build)
+[build instructions](../build/eps.md#OpenVINO)


## Reference Documentation
-[Documentation](../../reference/execution-providers/OpenVINO-ExecutionProvider.md)
+[Documentation](../execution-providers/OpenVINO-ExecutionProvider.md)

If you build it by yourself, you must append the "--build_shared_lib" flag to your build command.
```
@@ -45,7 +45,7 @@ If you build it by yourself, you must append the "--build_shared_lib" flag to yo

3. compile the sample

-```
+```bash
g++ -o run_squeezenet squeezenet_cpp_app.cpp -I ../../../include/onnxruntime/core/session/ -I /opt/intel/openvino_2021.4.582/opencv/include/ -I /opt/intel/openvino_2021.4.582/opencv/lib/ -L ./ -lonnxruntime_providers_openvino -lonnxruntime_providers_shared -lonnxruntime -L /opt/intel/openvino_2021.4.582/opencv/lib/ -lopencv_imgcodecs -lopencv_dnn -lopencv_core -lopencv_imgproc
```

@@ -57,24 +57,24 @@ Note: This build command is using the opencv location from OpenVINO 2021.4 Relea

(using Intel OpenVINO-EP)

-```
+```bash
./run_squeezenet --use_openvino <path_to_onnx_model> <path_to_sample_image> <path_to_labels_file>
```

Example:

-```
+```bash
./run_squeezenet --use_openvino squeezenet1.1-7.onnx demo.jpeg synset.txt (using Intel OpenVINO-EP)
```

(using Default CPU)

-```
+```bash
./run_squeezenet --use_cpu <path_to_onnx_model> <path_to_sample_image> <path_to_labels_file>
```

Example:

-```
+```bash
./run_squeezenet --use_cpu squeezenet1.1-7.onnx demo.jpeg synset.txt (using Default CPU)
```
@@ -26,7 +26,7 @@ The source code for this sample is available [here](https://github.com/microsoft
## Install ONNX Runtime for OpenVINO Execution Provider

## Build steps
-[build instructions](https://www.onnxruntime.ai/docs/reference/execution-providers/OpenVINO-ExecutionProvider.html#build)
+[build instructions](../build/eps.md#OpenVINO)


## Reference Documentation
