Fix links #3
natke committed Oct 26, 2021
1 parent 2701174 commit 47aa7a6
Showing 7 changed files with 15 additions and 15 deletions.
docs/execution-providers/index.md (2 changes: 1 addition & 1 deletion)

```
@@ -39,7 +39,7 @@ Developers of specialized HW acceleration solutions can integrate with ONNX Runtime

 ### Build ONNX Runtime package with EPs

-The ONNX Runtime package can be built with any combination of the EPs along with the default CPU execution provider. **Note** that if multiple EPs are combined into the same ONNX Runtime package then all the dependent libraries must be present in the execution environment. The steps for producing the ONNX Runtime package with different EPs are documented [here](../build/inferencing.md#execution-providers).
+The ONNX Runtime package can be built with any combination of the EPs along with the default CPU execution provider. **Note** that if multiple EPs are combined into the same ONNX Runtime package then all the dependent libraries must be present in the execution environment. The steps for producing the ONNX Runtime package with different EPs are documented [here](../build/inferencing.md).

 ### APIs for Execution Provider
```
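The section changed above describes combining several EPs in one package, with the default CPU execution provider always present. A minimal, hypothetical sketch of the assignment model that description implies — each graph node goes to the first EP, in registration-priority order, that supports its operator, with CPU as the universal fallback (function and EP names here are illustrative, not the ONNX Runtime API):

```python
# Illustrative sketch only: models EP priority assignment with CPU fallback.
# Not ONNX Runtime code; names and structures are hypothetical.

def assign_nodes(nodes, ep_support, ep_priority):
    """Map each operator to the first supporting EP in priority order.

    nodes       -- list of operator names in the graph
    ep_support  -- dict: EP name -> set of operator names it supports
    ep_priority -- EP names in registration order, ending with "CPU"
    """
    assignment = {}
    for op in nodes:
        for ep in ep_priority:
            # The default CPU EP supports every operator, so it always matches.
            if op in ep_support.get(ep, set()) or ep == "CPU":
                assignment[op] = ep
                break
    return assignment

# Example: a CUDA EP that supports Conv and MatMul but not a custom op.
support = {"CUDA": {"Conv", "MatMul"}}
placed = assign_nodes(["Conv", "MatMul", "MyCustomOp"], support, ["CUDA", "CPU"])
print(placed)  # {'Conv': 'CUDA', 'MatMul': 'CUDA', 'MyCustomOp': 'CPU'}
```

This also shows why the note above matters: if "CUDA" is registered in the package, its dependent libraries must be present at runtime even though unsupported nodes fall back to CPU.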
docs/execution-providers/oneDNN-ExecutionProvider.md (2 changes: 1 addition & 1 deletion)

```
@@ -37,7 +37,7 @@ bool enable_cpu_mem_arena = true;
 Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_Dnnl(sf, enable_cpu_mem_arena));
 ```
-The C API details are [here](../get-started/with-c.html.md).
+The C API details are [here](../get-started/with-c.md).
 ### Python
```
docs/get-started/with-c.md (4 changes: 2 additions & 2 deletions)

```
@@ -17,8 +17,8 @@ nav_order: 4

 | Artifact | Description | Supported Platforms |
 |-----------|-------------|---------------------|
-| [Microsoft.ML.OnnxRuntime](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | CPU (Release) |Windows, Linux, Mac, X64, X86 (Windows-only), ARM64 (Windows-only)...more details: [compatibility](../references/compatibility) |
-| [Microsoft.ML.OnnxRuntime.Gpu](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.gpu) | GPU - CUDA (Release) | Windows, Linux, Mac, X64...more details: [compatibility](../references/compatibility) |
+| [Microsoft.ML.OnnxRuntime](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | CPU (Release) |Windows, Linux, Mac, X64, X86 (Windows-only), ARM64 (Windows-only)...more details: [compatibility](../reference/compatibility) |
+| [Microsoft.ML.OnnxRuntime.Gpu](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.gpu) | GPU - CUDA (Release) | Windows, Linux, Mac, X64...more details: [compatibility](../reference/compatibility) |
 | [Microsoft.ML.OnnxRuntime.DirectML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.directml) | GPU - DirectML (Release) | Windows 10 1709+ |
 | [ort-nightly](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | CPU, GPU (Dev) | Same as Release versions |
```
docs/get-started/with-csharp.md (4 changes: 2 additions & 2 deletions)

```
@@ -125,8 +125,8 @@ The ONNX runtime provides a C# .NET binding for running inference on ONNX models

 | Artifact | Description | Supported Platforms |
 |-----------|-------------|---------------------|
-| [Microsoft.ML.OnnxRuntime](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | CPU (Release) |Windows, Linux, Mac, X64, X86 (Windows-only), ARM64 (Windows-only)...more details: [compatibility](../references/compatibility) |
-| [Microsoft.ML.OnnxRuntime.Gpu](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.gpu) | GPU - CUDA (Release) | Windows, Linux, Mac, X64...more details: [compatibility](../references/compatibility) |
+| [Microsoft.ML.OnnxRuntime](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | CPU (Release) |Windows, Linux, Mac, X64, X86 (Windows-only), ARM64 (Windows-only)...more details: [compatibility](../reference/compatibility.md) |
+| [Microsoft.ML.OnnxRuntime.Gpu](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.gpu) | GPU - CUDA (Release) | Windows, Linux, Mac, X64...more details: [compatibility](../reference/compatibility.md) |
 | [Microsoft.ML.OnnxRuntime.DirectML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.directml) | GPU - DirectML (Release) | Windows 10 1709+ |
 | [ort-nightly](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | CPU, GPU (Dev) | Same as Release versions |
```
docs/get-started/with-obj-c.md (2 changes: 1 addition & 1 deletion)

```
@@ -26,7 +26,7 @@ The artifacts are published to CocoaPods.
 |-|-|-|
 | onnxruntime-mobile-objc | CPU and CoreML | iOS |

-Refer to the [installation instructions](../tutorials/mobile/mobile/initial-setup.md#iOS).
+Refer to the [installation instructions](../tutorials/mobile/initial-setup.md#iOS).

 ## Swift Usage
```
docs/get-started/with-winrt.md (2 changes: 1 addition & 1 deletion)

```
@@ -14,7 +14,7 @@ This allows scenarios such as passing a [Windows.Media.VideoFrame](https://docs.

 The WinML API is a WinRT API that shipped inside the Windows OS starting with build 1809 (RS5) in the Windows.AI.MachineLearning namespace. It embedded a version of the ONNX Runtime.

-In addition to using the in-box version of WinML, WinML can also be installed as an application redistributable package (see [layered architecture](../references/high-level-design.md#the-onnx-runtime-and-windows-os-integration) for technical details).
+In addition to using the in-box version of WinML, WinML can also be installed as an application redistributable package (see [layered architecture](../reference/high-level-design.md#the-onnx-runtime-and-windows-os-integration) for technical details).

 ## Contents
 {: .no_toc }
```
docs/install/index.md (14 changes: 7 additions & 7 deletions)

```
@@ -139,20 +139,20 @@ by running `locale-gen en_US.UTF-8` and `update-locale LANG=en_US.UTF-8`
 |Python|If using pip, run `pip install --upgrade pip` prior to downloading.|||
 ||CPU: [**onnxruntime**](https://pypi.org/project/onnxruntime)| [ort-nightly (dev)](https://test.pypi.org/project/ort-nightly)||
 ||GPU - CUDA: [**onnxruntime-gpu**](https://pypi.org/project/onnxruntime-gpu) | [ort-nightly-gpu (dev)](https://test.pypi.org/project/ort-nightly-gpu)|[View](../execution-providers/CUDA-ExecutionProvider.md#requirements)|
-||OpenVINO: [**intel/onnxruntime**](https://github.com/intel/onnxruntime/releases/latest) - *Intel managed*||[View](build/eps.md#openvino)|
+||OpenVINO: [**intel/onnxruntime**](https://github.com/intel/onnxruntime/releases/latest) - *Intel managed*||[View](../build/eps.md#openvino)|
 ||TensorRT (Jetson): [**Jetson Zoo**](https://elinux.org/Jetson_Zoo#ONNX_Runtime) - *NVIDIA managed*|||
 |C#/C/C++|CPU: [**Microsoft.ML.OnnxRuntime**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) |[ort-nightly (dev)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly)||
 ||GPU - CUDA: [**Microsoft.ML.OnnxRuntime.Gpu**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.gpu)|[ort-nightly (dev)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly)|[View](../execution-providers/CUDA-ExecutionProvider)|
 ||GPU - DirectML: [**Microsoft.ML.OnnxRuntime.DirectML**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.DirectML)|[ort-nightly (dev)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly)|[View](../execution-providers/DirectML-ExecutionProvider)|
 |WinML|[**Microsoft.AI.MachineLearning**](https://www.nuget.org/packages/Microsoft.AI.MachineLearning)||[View](https://docs.microsoft.com/en-us/windows/ai/windows-ml/port-app-to-nuget#prerequisites)|
-|Java|CPU: [**com.microsoft.onnxruntime:onnxruntime**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime)||[View](../api/java-api.md)|
+|Java|CPU: [**com.microsoft.onnxruntime:onnxruntime**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime)||[View](../api/java)|
 ||GPU - CUDA: [**com.microsoft.onnxruntime:onnxruntime_gpu**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime_gpu)||[View](../api/java-api.md)|
-|Android|[**com.microsoft.onnxruntime:onnxruntime-mobile**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime-mobile) ||[View](tutorials/mobile/mobile/initial-setup)|
-|iOS (C/C++)|CocoaPods: **onnxruntime-mobile-c**||[View](tutorials/mobile/mobile/initial-setup)|
-|Objective-C|CocoaPods: **onnxruntime-mobile-objc**||[View](tutorials/mobile/mobile/initial-setup)|
-|React Native|[**onnxruntime-react-native**](https://www.npmjs.com/package/onnxruntime-react-native)||[View](../api/js-api.md)|
+|Android|[**com.microsoft.onnxruntime:onnxruntime-mobile**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime-mobile) ||[View](../tutorials/mobile/mobile/initial-setup)|
+|iOS (C/C++)|CocoaPods: **onnxruntime-mobile-c**||[View](../tutorials/mobile/mobile/initial-setup)|
+|Objective-C|CocoaPods: **onnxruntime-mobile-objc**||[View](../tutorials/mobile/mobile/initial-setup)|
+|React Native|[**onnxruntime-react-native**](https://www.npmjs.com/package/onnxruntime-react-native)||[View](../api/js)|
 |Node.js|[**onnxruntime-node**](https://www.npmjs.com/package/onnxruntime-node)||[View](../api/js-api.md)|
-|Web|[**onnxruntime-web**](https://www.npmjs.com/package/onnxruntime-web)||[View](../api/js-api.md)|
+|Web|[**onnxruntime-web**](https://www.npmjs.com/package/onnxruntime-web)||[View](../api/js)|
```
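The install-matrix rows above pair each language and accelerator with a package name. A small, hypothetical helper illustrating that lookup — the package names are the real ones from the table, but the function itself is illustrative and not part of any ONNX Runtime tooling:

```python
# Illustrative helper only: maps an install target to the package name
# listed in the install matrix. Not part of ONNX Runtime.

PACKAGE_BY_TARGET = {
    "cpu": "onnxruntime",        # pip
    "cuda": "onnxruntime-gpu",   # pip
    "node": "onnxruntime-node",  # npm
    "web": "onnxruntime-web",    # npm
}

def package_for(target):
    """Return the package name for a target, or raise for unknown targets."""
    try:
        return PACKAGE_BY_TARGET[target.lower()]
    except KeyError:
        raise ValueError(f"no known package for target {target!r}")

print(package_for("cuda"))  # onnxruntime-gpu
```

Note that the matrix also lists packages outside pip/npm (NuGet, Maven, CocoaPods); the sketch covers only the Python and JavaScript rows.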
