diff --git a/.github/workflows/check-website-links.yml b/.github/workflows/check-website-links.yml
index 1c8b08e982aa3..9dd7c5990f3d7 100644
--- a/.github/workflows/check-website-links.yml
+++ b/.github/workflows/check-website-links.yml
@@ -23,4 +23,4 @@ jobs:
         run: bundle exec jekyll build --drafts
       - name: Check for broken links
         run: |
-          bundle exec htmlproofer --assume_extension --checks_to_ignore ImageCheck,ScriptCheck --only_4xx --http_status_ignore 429,403 --allow_hash_href --url_ignore "https://onnxruntime.ai/docs/reference/api/c-api.html,https://www.onnxruntime.ai/docs/reference/execution-providers/TensorRT-ExecutionProvider.html#c-api-example,https://www.onnxruntime.ai/docs/resources/graph-optimizations.html,onnxruntime/capi/onnxruntime_pybind11_state.html" --log-level :info ./_site
+          bundle exec htmlproofer --assume_extension --checks_to_ignore ImageCheck,ScriptCheck --only_4xx --http_status_ignore 429,403 --allow_hash_href --url_ignore "https://onnxruntime.ai/docs/reference/api/c-api.html,https://www.onnxruntime.ai/docs/reference/execution-providers/TensorRT-ExecutionProvider.html#c-api-example,https://www.onnxruntime.ai/docs/resources/graph-optimizations.html,onnxruntime/capi/onnxruntime_pybind11_state.html,https://github.com/microsoft/onnx-converters-private/" --log-level :info ./_site
diff --git a/docs/execution-providers/Vitis-AI-ExecutionProvider.md b/docs/execution-providers/Vitis-AI-ExecutionProvider.md
index 820a80304a099..fdfeb2bb22b2e 100644
--- a/docs/execution-providers/Vitis-AI-ExecutionProvider.md
+++ b/docs/execution-providers/Vitis-AI-ExecutionProvider.md
@@ -40,7 +40,7 @@ A [demonstration](https://github.com/amd/RyzenAI-cloud-to-client-demo) is availa
 
 ## Install
 ### AMD Adaptable SoC Installation
-For AMD Adaptable SoC targets, a pre-built package is provided to deploy ONNX models on embedded Linux. Users should refer to the standard Vitis AI [Target Setup Instructions](https://xilinx.github.io/Vitis-AI/docs/board_setup/board_setup.html) to enable Vitis AI on the target. Once Vitis AI has been enabled on the target, the developer can refer to [this section](https://docs.xilinx.com/r/en-US/ug1414-vitis-ai/Programming-with-VOE) of the Vitis AI documentation for installation and API details.
+For AMD Adaptable SoC targets, a pre-built package is provided to deploy ONNX models on embedded Linux. Users should refer to the standard Vitis AI [Target Setup Instructions](https://xilinx.github.io/Vitis-AI/3.0/html/docs/workflow.html) to enable Vitis AI on the target. Once Vitis AI has been enabled on the target, the developer can refer to [this section](https://docs.xilinx.com/r/en-US/ug1414-vitis-ai/Programming-with-VOE) of the Vitis AI documentation for installation and API details.
 
 For more complete examples, developers should refer to [ONNX Runtime Vitis AI Execution Provider examples](https://github.com/Xilinx/Vitis-AI/tree/master/examples/vai_library/samples_onnx).
 
@@ -124,7 +124,7 @@ In the current release (3.0), the Vitis AI Quantizer supports quantization of Py
 
 With the future release of Vitis AI 3.5, available mid 2023, the Vitis AI Quantizer will enable parsing and quantization of ONNX models, enabling an end-to-end ONNX model -> ONNX Runtime workflow. Also, in a future release, the Vitis AI ONNX Runtime Execution Provider will support on-the-fly quantization, enabling direct deployment of FP32 ONNX models.
 
-See [Vitis AI Model Quantization](https://xilinx.github.io/Vitis-AI/docs/workflow-model-development.html#model-quantization) for details.
+See [Vitis AI Model Quantization](https://xilinx.github.io/Vitis-AI/3.0/html/docs/workflow-model-development.html#model-quantization) for details.
 
 ### Olive
diff --git a/onnx/converterteam.html b/onnx/converterteam.html
new file mode 100644
index 0000000000000..61b1204b452ae
--- /dev/null
+++ b/onnx/converterteam.html
@@ -0,0 +1,118 @@
+<!DOCTYPE html>
+<html lang="en">
+
+<head>
+  <meta charset="utf-8">
+  <meta name="viewport" content="width=device-width, initial-scale=1">
+  <!-- The original <head> asset links (stylesheets, fonts, icons) were stripped in
+       extraction and are not recoverable; only the page title text survived. -->
+  <title>ONNX Converters Team</title>
+</head>
+
+<body>
+  <!-- The markup below is a minimal reconstruction: the original tags, classes, and asset
+       paths were stripped in extraction, and only the visible text survived. Link labels
+       are assumptions. -->
+  <div class="banner">
+    <!-- Logo path assumed; only the alt text "ONNX Logo" is from the original. -->
+    <img src="images/ONNX-logo.svg" alt="ONNX Logo">
+    <h1>ONNX Converters Team</h1>
+  </div>
+
+  <p>
+    Welcome to the landing page for the ONNX Converters Team at Microsoft.
+    We hope your stay is short and that you quickly get what you need!
+  </p>
+
+  <h2>Issue Submission</h2>
+  <p>
+    Have an issue converting a PyTorch or TensorFlow model to an ONNX model?
+    Submit an issue to get help ASAP.
+  </p>
+
+  <section>
+    <h3>Microsoft Internal (Private)</h3>
+    <p>If the issue contains information that cannot be disclosed publicly, and you're
+       internal to Microsoft, file an issue internally:</p>
+    <!-- Link target inferred from the url_ignore entry added to
+         check-website-links.yml in this same change. -->
+    <a href="https://github.com/microsoft/onnx-converters-private/">File an internal issue</a>
+  </section>
+
+  <section>
+    <h3>General Issue (Public)</h3>
+    <p>Have an issue that can be publicly disclosed on GitHub? File the issue upstream:</p>
+    <!-- The original upstream issue links were stripped in extraction and are not recoverable. -->
+    <a href="#">File a public issue</a>
+  </section>
+
+  <section>
+    <h3>External Partner (Private)</h3>
+    <p>If you're external to Microsoft and the issue contains information that cannot be
+       disclosed publicly, use the following link to template an email, and send it to the
+       below DRI:</p>
+    <!-- The mailto template and DRI contact were stripped in extraction and are not recoverable. -->
+    <a href="#">Email the team</a>
+  </section>
+</body>
+
+</html>
\ No newline at end of file