## Fix typo in Python code block on home page (#21196)
### Description
The Python code block on the home page contained a typo: the `session.run` call was missing the comma between its two arguments.

Previous: `outputs = session.run(None {"input": inputTensor})`
Correction: `outputs = session.run(None, {"input": inputTensor})`

Fixes issue #21146
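
For reference, here is the corrected snippet expanded into a minimal runnable sketch. The placeholder model path and the input name `"input"` come from the home-page example; the NumPy dummy tensor and its shape are illustrative assumptions, since the real page only says to load and preprocess an image.

```python
import numpy as np
import onnxruntime as ort

# Load the model and create an InferenceSession (placeholder path from the home-page snippet).
model_path = "path/to/your/onnx/model"
session = ort.InferenceSession(model_path)

# Stand-in for "load and preprocess the input image": a random tensor. The input name and
# shape (here "input" and 1x3x224x224) must match the actual model -- both are assumptions.
inputTensor = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference: None requests all outputs; the dict maps input names to tensors.
# The comma between the two arguments is exactly what this commit adds.
outputs = session.run(None, {"input": inputTensor})
print(outputs)
```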



sophies927 authored Jun 27, 2024
1 parent 77df36b commit 8fc4470
Showing 2 changed files with 2 additions and 2 deletions.
docs/tutorials/mobile/deploy-android.md (2 changes: 1 addition & 1 deletion)
@@ -38,7 +38,7 @@ The pre-trained [TorchVision MOBILENET V2](https://pytorch.org/hub/pytorch_visio
 - Quantize the FP32 ONNX model to an uint8 ONNX model
 - Convert both FP32 and uint8 ONNX models to ORT models

-Note: this step is optional, you can download the FP32 and uint8 ORT models [here](https://onnxruntimeexamplesdata.z13.web.core.windows.net/mobilenet_v2_ort_models.zip).
+Note: this step is optional, you can download the FP32 and uint8 ORT models [here](https://github.com/onnx/models/tree/main/validated/vision/classification/mobilenet/model).

 2. Download the model class labels
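The two preparation steps listed in the hunk above (quantize the FP32 model to uint8, then convert both models to ORT format) can be sketched as follows. This is an illustrative sketch only: the file names are placeholders, dynamic quantization is just one possible recipe, and the tutorial's actual commands may differ.

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Quantize the FP32 ONNX model to a uint8 ONNX model.
# Dynamic quantization is shown here as an example recipe.
quantize_dynamic(
    model_input="mobilenet_v2_float.onnx",   # placeholder path to the FP32 model
    model_output="mobilenet_v2_uint8.onnx",  # placeholder path for the quantized model
    weight_type=QuantType.QUInt8,
)

# Converting both .onnx files to .ort format is typically done with the bundled CLI tool, e.g.:
#   python -m onnxruntime.tools.convert_onnx_models_to_ort <directory containing the .onnx models>
```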
src/routes/components/code-blocks.svelte (2 changes: 1 addition & 1 deletion)
@@ -11,7 +11,7 @@
 import github from "svelte-highlight/styles/github";
 let pythonCode =
-  'import onnxruntime as ort\n# Load the model and create InferenceSession\nmodel_path = "path/to/your/onnx/model"\nsession = ort.InferenceSession(model_path)\n# "Load and preprocess the input image inputTensor"\n...\n# Run inference\noutputs = session.run(None {"input": inputTensor})\nprint(outputs)';
+  'import onnxruntime as ort\n# Load the model and create InferenceSession\nmodel_path = "path/to/your/onnx/model"\nsession = ort.InferenceSession(model_path)\n# "Load and preprocess the input image inputTensor"\n...\n# Run inference\noutputs = session.run(None, {"input": inputTensor})\nprint(outputs)';
 let csharpCode =
   'using Microsoft.ML.OnnxRuntime;\n// Load the model and create InferenceSession\nstring model_path = "path/to/your/onnx/model";\nvar session = new InferenceSession(model_path);\n// Load and preprocess the input image to inputTensor\n...\n// Run inference\nvar outputs = session.Run(inputTensor).ToList();\nConsole.WriteLine(outputs[0].AsTensor<float>()[0]);';
 let javascriptCode =
