Merge remote-tracking branch 'origin/master' into global_tp
pranavsharma committed Mar 13, 2020
2 parents 8dec4b7 + b8575dd commit a4405e8
Showing 50 changed files with 1,515 additions and 577 deletions.
68 changes: 48 additions & 20 deletions CONTRIBUTING.md
@@ -3,29 +3,55 @@
We're always looking for your help to fix bugs and improve the product. Create a pull request and we'll be happy to take a look.
Start by reading the [Engineering Design](docs/HighLevelDesign.md). You can find the doxygen generated documentation [here](https://microsoft.github.io/onnxruntime/).

## Proposing new public APIs

ONNX Runtime has a collection of [public APIs](docs/HighLevelDesign.md). Some of these APIs make their way back into the Windows OS. We make compatibility commitments for these APIs and follow a structured process when adding to them. Please use the [Feature Request issue template](issues/new?template=feature_request.md) before starting any PRs that affect any of the public APIs.

## Process details

Please search the [issue tracker](https://github.com/microsoft/onnxruntime/issues) for a similar idea first: there may already be an issue you can contribute to.

1. **Create Issue**
To propose a new feature or API please start by filing a new issue in the [issue tracker](https://github.com/microsoft/onnxruntime/issues).
Include as much detail as you have. It's fine if it's not a complete design: a summary and rationale are a good starting point.

2. **Discussion**
We'll keep the issue open for community discussion until it has been resolved or is deemed no longer relevant.
Note that if an issue isn't a high priority or has many open questions then it might stay open for a long time.

3. **Owner Review**
The ONNX Runtime team will review the proposal and either approve or close the issue based on whether it broadly aligns with the [ONNX Runtime Roadmap - High Level Goals section](../docs/Roadmap.md) and contribution guidelines.

4. **API Review**
If the feature adds new APIs then we'll start an API review.
All new public APIs must be reviewed before merging.
For changes to the C API, refer to the guidance [here](onnxruntime/core/session/onnxruntime_c_api.cc#L1326).
For changes to the WinRT API, someone from the ONNX Runtime team will work with you.

5. **Implementation**
* A feature can be implemented by you, the ONNX Runtime team, or other community members. Code contributions are greatly appreciated: feel free to work on any reviewed feature you proposed, or choose one in the backlog and send us a PR. If you are new to the project and want to work on an existing issue, we recommend starting with issues that are tagged with “good first issue”. Please let us know in the issue comments if you are actively working on implementing a feature so we can ensure it's assigned to you.
* Unit tests: New code *must* be accompanied by unit tests.
* Documentation and sample updates: If the PR affects any of the documentation or samples then include those updates in the same PR.
* Build instructions are [here](BUILD.md).
* Checkin Procedure: Once a feature is complete and tested according to the contribution guidelines, follow these steps:
* Fork the repo
* git clone your fork
* Create feature branch
* Make and checkin your changes along with unit tests
* git commit your changes
* git push origin HEAD
* To request merge into master, send a pull request from the [web ui](https://github.com/Microsoft/onnxruntime).
* Add 'Microsoft/onnxruntime' as a reviewer.
* Binaries: We periodically produce signed prerelease binaries from the master branch to validate new features and APIs. After a feature has been sufficiently validated as part of a prerelease package, we will include it in the next stable binary release.
* Note: After creating a pull request, you might not see a build getting triggered right away. One of the
onnxruntime team members will trigger the build for you.

## Coding guidelines

Please see [Coding Conventions and Standards](./docs/Coding_Conventions_and_Standards.md)

## Licensing guidelines

This project welcomes contributions and suggestions. Most contributions require you to
agree to a Contributor License Agreement (CLA) declaring that you have the right to,
and actually do, grant us the rights to use your contribution. For details, visit
@@ -35,12 +61,14 @@
When you submit a pull request, a CLA-bot will automatically determine whether you need
to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the
instructions provided by the bot. You will only need to do this once across all repositories using our CLA.

## Code of conduct

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

## Reporting Security Issues

Security issues and bugs should be reported privately, via email, to the Microsoft Security
Response Center (MSRC) at [secure@microsoft.com](mailto:secure@microsoft.com). You should
receive a response within 24 hours. If for some reason you do not, please follow up via
8 changes: 2 additions & 6 deletions README.md
@@ -24,7 +24,7 @@
ONNX Runtime stays up to date with the ONNX standard and supports all operators
* [Performance Tuning](./docs/ONNX_Runtime_Perf_Tuning.md)
* [Extensibility Options](#extensibility-options)
* **[Installation](#installation)**
* [API Documentation](#api-documentation)
* [Builds and Packages](#Builds-and-Packages)
* **[Usage](#usage)**
* [Samples and Tutorials](./samples)
@@ -88,14 +88,10 @@
The list of currently supported accelerators (termed [Execution Providers](./doc
|[C#](docs/CSharp_API.md)| | [Samples](./samples#C)|
|[C++](./include/onnxruntime/core/session/onnxruntime_cxx_api.h)| |[Samples](./samples#CC)|
|[C](docs/C_API.md)| | [Samples](./samples#CC)|
|[WinRT](docs/WinRT_API.md) | [Windows.AI.MachineLearning](https://docs.microsoft.com/en-us/windows/ai/windows-ml/api-reference)| [Samples](https://github.com/microsoft/windows-Machine-Learning)|
|[Java](docs/Java_API.md)|8-13|[Samples](./samples#Java)|
|[Ruby](https://github.com/ankane/onnxruntime) (external project)| 2.4-2.7| [Samples](https://ankane.org/tensorflow-ruby)|

The ORT package also includes the Windows Machine Learning APIs, which provide a thin layer on top of the ONNX Runtime API optimized for Windows development.
* [API Reference](https://docs.microsoft.com/en-us/windows/ai/windows-ml/api-reference)
* Compatibility: Windows 8.1 - CPU, RS3 (1709) - GPU
* [Samples](https://docs.microsoft.com/en-us/windows/ai/windows-ml/get-started-desktop)

## Builds and Packages

Official builds are published for the default CPU Provider (Eigen + MLAS), as well as GPU with CUDA. Python packages can be found on PyPI, and C#/C/C++ packages on NuGet. Please view the table on [aka.ms/onnxruntime](https://aka.ms/onnxruntime) for instructions on different build combinations.
1 change: 1 addition & 0 deletions docs/AddingCustomOp.md
@@ -8,6 +8,7 @@
Note: These APIs are experimental and will change in the next release. They're r
* Create an OrtCustomOp structure for each op and add them to the OrtCustomOpDomain with OrtCustomOpDomain_Add
* Call OrtAddCustomOpDomain to add the custom domain of ops to the session options
See [this](../onnxruntime/test/shared_lib/test_inference.cc) for an example called MyCustomOp that uses the C++ helper API (onnxruntime_cxx_api.h).
Currently, the only supported Execution Providers (EPs) for custom ops registered via this approach are the `CUDA` and the `CPU` EPs.
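
For orientation, here is a condensed sketch of this approach using the C++ helper API. It is modeled on (not copied from) the MyCustomOp sample: the op name, the `test.customop` domain, and the float-only identity kernel are illustrative, and error handling is omitted.

```cpp
#include <vector>
#include "onnxruntime_cxx_api.h"

struct IdentityKernel {
  explicit IdentityKernel(Ort::CustomOpApi ort) : ort_(ort) {}

  void Compute(OrtKernelContext* context) {
    // Read the single input tensor and mirror its shape onto the output.
    const OrtValue* input = ort_.KernelContext_GetInput(context, 0);
    OrtTensorTypeAndShapeInfo* info = ort_.GetTensorTypeAndShape(input);
    std::vector<int64_t> shape = ort_.GetTensorShape(info);
    size_t count = ort_.GetTensorShapeElementCount(info);
    ort_.ReleaseTensorTypeAndShapeInfo(info);

    OrtValue* output = ort_.KernelContext_GetOutput(context, 0, shape.data(), shape.size());
    const float* x = ort_.GetTensorData<float>(input);
    float* y = ort_.GetTensorMutableData<float>(output);
    for (size_t i = 0; i < count; ++i) y[i] = x[i];  // identity copy
  }

  Ort::CustomOpApi ort_;
};

struct IdentityCustomOp : Ort::CustomOpBase<IdentityCustomOp, IdentityKernel> {
  void* CreateKernel(Ort::CustomOpApi api, const OrtKernelInfo* /*info*/) const {
    return new IdentityKernel(api);
  }
  const char* GetName() const { return "Identity2"; }
  size_t GetInputTypeCount() const { return 1; }
  ONNXTensorElementDataType GetInputType(size_t /*index*/) const {
    return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT;
  }
  size_t GetOutputTypeCount() const { return 1; }
  ONNXTensorElementDataType GetOutputType(size_t /*index*/) const {
    return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT;
  }
};

int main() {
  // One OrtCustomOp per op, added to a domain, added to the session options.
  IdentityCustomOp identity_op;
  Ort::CustomOpDomain domain{"test.customop"};  // must match the domain used in the model
  domain.Add(&identity_op);

  Ort::SessionOptions session_options;
  session_options.Add(domain);
  // ... create the session with session_options and run as usual ...
  return 0;
}
```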

### 2. Using RegisterCustomRegistry API
* Implement your kernel and schema (if required) using the OpKernel and OpSchema APIs (headers are in the include folder).
35 changes: 35 additions & 0 deletions docs/HighLevelDesign.md
@@ -81,3 +81,38 @@
their subgraph.
* [Add a new graph transform](../include/onnxruntime/core/optimizer/graph_transformer.h)
* [Add a new rewrite rule](../include/onnxruntime/core/optimizer/rewrite_rule.h)

## The ONNX Runtime and Windows OS integration

The ONNX Runtime shipped with the Windows operating system in build 1809 (RS5). The runtime was embedded inside Windows.AI.MachineLearning.dll and exposed via the WinRT API (WinML for short). It includes CPU support and a DirectML execution provider for GPU support. Since then it has continued to ship in every version of Windows.

Starting with the ONNX Runtime 1.2 release we are bringing a new layered architecture to the ONNX Runtime and Windows ML.
*Note: This feature is in preview as of the 1.2 release.*

The high-level design looks like this:

![ONNX + WinML layered architecture](images/layered-architecture.png)

You can see we replaced the embedded ONNX Runtime with the new ONNXRuntime.dll. With this new approach, customers have the flexibility to choose which API to use and how to distribute the binaries.

### API choice

Developers can now choose which API works best for their scenario.

||WinRT|C API|
|--|--|--|
|Type system| Integration with Windows RT types| Platform neutral types|
|Language support| Language support via WinRT Projections| Language support via per-language projections|
|Tensorization| Accepts VideoFrames and converts to tensors (support for CPU and GPU)| Accepts tensors|
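
To make the C API column concrete, here is a minimal sketch of session creation through the platform-neutral handles. Error handling and the tensor I/O for `Run` are elided, and `model.onnx` is a placeholder path.

```cpp
#include "onnxruntime_c_api.h"

int main() {
  const OrtApi* ort = OrtGetApiBase()->GetApi(ORT_API_VERSION);

  OrtEnv* env = nullptr;
  ort->CreateEnv(ORT_LOGGING_LEVEL_WARNING, "demo", &env);

  OrtSessionOptions* options = nullptr;
  ort->CreateSessionOptions(&options);

  // ORT_TSTR handles the wide-character path expected on Windows.
  OrtSession* session = nullptr;
  ort->CreateSession(env, ORT_TSTR("model.onnx"), options, &session);

  // ... build OrtValue tensors and call ort->Run(...) ...

  ort->ReleaseSession(session);
  ort->ReleaseSessionOptions(options);
  ort->ReleaseEnv(env);
  return 0;
}
```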

### Distribution choice

You can also choose to use the runtime included in the Windows OS, or use the redist NuGet package to ship the runtime with the app.

|Distribution|Inbox|App nuget|
|--|--|--|
|Disk footprint| Included in the OS| Included in the App|
|Servicing fixes| Serviced by OS updates| Serviced by the App|
|Execution Providers| CPU & DirectML EP | App-chosen EP|
|Compatibility testing| Tested with OS flights against supported GPUs and CPUs | App performs compatibility testing|
|Opset| Refreshed in OS updates| App chooses|
55 changes: 55 additions & 0 deletions docs/WinRT_API.md
@@ -0,0 +1,55 @@
# Windows Machine Learning WinRT API

New in the ONNX Runtime NuGet package is the ability to use the full [Windows.AI.MachineLearning API](https://docs.microsoft.com/en-us/windows/ai/windows-ml/api-reference).

This allows scenarios such as passing a [Windows.Media.VideoFrame](https://docs.microsoft.com/en-us/uwp/api/Windows.Media.VideoFrame) from your connected camera directly into the runtime for real-time inference.
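
As a hedged C++/WinRT sketch of that scenario (the model path and the input name "data" are placeholders, the apartment is assumed to be initialized, and `frame` would come from your camera pipeline, e.g. a MediaFrameReader callback):

```cpp
#include <winrt/Windows.AI.MachineLearning.h>
#include <winrt/Windows.Media.h>

using namespace winrt::Windows::AI::MachineLearning;
using winrt::Windows::Media::VideoFrame;

LearningModelEvaluationResult RunOnFrame(VideoFrame const& frame) {
  // Load once; reuse the session across frames.
  static LearningModel model = LearningModel::LoadFromFilePath(L"model.onnx");
  static LearningModelSession session{model};

  LearningModelBinding binding{session};
  // The runtime tensorizes the VideoFrame; no manual conversion is needed.
  binding.Bind(L"data", ImageFeatureValue::CreateFromVideoFrame(frame));
  return session.Evaluate(binding, L"frame0");
}
```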

The Windows.AI.MachineLearning API is a WinRT API that shipped inside the Windows OS starting with build 1809 (RS5). It embedded a version of the ONNX Runtime.

Many customers have asked for a way to use this offering as an application redistributable package.

With our new [layered architecture](HighLevelDesign.md#the-onnx-runtime-and-windows-os-integration) you can now do this, with some limitations.

## NuGet Package

The Microsoft.ML.OnnxRuntime [NuGet package](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime/) includes the precompiled binaries for using the ONNX Runtime with the WinRT API. Support is compiled directly into *onnxruntime.dll*.

Note: As of the 1.2 release, you can use all of the CPU functionality from these binaries. In order to get GPU functionality using DirectML, you will need to build the binary yourself using [these instructions](https://github.com/microsoft/onnxruntime/blob/master/BUILD.md#DirectML).

## Sample Code

Any code already written for the Windows.AI.MachineLearning API can be easily modified to run against the Microsoft.ML.OnnxRuntime package. Check out these [existing samples](https://github.com/microsoft/windows-Machine-Learning) on GitHub.

## Activation and Side-by-Side

Because Windows.AI.MachineLearning ships inside the OS, default object activation will use the OS binaries. Applications must explicitly enable use of the redist binaries in code when creating WinML objects (like [LearningModelSession](https://docs.microsoft.com/en-us/uwp/api/windows.ai.machinelearning.learningmodelsession)).

Read up [here](HighLevelDesign.md#the-onnx-runtime-and-windows-os-integration) on how to decide when to use the OS binaries and when to use the redist binaries.

To create objects using the redist binaries, you have several choices depending on how you are consuming the WinRT APIs:

* cpp/winrt: You can hook WINRT_RoGetActivationFactory as shown [here](https://github.com/microsoft/Windows-Machine-Learning/blob/master/Samples/SqueezeNetObjectDetection/Desktop/cpp/dllload.cpp) in our sample projects; a simplified sketch follows this list.
* WRL: (coming soon)
* Raw C++: Use code similar to the cpp/winrt sample to load and use the activation factory from your redist binary.
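
Below is a heavily simplified sketch of that hooking technique; it is not the supported sample. It omits the error handling, factory caching, and x86 linker details found in dllload.cpp, and the exact hook symbol can vary by C++/WinRT version.

```cpp
#include <windows.h>
#include <roapi.h>       // RoGetActivationFactory (link runtimeobject.lib)
#include <winstring.h>   // WindowsGetStringRawBuffer
#include <activation.h>  // IActivationFactory
#include <string_view>

// C++/WinRT resolves activation through this hook when the app defines it.
extern "C" HRESULT __stdcall WINRT_RoGetActivationFactory(
    HSTRING classId, GUID const& iid, void** factory) noexcept {
  *factory = nullptr;
  std::wstring_view name{WindowsGetStringRawBuffer(classId, nullptr)};

  // Anything that is not a WinML class falls through to the OS binaries.
  if (name.compare(0, 27, L"Windows.AI.MachineLearning.") != 0) {
    return RoGetActivationFactory(classId, iid, factory);
  }

  // Redist path: load onnxruntime.dll from the app directory and ask it
  // for the activation factory instead of the OS.
  HMODULE library = LoadLibraryW(L"onnxruntime.dll");
  if (!library) return HRESULT_FROM_WIN32(GetLastError());

  using GetFactory = HRESULT(__stdcall*)(HSTRING, IActivationFactory**);
  auto get_factory = reinterpret_cast<GetFactory>(
      GetProcAddress(library, "DllGetActivationFactory"));
  if (!get_factory) return HRESULT_FROM_WIN32(GetLastError());

  IActivationFactory* activation_factory = nullptr;
  HRESULT hr = get_factory(classId, &activation_factory);
  if (FAILED(hr)) return hr;

  hr = activation_factory->QueryInterface(iid, factory);
  activation_factory->Release();
  return hr;
}
```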

## Deciding which header files to use

The best way to use the API is to use the header files that come with the Windows SDK.

* For Visual Studio they are included as an optional feature.
* For Visual Studio Code you can download them [here](https://developer.microsoft.com/en-US/windows/downloads/windows-10-sdk/).

This [tutorial](https://docs.microsoft.com/en-us/windows/ai/windows-ml/get-started-desktop) is a great place to get started.

To detect if an OS already has Windows.AI.MachineLearning you can use the [IsApiContractPresent](https://docs.microsoft.com/en-us/uwp/api/windows.foundation.metadata.apiinformation.isapicontractpresent) method. This can be called from either UWP or native apps.

If the OS does not have the runtime you need you can switch to use the redist binaries instead.
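
As a small C++/WinRT sketch of that check, keyed to the contract table below (the contract name is the documented Windows.AI.MachineLearning contract; requiring version 2, i.e. 1903, is just an example threshold):

```cpp
#include <winrt/Windows.Foundation.Metadata.h>

using winrt::Windows::Foundation::Metadata::ApiInformation;

// Returns true when the inbox WinML meets the required contract version;
// otherwise the app should fall back to the redist onnxruntime.dll.
bool InboxWinMLAvailable() {
  return ApiInformation::IsApiContractPresent(
      L"Windows.AI.MachineLearning.MachineLearningContract", 2);
}
```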

|Release|API contract version|
|--|--|
|Windows OS 1809| 1|
|Windows OS 1903| 2|
|Windows OS 1909| 2|
|ORT release 1.2| 3|

See [here](https://docs.microsoft.com/en-us/windows/ai/windows-ml/onnx-versions) for more about opsets and ONNX version details in Windows OS distributions.
Binary file added docs/images/layered-architecture.png
5 changes: 2 additions & 3 deletions onnxruntime/core/common/threadpool.cc
@@ -43,17 +43,16 @@
void ThreadPool::ParallelFor(int32_t total, std::function<void(int32_t)> fn) {

  // TODO: Eigen supports a more efficient ThreadPoolDevice mechanism
  // We will simply rely on the work queue and stealing in the short term.
- Barrier barrier(static_cast<unsigned int>(total - 1));
+ Barrier barrier(static_cast<unsigned int>(total));
  std::function<void(int32_t)> handle_iteration = [&barrier, &fn](int iteration) {
    fn(iteration);
    barrier.Notify();
  };

- for (int32_t id = 1; id < total; ++id) {
+ for (int32_t id = 0; id < total; ++id) {
    Schedule([=, &handle_iteration]() { handle_iteration(id); });
  }

- fn(0);
  barrier.Wait();
}
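
For readers following the fix: a self-contained sketch of the pattern after this change, using standard primitives in place of ORT's internal Eigen-based `Barrier` and `Schedule`. Each of the `total` scheduled iterations notifies the barrier exactly once, which is why the count changes from `total - 1` to `total` once iteration 0 is no longer run inline on the calling thread.

```cpp
#include <condition_variable>
#include <cstdint>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

class Barrier {
 public:
  explicit Barrier(unsigned int count) : count_(count) {}
  void Notify() {
    std::lock_guard<std::mutex> lock(m_);
    if (--count_ == 0) cv_.notify_all();
  }
  void Wait() {
    std::unique_lock<std::mutex> lock(m_);
    cv_.wait(lock, [this] { return count_ == 0; });
  }

 private:
  std::mutex m_;
  std::condition_variable cv_;
  unsigned int count_;
};

void ParallelFor(int32_t total, const std::function<void(int32_t)>& fn) {
  Barrier barrier(static_cast<unsigned int>(total));  // one notification per iteration
  std::vector<std::thread> workers;
  for (int32_t id = 0; id < total; ++id) {  // every iteration is scheduled, including 0
    workers.emplace_back([id, &fn, &barrier] {
      fn(id);
      barrier.Notify();
    });
  }
  barrier.Wait();  // wakes only after all `total` notifications
  for (auto& t : workers) t.join();
}
```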
