
OpenVINO-EP v4.0 Release PR with OpenVINO 2022.1 #11025

Merged: 101 commits, Apr 6, 2022

Conversation

MaajidKhan
Contributor

Description:
OpenVINO-EP v4.0 Release PR with OpenVINO 2022.1

Motivation and Context

  • OpenVINO-EP now supports the latest OpenVINO 2022.1 version
  • OpenVINO-EP is now updated to use the new OpenVINO 2.0 APIs starting from this release
  • Backward compatibility support for older OpenVINO versions (OV 2021.3, OV 2021.4) is available
  • New code design changes introduce a cleaner code structure for the different OpenVINO versions
  • Opset 13 compliance with respect to OpenVINO (more operators added)
  • New features added (GPU OpenCL throttling, device-type checks)
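The backward-compatibility item above is typically handled at build time. A minimal sketch of version-gated compilation, assuming a hypothetical OPENVINO_2022_1 macro (the EP's real build flags may differ):

```cpp
#include <string>

// Pick the API path at compile time. The OPENVINO_2022_1 macro name is a
// hypothetical stand-in for whatever the build system actually defines.
std::string SelectedApi() {
#if defined(OPENVINO_2022_1)
  return "OpenVINO 2.0 API";            // new path, this release onward
#else
  return "legacy InferenceEngine API";  // OV 2021.3 / 2021.4 fallback
#endif
}
```

The same pattern extends to whole per-version source files selected by the build system, which is one way to keep the version-specific code paths separated.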

MaajidKhan and others added 30 commits February 15, 2022 21:38
->Added ov-ep 2022.1 flow
->Validated CPU Unit tests with OV
Master using onnxruntime_test_all unit
tests.

Signed-off-by: MaajidKhan <[email protected]>
->Enable Resize op for iGPU
->Enable Add op for iGPU

Signed-off-by: MaajidKhan <[email protected]>
->Removing some conditions from
GetCapability() which are now not
required. (Removed conditions for
OV version support less than 2021.2)

Signed-off-by: MaajidKhan <[email protected]>
Signed-off-by: MaajidKhan <[email protected]>
Signed-off-by: MaajidKhan <[email protected]>
Signed-off-by: MaajidKhan <[email protected]>
*Added GPU Throttling feature for iGPUs.
When the user enables it as a runtime option,
it helps reduce the overall CPU usage
of the application.

*Added changes to exercise this option
using onnxruntime_perf_test application.

Signed-off-by: MaajidKhan <[email protected]>
->Handling corner cases for
device_type checks

Signed-off-by: MaajidKhan <[email protected]>
->Enabled Few ops
->Added Debug info for case 3b in getcapability()

Signed-off-by: MaajidKhan <[email protected]>
// Using the ie_core capability GetAvailableDevices to fetch the list of plugged-in devices
bool device_found = false;
bool device_id_found = false;
auto available_devices = openvino_ep::BackendManager::GetGlobalContext().ie_core.GetAvailableDevices();
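Stripped of the EP plumbing, the availability check above reduces to scanning the returned device list. A stand-alone sketch, with a hard-coded list standing in for the real ie_core.GetAvailableDevices() call:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Returns true if `requested` (e.g. "GPU" or an id-qualified "GPU.1")
// appears in the device list reported by the runtime.
bool IsDeviceAvailable(const std::string& requested,
                       const std::vector<std::string>& available_devices) {
  return std::find(available_devices.begin(), available_devices.end(),
                   requested) != available_devices.end();
}
```

With an available list of {"CPU", "GPU.0", "GPU.1"}, a request for "MYRIAD" fails the check and can be rejected with a clear error up front instead of surfacing as a late runtime failure.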
Contributor

@oliviajain oliviajain Apr 1, 2022


Is there a way to silence the output from this call? If the inference takes long, the user only sees the message below for some time:
[E:] [BSL] found 0 ioexpander device

Contributor Author


Unfortunately, there is not much we can do to disable this, as it's coming from the OpenVINO APIs. I will create a ticket with the OpenVINO team to see if we can get this print removed in the next OpenVINO release (2022.2).

->Disabled this model for openvino. The
test is failing in Internal_CI pipelines.

Signed-off-by: MaajidKhan <[email protected]>
@jywu-msft
Member

/azp run Linux OpenVINO CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@jywu-msft
Member

/azp run MacOS NoContribops CI Pipeline, Windows CPU CI Pipeline, Windows GPU CI Pipeline, Windows GPU TensorRT CI Pipeline, Windows WebAssembly CI Pipeline, orttraining-amd-gpu-ci-pipeline, orttraining-linux-ci-pipeline, orttraining-linux-gpu-ci-pipeline, orttraining-ortmodule-distributed, Linux CPU CI Pipeline

@jywu-msft
Member

/azp run Linux CPU Minimal Build E2E CI Pipeline, Linux GPU TensorRT CI Pipeline, Linux Nuphar CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, onnxruntime-python-checks-ci-pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux CPU x64 NoContribops CI Pipeline, Linux CPU x64 NoContribops CI Pipeline, Linux GPU CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 8 pipeline(s).

@azure-pipelines

Azure Pipelines successfully started running 7 pipeline(s).

@jywu-msft
Member

@MaajidKhan FYI, the python flake8 check failed:

[flake8 PEP8 ERROR] ./tools/nuget/generate_nuspec_for_native_nuget.py:475:121: E501 line too long (122 > 120 characters)

Signed-off-by: MaajidKhan <[email protected]>
@MaajidKhan
Contributor Author

@MaajidKhan FYI, the python flake8 check failed:

[flake8 PEP8 ERROR] ./tools/nuget/generate_nuspec_for_native_nuget.py:475:121: E501 line too long (122 > 120 characters)

@jywu-msft Done. commit added.

@oliviajain
Contributor

/azp run onnxruntime-python-checks-ci-pipeline

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@oliviajain
Contributor

/azp run Linux OpenVINO CI Pipeline

@oliviajain
Contributor

/azp run MacOS NoContribops CI Pipeline, Windows CPU CI Pipeline, Windows GPU CI Pipeline, Windows GPU TensorRT CI Pipeline, Windows WebAssembly CI Pipeline, orttraining-amd-gpu-ci-pipeline, orttraining-linux-ci-pipeline, orttraining-linux-gpu-ci-pipeline, orttraining-ortmodule-distributed, Linux CPU CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@oliviajain
Contributor

/azp run Linux CPU Minimal Build E2E CI Pipeline, Linux GPU TensorRT CI Pipeline, Linux Nuphar CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, onnxruntime-python-checks-ci-pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux CPU x64 NoContribops CI Pipeline, Linux CPU x64 NoContribops CI Pipeline, Linux GPU CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 8 pipeline(s).

@azure-pipelines

Azure Pipelines successfully started running 7 pipeline(s).

@oliviajain
Contributor

/azp run onnxruntime-binary-size-checks-ci-pipeline

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@oliviajain
Contributor

oliviajain commented Apr 6, 2022

Python tests were disabled for OpenVINO CPU because OpenVINO doesn't support the Max, Min, and Sum operators with a single input. The fix will come in the next OV release. Let's add this regression to the OV release notes.
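For context, ONNX defines Max, Min, and Sum as variadic operators that accept one or more inputs, so the single-input form is simply the identity. A reference sketch of that semantics over equally shaped 1-D tensors:

```cpp
#include <algorithm>
#include <vector>

// Element-wise Max over 1-D tensors of equal length. With exactly one
// input the accumulation loop never runs and the result is the input
// itself, which is the case the disabled tests exercised.
std::vector<float> OnnxMax(const std::vector<std::vector<float>>& inputs) {
  std::vector<float> out = inputs.front();
  for (std::size_t i = 1; i < inputs.size(); ++i)
    for (std::size_t j = 0; j < out.size(); ++j)
      out[j] = std::max(out[j], inputs[i][j]);
  return out;
}
```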

@oliviajain oliviajain self-requested a review April 6, 2022 20:30
@oliviajain oliviajain merged commit 81fa28b into microsoft:master Apr 6, 2022
seddonm1 pushed a commit to seddonm1/onnxruntime that referenced this pull request May 15, 2022
* Enabling ov-ep for 2022.1 Release

->Added ov-ep 2022.1 flow
->Validated CPU Unit tests with OV
Master using onnxruntime_test_all unit
tests.

Signed-off-by: MaajidKhan <[email protected]>

* Fix for output mismatch b/w OpenVINO and ONNX

Refer:
https://jira.devtools.intel.com/browse/CVS-60310

Signed-off-by: MaajidKhan <[email protected]>

* Enabling Adobe ops

->Enable Resize op for iGPU
->Enable Add op for iGPU

Signed-off-by: MaajidKhan <[email protected]>

* Removing irrelevant conditions

->Removing some conditions from
GetCapability() which are now not
required. (Removed conditions for
OV version support less than 2021.2)

Signed-off-by: MaajidKhan <[email protected]>

* Enable upsample op

Signed-off-by: MaajidKhan <[email protected]>

* Enable Adobe proxy-e model

Signed-off-by: MaajidKhan <[email protected]>

* Removing any extra conditions for Opset13 ops

* Opset13 changes

Signed-off-by: MaajidKhan <[email protected]>

* Exception handling for devices

* Added comments

* Implement GPU Throttling feature

*Added GPU Throttling feature for iGPU's.
when user enables it as a runtime option,
it helps in reducing overall CPU usage
of the application

*Added changes to exercise this option
using onnxruntime_perf_test application.

Signed-off-by: MaajidKhan <[email protected]>

* Renaming the runtime config option

Signed-off-by: MaajidKhan <[email protected]>

* Added the user to video and users group

* Handling_GPU.0_GPU.1

* Handling special conditions

->Handling corner cases for
device_type checks

Signed-off-by: MaajidKhan <[email protected]>

* Modification to include new api 2.0 changes in the code

* Added opset13 changes

->Enabled Few ops
->Added Debug info for case 3b in getcapability()

Signed-off-by: MaajidKhan <[email protected]>

* Log comments updated

* Changes to enable 2.0 api

* Fix build issue

Signed-off-by: MaajidKhan <[email protected]>

* Fixes issues

*Fixes compiler warnings c4458 on windows.
*Fixes the bug in device_type check logic
*Adds print info for enable_opencl_throttling
option in onnxruntime_perf_test

Signed-off-by: MaajidKhan <[email protected]>

* commit to make openvino_2021.4 compatible

* Fixed IO Buffer Optimization

* Fix output names issue

* Fix 2021.3 branch

* Bug Fix for Multiple inputs/outputs

- Assigns the right output_name and
input_name for the graph when
returned by CompiledModel::inputs()
OV function.

- Also takes care of the output mismatch
issue b/w openvino output and onnx
output

Signed-off-by: MaajidKhan <[email protected]>

* Add comments for the changes made

Signed-off-by: MaajidKhan <[email protected]>

* IO Buffer Changes

* Commit for Disabling GPU Throttling for 2021.4

* Updated branch

* Fix windows build

->Fixed windows build in debug mode
->Disabled scatternd3_tensor_int64

Signed-off-by: MaajidKhan <[email protected]>

* Fixed CPP Unit tests for CPU

-Fixed shrink, MVN, ReduceL2, Maxpool,
upsample, scatter, slice, reshape,
unsqueeze.

Signed-off-by: MaajidKhan <[email protected]>

* Fixed first set of GPU Tests

Signed-off-by: MaajidKhan <[email protected]>

* Fixed additional failing tests on GPU

->Added conditions to disable certain ops
under certain conditions

->Disabled certain tests

->Added some op supports for no_dimension
supported

Signed-off-by: MaajidKhan <[email protected]>

* Added Expand op support for CPU

Signed-off-by: MaajidKhan <[email protected]>

* Added condition for squeeze op

->Shape can't have empty axes attribute

Signed-off-by: MaajidKhan <[email protected]>

* Add support for LessOrEqual op function

Signed-off-by: MaajidKhan <[email protected]>

* OV Interface wait for replaced by indefinite wait call

* use names from ONNX model to access OV tensors

This change is to use the input/output names
retrieved from original onnx model to access
OV tensors and to check if there's any input
or output names mismatch b/w ONNX naming
and OV naming.

Signed-off-by: MaajidKhan <[email protected]>

* Fixes Myriad unit tests and other issues

->Fixes Myriad CPP unit tests
->Fixes output mismatch issue with models with
sub graph partitioning

Signed-off-by: MaajidKhan <[email protected]>

* Fix segfault issue

->Fixed case 3b condition in get_capability()
which was causing the segfault issue

Signed-off-by: MaajidKhan <[email protected]>

* Fixed build issue with OV 2021.4 with I/O buffer

Signed-off-by: MaajidKhan <[email protected]>

* Disables performance counters for I/O Buffer

Signed-off-by: MaajidKhan <[email protected]>

* Fixed inputs/outputs mismatch for HDDL with 2022.1

Signed-off-by: Mohammad Amir Aqeel <[email protected]>

* Fix to enable GPU FP16

* Enabled mlperf_ssd_mobilenet_300 model fully on CPU

Signed-off-by: MaajidKhan <[email protected]>

* Added ov version specific dll packaging for nuget

* Fixed conditions for few ops

Signed-off-by: MaajidKhan <[email protected]>

* Dockerfile updates

* Updated License Info

-Updated the copyrights License Info
-modified FP16 transformations with OV 2022.1

Signed-off-by: MaajidKhan <[email protected]>

* Disabling mlperf_ssd_mobilenet_300 model

->Disabled this model for openvino. The
test is failing in Internal_CI pipelines.

Signed-off-by: MaajidKhan <[email protected]>

* Disabling failing python CPU Tests

Signed-off-by: MaajidKhan <[email protected]>

* Fixed flake8 python errors

Signed-off-by: MaajidKhan <[email protected]>

Co-authored-by: hdgx <[email protected]>
Co-authored-by: mayavijx <[email protected]>
Co-authored-by: sfatimar <[email protected]>
Co-authored-by: mohsinmx <[email protected]>
Co-authored-by: Mohammad Amir Aqeel <[email protected]>
8 participants