
[QNN EP] Improve QDQ model accuracy tests #16916

Merged: 35 commits merged into main on Aug 4, 2023

Conversation

@adrianlizarraga (Contributor) commented Jul 29, 2023

Description

  • Improves how unit tests measure the accuracy of QDQ models on QNN EP.
  • Adds tests for ops: Add, Mul, Abs¹, And¹, Or¹, Ceil¹, Cos¹

¹: Not previously supported due to missing node unit handling.

Motivation and Context

The new approach for testing QDQ operator accuracy requires running 3 inferences:

  1. float model on CPU EP (baseline)
  2. QDQ model on CPU EP
  3. QDQ model on QNN EP

The unit tests check that running the QDQ model on QNN EP (3) is at least as accurate (within a small tolerance) as running the QDQ model on CPU EP (2). Accuracy is measured by comparing each result against the float baseline (1).

This is essentially what we care about: is QNN EP as accurate as CPU EP? If not, it is worth investigating as a potential bug.
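
For illustration only, here is a minimal sketch of the 3-inference comparison using the onnxruntime Python API. The actual unit tests are C++ gtest code in the repo; the model file names, input name and shape, `backend_path` value, and tolerance below are assumptions, not values taken from this PR:

```python
import numpy as np
import onnxruntime as ort

def max_abs_error(expected, actual):
    # Maximum absolute difference between two output tensors.
    return float(np.max(np.abs(np.asarray(expected, dtype=np.float32) -
                               np.asarray(actual, dtype=np.float32))))

# Hypothetical input; the real tests generate op-specific inputs and ranges.
inputs = {"input": np.random.uniform(-10.0, 10.0, size=(1, 3, 8, 8)).astype(np.float32)}

# 1. float model on CPU EP (baseline)
baseline = ort.InferenceSession("op_float32.onnx",
                                providers=["CPUExecutionProvider"]).run(None, inputs)[0]

# 2. QDQ model on CPU EP
qdq_cpu = ort.InferenceSession("op_qdq.onnx",
                               providers=["CPUExecutionProvider"]).run(None, inputs)[0]

# 3. QDQ model on QNN EP (backend_path is illustrative; it differs per platform/backend)
qnn_providers = [("QNNExecutionProvider", {"backend_path": "QnnCpu.dll"}),
                 "CPUExecutionProvider"]
qdq_qnn = ort.InferenceSession("op_qdq.onnx", providers=qnn_providers).run(None, inputs)[0]

# QNN EP (3) must be at least as accurate as CPU EP (2), within a small tolerance,
# where accuracy means error relative to the float baseline (1).
tolerance = 1e-4  # illustrative value only
assert max_abs_error(baseline, qdq_qnn) <= max_abs_error(baseline, qdq_cpu) + tolerance
```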

@adrianlizarraga adrianlizarraga marked this pull request as ready for review July 31, 2023 07:27
HectorSVC previously approved these changes Aug 3, 2023
@HectorSVC (Contributor) left a comment:

:shipit:

@adrianlizarraga adrianlizarraga merged commit 191f98a into main Aug 4, 2023
@adrianlizarraga adrianlizarraga deleted the adrianl/qnn-add-unit-tests branch August 4, 2023 19:15
adrianlizarraga added a commit that referenced this pull request Aug 9, 2023
### Description
Slightly increases the allowable error tolerance for ReduceProd tests on
x64 Windows/Linux with the QNN CPU backend.


### Motivation and Context
A recent [PR](#16916)
updated the input range for ReduceProd tests, which uncovered an
inaccuracy for ReduceProd on x64 Windows/Linux with the QNN CPU backend.
This PR updates the allowable error tolerance and adds a TODO for
investigation.

This is needed to ensure the QNN_Nuget_Windows pipeline runs
successfully.
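
As a rough illustration of what the "allowable error tolerance" means here (a sketch only; the real tests are C++ and the values below are made up, not the numbers used in this change):

```python
import numpy as np

# Hypothetical per-op tolerances for comparing QNN EP output against the float
# CPU baseline; ReduceProd gets a slightly looser bound on the QNN CPU backend.
# Values are illustrative, not taken from the PR.
TOLERANCES = {"default": 1e-4, "ReduceProd": 4e-4}

def outputs_match(op_type, expected, actual):
    # Returns True if the outputs agree within the op-specific tolerance.
    tol = TOLERANCES.get(op_type, TOLERANCES["default"])
    return bool(np.allclose(expected, actual, rtol=tol, atol=tol))
```
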
jchen351 pushed a commit that referenced this pull request Aug 12, 2023
jchen351 pushed a commit that referenced this pull request Aug 12, 2023
kleiti pushed a commit to kleiti/onnxruntime that referenced this pull request Mar 22, 2024
kleiti pushed a commit to kleiti/onnxruntime that referenced this pull request Mar 22, 2024