
Neural Insights: PyTorch support frontend #1209

Merged: 5 commits merged into bmyrcha/ni_pytorch_fx_support from aradys/NI_pytorch on Sep 4, 2023

Conversation

@aradys (Contributor) commented Sep 1, 2023

Type of Change

feature
API changed or not

Description

Add PyTorch support to the Neural Insights frontend (a hedged usage sketch follows this template).

Expected Behavior & Potential Risk

the expected behavior triggered by this PR

How has this PR been tested?

how to reproduce the test (including hardware information)

Dependency Change?

any library dependency introduced or removed
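
Since the template above gives no concrete example, here is a minimal, hypothetical sketch of the workflow this frontend change targets: post-training quantization of a PyTorch model with Intel Neural Compressor while diagnosis data is collected for Neural Insights. The `diagnosis=True` flag, the module paths, and the save location are assumptions about the Neural Compressor 2.x API, not something taken from this PR.

```python
# Hypothetical usage sketch (not from this PR): quantize a PyTorch model with
# Neural Compressor and collect diagnosis data that Neural Insights can display.
# The diagnosis flag and module paths are assumptions about the 2.x API.
import torch
import torchvision
from neural_compressor import PostTrainingQuantConfig, quantization

# Any torch.nn.Module works; ResNet-18 with random weights is just a stand-in.
model = torchvision.models.resnet18(weights=None)

# The calibration loader should yield (input, label) batches from real data;
# FakeData keeps the sketch self-contained.
calib_dataloader = torch.utils.data.DataLoader(
    torchvision.datasets.FakeData(size=64, transform=torchvision.transforms.ToTensor()),
    batch_size=8,
)

config = PostTrainingQuantConfig(diagnosis=True)  # assumed flag enabling Neural Insights data
q_model = quantization.fit(model=model, conf=config, calib_dataloader=calib_dataloader)
q_model.save("./quantized_model")
```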

aradys added 5 commits August 24, 2023 13:46, each carrying the trailer "Signed-off-by: aradys-intel <[email protected]>".
@aradys requested a review from bmyrcha on September 1, 2023 11:27
@aradys merged commit e34f316 into bmyrcha/ni_pytorch_fx_support on Sep 4, 2023
@aradys deleted the aradys/NI_pytorch branch on September 4, 2023 07:55
chensuyue pushed a commit to chensuyue/lpot that referenced this pull request on Feb 21, 2024; its commit message reads:
* refine onnxrt adaptor

* fix dynamic quant

* Strategy refine (intel#1210)

* refine(strategy): refine tuning strategy

* refine(strategy): add fallback tuning sampler

* refine(strategy): integrate to main logic

* refine(strategy): add support for dynamic/static

* refine(strategy): add default strategy

* refine(strategy): update the best cfg

* refine(strategy): merge with user cfg

* fix(strategy): remove pdb command

Co-authored-by: sys-lpot-val <[email protected]>

* fix onnxrt adaptor bug

* refine(strategy): add optype-wise tuning

* refine(strategy): replace default with basic

* refine(strategy): fix auto mixed-precision

* fix(strategy): fix line too long

* fix(strategy): fix line too long

* fix(strategy): add init

* fix(strategy): add init & fix best update

* fix(strategy): fix line too long

* fix(strategy): workaround before adaptor ready

* modify `_merge_op_wise_cfg` to support config with regex (see the illustrative sketch after this commit list).

* fix(strategy): fix parse capability

* fix onnxrt ut

* fix(strategy): fix calib_iter setting

* refine(strategy): add mse

* fix(strategy): fix typo

* fix(strategy): keep other strategies

* fixed the bug that ptq quant always crashes.

* fix(strategy): fixed tf recover model ut

* fix ort ut

* fix qdq auto_quant mode

* fix(strategy): refactor exhaustive strategy

* fix(strategy): fixed resume from history

* fix(strategy): refactor random strategy

* fix(strategy): fixed the merge logic

* fix(strategy): fixed test_multi_metrics

* refine(strategy): update model-wise tuning

* fix(strategy): fixed typos

* fix(strategy): fixed mixed precision ut

* fix(strategy): fix `_tune_cfg_converter` and `_create_calib_dataloader`.

* Revert "fix(strategy): fix `_tune_cfg_converter` and `_create_calib_dataloader`."

This reverts commit 9804db605eb20ca2faa2559c23ed60c65f57db51.

* fix(strategy): fix config for qat

* refactor(strategy): refactor the strategy

* refactor(strategy): refactor tpe,sigopt, bayesian

* refactor(strategy): refactor tpe,sigopt, bayesian (intel#1261)

* fix(strategy): fixed typos

* fix(strategy): enable dataloader for ptq auto quant

* refine(strategy): refine the interfaces of query result

* update data structure for onnx backend

* fix(strategy): update the interfaces

* fix(strategy): fixed the conflicts after rebase

* fix(strategy): fixed the conflicts

* fix onnx bug

* remove list

* fix(strategy): update the  merge logic

* fix(strategy): WA for TF/ONNX/PT

* fix(strategy): fix typo

* Adapt TF to the refined strategy

* Remove TF WA

* update pytorch adaptor

* update config structure for mxnet.

* fix pytorch adaptor bug

* remove code

* fix(strategy): remove temporary conversion since adaptor is ready.

* exclude const node and placeholder from fp32 list

* remove fp32 from capability

* fix ut issue

* fix(test_adaptor_pytorch): fix assertions to fit new fw_capability format.

* fix adaptor bug

* remove some unnecessary debug info

* fixed tf qat

* add LSTM

* Strategy/mengni (intel#1279)

* remove some unnecessary debug info

* fixed tf qat

* fix(PT): LSTM won't be quantized in static ptq

* fix(PT): quantized LSTM in qat

* fix ut issue

* fix(util): enable loading both old and new format of tuning cfgs.

* fix(strategy): keep the order of options

* fix(strategy): fixed the cfg initialization for qat

* fix(strategy): remove some debug info and fix qat cfg

* fix(strategy): remove unused code

* fix(strategy): fixed the cfg initialization for bayesian

* fix(strategy): fixed the model-wise sampler

* ut(strategy): add more uts

* fix(strategy): fixed the fallback order

Co-authored-by: Ray <[email protected]>
Co-authored-by: sys-lpot-val <[email protected]>
Co-authored-by: yiliu30 <[email protected]>
Co-authored-by: Zhang Yi5 <[email protected]>
Co-authored-by: lvliang-intel <[email protected]>
Co-authored-by: Lv, Kaokao <[email protected]>
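
One commit title above mentions extending `_merge_op_wise_cfg` to accept config keys written as regular expressions. As a purely illustrative, hypothetical sketch (the function name, config layout, and merge rules below are assumptions for illustration, not Neural Compressor's actual implementation), merging a user's op-wise config into per-op capability by regex-matching op names could look like this:

```python
import re

# Hypothetical sketch of regex-aware op-wise config merging; names and config
# layout are assumptions, not Neural Compressor internals.
def merge_op_wise_cfg(capability, user_op_cfg):
    """Overlay user settings onto per-op capability entries.

    capability: {op_name: {setting: value}} discovered from the model.
    user_op_cfg: {pattern: {setting: value}} where pattern may be an exact
    op name or a regular expression such as r"conv.*".
    """
    merged = {op: dict(cfg) for op, cfg in capability.items()}
    for pattern, overrides in user_op_cfg.items():
        regex = re.compile(pattern)
        for op_name in merged:
            # fullmatch avoids a pattern like "conv1" also matching "conv10".
            if op_name == pattern or regex.fullmatch(op_name):
                merged[op_name].update(overrides)
    return merged

# Example: force every convolution op to per-channel weight quantization.
capability = {
    "conv1": {"weight": "per_tensor"},
    "conv2": {"weight": "per_tensor"},
    "linear1": {"weight": "per_tensor"},
}
user_cfg = {r"conv.*": {"weight": "per_channel"}}
print(merge_op_wise_cfg(capability, user_cfg))
# {'conv1': {'weight': 'per_channel'}, 'conv2': {'weight': 'per_channel'},
#  'linear1': {'weight': 'per_tensor'}}
```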