This repository has been archived by the owner on Oct 25, 2024. It is now read-only.

Migrate trainer INC 1.x API to 2.x #1605

Merged · 23 commits · Jun 24, 2024

Conversation

@changwangss (Contributor) commented Jun 12, 2024

Type of Change

optimize:
- remove the PT question-answering pruning examples group_lasso and longformer_triviaqa
- remove TF examples; 3.x INC API quantization will be added later
- remove NoTrainerOptimizer, which has no counterpart in the 2.x INC API design
- migrate PT examples (quantization, pruning, distillation, orchestration) from the 1.x API to the 2.x API
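
As a rough illustration of what the last bullet means in practice (a hedged sketch: the identifiers below follow the public ITREX/INC documentation, not this PR's actual diff), the quantization entry point moves from the ITREX-defined `QuantizationConfig` to the config classes that `neural_compressor` 2.x exposes directly:

```diff
-from intel_extension_for_transformers.transformers import (
-    QuantizationConfig, metrics, objectives,
-)
-q_config = QuantizationConfig(
-    approach="PostTrainingStatic",
-    metrics=[metrics.Metric(name="eval_accuracy")],
-    objectives=[objectives.performance],
-)
+from neural_compressor.config import PostTrainingQuantConfig
+q_config = PostTrainingQuantConfig(approach="static")
 model = trainer.quantize(quant_config=q_config)
```

The pruning, distillation, and orchestration examples follow the same pattern: the yaml/wrapper-style 1.x configs are replaced by the corresponding `neural_compressor.config` classes.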

Description

detail description
JIRA ticket: xxx

Expected Behavior & Potential Risk

The expected behavior triggered by this PR.

How has this PR been tested?

how to reproduce the test (including hardware information)

Dependency Change?

any library dependency introduced or removed


github-actions bot commented Jun 12, 2024

⛈️ Required checks status: Has failure 🔴

Warning
If you do not have access to re-run the CI-Summary bot, please contact VincyZhang for help. If you push a new commit, all of the workflows will be re-triggered.

Groups summary

🟢 Format Scan Tests workflow
| Check ID | Status | Error details |
| --- | --- | --- |
| format-scan (pylint) | success | |
| format-scan (bandit) | success | |
| format-scan (cloc) | success | |
| format-scan (cpplint) | success | |

These checks are required after the changes to intel_extension_for_transformers/transformers/__init__.py, intel_extension_for_transformers/transformers/config.py, intel_extension_for_transformers/transformers/distillation.py, intel_extension_for_transformers/transformers/optimizer.py, intel_extension_for_transformers/transformers/optimizer_tf.py, intel_extension_for_transformers/transformers/pruning.py, intel_extension_for_transformers/transformers/quantization.py, intel_extension_for_transformers/transformers/trainer.py, workflows/compression_aware_training/config/README.md, workflows/compression_aware_training/config/config.yaml, workflows/compression_aware_training/config/distillation_with_qat.yaml, workflows/compression_aware_training/config/qat.yaml, workflows/compression_aware_training/config/sat.yaml, workflows/compression_aware_training/src/itrex_opt.py, workflows/compression_aware_training/src/utils.py, workflows/dlsa/run_dlsa.py, workflows/hf_finetuning_and_inference_nlp/src/finetune_itrex.py, workflows/hf_finetuning_and_inference_nlp/src/infer_itrex.py.

🔴 Optimize Unit Test workflow
| Check ID | Status | Error details |
| --- | --- | --- |
| optimize-unit-test-baseline | success | |
| optimize-unit-test-PR-test | success | |
| Genreate-OptimizeUT-Report | failure | download |

These checks are required after the changes to intel_extension_for_transformers/transformers/__init__.py, intel_extension_for_transformers/transformers/config.py, intel_extension_for_transformers/transformers/distillation.py, intel_extension_for_transformers/transformers/optimizer.py, intel_extension_for_transformers/transformers/optimizer_tf.py, intel_extension_for_transformers/transformers/pruning.py, intel_extension_for_transformers/transformers/quantization.py, intel_extension_for_transformers/transformers/trainer.py, tests/CI/test_config.py, tests/CI/test_quantization.py, tests/CI/test_quantization_qa_ipex.py, tests/Nightly/test_distillation.py, tests/Nightly/test_orchestrate_optimization.py, tests/Nightly/test_pruning.py, tests/Nightly/test_tf_distillation.py, tests/Nightly/test_tf_pruning.py, tests/Nightly/test_tf_quantization.py.

🟢 NeuralChat Unit Test
| Check ID | Status | Error details |
| --- | --- | --- |
| neuralchat-unit-test-baseline | success | |
| neuralchat-unit-test-PR-test | success | |
| Generate-NeuralChat-Report | success | |

These checks are required after the changes to intel_extension_for_transformers/transformers/__init__.py, intel_extension_for_transformers/transformers/config.py, intel_extension_for_transformers/transformers/distillation.py, intel_extension_for_transformers/transformers/optimizer.py, intel_extension_for_transformers/transformers/optimizer_tf.py, intel_extension_for_transformers/transformers/pruning.py, intel_extension_for_transformers/transformers/quantization.py, intel_extension_for_transformers/transformers/trainer.py.

🟢 Engine Unit Test workflow
| Check ID | Status | Error details |
| --- | --- | --- |
| engine-unit-test-baseline | success | |
| engine-unit-test-PR-test | success | |
| Genreate-Engine-Report | success | |

These checks are required after the changes to intel_extension_for_transformers/transformers/__init__.py, intel_extension_for_transformers/transformers/config.py, intel_extension_for_transformers/transformers/distillation.py, intel_extension_for_transformers/transformers/optimizer.py, intel_extension_for_transformers/transformers/optimizer_tf.py, intel_extension_for_transformers/transformers/pruning.py, intel_extension_for_transformers/transformers/quantization.py, intel_extension_for_transformers/transformers/trainer.py.

🟢 Chat Bot Test workflow
| Check ID | Status | Error details |
| --- | --- | --- |
| call-inference-llama-2-7b-chat-hf / inference test | success | |
| call-inference-mpt-7b-chat / inference test | success | |

These checks are required after the changes to intel_extension_for_transformers/transformers/__init__.py, intel_extension_for_transformers/transformers/config.py, intel_extension_for_transformers/transformers/distillation.py, intel_extension_for_transformers/transformers/optimizer.py, intel_extension_for_transformers/transformers/optimizer_tf.py, intel_extension_for_transformers/transformers/pruning.py, intel_extension_for_transformers/transformers/quantization.py, intel_extension_for_transformers/transformers/trainer.py.


Thank you for your contribution! 💜

Note
This comment is automatically generated and will be updated every 180 seconds within the next 6 hours. If you have any other questions, contact VincyZhang or XuehaoSun for help.

Signed-off-by: changwangss <[email protected]>
@a32543254 (Contributor) left a comment:

LGTM

@kevinintel added the 1.5 label on Jun 21, 2024
@ZePan110 (Contributor) left a comment:

jenkins / nlp-toolkit-validation-top-mr-extension / #1660 (intel.com):
Bertbase_swag_qat passed the test; Xlnet_plm_qat failed with a timeout.
nlp-toolkit-validation-top-mr-extension #1661 [Jenkins] (intel.com):
Sd_poken_diffusers_static and sd_poken_diffusers_dynamic passed the test.

@kevinintel merged commit b7cbdd8 into main on Jun 24, 2024 (21 of 22 checks passed)
@kevinintel deleted the wangchang/quantization2.x branch on Jun 24, 2024 at 02:08