This repository has been archived by the owner on Oct 25, 2024. It is now read-only.

[pre-commit.ci] pre-commit autoupdate #1646

Merged
merged 7 commits into from Jul 4, 2024
98 changes: 49 additions & 49 deletions .github/checkgroup.yml
@@ -65,54 +65,54 @@ subprojects:
- "engine-unit-test-PR-test"
- "Genreate-Engine-Report"

- id: "Windows Binary Test"
paths:
- ".github/workflows/windows-test.yml"
- "requirements.txt"
- "setup.py"
- "intel_extension_for_transformers/transformers/runtime/**"
- "intel_extension_for_transformers/transformers/llm/operator/**"
- "!intel_extension_for_transformers/transformers/runtime/third_party/**"
- "!intel_extension_for_transformers/transformers/runtime/docs/**"
- "!intel_extension_for_transformers/transformers/runtime/test/**"
checks:
- "Windows-Binary-Test"
# - id: "Windows Binary Test"
# paths:
# - ".github/workflows/windows-test.yml"
# - "requirements.txt"
# - "setup.py"
# - "intel_extension_for_transformers/transformers/runtime/**"
# - "intel_extension_for_transformers/transformers/llm/operator/**"
# - "!intel_extension_for_transformers/transformers/runtime/third_party/**"
# - "!intel_extension_for_transformers/transformers/runtime/docs/**"
# - "!intel_extension_for_transformers/transformers/runtime/test/**"
# checks:
# - "Windows-Binary-Test"

- id: "LLM Model Test workflow"
paths:
- ".github/workflows/llm-test.yml"
- ".github/workflows/script/models/run_llm.sh"
- "intel_extension_for_transformers/transformers/runtime/**"
- "!intel_extension_for_transformers/transformers/runtime/kernels/**"
- "!intel_extension_for_transformers/transformers/runtime/test/**"
- "!intel_extension_for_transformers/transformers/runtime/third_party/**"
- "!intel_extension_for_transformers/transformers/runtime/docs/**"
checks:
- "LLM-Workflow (gpt-j-6b, engine, latency, bf16,int8,fp8)"
- "Generate-LLM-Report"
# - id: "LLM Model Test workflow"
# paths:
# - ".github/workflows/llm-test.yml"
# - ".github/workflows/script/models/run_llm.sh"
# - "intel_extension_for_transformers/transformers/runtime/**"
# - "!intel_extension_for_transformers/transformers/runtime/kernels/**"
# - "!intel_extension_for_transformers/transformers/runtime/test/**"
# - "!intel_extension_for_transformers/transformers/runtime/third_party/**"
# - "!intel_extension_for_transformers/transformers/runtime/docs/**"
# checks:
# - "LLM-Workflow (gpt-j-6b, engine, latency, bf16,int8,fp8)"
# - "Generate-LLM-Report"

- id: "Chat Bot Test workflow"
paths:
- ".github/workflows/chatbot-test.yml"
- ".github/workflows/chatbot-inference-llama-2-7b-chat-hf.yml"
- ".github/workflows/chatbot-inference-mpt-7b-chat.yml"
- ".github/workflows/chatbot-finetune-mpt-7b-chat.yml"
- ".github/workflows/chatbot-inference-llama-2-7b-chat-hf-hpu.yml"
- ".github/workflows/chatbot-inference-mpt-7b-chat-hpu.yml"
- ".github/workflows/chatbot-finetune-mpt-7b-chat-hpu.yml"
- ".github/workflows/script/chatbot/**"
- ".github/workflows/sample_data/**"
- "intel_extension_for_transformers/neural_chat/**"
- "intel_extension_for_transformers/transformers/llm/finetuning/**"
- "intel_extension_for_transformers/transformers/llm/quantization/**"
- "intel_extension_for_transformers/transformers/**"
- "workflows/chatbot/inference/**"
- "workflows/chatbot/fine_tuning/**"
- "!intel_extension_for_transformers/neural_chat/docs/**"
- "!intel_extension_for_transformers/neural_chat/tests/ci/**"
- "!intel_extension_for_transformers/neural_chat/examples/**"
- "!intel_extension_for_transformers/neural_chat/assets/**"
- "!intel_extension_for_transformers/neural_chat/README.md"
checks:
- "call-inference-llama-2-7b-chat-hf / inference test"
- "call-inference-mpt-7b-chat / inference test"
# - id: "Chat Bot Test workflow"
# paths:
# - ".github/workflows/chatbot-test.yml"
# - ".github/workflows/chatbot-inference-llama-2-7b-chat-hf.yml"
# - ".github/workflows/chatbot-inference-mpt-7b-chat.yml"
# - ".github/workflows/chatbot-finetune-mpt-7b-chat.yml"
# - ".github/workflows/chatbot-inference-llama-2-7b-chat-hf-hpu.yml"
# - ".github/workflows/chatbot-inference-mpt-7b-chat-hpu.yml"
# - ".github/workflows/chatbot-finetune-mpt-7b-chat-hpu.yml"
# - ".github/workflows/script/chatbot/**"
# - ".github/workflows/sample_data/**"
# - "intel_extension_for_transformers/neural_chat/**"
# - "intel_extension_for_transformers/transformers/llm/finetuning/**"
# - "intel_extension_for_transformers/transformers/llm/quantization/**"
# - "intel_extension_for_transformers/transformers/**"
# - "workflows/chatbot/inference/**"
# - "workflows/chatbot/fine_tuning/**"
# - "!intel_extension_for_transformers/neural_chat/docs/**"
# - "!intel_extension_for_transformers/neural_chat/tests/ci/**"
# - "!intel_extension_for_transformers/neural_chat/examples/**"
# - "!intel_extension_for_transformers/neural_chat/assets/**"
# - "!intel_extension_for_transformers/neural_chat/README.md"
# checks:
# - "call-inference-llama-2-7b-chat-hf / inference test"
# - "call-inference-mpt-7b-chat / inference test"
1 change: 1 addition & 0 deletions .github/workflows/script/formatScan/nlp_dict.txt
@@ -1,5 +1,6 @@
alse
ans
assertIn
bu
charactor
daa
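
This dictionary is fed to codespell through `--ignore-words` (see the .pre-commit-config.yaml change below), so tokens listed here are never "corrected"; adding `assertIn` presumably keeps the updated codespell from rewriting that identifier. A sketch of the equivalent manual invocation, using the exact arguments from the hook:

```sh
# -w writes fixes in place; dictionary-listed tokens are left alone
codespell -w --ignore-words=.github/workflows/script/formatScan/nlp_dict.txt
```
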
4 changes: 2 additions & 2 deletions .pre-commit-config.yaml
@@ -4,7 +4,7 @@ ci:

repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.5.0
rev: v4.6.0
hooks:
- id: debug-statements
- id: mixed-line-ending
@@ -44,7 +44,7 @@ repos:
)$

- repo: https://github.com/codespell-project/codespell
rev: v2.2.6
rev: v2.3.0
hooks:
- id: codespell
args: [-w, --ignore-words=.github/workflows/script/formatScan/nlp_dict.txt]
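
These `rev` bumps are exactly what pre-commit.ci's autoupdate produces. A sketch of reproducing the same update locally with standard pre-commit commands:

```sh
# Bump every hook's rev to the latest tagged release
pre-commit autoupdate

# Re-run all hooks across the tree to surface anything the new versions flag
pre-commit run --all-files
```
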
2 changes: 1 addition & 1 deletion docs/code_of_conduct.md
@@ -14,7 +14,7 @@ In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
level of experience, education, socioeconomic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.

## Our Standards
@@ -74,7 +74,7 @@ namespace qsl {
}
}

// Splice them togather
// Splice them together
Queue_t result;
for (auto& q : Buckets)
result.splice(result.end(), std::move(q));
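
The corrected comment refers to std::list::splice, which relinks nodes from one list onto another in constant time instead of copying elements. A self-contained sketch of the same bucket-merge pattern:

```cpp
#include <iostream>
#include <list>
#include <vector>

int main() {
    std::vector<std::list<int>> Buckets = {{1, 2}, {3}, {4, 5}};
    std::list<int> result;
    // Each splice moves a bucket's nodes to the end of result in O(1)
    for (auto& q : Buckets)
        result.splice(result.end(), std::move(q));
    for (int v : result) std::cout << v << ' ';  // prints: 1 2 3 4 5
}
```
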
@@ -295,7 +295,7 @@ def postprocess_qa_predictions_with_beam_search(

assert len(predictions[0]) == len(
features
), f"Got {len(predictions[0])} predicitions and {len(features)} features."
), f"Got {len(predictions[0])} predictions and {len(features)} features."

# Build a map example to its corresponding features.
example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
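
For orientation, the dict comprehension in the context just below the fixed assertion builds an id-to-position map; a toy run with made-up ids (hypothetical data, not from the PR):

```python
examples = {"id": ["56be4db0", "56be4db1", "56be4db2"]}

# Same comprehension as in the diff: map each example id to its index
example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
assert example_id_to_index == {"56be4db0": 0, "56be4db1": 1, "56be4db2": 2}
```
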
@@ -14,7 +14,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Pipeline Modificaiton based from the diffusers 0.12.1 StableDiffusionInstructPix2PixPipeline"""
"""Pipeline Modification based from the diffusers 0.12.1 StableDiffusionInstructPix2PixPipeline"""

import inspect
from typing import Callable, List, Optional, Union
@@ -137,7 +137,7 @@ python run_executor.py --ir_path=./qat_int8_ir --mode=latency --input_model=runw
## 3. Accuracy
Frechet Inception Distance(FID) metric is used to evaluate the accuracy. This case we check the FID scores between the pytorch image and engine image.

By setting --accuracy to check FID socre.
By setting --accuracy to check FID score.
Python API command as follows:
```python
# FP32 IR
```
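
As background, FID compares feature statistics of two image sets, and lower is better. One way to compute it independently of this repo's tooling is torchmetrics; a minimal sketch (torchmetrics and its torch-fidelity dependency are assumptions here, not what run_executor.py uses):

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# Compare two batches of uint8 images shaped [N, 3, 299, 299]
fid = FrechetInceptionDistance(feature=64)
real = torch.randint(0, 200, (100, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(100, 255, (100, 3, 299, 299), dtype=torch.uint8)
fid.update(real, real=True)
fid.update(fake, real=False)
print(fid.compute())  # lower means the two distributions are closer
```
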
@@ -14,7 +14,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Pipeline Modificaiton based from the diffusers 0.12.1 StableDiffusionImg2ImgPipeline"""
"""Pipeline Modification based from the diffusers 0.12.1 StableDiffusionImg2ImgPipeline"""

import inspect
from typing import Callable, List, Optional, Union
@@ -77,7 +77,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Install requirements that have denpendency on stock pytorch"
"Install requirements that have dependency on stock pytorch"
]
},
{
@@ -69,7 +69,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Install requirements that have denpendency on stock pytorch"
"Install requirements that have dependency on stock pytorch"
]
},
{
@@ -75,7 +75,7 @@ mkdir /home/nfs_images
export IMAGE_SERVER_IP="your.server.ip"
```

# Configurate photoai.yaml
# Configure photoai.yaml

You can customize the configuration file `photoai.yaml` to match your environment setup. Here's a table to help you understand the configurable options:

@@ -91,9 +91,9 @@ You can customize the configuration file `photoai.yaml` to match your environmen
| tasks_list | ['voicechat', 'photoai'] |


# Configurate Environment Variables
# Configure Environment Variables

Configurate all of the environment variables in file `run.sh` using `export XXX=xxx`. Here's a table of all the variables needed to configurate.
Configure all of the environment variables in file `run.sh` using `export XXX=xxx`. Here's a table of all the variables needed to configure.

| Variable | Value |
| ------------------- | ---------------------------------------|
@@ -11,7 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Parse and Evalate."""
"""Parse and Evaluate."""
import os
import json

@@ -11,7 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Parse and Evalate."""
"""Parse and Evaluate."""
import os
import json
import shlex
@@ -14,7 +14,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Pipeline Modificaiton based from the diffusers 0.12.1 StableDiffusionInstructPix2PixPipeline."""
"""Pipeline Modification based from the diffusers 0.12.1 StableDiffusionInstructPix2PixPipeline."""

import inspect
from typing import Callable, List, Optional, Union
@@ -140,7 +140,7 @@ def get_environ_info():


def search_straight_pattern(input_pattern, graph):
"""Search user specified patterns on internal grpah structure.
"""Search user specified patterns on internal graph structure.

Attention: the input computation chain in the graph which can be called pattern, there must be
straight (or sequence). It means it has not any subgraph nodes. Otherwise this
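
A toy illustration of the "straight" constraint in the fixed docstring: with no subgraph branches, pattern search over the chain reduces to contiguous matching of op types. A hypothetical sketch for intuition, not the repo's actual implementation:

```python
def find_straight_pattern(op_types, pattern):
    """Return start indices where `pattern` occurs as a contiguous run."""
    n, m = len(op_types), len(pattern)
    return [i for i in range(n - m + 1) if op_types[i:i + m] == pattern]

# A MatMul -> BiasAdd -> Relu chain inside a longer straight sequence
ops = ["Reshape", "MatMul", "BiasAdd", "Relu", "Softmax"]
print(find_straight_pattern(ops, ["MatMul", "BiasAdd", "Relu"]))  # [1]
```
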
@@ -73,7 +73,7 @@ def get_initializer_children_names(model, initializer):
def graph_node_names_details(model):
"""Parse the graph nodes ans get the graph_nodes_dict.

Be used for Grpah class with creating a new graph.
Be used for Graph class with creating a new graph.
The node_name is the key, node in value is for getting the Const
tensor value and the input_tensor source op; output_names in value
is the node output name list; outputs in value is for output_tensor dest op
@@ -155,7 +155,7 @@ def bias_to_int32(bias_node, a_scale, b_scale):
bias_node: bias_add in graph (from onnx framework)
a_scale: matmul node input matrice a scale tensor
b_scale: matmul node input matrice b scale tensor
model: Grpah class
model: Graph class

Returns:
int32 bias numpy array
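
The usual arithmetic behind a helper like bias_to_int32 (a general int8-quantization identity, assumed rather than taken from this file): the int32 accumulator of an int8 matmul carries scale a_scale * b_scale, so the float bias is divided by that product before rounding.

```python
import numpy as np

def bias_to_int32_sketch(bias_fp32, a_scale, b_scale):
    # acc_int32 * (a_scale * b_scale) ~= acc_fp32, so the bias must be
    # pre-divided by the same combined scale to add in the int32 domain
    return np.round(np.asarray(bias_fp32) / (a_scale * b_scale)).astype(np.int32)

print(bias_to_int32_sketch([0.5, -0.25], a_scale=0.02, b_scale=0.01))  # [ 2500 -1250]
```
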
@@ -48,7 +48,7 @@ def create_tf_node(op, name, inputs):
def graph_node_names_details(nodes):
"""Parse the graph nodes ans get the graph_nodes_dict.

Be used for Grpah class when converting a tensorflow computation graph to an engine graph.
Be used for Graph class when converting a tensorflow computation graph to an engine graph.
The node_name is the key, node in value is for getting the Const
tensor value and the input_tensor source op; outputs in value is for
output_tensor dest op.
@@ -68,7 +68,7 @@ PI32_CONST256(inv_mant_mask, ~0x7f800000);

PI32_CONST256(0x7f, 0x7f);

// evaluation of 8 sines at once using AVX intrisics
// evaluation of 8 sines at once using AVX intrinsics
__m256 sinf(__m256 x) {
__m256 sign_bit = x;
// take the absolute value
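
For context on the corrected comment: each __m256 register holds eight packed floats, so one intrinsic processes eight lanes at once; the "absolute value" step right after the comment is conventionally a single bitwise AND that clears the sign bits (a generic illustration, not this kernel's exact code):

```cpp
#include <cstdio>
#include <immintrin.h>

int main() {
    __m256 x = _mm256_set1_ps(-1.5f);
    // |x| for all 8 lanes at once: clear each IEEE-754 sign bit
    __m256 abs_mask = _mm256_castsi256_ps(_mm256_set1_epi32(0x7fffffff));
    __m256 ax = _mm256_and_ps(x, abs_mask);
    float out[8];
    _mm256_storeu_ps(out, ax);
    printf("%f\n", out[0]);  // 1.500000 (compile with -mavx)
}
```
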