Commit b2d5768: Merge branch 'main' into hengguo/h2o
Authored by n1ck-guo on Jul 29, 2024
2 parents: 2706d5d + b00652d
Showing 8 changed files with 12 additions and 8 deletions.
4 changes: 3 additions & 1 deletion docs/publication.md
@@ -1,7 +1,9 @@
 Full Publications/Events (50)
 ==========

-## 2024 (12)
+## 2024 (14)
+* Blog published on Intel information: [Solutions Accelerate the Latest Meta Llama 3.1 Models (解决方案为最新Meta Llama 3.1模型提供加速)](https://mp.weixin.qq.com/s/Qk3mSWPW8qdIMswpEJxVoA)
+* Blog published on Intel Developer News: [Intel AI Solutions Boost LLMs: Unleashing the Power of Meta* Llama 3.1](https://www.intel.com/content/www/us/en/developer/articles/technical/intel-ai-solutions-support-meta-llama-3-1-launch.html)
 * Blog published on digit.in: [AI hallucination in LLM and beyond: Will it ever be fixed?](https://www.digit.in/features/general/ai-hallucination-in-llm-and-beyond-can-it-be-fixed-completely.html)
 * Blog published on Medium: [Accelerating Qwen2 Models with Intel Extension for Transformers](https://medium.com/intel-analytics-software/accelerating-qwen2-models-with-intel-extension-for-transformers-99403de82f68) (June 2024)
 * Blog published on Huggingface: [Building Cost-Efficient Enterprise RAG applications with Intel Gaudi 2 and Intel Xeon](https://huggingface.co/blog/cost-efficient-rag-applications-with-intel) (May 2024)
@@ -11,5 +11,5 @@ tiktoken #code_gen
 neural-compressor
 intel_extension_for_pytorch==2.3.0
 git+https://github.com/huggingface/optimum-intel.git@50d867c13b22c22eda451ddb67bddb8159670f85
-auto-round==0.2
+git+https://github.com/intel/auto-round.git@61cf9eef4a3ccb5a2d83a557deb709091a548581
 git+https://github.com/bigcode-project/bigcode-evaluation-harness@094c7cc197d13a53c19303865e2056f1c7488ac1
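For reference, a pip requirement of the form git+https://...@<commit> tells pip to build the package from that exact commit rather than from a PyPI release, which is how auto-round is pinned throughout this commit. The same pin can be installed directly; a minimal illustration, assuming a POSIX shell:

    pip install "git+https://github.com/intel/auto-round.git@61cf9eef4a3ccb5a2d83a557deb709091a548581"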
@@ -12,6 +12,6 @@ bitsandbytes #baichuan
 transformers_stream_generator
 tiktoken #qwen
 einops #qwen
-auto-round
+git+https://github.com/intel/auto-round.git@e24b9074af6cdb099e31c92eb81b7f5e9a4a244e
 git+https://github.com/intel/neural-compressor.git
 lm-eval==0.4.3
@@ -11,7 +11,7 @@ transformers_stream_generator
 tiktoken #qwen
 einops #qwen
 git+https://github.com/intel/neural-speed.git
-auto-round==0.2
+git+https://github.com/intel/auto-round.git@e24b9074af6cdb099e31c92eb81b7f5e9a4a244e
 git+https://github.com/intel/neural-compressor.git
 lm-eval==0.4.3
 huggingface_hub
@@ -658,7 +658,7 @@ def convert_to_quantized_model(model, config, device="cpu"):
             lr=config.lr,
             minmax_lr=config.minmax_lr,
             seqlen=config.seq_len,
-            n_samples=config.n_samples,
+            nsamples=config.n_samples,
             iters=config.iters,
             scale_dtype=config.scale_dtype,
         )
@@ -672,7 +672,7 @@ def convert_to_quantized_model(model, config, device="cpu"):
                 dataset_name="NeelNanda/pile-10k",
                 seed=42,
                 bs=config.batch_size,
-                n_samples=config.n_samples)
+                nsamples=config.n_samples)
         run_fn = run_fn_for_autoround
         run_args = (dataloader,)
         model = prepare(model=model, quant_config=quant_config)
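The functional change in these two hunks is only the keyword rename from n_samples to nsamples, matching the parameter name the newly pinned auto-round commit expects. Where code has to run against both spellings, a small signature-inspecting shim can dispatch to whichever keyword the callee declares; this is a hypothetical helper for illustration, not part of the repository:

    import inspect

    def call_with_sample_count(fn, n_samples, **kwargs):
        # Pass the calibration sample count under whichever keyword `fn`
        # declares: newer auto-round APIs spell it `nsamples`, older ones
        # `n_samples` (an assumption based on this rename, not a documented API).
        params = inspect.signature(fn).parameters
        key = "nsamples" if "nsamples" in params else "n_samples"
        kwargs[key] = n_samples
        return fn(**kwargs)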
2 changes: 1 addition & 1 deletion tests/CI/test_quantization.py
@@ -432,7 +432,7 @@ def test_quantization_for_llm(self):
         woq_model.eval()
         output = woq_model(dummy_input)
         if CpuInfo().bf16:
-            self.assertTrue(isclose(float(output[0][0][0][0]), 0.1513671875, rel_tol=1e-04))
+            self.assertTrue(isclose(float(output[0][0][0][0]), 0.150390625, rel_tol=1e-04))

     def test_export(self):
         # test model with model_id
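The expected first bf16 logit is updated here, presumably tracking the newer auto-round pin; math.isclose with rel_tol=1e-04 accepts values within roughly 0.01% of the reference. A standalone illustration of that tolerance, using made-up nearby values rather than anything from the actual test run:

    from math import isclose

    ref = 0.150390625                                # new expected value
    assert isclose(0.15040, ref, rel_tol=1e-04)      # ~0.006% off: passes
    assert not isclose(0.15055, ref, rel_tol=1e-04)  # ~0.1% off: fails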
2 changes: 2 additions & 0 deletions tests/CI/test_weight_only.py
@@ -208,6 +208,7 @@ def test_auto_model_saving_loading(self):
             module_list.append(name)
         self.assertTrue(len(module_list) > 0)

+    @unittest.skip("need bug fix.")
     def test_nf4_training(self):
         quantization_config = RtnConfig(bits=4, weight_dtype="nf4", scale_dtype="fp32")
         model = AutoModelForCausalLM.from_pretrained(
@@ -251,6 +252,7 @@ def test_nf4_training(self):
             module.unmerge()
         model.merge_and_unload()

+    @unittest.skip("need bug fix.")
     def test_int8_training(self):
         model = AutoModelForCausalLM.from_pretrained(
             llama_model_path, load_in_8bit=True, use_neural_speed=False)
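Both training tests are disabled with unittest.skip, which reports a test as skipped (with the given reason) instead of running it, so the tests stay visible in the suite until the underlying bug is fixed. A minimal, self-contained illustration of the decorator, unrelated to this repository's tests:

    import unittest

    class SkipExample(unittest.TestCase):
        @unittest.skip("need bug fix.")  # shown as the skip reason in reports
        def test_disabled(self):
            self.fail("never executed while the skip decorator is present")

        def test_enabled(self):
            self.assertEqual(1 + 1, 2)

    if __name__ == "__main__":
        unittest.main()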
2 changes: 1 addition & 1 deletion tests/requirements.txt
@@ -1,13 +1,13 @@
 --extra-index-url https://download.pytorch.org/whl/cpu
 accelerate
 auto-gptq
-auto-round==0.2
 bitsandbytes
 datasets==2.16.1
 einops
 evaluate
 gguf
 git+https://github.com/huggingface/optimum-intel.git@50d867c13b22c22eda451ddb67bddb8159670f85
+git+https://github.com/intel/auto-round.git@61cf9eef4a3ccb5a2d83a557deb709091a548581
 git+https://github.com/intel/neural-compressor.git
 git+https://github.com/intel/neural-speed.git
 intel-extension-for-pytorch==2.3.0
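The --extra-index-url line at the top of this file makes pip consult the PyTorch CPU wheel index in addition to PyPI when resolving these pins. Installing the whole file is the usual one-liner (illustrative):

    pip install -r tests/requirements.txt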
