We have fixed the blank-output issue for deepseek-coder models; you could upgrade to ipex-llm>=2.2.0b20250115 and try again with the updated deepseek GPU example.
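For reference, after upgrading you can confirm the installed version before re-running the example. The pip command in the comment follows the IPEX-LLM XPU install guide; please double-check it against the docs for your platform:

```python
# Upgrade first, e.g. (command from the IPEX-LLM XPU install guide; verify for your platform):
#   pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
# Then confirm the installed version is at least 2.2.0b20250115:
from importlib.metadata import version

print(version("ipex-llm"))  # expect 2.2.0b20250115 or newer
```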
For your second issue regarding deepseek-ai/deepseek-coder-6.7b-instruct: this happens because your machine does not have enough memory to load the original 6.7B model before the IPEX-LLM low-bit optimization is applied. You can use a machine with sufficient memory to save the IPEX-LLM low-bit model, then load that low-bit model on the current machine for inference. You could refer to our Save-Load example for more information.
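A minimal sketch of that save-then-load flow, following the `save_low_bit`/`load_low_bit` pattern from the Save-Load example (the paths below are placeholders):

```python
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "deepseek-ai/deepseek-coder-6.7b-instruct"
save_dir = "./deepseek-coder-6.7b-instruct-low-bit"  # placeholder path

# Step 1 (on a machine with enough memory): load with low-bit optimization, then save.
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model.save_low_bit(save_dir)
AutoTokenizer.from_pretrained(model_path, trust_remote_code=True).save_pretrained(save_dir)

# Step 2 (on the memory-constrained machine): load the saved low-bit model directly,
# which avoids materializing the original full-precision weights.
model = AutoModelForCausalLM.load_low_bit(save_dir, trust_remote_code=True).to("xpu")
tokenizer = AutoTokenizer.from_pretrained(save_dir, trust_remote_code=True)
```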
If no machine with more memory is available, you could also try increasing your virtual memory so that the original model can be loaded.
Following the guide at https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/HuggingFace/LLM/deepseek, I changed the model to deepseek-ai/deepseek-coder-1.3b-instruct; the code I ran is sketched below.
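The loading and generation code was adapted from that example with only the model path changed, roughly as follows (the prompt is just an illustration; exact script details may differ):

```python
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "deepseek-ai/deepseek-coder-1.3b-instruct"

# Load with IPEX-LLM low-bit optimization, as in the GPU example.
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             optimize_model=True,
                                             trust_remote_code=True,
                                             use_cache=True)
model = model.half().to("xpu")  # move to the Intel GPU
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "def quick_sort(arr):"
input_ids = tokenizer.encode(prompt, return_tensors="pt").to("xpu")
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```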
The results are as follows:
Please help me.