
fix: temporary disable warm up for deepseek-r1 #379

Merged
vansangpfiev merged 2 commits into main from fix/no-warmup-for-deepseek-r1 on Jan 21, 2025

Conversation

vansangpfiev
Contributor

This pull request changes src/llama_engine.cc to handle specific model types and improve the inference process. The most important changes are:

Model handling improvements:

  • src/llama_engine.cc: Added a condition to skip the warm-up process for models whose IDs contain "deepseek-r1" (a sketch of this check follows below).
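A minimal sketch of what such a guard might look like, assuming the engine exposes the model ID as a std::string; the function name ShouldSkipWarmup and the commented call site are illustrative, not the actual code from this PR.

```cpp
#include <string>

// Illustrative helper: returns true for model IDs that contain the
// substring "deepseek-r1", so the caller can skip the warm-up pass.
bool ShouldSkipWarmup(const std::string& model_id) {
  return model_id.find("deepseek-r1") != std::string::npos;
}

// Hypothetical call site during model loading:
//   if (!ShouldSkipWarmup(model_id)) {
//     WarmUpModel();  // placeholder for the engine's warm-up routine
//   }
```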

Inference process enhancements:

  • src/llama_engine.cc: Modified the HandleInferenceImpl method to append a specific end-of-sentence marker for models whose IDs contain "deepseek-r1" when the input role is "assistant" (see the sketch below).
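The second change could look roughly like the following, assuming messages carry string role and content fields; the function name and the marker value are assumptions here, since the PR text only says "a specific end-of-sentence marker".

```cpp
#include <string>

// Illustrative sketch: for DeepSeek-R1 models, append an end-of-sentence
// marker to messages whose role is "assistant". The marker value is an
// assumption; the PR only mentions "a specific end-of-sentence marker".
std::string AppendEosIfNeeded(const std::string& model_id,
                              const std::string& role,
                              std::string content) {
  constexpr const char* kEosMarker = "<｜end▁of▁sentence｜>";  // assumed token
  if (role == "assistant" &&
      model_id.find("deepseek-r1") != std::string::npos) {
    content += kEosMarker;
  }
  return content;
}
```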

vansangpfiev merged commit e51e22c into main on Jan 21, 2025
32 checks passed
vansangpfiev deleted the fix/no-warmup-for-deepseek-r1 branch on January 21, 2025 at 22:57
vansangpfiev pushed commits that referenced this pull request on Feb 3, 2025
vansangpfiev added a commit that referenced this pull request on Feb 3, 2025