
feat: set a default temperature in the common local llm settings #696

Merged 1 commit into main on Dec 25, 2023

Conversation

@cpacker (Collaborator) commented on Dec 25, 2023

Please describe the purpose of this pull request.

  • Set a default temperature in the common local LLM settings (use 0.8; most user-friendly LLM frontends seem to use 0.7/0.8 at the moment). A rough sketch of the idea follows this list.
  • This shouldn't change LM Studio calls (0.8 default), but it should change vLLM calls (1.0 default).
    • vLLM's default settings make it hard for the LLM to output structured content (e.g. lists).
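A minimal sketch of the change, assuming a shared settings dict that gets merged into each local backend's request payload. The names (`DEFAULT_LOCAL_LLM_SETTINGS`, `build_request_payload`) are hypothetical and only illustrate the mechanism; they are not the actual identifiers in this PR.

```python
# Hypothetical sketch: a common settings dict merged into every local backend's
# request payload. The point of this PR is pinning temperature to 0.8 instead of
# relying on each backend's own default (LM Studio already uses 0.8, vLLM uses 1.0).

DEFAULT_LOCAL_LLM_SETTINGS = {
    "temperature": 0.8,  # explicit default so vLLM no longer falls back to 1.0
}


def build_request_payload(prompt: str, backend_overrides: dict | None = None) -> dict:
    """Merge the common local LLM settings into the payload sent to a backend."""
    payload = {"prompt": prompt, **DEFAULT_LOCAL_LLM_SETTINGS}
    if backend_overrides:
        payload.update(backend_overrides)  # per-backend settings still win
    return payload


# Example: a vLLM call now carries temperature=0.8 unless explicitly overridden.
print(build_request_payload("List three fruits:"))
```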

How to test

Regression test to check for runtime errors on the various backends (caused by the additional param in the payload); a rough sketch of such a check follows the list:

  • lmstudio
  • ollama
  • webui
  • llama.cpp
  • koboldcpp
  • vLLM
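One possible way to run this regression check, assuming each backend exposes a completions-style HTTP endpoint. The endpoint URLs below are illustrative defaults, not the project's actual configuration, and the goal is only to confirm that the extra `temperature` field does not cause a runtime error.

```python
# Hypothetical regression check: POST a payload containing the new "temperature"
# field to each local backend and confirm the request is accepted.
import requests

BACKEND_ENDPOINTS = {
    "lmstudio": "http://localhost:1234/v1/completions",  # assumed local default
    "vllm": "http://localhost:8000/v1/completions",      # assumed local default
}

payload = {"prompt": "Say hi.", "max_tokens": 8, "temperature": 0.8}

for name, url in BACKEND_ENDPOINTS.items():
    try:
        resp = requests.post(url, json=payload, timeout=30)
        resp.raise_for_status()
        print(f"{name}: OK")
    except requests.RequestException as exc:
        print(f"{name}: FAILED ({exc})")
```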

Have you tested this PR?

See checklist

@cpacker cpacker merged commit a2543e6 into main Dec 25, 2023
10 checks passed
@cpacker cpacker deleted the default-temp branch December 25, 2023 07:36
sarahwooders pushed a commit that referenced this pull request Dec 26, 2023
norton120 pushed a commit to norton120/MemGPT that referenced this pull request Feb 15, 2024
mattzh72 pushed a commit that referenced this pull request Oct 9, 2024