Connecting LibreChat to a local LM Studio, which hosts a Mistral 7B. #1836
18 comments · 9 replies
-
I don't think this is specific to LM Studio, but you may have an incorrect configuration of the librechat.yaml file, or of the compose file if you made edits there:
share either here with sensitive values removed. You can use https://yamlchecker.com/ to verify. Also check your logs in ./logs; the latest debug* and error* logs are both relevant for this.
-
Hi Danny, and thanks for getting back to me. https://yamlchecker.com/ did not detect any errors. The librechat.yaml:

# For more information, see the Configuration Guide:
# https://docs.librechat.ai/install/configuration/custom_config.html

# Configuration version (required)
version: 1.0.3

# Cache settings: Set to true to enable caching
cache: true

# Example Registration Object Structure (optional)
registration:
  socialLogins: ["github", "google", "discord", "openid", "facebook"]
  # allowedDomains:
  # - "gmail.com"

# fileConfig:
#   endpoints:
#     assistants:
#       fileLimit: 5
#       fileSizeLimit: 10  # Maximum size for an individual file in MB
#       totalSizeLimit: 50  # Maximum total size for all files in a single request in MB
#       supportedMimeTypes:
#         - "image/.*"
#         - "application/pdf"
#     openAI:
#       disabled: true  # Disables file uploading to the OpenAI endpoint
#     default:
#       totalSizeLimit: 20
#     YourCustomEndpointName:
#       fileLimit: 2
#       fileSizeLimit: 5
#   serverFileSizeLimit: 100  # Global server file size limit in MB
#   avatarSizeLimit: 2  # Limit for user avatar image size in MB

# rateLimits:
#   fileUploads:
#     ipMax: 100
#     ipWindowInMinutes: 60  # Rate limit window for file uploads per IP
#     userMax: 50
#     userWindowInMinutes: 60  # Rate limit window for file uploads per user

# Definition of custom endpoints
endpoints:
  # assistants:
  #   disableBuilder: false  # Disable Assistants Builder Interface by setting to `true`
  #   pollIntervalMs: 750  # Polling interval for checking assistant updates
  #   timeoutMs: 180000  # Timeout for assistant operations
  #   # Should only be one or the other, either `supportedIds` or `excludedIds`
  #   supportedIds: ["asst_supportedAssistantId1", "asst_supportedAssistantId2"]
  #   # excludedIds: ["asst_excludedAssistantId"]
  custom:
    # Mistral AI API
    - name: "Mistral"  # Unique name for the endpoint
      # For `apiKey` and `baseURL`, you can use environment variables that you define.
      # recommended environment variables:
      apiKey: "not-needed"
      baseURL: "http://127.0.0.1:1234/v1"
      # Models configuration
      models:
        # List of default models to use. At least one value is required.
        default: ["mistral-tiny", "mistral-small", "mistral-medium"]
        # Fetch option: Set to true to fetch models from API.
        fetch: true  # Defaults to false.
      # Optional configurations
      # Title Conversation setting
      titleConvo: true  # Set to true to enable title conversation
      # Title Method: Choose between "completion" or "functions".
      titleMethod: "completion"  # Defaults to "completion" if omitted.
      # Title Model: Specify the model to use for titles.
      titleModel: "mistral-tiny"  # Defaults to "gpt-3.5-turbo" if omitted.
      # Summarize setting: Set to true to enable summarization.
      summarize: false
      # Summary Model: Specify the model to use if summarization is enabled.
      summaryModel: "mistral-tiny"  # Defaults to "gpt-3.5-turbo" if omitted.
      # Force Prompt setting: If true, sends a `prompt` parameter instead of `messages`.
      forcePrompt: false
      # The label displayed for the AI model in messages.
      modelDisplayLabel: "Mistral"  # Default is "AI" when not set.
      # Add additional parameters to the request. Default params will be overwritten.
      addParams:
        safe_prompt: false  # This field is specific to Mistral AI: https://docs.mistral.ai/api/
      # Drop default parameters from the request. See default params in guide linked below.
      # NOTE: For Mistral, it is necessary to drop the following parameters or you will encounter a 422 Error:
      dropParams: ["stop", "user", "frequency_penalty", "presence_penalty"]

    # OpenRouter.ai Example
    - name: "OpenRouter"
      # For `apiKey` and `baseURL`, you can use environment variables that you define.
      # recommended environment variables:
      # Known issue: you should not use `OPENROUTER_API_KEY` as it will then override the `openAI` endpoint to use OpenRouter as well.
      apiKey: "${OPENROUTER_KEY}"
      baseURL: "https://openrouter.ai/api/v1"
      models:
        default: ["gpt-3.5-turbo"]
        fetch: true
      titleConvo: true
      titleModel: "gpt-3.5-turbo"
      summarize: false
      summaryModel: "gpt-3.5-turbo"
      forcePrompt: false
      modelDisplayLabel: "OpenRouter"

# See the Custom Configuration Guide for more information:
# https://docs.librechat.ai/install/configuration/custom_config.html

docker-compose.override.yml:
-
Your override file does generate an error for me as is. Try this:

version: '3.4'

services:
  # USE LIBRECHAT CONFIG FILE
  api:
    volumes:
      - ./librechat.yaml:/app/librechat.yaml
-
Thanks Danny, it launches.
-
Here's the message returned when I tried:

{"level":"error","message":"Failed to fetch models from Mistral API Something happened in setting up the request Cannot read properties of undefined (reading 'status')"}
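If the model fetch itself is what fails, one workaround (the one used in the next reply) is to stop LibreChat from querying the /models route at startup: set fetch: false and list the model manually. A minimal sketch, where the endpoint name and model id are placeholders, and the dummy key reflects the edit note below (LM Studio appears to ignore the key, but LibreChat seems to want a non-empty value):

custom:
  - name: "LM Studio"            # placeholder endpoint name
    apiKey: "sk-dummy"           # dummy key; LM Studio ignores it
    baseURL: "http://localhost:1234/v1"
    models:
      default: ["local-model"]   # placeholder id; LM Studio serves whichever model is loaded
      fetch: false               # skip the /models request that fails above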
-
This is what I used in my librechat.yaml for Mistral in LM Studio:

# Mistral AI API
- name: "Mistral 7B"
  apiKey: "sk-1234"
  baseURL: "http://localhost:1234/v1"
  models:
    default: [
      "local-model",
      ]
    fetch: false
  titleConvo: true
  titleMethod: "completion"
  titleModel: "local-model"
  summarize: false
  summaryModel: "local-model"
  forcePrompt: false
  modelDisplayLabel: "Mistral 7B"
  dropParams: ["stop", "user", "frequency_penalty", "presence_penalty"]

Note: in LM Studio you need to enable "server mode" since it's disabled by default.
Edit: removed "user_provided" and replaced it with a dummy key.
-
Great, thank you very much Fuegovic. And do you have any idea why I can't connect to my LM Studio server?
-
Yes, I did the test and it works.
-
I'm getting the same error when connecting to Mistral via Cloudflare:
Debug logs:
Error logs:
-
Does anyone have any idea why this isn't working? It really seems impossible to connect my LibreChat to my LM Studio.
-
Hi Danny,
-
I give up for now, as I can't find any solution.
-
A quick update on my problem: I still have to connect it to a local Mistral AI on an LM Studio server, but apparently that's not possible yet, as I haven't found anyone who has managed it. The next step would be to run a training session on this AI.
-
Still can't connect to my LM Studio server. |
-
@fuegovic, knowing that I can connect to my LM Studio with Python, could it be that ports like 1234 are not open in Docker?
-
I think you must be close to the problem, because once again I was disappointed to see that it didn't work. Since I'm going to dedicate a server to hosting LibreChat, this won't be a problem for me; however, it will be difficult to test locally. Thanks a lot for your support, @fuegovic.
-
Not sure if it's still needed, but I solved the exact same problem by changing 127.0.0.1 to host.docker.internal (=> baseURL: 'http://host.docker.internal:1234/v1').
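This matches the Docker port question above: inside the LibreChat container, 127.0.0.1 (or localhost) refers to the container itself, not the host machine running LM Studio, which would explain why Python on the host connects while the container cannot. A minimal sketch of the change, assuming the default compose stack (the model id is a placeholder; the extra_hosts mapping is only needed on Linux engines, where host.docker.internal is not defined by default, while Docker Desktop on Windows/macOS provides it automatically):

In librechat.yaml:

    - name: "Mistral"
      apiKey: "not-needed"
      baseURL: "http://host.docker.internal:1234/v1"  # point at the host, not the container
      models:
        default: ["local-model"]   # placeholder id
        fetch: false

In docker-compose.override.yml (Linux engines only, Docker 20.10+):

services:
  api:
    extra_hosts:
      - "host.docker.internal:host-gateway"  # map the name to the host gateway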
-
Hello,
I've set up LibreChat to connect to LM Studio, which hosts a Mistral 7B.
Apparently the whole installation works: on the LM Studio side the server is activated, and LibreChat seems OK, as it is accessible.
I then try to modify LibreChat's configuration to connect it to my LM Studio server:
In docker-compose.override.yml, I uncomment the following lines:

services:
  api:
    volumes:
      - ./librechat.yaml:/app/librechat.yaml
In librechat.yaml I point to my local server, specifying the parameters provided by my LM Studio server:

- name: "Mistral"  # Unique name for the endpoint
  apiKey: "not-needed"
  baseURL: http://localhost:1234/v1
I launch Docker: docker-compose up
This is the error returned:
PS C:\LibreChat> docker-compose up
time="2024-02-19T15:13:11+01:00" level=warning msg="The "UID" variable is not set. Defaulting to a blank string."
time="2024-02-19T15:13:11+01:00" level=warning msg="The "GID" variable is not set. Defaulting to a blank string."
time="2024-02-19T15:13:11+01:00" level=warning msg="The "UID" variable is not set. Defaulting to a blank string."
time="2024-02-19T15:13:11+01:00" level=warning msg="The "GID" variable is not set. Defaulting to a blank string."
time="2024-02-19T15:13:11+01:00" level=warning msg="The "UID" variable is not set. Defaulting to a blank string."
time="2024-02-19T15:13:11+01:00" level=warning msg="The "GID" variable is not set. Defaulting to a blank string."
yaml: line 15: did not find expected key
PS C:\LibreChat>
Has anyone ever tried to connect to an LM Studio server, especially with a Mistral AI?
Vincent.