How would one go about running embedding as a service using something like vLLM? #2
Comments
I think it should be easy to serve GritLM using vLLM or similar and provide access to its embedding capability, its language modeling capability, or both in a single model / endpoint. But I'm not sure about the details of vLLM etc.
Would we just need to get the last hidden states for the embed token and return them from vLLM at inference time?
The last hidden states for the entire sequence to be embedded, and then mean pool them.
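A minimal sketch of that pooling with plain transformers (model name and prompts are illustrative; GritLM's own implementation additionally excludes instruction tokens from the pooled span):

```python
# Sketch: mean pool the last hidden states over the non-padding tokens.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("GritLM/GritLM-7B")
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Mistral-style tokenizers often lack one
model = AutoModel.from_pretrained("GritLM/GritLM-7B", torch_dtype=torch.bfloat16)
model.eval()

texts = ["A happy dog", "A sad cat"]
batch = tokenizer(texts, padding=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, dim)

# Zero out padding positions, then average over the real tokens.
mask = batch["attention_mask"].unsqueeze(-1)               # (batch, seq_len, 1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (batch, dim)
```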
vLLM seems to support an encode method (which we need for an embedding model) after vLLM 0.4.3, but I am running into some issues. When I run GritLM in unified mode, vLLM doesn't seem to consider GritLM an embedding model and doesn't allow calling the encode() function (see vllm-project/vllm#6015), and I get an error. Has anyone hit the same problem, or is anyone working on an embedding model with vLLM?
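For reference, a hedged sketch of what offline embedding looks like once a vLLM version recognizes the model as an embedding model; the task name ("embedding" vs. "embed") and GritLM support vary by version, so check the docs for your release:

```python
# Sketch: offline embedding via vLLM's encode() API.
# Assumes a vLLM version whose pooling runner supports this model.
from vllm import LLM

llm = LLM(model="GritLM/GritLM-7B", task="embedding")

outputs = llm.encode(["A happy dog", "A sad cat"])
for out in outputs:
    vector = out.outputs.embedding  # list[float], one vector per prompt
    print(len(vector))
```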
Have you solved the problem?
I would like to run embedding as a service using something like vLLM in a Docker container on a different host. How would one go about doing this?
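One hedged way to wire this up, assuming the official vllm/vllm-openai image and its OpenAI-compatible /v1/embeddings route (image tag, flags, and host name below are illustrative, not prescriptive):

```python
# Sketch: query a vLLM OpenAI-compatible server running on another host.
# Assumes the server was started on the GPU host with something like:
#   docker run --gpus all -p 8000:8000 vllm/vllm-openai:latest \
#       --model GritLM/GritLM-7B --task embedding
# (the --task flag and embedding support depend on your vLLM version).
from openai import OpenAI

client = OpenAI(base_url="http://gpu-host:8000/v1", api_key="EMPTY")

resp = client.embeddings.create(
    model="GritLM/GritLM-7B",
    input=["A happy dog", "A sad cat"],
)
vectors = [d.embedding for d in resp.data]
print(len(vectors), len(vectors[0]))
```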