Unexpected embeddings using GritLM #5783
The paper says they use mean pooling for the embeddings, but currently embeddings for generative models like this are hard-coded to the last token. If you change the code around
const int64_t embd_pos = 0;
const int64_t embd_size = n_embd * n_tokens;
it'll pull in embeddings for each token, and then you can do the mean pooling manually. That said, I just tried it and I'm still getting different results, so there must be something else going on as well. I'm pretty excited about GritLM, especially the document caching tricks they show. Would be cool to get it working here.
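For reference, a minimal sketch of the manual mean pooling step described above, assuming you already have the per-token embeddings as a flat buffer of n_tokens * n_embd floats (the helper name and buffer layout are illustrative, not from the actual patch):

```cpp
#include <vector>

// Mean-pool per-token embeddings into a single sequence embedding.
// `embd` is assumed to point at n_tokens * n_embd floats, token-major,
// i.e. the kind of buffer the embd_pos/embd_size change above would expose.
static std::vector<float> mean_pool(const float * embd, int n_tokens, int n_embd) {
    std::vector<float> out(n_embd, 0.0f);
    for (int t = 0; t < n_tokens; ++t) {
        for (int j = 0; j < n_embd; ++j) {
            out[j] += embd[t * n_embd + j];
        }
    }
    for (int j = 0; j < n_embd; ++j) {
        out[j] /= (float) n_tokens;
    }
    return out;
}
```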
Thanks, I saw that but don't know what it means (no pun intended). Do I also need to change the way I call it? Also, I added some quants on HF.
Good news @dranger003! I finally got the numbers to match up exactly. Check it out at iamlemec/llama.cpp:gritlm. Here are the changes that were needed:
Also note the slight fix to the cosine similarity (we want similarity, not distance).
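For clarity, the score in question is plain cosine similarity (higher means more similar). A self-contained version, not necessarily the exact code in the branch, looks like this:

```cpp
#include <cmath>
#include <cstddef>

// Cosine similarity between two embeddings of length n.
// Returns a value in [-1, 1]; this is a similarity, not a distance.
static float cosine_similarity(const float * a, const float * b, size_t n) {
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (size_t i = 0; i < n; ++i) {
        dot += (double) a[i] * b[i];
        na  += (double) a[i] * a[i];
        nb  += (double) b[i] * b[i];
    }
    if (na == 0.0 || nb == 0.0) {
        return 0.0f; // guard against zero vectors
    }
    return (float) (dot / (std::sqrt(na) * std::sqrt(nb)));
}
```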
Thanks, just tested it and it works great! You even included the sample code, amazing. I think there is a lot of potential in using the same model for both representation and generation, especially when resources are scarce.
I've been trying to add the text generation piece and I noticed two things:
Text generation code
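For readers following along, a rough sketch of what a greedy generation loop against a llama.cpp context looks like is shown below. This is not the collapsed code above, and llama.cpp API names and signatures have shifted between versions, so treat the calls as illustrative of the flow; detokenization is omitted.

```cpp
#include <algorithm>
#include <vector>
#include "llama.h"

// Greedy-decode up to n_predict tokens after a tokenized prompt.
// Assumes the context was created with causal attention; exact API names
// and signatures may differ depending on the llama.cpp version.
static std::vector<llama_token> greedy_generate(
        llama_context * ctx, const llama_model * mdl,
        const std::vector<llama_token> & prompt, int n_predict) {
    std::vector<llama_token> out;
    llama_batch batch = llama_batch_init((int32_t) prompt.size() + 1, 0, 1);

    // feed the whole prompt, requesting logits only for the last position
    batch.n_tokens = (int32_t) prompt.size();
    for (int32_t i = 0; i < batch.n_tokens; ++i) {
        batch.token[i]     = prompt[i];
        batch.pos[i]       = i;
        batch.n_seq_id[i]  = 1;
        batch.seq_id[i][0] = 0;
        batch.logits[i]    = (i == batch.n_tokens - 1);
    }

    int32_t n_past = batch.n_tokens;
    for (int i = 0; i < n_predict; ++i) {
        if (llama_decode(ctx, batch) != 0) {
            break;
        }
        // greedy pick: argmax over the vocabulary at the last evaluated position
        const float * logits = llama_get_logits_ith(ctx, batch.n_tokens - 1);
        const int n_vocab = llama_n_vocab(mdl);
        const llama_token next =
            (llama_token) (std::max_element(logits, logits + n_vocab) - logits);
        if (next == llama_token_eos(mdl)) {
            break;
        }
        out.push_back(next);

        // next iteration feeds just the sampled token at the next position
        batch.n_tokens     = 1;
        batch.token[0]     = next;
        batch.pos[0]       = n_past++;
        batch.n_seq_id[0]  = 1;
        batch.seq_id[0][0] = 0;
        batch.logits[0]    = 1;
    }

    llama_batch_free(batch);
    return out;
}
```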
@iamlemec Am I right in thinking a part two is needed for this to work on the master branch? I saw ggerganov's commit yesterday to fix embeddings, but gritlm doesn't work in master, which is why I'm asking. EDIT: Just saw your v2 branch. If I can help with anything, let me know!
@dranger003 Yup, that's my attempt at rebasing after the new embedding fixes. Unfortunately it's not giving the right results currently. Trying to figure it out now, but if you spot anything, do tell!
@dranger003 Finally figured it out. My
@iamlemec I added the text generation code to the sample and it works great for both modes without reloading the model. Should we submit a PR? EDIT: So it looks like we cannot use both embeddings and causal on the same context? The text generation works, but the embeddings are different when I set both embeddings and causal to true. Is that expected (i.e. do they need separate contexts)?
Just pushed a change that lets you toggle embedding mode. Let's test and think about this for a day or two to see if it's ready for prime time. I might also try to see if I can get the doc caching example working.
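As a point of reference, a single context can be flipped between the two modes roughly like this. This is a sketch assuming the llama_set_embeddings / llama_set_causal_attn calls in current llama.cpp, which may not match the toggle in the branch discussed above:

```cpp
#include "llama.h"

// Switch one context between embedding mode and generation mode.
// GritLM embeds with non-causal (bidirectional) attention and generates with
// causal attention; clearing the KV cache avoids reusing state computed under
// the other attention mode. (The cache-clear helper has been renamed in some
// newer llama.cpp versions.)
static void set_embedding_mode(llama_context * ctx, bool embeddings) {
    llama_set_embeddings(ctx, embeddings);
    llama_set_causal_attn(ctx, !embeddings);
    llama_kv_cache_clear(ctx);
}
```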
I reproduced the embeddings sample from GritLM and llama.cpp returns unexpected embedding values. I have been able to get embeddings to work with other models. I verified the tokenization and all seems good (with and without special tokens and bos/eos).
Below is a sample program to reproduce the issue with the unexpected results. The gritlm python inference code can be found here. The model supports both text generation and text representation (embeddings) and is based on Mistral 7B.
There is nothing special in this code, and it is based off the embeddings sample in llama.cpp, so I'm not sure what is going on. Any guidance is appreciated.
Note: It seems HF is down for maintenance right now, but I'll add a gguf link when they're back online.
sample source code
output