How do you really create the WordLlama model? #20
The author put together a tutorial here: https://github.com/dleemiller/WordLlama/blob/main/tutorials/extract_token_embeddings.md Conceptually, I think of it kind of like wordfreq, though that may not be a fair comparison. For wordllama specifically (l2_supercat, l3_supercat), it looks like the code that created it is here:
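For a concrete sense of what that extraction step can look like, here is a minimal sketch using the Hugging Face `transformers` library. The checkpoint name and save path are placeholders, not WordLlama's actual configuration; the linked tutorial is the authoritative recipe.

```python
# Minimal sketch: pull the input token-embedding matrix out of an LLM.
# The checkpoint name below is a placeholder, not WordLlama's actual source model.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder checkpoint

# The input embedding table maps token IDs -> vectors (vocab_size x hidden_dim).
embeddings = model.get_input_embeddings().weight.detach().cpu()
print(embeddings.shape)

# Save the raw token embeddings for downstream training of a projection.
torch.save(embeddings, "token_embeddings.pt")
```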
I'll add another tutorial for training, but the general process is roughly: train a model with a projection on downstream embedding tasks, run inference on each token in isolation, and save the outputs as a new set of token embeddings.
The key here is that the model we train for wordllama produces identical vectors for each token, regardless of the other tokens in the sentence. As a result, you can run inference on each token, save the output vectors as a new set of token embeddings, and discard the original token embeddings along with the projection tensor and token weight parameters. Not all of the information learned by the LLM token embeddings is relevant to this model, so we are essentially attempting to distill out the specific representation that is relevant to a single token in isolation (like a word embedding) by training a projection on downstream tasks. I'll keep this open for now, and close it when I have a tutorial for training.
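A minimal sketch of that "bake in the projection" step, assuming extracted token embeddings and a trained linear projection; the names, dimensions, and the plain `nn.Linear` are illustrative stand-ins, not WordLlama's actual training code:

```python
# Sketch: since each token's output vector is context-independent, one pass
# over the whole vocabulary yields the final static embedding table.
# `token_embeddings` and `projection` are illustrative stand-ins.
import torch
import torch.nn as nn

vocab_size, d_in, d_out = 32000, 4096, 256          # illustrative sizes
token_embeddings = torch.randn(vocab_size, d_in)    # stand-in for extracted LLM embeddings
projection = nn.Linear(d_in, d_out, bias=False)     # stand-in for the trained projection

with torch.no_grad():
    # Inference every token ID in isolation, all at once.
    new_embeddings = projection(token_embeddings)

# The original embeddings and the projection can now be discarded;
# `new_embeddings` alone serves as the lookup table at inference time.
torch.save(new_embeddings, "distilled_token_embeddings.pt")
```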
Thanks, both of you. I guess I didn't scroll far enough to find tutorials/extract_token_embeddings.md. I think I understand the general process now.
I read this, but sorry, I don't understand how you did it at all. Let's say I have my own transformer-decoder model. What are the specific steps to achieve something like WordLlama?