
How do you really create WordLlama model? #20

Open
dinhanhx opened this issue Sep 21, 2024 · 3 comments

@dinhanhx

WordLlama begins by extracting the token embedding codebook from a state-of-the-art LLM (e.g., LLama3 70B), and training a small context-less model in a general purpose embedding framework.

I read this, but sorry, I don't understand how you did it at all. Let's say I have my own transformer-decoder model. What are the specific steps to achieve something like WordLlama?

@chapmanjacobd

Let's say I have my own transformer-decoder model. What are the specific steps to achieve something like WordLlama?

The author put together a tutorial here: https://github.com/dleemiller/WordLlama/blob/main/tutorials/extract_token_embeddings.md
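In case it helps, here is a rough sketch of what that extraction step boils down to — my own illustration, not the tutorial's exact code, and the model id and output filename are just placeholders:

```python
# Rough sketch: pull the token embedding codebook out of a Hugging Face causal LM.
import torch
from transformers import AutoModelForCausalLM
from safetensors.torch import save_file

model_id = "meta-llama/Meta-Llama-3-8B"  # placeholder; any LLM whose tokenizer you want to reuse

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

# The "token embedding codebook" is just the input embedding matrix,
# shape (vocab_size, hidden_dim).
embeddings = model.get_input_embeddings().weight.detach().to(torch.float32).cpu()
print(embeddings.shape)

save_file({"token_embeddings": embeddings.contiguous()}, "token_embeddings.safetensors")
```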

Conceptually, I think about it kind of like wordfreq, but maybe that is not a correct comparison.

For wordllama specifically (l2_supercat, l3_supercat), it looks like the code that created it is here:

@dleemiller
Owner

I'll add another tutorial for training, but the general process is:

  • Extract token embeddings from an LLM
  • [optional] concatenate with other token embeddings that use the same tokenizer
  • Train a projection model on a large corpus of sentence pairs and/or triplets using the train.py script
  • Save the output model by inferencing the entire vocabulary
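As a simplified sketch (not the actual train.py — the class name, pooling choice, and in-batch-negatives loss below are illustrative), a context-less projection over frozen token embeddings could look something like this:

```python
# Sketch: a "context-less" model that embeds each token independently,
# projects it, and mean-pools, trained with a contrastive loss on sentence pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenProjection(nn.Module):
    def __init__(self, token_embeddings: torch.Tensor, out_dim: int = 256):
        super().__init__()
        # Frozen codebook from the LLM; optionally several codebooks that share
        # a tokenizer, concatenated along the embedding dimension.
        self.embed = nn.Embedding.from_pretrained(token_embeddings, freeze=True)
        self.proj = nn.Linear(token_embeddings.shape[1], out_dim, bias=False)

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        x = self.proj(self.embed(input_ids))                      # (batch, seq, out_dim)
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (x * mask).sum(1) / mask.sum(1).clamp(min=1e-6)  # masked mean pooling
        return F.normalize(pooled, dim=-1)

def info_nce_loss(anchors: torch.Tensor, positives: torch.Tensor, temperature: float = 0.05):
    # Multiple-negatives ranking loss: other positives in the batch act as negatives.
    logits = anchors @ positives.T / temperature
    labels = torch.arange(len(anchors), device=anchors.device)
    return F.cross_entropy(logits, labels)
```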

The key here is that the model we train for wordllama produces the same vector for a given token, regardless of the other tokens in the sentence. As a result, you can inference each token once and save the output vectors as a new set of token embeddings, then discard the original token embeddings along with the projection tensor and token weight parameters.
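As a sketch (reusing the hypothetical TokenProjection above), "inferencing the entire vocabulary" could look like:

```python
# Sketch only: run every vocabulary id through the trained projection once and
# keep the resulting matrix as the new, standalone token embeddings.
import torch

@torch.no_grad()
def bake_vocabulary(model, vocab_size, batch_size=4096):
    rows = []
    for start in range(0, vocab_size, batch_size):
        ids = torch.arange(start, min(start + batch_size, vocab_size)).unsqueeze(1)  # each token as its own "sentence"
        mask = torch.ones_like(ids)
        rows.append(model(ids, mask))
    # (vocab_size, out_dim): the original embeddings and projection can now be dropped.
    return torch.cat(rows, dim=0)
```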

Not all of the information learned by the LLM token embeddings is relevant to this model, so we are essentially attempting to distill out the specific representation that is relevant to a single token in isolation (like a word embedding) by training a projection on downstream tasks.

I'll keep this open for now, and close when I have a tutorial for training.

@dinhanhx
Author

Thanks, both of you. I guess I didn't scroll far enough to find tutorials/extract_token_embeddings.md.

I think I understand the general process now.
