Freeze pretrained weights #467
Is the use case here incremental retraining? I'd like to add support for that at some point, though I was thinking of a different API than freezing items you don't want to update.
@benfred yes, I'd like to train, say, item embeddings using some other method (like a variational autoencoder) and then use these latent features in the recommender, but not modify them during training.
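A minimal sketch of that hand-off, assuming a hypothetical `trained_encoder` with a scikit-learn-style `transform` method; neither `trained_encoder` nor `item_features` is part of this library's API:

```python
import numpy as np

# Hypothetical: `trained_encoder` is a separately trained (variational)
# autoencoder and `item_features` is its input matrix; both are placeholders.
latent = trained_encoder.transform(item_features)  # shape: (n_items, n_factors)

# These frozen embeddings would then be handed to the recommender as its
# item factors, as in the snippet further down the thread.
existing_item_factors = latent.astype(np.float32)
```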
It seems that I came up with the same idea as @phiweger. I think adding something like … I do realize, though, that it would take quite a bit more work than that to implement this feature.
There is a PR which should make this possible: #527 (as well as letting you incrementally retrain models on subsets of users/items). As an example, given a matrix of pre-existing item factors and a sparse user-items matrix:

```python
import numpy as np
from implicit.als import AlternatingLeastSquares

# Train an ALS model with pre-existing item factors, calculating only the user factors
model = AlternatingLeastSquares()
model.item_factors = existing_item_factors
userids = np.arange(user_items.shape[0])
model.partial_fit_users(userids, user_items)
```
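For completeness, a self-contained version of the same flow with made-up data; the toy matrix and random factors are illustrative only, and the call assumes the `partial_fit_users` API from #527:

```python
import numpy as np
import scipy.sparse as sp
from implicit.als import AlternatingLeastSquares

# Toy confidence matrix: 4 users x 3 items (nonzero values are interactions).
user_items = sp.csr_matrix(np.array([
    [1, 0, 2],
    [0, 3, 0],
    [4, 0, 0],
    [0, 1, 1],
], dtype=np.float32))

# Pretend these factors came from another model (e.g. an autoencoder);
# random values here just keep the example runnable.
factors = 8
existing_item_factors = np.random.rand(user_items.shape[1], factors).astype(np.float32)

# CPU model so the numpy factors can be assigned directly.
model = AlternatingLeastSquares(factors=factors, use_gpu=False)
model.item_factors = existing_item_factors

# Compute user factors against the frozen item factors.
userids = np.arange(user_items.shape[0])
model.partial_fit_users(userids, user_items)
```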
Following up on #341: Is it possible to freeze the pretrained weights, so that they are not changed during training?