
Unexpected behavior from normalization in matrix ops inner product #108

Open
amywinecoff opened this issue Apr 22, 2022 · 0 comments

In the inner_product function in matrix_ops, the parameter normalize_users defaults to True, presumably for performance reasons. However, this results in a host of behaviors that can trip users up — it has tripped me up on two separate occasions. For example, one would intuitively expect:

recommender.predicted_scores.value
np.dot(ideal.users_hat.value, ideal.items_hat.value)

to be the same, but they are not. This is made murkier by the documentation for the Recommender class, which says:

predicted_scores: :class:`~components.users.PredictedScores`
    An array representing the user preferences as perceived by the
    system. The shape is always :math:`|U| \\times |I|`, where
    :math:`|U|` is the number of users in the system and :math:`|I|`
    is the number of items in the system. The scores are calculated with
    the dot product of :attr:`users_hat` and :attr:`items_hat`.

This is not true when normalize_users is True.
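To illustrate the discrepancy, here is a minimal NumPy sketch — not the trecs API, just plain matrices — of what user normalization does to the predicted scores. The assumption (which matches the symptom above) is that inner_product rescales each user row to unit L2 norm before taking the dot product:

```python
import numpy as np

# Hypothetical stand-ins for users_hat and items_hat (not the trecs API).
rng = np.random.default_rng(0)
users_hat = rng.random((3, 4))   # |U| x num_attributes
items_hat = rng.random((4, 5))   # num_attributes x |I|

# Plain dot product, as the Recommender docstring describes.
plain_scores = users_hat @ items_hat

# Assumed effect of normalize_users=True: each user vector is scaled
# to unit L2 norm before the dot product.
norm_users = users_hat / np.linalg.norm(users_hat, axis=1, keepdims=True)
normalized_scores = norm_users @ items_hat

# The two disagree whenever a user vector's norm differs from 1,
# which is essentially always for arbitrary user representations.
print(np.allclose(plain_scores, normalized_scores))  # -> False
```

Note that normalization rescales each user's whole score row by a constant, so per-user rankings are preserved — which is presumably why the default looks harmless — but the raw score values no longer match the documented dot product.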

Another issue arises with metrics when this normalization is happening. For example, if you set up an IdealRecommender class based on the ContentFiltering model and set the model's predicted user and item attributes to the actual user and item attributes, MSE should be 0 at every timestep. If you set both normalization parameters in inner_product to False, this is what happens, but with the default parameters, it does not.
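The MSE symptom can be sketched the same way, again in plain NumPy rather than the trecs API, under the same assumption that normalization rescales user rows before the dot product:

```python
import numpy as np

# "Ideal" setup: the predicted representations equal the true ones.
rng = np.random.default_rng(1)
users = rng.random((3, 4))
items = rng.random((4, 5))

true_scores = users @ items

# Normalization off: predictions match the true scores exactly, MSE = 0.
predicted = users @ items
mse_off = np.mean((true_scores - predicted) ** 2)
print(mse_off)  # -> 0.0

# Normalization on (assumed behavior): user rows are rescaled, so the
# predicted scores no longer equal the true scores and MSE > 0 — even
# though the recommender was given perfect information.
norm_users = users / np.linalg.norm(users, axis=1, keepdims=True)
predicted_normalized = norm_users @ items
mse_on = np.mean((true_scores - predicted_normalized) ** 2)
print(mse_on)
```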

It's worth reconsidering, IMHO, whether the default parameters of TRECS should favor behavior that is intuitive for users but possibly slower, or behavior that is more efficient but produces unexpected outputs.
