It is a nice suggestion. I think a simple improvement would be to report results broken down by activity level (in addition to the overall performance), which we can put on the schedule once we have more hands. As for model-level improvements, we will read the recommended paper first.
Sounds good.
To be clear, I was just using the paper as an example of this kind of metric appearing in a paper. I have already coded up the model in the paper, btw #670
Of course, cold-start performance metrics are common in papers.
Got it! That would involve new metrics and new ways of presenting results. We will schedule this point.
It would be convenient to disentangle performance for "cold-start" and "hot" situations for both users and items.
An interesting example for the user case appears in "Noise Contrastive Estimation for One-Class Collaborative Filtering" by Ga Wu et al.
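
For reference, a minimal, library-agnostic sketch of what such a breakdown could look like (pandas only; `train_df`, a table of training interactions with a `user_id` column, and `per_user_metric`, a dict mapping user id to an already-computed per-user metric such as NDCG@10, are hypothetical inputs, and the bin edges are placeholders). The same grouping applies on the item side by counting interactions per `item_id`.

```python
import pandas as pd

def report_by_activity(train_df: pd.DataFrame,
                       per_user_metric: dict,
                       bins=(0, 5, 20, 100, float("inf")),
                       labels=("cold", "low", "mid", "hot")) -> pd.DataFrame:
    """Average a per-user metric within activity buckets defined on the training set."""
    # Number of training interactions per user (the activity level).
    activity = train_df.groupby("user_id").size().rename("n_interactions")
    # Per-user metric values (e.g. NDCG@10 from the evaluator), indexed by user id.
    metric = pd.Series(per_user_metric, name="metric")
    df = pd.concat([activity, metric], axis=1).dropna()
    # Bucket users by activity; these bin edges are arbitrary placeholders.
    df["group"] = pd.cut(df["n_interactions"], bins=list(bins), labels=list(labels))
    return df.groupby("group", observed=True)["metric"].agg(["mean", "count"])
```

The output is one row per activity bucket with the mean metric and the number of users in that bucket, so cold-start and hot performance can be read off separately alongside the overall number.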