Car Dealer Clarifications #17
Comments
The reward function specified as "fancy" is used for online evaluation. In the offline dataset, the reward is assigned at the end of the episode and is not weighted by the purchase probability. We will update the dataset on the server. Thank you for bringing this to our attention.
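For readers landing here later, a rough sketch of the distinction described in the reply above. This is not the repository's actual code: the field names (`price`, `purchase_probability`) and the 0–100 probability scale are assumptions based on this thread, not LMRL-Gym's API.

```python
from typing import Dict, List

# Hedged sketch, not LMRL-Gym's actual implementation. Assumes each step of
# an episode is a dict with a sale "price" and a "purchase_probability" on a
# 0-100 scale (as described in the question below); both names are hypothetical.

def offline_reward(episode: List[Dict[str, float]]) -> List[float]:
    """Offline-dataset convention per the reply above: the full sale price
    is given as reward on the final step only, unweighted."""
    rewards = [0.0] * len(episode)
    if episode:
        rewards[-1] = episode[-1]["price"]
    return rewards

def fancy_reward(episode: List[Dict[str, float]]) -> List[float]:
    """One reading of the "fancy" (online-evaluation) reward: the price
    weighted by the purchase probability at each step."""
    return [step["price"] * step["purchase_probability"] / 100.0
            for step in episode]

# Toy two-step episode.
episode = [
    {"price": 30000.0, "purchase_probability": 40.0},
    {"price": 30000.0, "purchase_probability": 90.0},
]
print(offline_reward(episode))  # [0.0, 30000.0]
print(fancy_reward(episode))    # [12000.0, 27000.0]
```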
Thank you! For now, is there a purchase probability threshold that I can use to determine a purchase?
Also, Table 4 in the paper looks like it might be using revenue in thousands as a reward (although my attempts to reproduce the results landed in the 20Ks rather than the 50Ks). Can you provide the results from Table 4 using the "fancy" reward instead? Or clarify how the results in Table 4 were obtained for Car Dealer?
@icwhite - You mentioned above that the
Sorry, I don't know. @abdulhaim would you be able to respond with the complete dataset?
The training data made available for Car Dealer has only 4,399 instances instead of the 19K mentioned in the paper. Can you provide the complete datasets for this task?
Also, it's not clear to me how you calculate car dealer rewards based on the training dataset. The dataset provides a purchase probability at each step. Is the agent rewarded when this value reaches 100? Only at the end of the episode? Or is the purchase price weighted by the purchase probability?
Finally, when using the Car Dealer task online, the environment code provides two versions of the reward function. Which one is used for evaluation and online training in the paper?