different results from paper #1
Hi, and thank you for this valuable survey.
I noticed that in the preprint your results for Graph WaveNet are substantially worse than what was reported in the original paper.
For reference, Graph WaveNet MAE results for 15, 30 and 60 min are: 2.69, 3.07, 3.53
While in the preprint the results are: 3.204, 3.922, 4.848
Perhaps I missed something, but the hyperparameters and data splits seem similar in both cases.
How do you explain this difference?
Comments

Thanks for your attention. The quick answer is: GW-Net uses masked_mae_loss, while we just use the original MAE loss.

Thanks for the reply. I see it now. But if you don't mind me asking, why not use the masked loss for the benchmark? After all, the evaluation uses masked MAE, and apparently the masked loss achieves better results than the non-masked one (at least in gwnet).

Yes, you are right: on METR-LA, masked MAE is apparently better, but on PEMS-BAY the difference is not so big. This depends on the properties of the dataset. We adopted the most naive MAE loss as the unified loss function to verify the "pure" performance of the models, excluding other factors (e.g., loss function, extra data sources). This was our original idea. But, of course, on METR-LA we should definitely use masked MAE as the unified loss. We hope to update/improve this point in the future.

The masked metrics may be more reasonable.
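For context, here is a minimal sketch of the distinction discussed above. In the standard METR-LA/PEMS-BAY preprocessing, missing sensor readings are encoded as zeros; a masked MAE (in the spirit of the masked_mae_loss mentioned in the first comment) drops those entries from the average, while a plain MAE treats them as real targets. This is an illustrative PyTorch sketch, not the exact implementation from either repository; the function names and the null_val=0.0 convention are assumptions based on the thread.

```python
import math
import torch

def masked_mae(preds: torch.Tensor, labels: torch.Tensor,
               null_val: float = 0.0) -> torch.Tensor:
    """MAE averaged only over observed entries (labels != null_val).

    Assumption: null_val=0.0 encodes a missing sensor reading, as in the
    usual METR-LA/PEMS-BAY preprocessing.
    """
    if math.isnan(null_val):
        mask = ~torch.isnan(labels)
    else:
        mask = labels != null_val
    mask = mask.float()
    # Rescale so averaging over all entries equals averaging over valid ones.
    mask = mask / mask.mean()
    mask = torch.nan_to_num(mask)  # guard against an all-missing batch
    loss = torch.abs(preds - labels) * mask
    return torch.nan_to_num(loss).mean()

def plain_mae(preds: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """The 'naive' unified loss: missing readings (zeros) count as targets."""
    return torch.abs(preds - labels).mean()

# Toy example: label 0.0 marks a missing reading.
preds = torch.tensor([3.0, 2.0, 5.0])
labels = torch.tensor([3.5, 0.0, 5.0])   # middle entry is missing
print(masked_mae(preds, labels))          # 0.25: averages the 2 valid entries
print(plain_mae(preds, labels))           # ~0.833: the zero target inflates MAE
```

Since METR-LA reportedly contains a much larger share of missing readings than PEMS-BAY, training with the unmasked loss penalizes the model for not predicting zeros at missing slots, which is consistent with the authors' observation that the gap between the two losses is larger on METR-LA.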