Future regressors modularization #1144
Conversation
Model Benchmark
Codecov Report
@@ Coverage Diff @@
## main #1144 +/- ##
==========================================
+ Coverage 90.24% 90.27% +0.03%
==========================================
Files 33 36 +3
Lines 4992 5018 +26
==========================================
+ Hits 4505 4530 +25
- Misses 487 488 +1
@@ -465,7 +465,7 @@ def get_valid_configuration(  # move to utils
                }
            )
        elif validator == "plot_parameters":
-            regressor_param = m.model.get_reg_weights(regressor)[quantile_index, :]
+            regressor_param = m.model.future_regressors.get_reg_weights(regressor)[quantile_index, :]
sweet, like that you include the regularization already!
There's one if-else statement where I suspect some duplicated code; please review and change if applicable. Otherwise it looks really good, ready to go from my side.
n_multiplicative_regressor_params = 0
for name, configs in self.regressors_dims.items():
    if configs["mode"] not in ["additive", "multiplicative"]:
        log.error("Regressors mode {} not implemented. Defaulting to 'additive'.".format(configs["mode"]))
Just a comment: We should make a consistent decision where to raise error messages if certain parameters are missing. But fine for now.
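One possible direction for that decision, as a minimal sketch (the helper name and the eager-validation approach are assumptions for illustration, not part of this PR):

# Sketch: validate the regressor mode eagerly and raise, instead of logging
# an error and silently falling back to "additive".
SUPPORTED_MODES = ("additive", "multiplicative")


def validate_regressor_mode(name, mode):
    # Hypothetical helper; raising keeps a misconfiguration from being masked.
    if mode not in SUPPORTED_MODES:
        raise ValueError(f"Regressor '{name}' has unsupported mode '{mode}'. Supported modes: {SUPPORTED_MODES}.")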
components["future_regressors_additive"] = self.scalar_features_effects( | ||
features=inputs["regressors"]["additive"], params=self.regressor_params["additive"] |
Really good solution, I like how much simpler the code now is!
neuralprophet/time_net.py (Outdated)
if mode == "additive":
    features = inputs["regressors"]["additive"]
    params = self.regressor_params["additive"]
    mode = "additive"
else:
    features = inputs["regressors"]["multiplicative"]
    params = self.regressor_params["multiplicative"]
components[f"future_regressor_{regressor}"] = self.scalar_features_effects(
    features=features, params=params, indices=index
)
What exactly is this section of the new code doing? The following looks kind of redundant:
if mode == "additive":
    ...
    mode = "additive"
@alfonsogarciadecorral can you have a quick look at this?
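For reference, a hedged sketch of how the branch could collapse once the stray `mode = "additive"` reassignment is dropped (variable names are taken from the snippet above; this is not the fix actually committed):

# Sketch: both branches differ only in the dictionary key, so the mode
# string can index the dictionaries directly.
features = inputs["regressors"][mode]
params = self.regressor_params[mode]
components[f"future_regressor_{regressor}"] = self.scalar_features_effects(
    features=features, params=params, indices=index
)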
neuralprophet/utils_torch.py (Outdated)
def new_param(dims):
    """Create and initialize a new torch Parameter.

    Parameters
    ----------
    dims : list or tuple
        Desired dimensions of parameter

    Returns
    -------
    nn.Parameter
        initialized Parameter
    """
    if len(dims) > 1:
        return nn.Parameter(nn.init.xavier_normal_(torch.randn(dims)), requires_grad=True)
    else:
        return nn.Parameter(torch.nn.init.xavier_normal_(torch.randn([1] + dims)).squeeze(0), requires_grad=True)


def init_parameter(dims):
    """
    Create and initialize a new torch Parameter.
I think we did the same thing here ;)
Changed this in the latest commit
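For illustration, a sketch of what the consolidated helper could look like, based on the `new_param` body shown above (keeping a single `init_parameter` and dropping the duplicate is an assumption about the eventual fix, not the committed code):

import torch
import torch.nn as nn


def init_parameter(dims):
    """Create and initialize a new torch Parameter (single shared helper).

    Parameters
    ----------
    dims : list or tuple
        Desired dimensions of parameter

    Returns
    -------
    nn.Parameter
        initialized Parameter
    """
    if len(dims) > 1:
        # Xavier-normal init works directly on tensors with two or more dims.
        return nn.Parameter(nn.init.xavier_normal_(torch.randn(dims)), requires_grad=True)
    # For 1-D parameters, init with a leading dummy dim and squeeze it away.
    return nn.Parameter(nn.init.xavier_normal_(torch.randn([1] + list(dims))).squeeze(0), requires_grad=True)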
Great Work!!
Only issue: let's not pass the logger.
class FutureRegressors(BaseComponent):
    def __init__(self, config, id_list, quantiles, n_forecasts, device, log, config_trend_none_bool):
The logger can be retrieved at the beginning of a file; it does not need to be passed in (see examples in other files).
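A minimal sketch of that pattern (the logger name, the base-class stand-in, and the body are assumptions for illustration; only the constructor signature follows the snippet above):

import logging

import torch.nn as nn

# Retrieve the logger once at module level instead of passing it into the
# component; the logger name here is a stand-in for illustration.
log = logging.getLogger("NP.future_regressors")


class BaseComponent(nn.Module):  # stand-in for the actual base class
    pass


class FutureRegressors(BaseComponent):
    # Same signature as above, minus the `log` argument.
    def __init__(self, config, id_list, quantiles, n_forecasts, device, config_trend_none_bool):
        super().__init__()
        log.debug("Initializing FutureRegressors for %d forecasts", n_forecasts)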
🔬 Background
🔮 Key changes
📋 Review Checklist
Please make sure to follow our best practices in the Contributing guidelines.