Fix bug in new NNs with forecasting interval #1108
Conversation
🚀 Deployed on https://deploy-preview-1108--etna-docs.netlify.app
Codecov Report
@@ Coverage Diff @@
## inference-v2.1 #1108 +/- ##
=================================================
Coverage ? 86.86%
=================================================
Files ? 164
Lines ? 8954
Branches ? 0
=================================================
Hits ? 7778
Misses ? 1176
Partials ? 0
It seems like nothing changed at the level of test cases, did it?
),
],
)
def test_forecast_in_sample_full_no_target_failed_assertion_error(self, model, transforms, example_tsds):
I've seen this pattern while refactoring the NNs for etna-v2. Why do we duplicate the information about the raised assertion?
Can you please explain what exactly you mean? The naming of the test?
`@to_be_fixed(raises=AssertionError)` and `test_forecast_in_sample_full_no_target_failed_assertion_error`:
- the `assertion_error` suffix duplicates the information about the raised assertion;
- instead, we could use `test_forecast_in_sample_full_no_target` with `pytest.mark.xfail`.
`assertion_error` is used to distinguish different tests that fail in different ways. `to_be_fixed` is used to clarify that we want to fix that test in the future, and it lets me write a particular error message to catch inside the test.
Please take a look at #1087, where I explained what I've found about these tests and possible solutions. But I think none of them are really good, and they should be discussed.
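For readers unfamiliar with the helper: a minimal sketch of how a `to_be_fixed(raises=...)` decorator can be built on top of `pytest.raises`. This is only an illustration under that assumption; the actual helper in the repository may be implemented differently, and the test body below is a placeholder.

```python
import functools

import pytest


def to_be_fixed(raises, match=None):
    """Mark a test as known-to-fail with a specific exception until the bug is fixed.

    Sketch only: the real helper in the repository may differ.
    """

    def decorator(test_func):
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            # The test passes only if the expected exception is raised,
            # optionally with a message matching `match`.
            with pytest.raises(raises, match=match):
                test_func(*args, **kwargs)

        return wrapper

    return decorator


# Usage in the style referenced in this thread:
@to_be_fixed(raises=AssertionError)
def test_forecast_in_sample_full_no_target_failed_assertion_error():
    raise AssertionError("placeholder for the not-yet-fixed behavior")
```

Compared to a plain `pytest.mark.xfail`, the point made above is mostly about intent and about pinning the exact exception (and, optionally, its message) that the broken behavior currently produces.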
# Conflicts: # CHANGELOG.md
…nough, add check on size of context for DeepBaseModel and test for it
@@ -93,6 +99,7 @@ def step(self, batch: MLPBatch, *args, **kwargs):  # type: ignore
        :
        loss, true_target, prediction_target
        """
        self._validate_batch(batch)
It seems we validate twice: we can call the forward pass in `step`.
As I understand, `step` isn't called during forecasting; it is called only during training.
We use `step` in `training_step` and `validation_step`; it is for training.
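To make the two call paths concrete, here is a simplified, hypothetical sketch (not the actual ETNA code): forecasting goes through `forward` directly, while training goes through `training_step`/`validation_step` into `step`. The batch keys, the placeholder computation, and the class itself are assumptions; only the names `step` and `_validate_batch` come from the diff above.

```python
from torch import nn


class NetSketch(nn.Module):
    """Hypothetical model illustrating where _validate_batch gets called."""

    def _validate_batch(self, batch):
        # Assumed alignment check: decoder features and decoder target
        # must cover the same number of timestamps.
        if batch["decoder_real"].shape[1] != batch["decoder_target"].shape[1]:
            raise AssertionError("decoder_real and decoder_target are misaligned")

    def forward(self, batch):
        # Forecasting path: called directly (not through `step`), so it needs
        # its own validation if the batch is to be checked at inference time.
        self._validate_batch(batch)
        return batch["decoder_real"].float().sum(dim=-1, keepdim=True)  # placeholder

    def step(self, batch, *args, **kwargs):
        # Training path: used by `training_step` and `validation_step`.
        # If `step` computes its predictions without calling `forward` (as sketched
        # here), each path validates once; if it delegated to the forward pass,
        # the check would run twice, which is the concern raised above.
        self._validate_batch(batch)
        true_target = batch["decoder_target"].float()
        prediction = batch["decoder_real"].float().sum(dim=-1, keepdim=True)  # placeholder
        loss = nn.functional.mse_loss(prediction, true_target)
        return loss, true_target, prediction
```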
Before submitting (must do checklist)
Proposed Changes
Change alignment during forecasting.
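One of the commits above also mentions adding a check on the size of the context for DeepBaseModel. A minimal sketch of what such a guard could look like, assuming it compares the available history with the model's encoder length; the function name, parameters, and error message are illustrative, not the repository's actual API.

```python
def check_context_size(history_length: int, encoder_length: int) -> None:
    """Hypothetical guard: forecasting needs at least `encoder_length` known
    points immediately before the forecast start."""
    if history_length < encoder_length:
        raise ValueError(
            f"Not enough context: got {history_length} points of history, "
            f"but the model requires at least {encoder_length}."
        )


# Example: with encoder_length=7 the model needs at least 7 points of history.
check_context_size(history_length=10, encoder_length=7)  # OK
```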
Closing issues
Closes #1087.