Reuse fixtures in slow MMM tests #515
Conversation
@@ -106,15 +106,15 @@ def mmm_with_fourier_features() -> DelayedSaturatedMMM:
     )


-@pytest.fixture(scope="class")
+@pytest.fixture(scope="module")
 def mmm_fitted(
     mmm: DelayedSaturatedMMM, toy_X: pd.DataFrame, toy_y: pd.Series
 ) -> DelayedSaturatedMMM:
     mmm.fit(X=toy_X, y=toy_y, target_accept=0.8, draws=3, chains=2)
Instead of this nearly useless fit here and in the fixture below, you can take draws from a prior predictive with narrow priors that correspond to a reasonable posterior and store those as if they were the posterior. We do this in almost all CLV tests.
There should be one test that actually calls fit (from real priors) and asserts it is converging to something correct. That could be marked with @pytest.mark.slow so it is skipped by default locally.
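A minimal sketch of that pattern with a toy PyMC model (not the actual MMM fixtures; the model, variable names, and draw counts below are illustrative assumptions):

```python
import pymc as pm
import pytest


@pytest.fixture(scope="module")
def mock_fitted_idata():
    """Draw from narrow priors and store the draws as a 'posterior'
    so downstream code can read idata.posterior without running MCMC."""
    with pm.Model():
        # Narrow priors centred on plausible "posterior" values.
        intercept = pm.Normal("intercept", mu=2.0, sigma=0.05)
        slope = pm.Normal("slope", mu=0.5, sigma=0.05)
        pm.Normal("y", mu=intercept + slope, sigma=0.1)
        idata = pm.sample_prior_predictive(50, random_seed=123)
    # Relabel the prior draws as if they were posterior samples.
    idata.add_groups(posterior=idata.prior)
    return idata
```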
Yeah, that sounds good.
What convergence stats do you have in mind?
I dunno. We can simulate data with known parameters and check the posterior converges to those with some relative tolerance.
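A toy version of such a parameter-recovery test, assuming a simple regression model rather than the MMM itself (names, draw counts, and tolerances are illustrative):

```python
import numpy as np
import pymc as pm
import pytest


@pytest.mark.slow
def test_fit_recovers_known_parameters():
    """Simulate data from known parameters and check that the posterior
    means recover them within a relative tolerance."""
    rng = np.random.default_rng(42)
    true_slope, true_sigma = 2.0, 0.5
    x = rng.normal(size=200)
    y = true_slope * x + rng.normal(scale=true_sigma, size=200)

    with pm.Model():
        slope = pm.Normal("slope", mu=0.0, sigma=10.0)
        sigma = pm.HalfNormal("sigma", sigma=5.0)
        pm.Normal("obs", mu=slope * x, sigma=sigma, observed=y)
        idata = pm.sample(draws=500, tune=500, chains=2, random_seed=42)

    np.testing.assert_allclose(idata.posterior["slope"].mean(), true_slope, rtol=0.1)
    np.testing.assert_allclose(idata.posterior["sigma"].mean(), true_sigma, rtol=0.2)
```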
Cool. I will add that in
Some of the tests still call fit with draws=100
Is that an issue?
Depends on whether that is relevant for the test or not
Tests seem to be a lot faster! So a step in the right direction 😄
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff             @@
##             main     #515      +/-   ##
==========================================
+ Coverage   91.34%   91.36%   +0.02%
==========================================
  Files          21       21
  Lines        2067     2073       +6
==========================================
+ Hits         1888     1894       +6
  Misses        179      179

☔ View full report in Codecov by Sentry.
There are some other tests that depend on the fit... Will have to change a bit more than expected.
So let's merge the easy wins with the scope and optimize in a separate PR?
This reverts commit 2408f61.
Going back one commit for the easy win solution. Will create an issue for follow-up.
Thanks @wd60622!
Description
Modify the scope of the fixture from class to module in order to reuse it in the out-of-sample tests.
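For context, a module-scoped fixture is evaluated once per test file instead of once per test class, so an expensive fit can be shared by every test in the module. A minimal sketch with placeholder names (not the real test suite):

```python
import pytest


def run_expensive_fit():
    # Stand-in for the real mmm.fit(...): returns a dummy fitted object.
    return {"fitted": True}


@pytest.fixture(scope="module")
def fitted_model():
    # scope="module": evaluated once per test file, then shared by every
    # test class and function in that file, including out-of-sample tests.
    return run_expensive_fit()


class TestInSample:
    def test_uses_fit(self, fitted_model):
        assert fitted_model["fitted"]


class TestOutOfSample:
    def test_reuses_same_fit(self, fitted_model):
        assert fitted_model["fitted"]
```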
Related Issue
test_new_data_sample_posterior_predictive_method takes 50 minutes (70%) of CI time #514

Checklist
Modules affected
Type of change
📚 Documentation preview 📚: https://pymc-marketing--515.org.readthedocs.build/en/515/