chore(ci): disable continue-on-error for all test jobs in CI workflow #1968
Conversation
@fehiepsi Some more test cases have failed! Can you check?
@OlaRonning could you take a look? I think the inspected zs is different now.
I think you're right @fehiepsi. I'll work it over in detail in the morning.
Updated latents in stein loss test case
```diff
@@ -80,7 +80,7 @@ def stein_loss_fn(chosen_particle, obs, particles, assign):
     xs = jnp.array([-1, 0.5, 3.0])
     num_particles = xs.shape[0]
     particles = {"x": xs}
-    zs = jnp.array([-0.1241799, -0.65357316, -0.96147573])  # from inspect
+    zs = jnp.array([-3.3022664, -1.06049, 0.64527285])  # from inspect
```
Could we just replicate the logic to generate zs here? @OlaRonning
Yeah, that should be possible. I'll have a look at it now.
changed zs to be computed instead of hardcoded
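For illustration, a minimal sketch of what deriving zs from a fixed PRNG key (rather than hardcoding values copied from a debugger) could look like; the Normal distribution, scale, and key below are assumptions for the sketch, not the test's actual logic:

```python
import jax.numpy as jnp
from jax import random
import numpyro.distributions as dist

# Hypothetical sketch: generate zs deterministically from a fixed key
# instead of pasting values obtained "from inspect". The distribution
# and scale are illustrative assumptions.
rng_key = random.PRNGKey(0)
xs = jnp.array([-1.0, 0.5, 3.0])
zs = dist.Normal(xs, 1.0).sample(rng_key)  # one latent per particle
```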
The Stein test is passing, but there is a problem with stochastic support.
I fixed the very same problem for py3.10 in 78c51a7 by changing the random seed, because the error was too high with the previous seed, which passed on py3.9. Now it passes on py3.10 but not on py3.9. I presume we will encounter such cases in the future too! We need something more robust that works on both Python versions.
Looking at the failing assert in CI, it looks like the dimensions are swapped. Could be spurious, or it could be that the inference in the test case is nonidentifiable. Note: I haven't checked the test or method in detail.
Yeah, we just need to assert whether the actual is close to expected or expected[::-1] (using np.isclose).
This seems like a simpler solution than making the toy problem identifiable 👍 I'll add it. |
Allow for both solutions in test/contrib/stochastic_support/test_dcc.py:
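As a hedged sketch of what that two-sided check could look like (the helper name and tolerance are illustrative, not the actual test code):

```python
import numpy as np

def assert_close_up_to_order(actual, expected, rtol=1e-5):
    # The toy problem is identifiable only up to a swap of its
    # components, so accept the expected values in either order.
    assert np.allclose(actual, expected, rtol=rtol) or np.allclose(
        actual, expected[::-1], rtol=rtol
    ), f"{actual} is close to neither {expected} nor {expected[::-1]}"
```

Either orientation then passes, regardless of which component the sampler happens to label first.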
fixed tolerance
Thanks @Qazalbash and @OlaRonning!
`continue-on-error` was temporarily introduced in #1959 and accidentally got merged into the default branch. This PR reverts the commit (295136f) that introduced the temporary changes.