Better coverage for float32 tests #6780
Conversation
Codecov Report

@@           Coverage Diff           @@
##             main    #6780   +/-   ##
=======================================
  Coverage   91.89%   91.89%
=======================================
  Files          95       95
  Lines       16181    16185      +4
=======================================
+ Hits        14870    14874      +4
  Misses       1311     1311
@ricardoV94 some errors seem weird to me, any idea how they could emerge?
with pm.Model() as model:
    c = pm.floatX([1, 1, 1])
    pm.Dirichlet("a", c)
model.point_logps()
Should be enough to request model.logp()? No point in compiling and evaluating it once if you're not checking the results.
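A minimal sketch of the suggested lighter test (same model as above; whether building the logp graph alone is enough to trigger the original error is an assumption here):

import pymc as pm

with pm.Model() as model:
    c = pm.floatX([1, 1, 1])
    pm.Dirichlet("a", c)

# Requesting the joint logp graph builds it without compiling or
# evaluating it, which is all the test needs when the value itself
# is never checked.
model.logp()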
I think something changed with the latest PyTensor release, probably with …
This is actually a change to the …
I'll change that in the test then.
@ricardoV94 I looked into how transforms are checked in float32 mode: they are not. And the float32 tests that do exist don't check the float32 condition properly; they still allow float64 subgraphs. I bet the float32 tests should be stricter than they are now.
Feel free to make the float32 job more restrictive. Just keep in mind we shouldn't throw random things into those float32 jobs, but only tests that actually matter.
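One possible way to make the job stricter (a sketch of one approach, not necessarily what this PR does) is PyTensor's warn_float64 flag, which turns the creation of any float64 tensor into an error:

import pytensor
import pytensor.tensor as pt

# Run the suite under float32 and fail loudly if any float64 tensor
# sneaks into a graph, instead of silently allowing float64 subgraphs.
pytensor.config.floatX = "float32"
pytensor.config.warn_float64 = "raise"

x = pt.vector("x")   # dtype float32, fine
# pt.dvector("y")    # explicit float64: would raise with this setting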
I added a check that backward(forward(x)) keeps the original tensor type. It seems to catch the intended bug.
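Not the exact test added here, but a minimal sketch of that round-trip idea, assuming PyMC's log transform from pymc.distributions.transforms:

import pytensor
import pytensor.tensor as pt
from pymc.distributions.transforms import log as log_transform

# Under floatX = "float32" the transform round trip must not silently
# upcast the value to float64.
x = pt.vector("x", dtype=pytensor.config.floatX)
y = log_transform.backward(log_transform.forward(x))
assert y.dtype == x.dtype, "transform round trip changed the dtype"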
Force-pushed from 44711fe to a45a0b6.
Looks good.

The first failing test seems unrelated; we can open an issue and rerun, or see if it can be seeded.

Also failing with the new NumPy deprecation; should be a simple fix to replace np.product with np.prod (but that should be a distinct commit or, if not, a separate PR).
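For reference, the replacement is drop-in (np.product is a deprecated alias of np.prod in recent NumPy releases):

import numpy as np

x = np.array([1.0, 2.0, 3.0])
# np.product(x)  # deprecated alias, emits a DeprecationWarning
np.prod(x)       # drop-in replacement, returns 6.0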
@@ -44,10 +44,10 @@

 # some transforms (stick breaking) require addition of small slack in order to be numerically
 # stable. The minimal addable slack for float32 is higher thus we need to be less strict
-tol = 1e-7 if pytensor.config.floatX == "float64" else 1e-6
+tol = 1e-7 if pytensor.config.floatX == "float64" else 1e-5
Just in case: float32 was not checked in CI, so the previous tolerance was not taken into account.
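The looser float32 tolerance presumably feeds a comparison along these lines (a sketch with made-up values, not the actual test helper):

import numpy as np
import pytensor

# Stick breaking needs a small numerical slack, and the smallest slack
# that works for float32 is larger, so the comparison must be looser.
tol = 1e-7 if pytensor.config.floatX == "float64" else 1e-5

x = np.array([0.2, 0.3, 0.5], dtype=pytensor.config.floatX)
x_roundtrip = x + np.finfo(x.dtype).eps  # stand-in for a real round trip
np.testing.assert_allclose(x_roundtrip, x, atol=tol)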
Fixes #6779
Bugfixes
📚 Documentation preview 📚: https://pymc--6780.org.readthedocs.build/en/6780/