Update RBC calculation for Wilcoxon signed-rank test to be dependent on the alternative #457
base: main
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@           Coverage Diff           @@
##             main     #457   +/-   ##
=======================================
  Coverage   98.54%   98.54%
=======================================
  Files          19       19
  Lines        3360     3360
  Branches      492      492
=======================================
  Hits         3311     3311
  Misses         26       26
  Partials       23       23

☔ View full report in Codecov by Sentry.
Hi @rhazn, are you sure this is meant to close #45? Maybe that was a typo, because #45 deals with ANOVAs and was closed years ago. Please edit the original post to point at the relevant issue if I'm right. As for the failing tests: when I make the same single change in L493 as you, I get the same output for all docs/tests as were originally there, i.e. it doesn't require the small extra changes you added. But you passed the 3.9 tests (only), so I guess those changes helped with that? I'm not fully sure what the deal is, just letting you know what I found after a quick check. I'm using Python 3.11 on Windows, for what that's worth.
Ah sorry, you are correct. I think I accidentally deleted a digit; this PR is in reference to #456. Sadly, I am not well versed in Python development, so I might struggle with this more than average. I am working on macOS; when I run the docs examples with Python 3.11 in my REPL, I get the following output:
Though with 3.9:
Maybe this is something better discussed in a Q&A than in a PR, but it feels like I am making a mistake with my Python venv that is preventing me from providing a good PR 😅.
Hmm, I'm not sure what is going on between the Python versions either... 🤔
Closes #456
I had a look at providing a PR, but I am unsure whether I broke more than I assumed or am misunderstanding something. If I rerun the examples from the documentation, I get some different p-values (I updated the docs with my values in this PR), and I am seeing a test failure with just slightly mismatched values in tests/test_pairwise.py:435:
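For context on what the change is about, here is a minimal, self-contained sketch (not pingouin's actual implementation) of the matched-pairs rank-biserial correlation computed directly from the signed ranks. Per the scipy documentation, scipy.stats.wilcoxon returns min(W+, W-) as its statistic for the two-sided alternative but W+ for the one-sided alternatives, so any RBC formula derived from that statistic has to take the alternative into account; computing it from the ranks themselves sidesteps the issue. The function name rank_biserial and the sample data below are illustrative only.

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

def rank_biserial(x, y):
    # Matched-pairs rank-biserial correlation computed from the signed ranks,
    # so it does not depend on which statistic scipy returns for a given alternative.
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    d = d[d != 0]                  # drop zero differences (default Wilcoxon handling)
    ranks = rankdata(np.abs(d))    # ranks of the absolute differences
    r_plus = ranks[d > 0].sum()    # W+: sum of ranks of positive differences
    r_minus = ranks[d < 0].sum()   # W-: sum of ranks of negative differences
    return (r_plus - r_minus) / ranks.sum()

rng = np.random.default_rng(42)
x = rng.normal(0.3, 1.0, 30)
y = rng.normal(0.0, 1.0, 30)
for alt in ("two-sided", "greater", "less"):
    res = wilcoxon(x, y, alternative=alt)
    # res.statistic is min(W+, W-) for "two-sided" but W+ for the one-sided
    # alternatives, so an RBC derived from it must account for the alternative;
    # rank_biserial(x, y) gives the same value in all three cases.
    print(f"{alt:>10}: W = {res.statistic:.1f}, p = {res.pvalue:.4f}, "
          f"rbc = {rank_biserial(x, y):.3f}")
```

If an implementation instead derives the RBC from the returned statistic (e.g. via 1 - 2 * W / (n * (n + 1) / 2)), that expression is only valid under one of the two statistic conventions, which is presumably why the calculation needs to branch on the alternative.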