Update GH Actions and CodeCov #922
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@ Coverage Diff @@
##           master     #922   +/- ##
=======================================
  Coverage   73.44%   73.44%
=======================================
  Files          19       19
  Lines        4610     4610
=======================================
  Hits         3386     3386
  Misses       1224     1224
```

Flags with carried forward coverage won't be shown.
@jdebacker. This looks like a weird Dask issue. The failure in Linux Python 3.10 in
@jdebacker @talumbau. I reran all the failed jobs, and Linux Python 3.9 passed with no Dask concurrent futures error. On this second round, it was Mac Python 3.11 that failed, so I have rerun the jobs again to see if they will pass. My guess is that they will, but this is at least as annoying as the Codecov issue that this PR fixes.
@jdebacker and @talumbau. Yep. It just took two rounds of rerunning the CI tests to get past those concurrent futures errors. I think this PR is ready for review. I still think we have too many tests for OG-Core. I have a sneaking suspicion that some of the errors that go away when jobs are rerun come from implicit compute limits on GH Actions. It seems excessive for us to test Python 3.9, 3.10, and 3.11; I would prefer we tested only Python 3.10 and 3.11.
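If the 3.9 jobs were dropped, a minimal sketch of what the test matrix might look like (the job name, OS list, and action versions here are illustrative assumptions, not the repo's actual workflow):

```yaml
# Hypothetical sketch: trim the CI matrix to Python 3.10 and 3.11 only
jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest]
        python-version: ["3.10", "3.11"]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
```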
@rickecon Yes, perhaps we should drop at least the 3.9 tests. I think I'm OK with that. But I also think we should avoid running tests when they aren't needed. Here's what I posted over at the OG-USA repo that you might add to this PR: Re my comments a few weeks ago about unnecessary compute time, from a read of the updated workflow docs, I think we should be able to condition tests to run only when relevant files are affected. E.g., something like:
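A hedged sketch of the kind of `paths` filter being suggested, using GitHub Actions' standard trigger syntax (the specific file patterns are assumptions, not the repo's actual layout):

```yaml
# Hypothetical sketch: only trigger CI when source, test, or env files change
on:
  push:
    paths:
      - "ogcore/**.py"
      - "tests/**.py"
      - "environment.yml"
      - "setup.py"
  pull_request:
    paths:
      - "ogcore/**.py"
      - "tests/**.py"
      - "environment.yml"
      - "setup.py"
```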
@jdebacker. I have removed the draft status of this PR and have made all the commits that I think this one needs. It is ready for your review and is ready to merge as soon as all the tests pass. I also have the full battery of tests running locally on my machine.
@jdebacker. This PR had a concurrent futures failure in
@jdebacker. All tests are passing now on GitHub. However, I had 20 tests fail on the full run of my local machine. Here are the results of my full set of local tests. 15 of these failures are I am going to re-run these tests locally overnight tonight and see if the
Here is the full traceback of the first
And here is the traceback of the two demographics errors.
And here is the traceback of the three txfunc errors.
@jdebacker. I re-ran all the tests on my machine locally, and I got 17 errors instead of the previous 20. There were three fewer
@jdebacker. I recommend we:
@rickecon thank you for these updates! Merging.
This PR:
- Updates `build_and_test.yml` so that those tests only run when one of those files is changed.
- Updates `deploy_docs.yml` and `docs_check.yml`, and limits `docs_check.yml` to only run on pull requests.

cc: @jdebacker @talumbau
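A sketch of what limiting a docs-check workflow to pull requests only could look like (the trigger is the standard GitHub Actions form; the concrete paths are assumptions):

```yaml
# Hypothetical sketch for docs_check.yml: run only on pull requests
# that touch documentation files
on:
  pull_request:
    paths:
      - "docs/**"
      - "README.md"
```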