-
Notifications
You must be signed in to change notification settings - Fork 35
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
More advanced unittests (eg of analysis and generate steps) fail when run as GitHub action #340
Comments
attempt to address #340 by restricting max-workers to 1 (attempting to edit online as local commit push failed with alert of limitations of personal access token for editing workflows)
…o get more advanced unittests to work in GitHub actions, as parallelisation may be causing problems; have to commit to see if this will work!)
reverting because setting max-workers didn't resolve the issue #340
…ility for test running (ie. without having to run these as subprocesses, in case this helps address #340)
…rhood analysis code that was previously used in debugging)
re this, I had some success with the ordering, just by making sure the unittests were named alphanumerically sequenced (so, test_1..., test_2... etc). But the error now seems to be internal communication between the docker containers -- db connection fails (oh, that should be a test in itself!)
|
…onnection); also added failfast=True to help move towards better #340
…l as per https://stackoverflow.com/questions/71782897/connection-to-the-postgres-refused-in-github-actions to help towards #340 (specified the PGPORT environment variable)
The last bit --- adding the database health checks and dependencies did the trick! The workflow takes a while -- probably too long; could probably cut some corners (eg. probably don't actually have to do the full sensitivity analysis -- could duplicate results, modify them and pretend we did, then compare. That would cut time in half. |
…sis test (inadvertently removed) towards #340
The tests basically work, and sensitivity analysis only fails in a technicality --- comparison of identical areas triggers system.Exit() advising the inputs are identical. So, the cheeky work around didn't quite work --- to work it needs some numbers jiggled around... |
…parison (cast the reference df to ints, so by rounding there is a difference); as per #340, the sensitivity analysis test is now fast and passes
Describe the bug
As per issue #337, I attempted to include a more robust suite of tests to evaluate analysis, generating resources and comparison of regions as a sensitivity analysis.
However, I ended up commenting out most of the new tests as they failed when implemented as part of a GitHub actions workflow despite passing locally.
I suspect its to do with some kind of parallel processing, with the tests not running sequentially. I attempted to force them to run sequentially, but things still failed (and after looking at the test log, I'm not convinced it actually did run sequentially).
The text was updated successfully, but these errors were encountered: