CI: Speed up pytest workflow by 1min20 with pytest-xdist #3257
Conversation
While not shortening the total CI check time, it doesn't hurt to limit the CI resources used where we can.
I read in the meeting notes that CI was a bit long. I'm taking a look, and this is one of the first small things I found.
For example, over the last month I did some analysis and finally took the time to let the authors adjust things themselves: jidicula/clang-format-action#147 (comment). But I hate having too many pending PRs to keep track of, since I'm never sure whether the project is interested when nothing happens.
The tests that are really long are the GRASS "Run tests" launched by .github/workflows/test_thorough.sh, about 1–1.5 hrs! I guess that is the CI issue that was discussed.
Yesterday I was exploring this a little. I know pytest can run unittest tests, but I still have to figure out how gunittest is able to work, and then find out what is incompatible with parallel execution.
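Just as a sketch of how I've been poking at it (the path is illustrative):

```sh
# See what pytest would discover (including unittest-style TestCase
# classes) without actually running anything; the path is illustrative.
pytest --collect-only python/grass
```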
For me, one of the issues I found with this and the other repos is that CI can be slow to start, since 20 workers are shared across all OSGeo projects. So shorter runtimes, even for the smaller jobs, mean another run can start sooner.
I did not explore that much yet, because the idea for grass.gunittest was that you write a test sort of like a GRASS tool, i.e., you count on being in a mapset, and sort of like documentation, i.e., you count on having the NC SPM sample dataset. A lot of custom asserts, including ones for calling tools, were included to allow full control over execution, which was so far mostly used for better reports and for tracking executed tools (see fatra where this was originally running). I'm the author of most of the grass.gunittest code. The decisions made a lot of sense back then, but the whole thing ended up with a lot of code and I'm not particularly happy about its current state. When adding pytest, I decided to make basically the opposite decision everywhere (kind of like the design of Git in relation to CVS: "Take the Concurrent Versions System (CVS) as an example of what not to do; if in doubt, make the exact opposite decision."). I started with minimal code just to get things going, and that also seemed easier for CI integration than adding custom improvements. The pytest route still awaits a solution to the sample data problem and helper functions to support asserts, so writing tests of individual tools is much easier with gunittest right now. Having both integrated, well, that would be wonderful.
At this point, each test file (…)
I really had trouble trying to run the tests manually, even with gunittest, without getting an error like "module grass not found". How is it supposed to be called? That prompted me to imagine a way to get the pipelines to complete faster by splitting the longest job into two separate jobs, i.e., run all tests except temporal in one instance, and only temporal in another, paying the ~4 min overhead of building GRASS at the beginning (on a 1h30+ job). But since it didn't work as close to unittest (i.e., I was getting "unrecognized arguments"/"argument provided not used" or something like that), I couldn't find a way to specify what to run on the command line. It would mean changing the .gunittest.cfg file at runtime to change the exclusions... kind of dirty, so I stopped before implementing that.
Oh, yes, that's another difference: gunittest has top-level execution tooling, or you run a single file in an existing session. pytest is supposed to run with a simple pytest call, but there is one issue, specifically issue #658, and you need to set the Python path first. This is done in the CI, so you just reproduce the relevant part of the CI setup locally. gunittest should be sufficiently documented. pytest is still experimental and there is no documentation.
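Something like this should reproduce it locally (a sketch; it assumes a built GRASS with the `grass` command on the PATH, so adjust to your build):

```sh
# Sketch: make the grass package importable before calling pytest,
# mirroring what the CI setup does; assumes `grass` is on the PATH.
export PYTHONPATH="$(grass --config python_path):$PYTHONPATH"
export LD_LIBRARY_PATH="$(grass --config path)/lib:$LD_LIBRARY_PATH"
pytest
```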
You mean something like these lines: grass/.github/workflows/test_thorough.sh, lines 9 to 12 (at 2841c5f)?
That's what I wasn't able to deviate from; only that specific construction worked correctly. The issue linked was interesting, but I didn't extract the way to set the path correctly (maybe I skimmed some parts too quickly).

My observation when programming, up until now, is that the way the Python code is laid out, it is difficult for code analyzers (IntelliSense and the equivalent in JetBrains PyCharm) to understand what the packages are, e.g. to understand that we import correctly. Thus I think I agree with both opposing points of view in the issue, and they aren't completely mutually exclusive in my eyes. I think laying it out more like any real Python package (in the repo, not only once installed) would help for programming and checking the code. But it is also fine for me, for now, for it to not really work, as long as it is at least "importable". That would ultimately allow us to write unit tests, not only the bigger integration/system tests, which are mostly what we have now. Having small, simple pieces of code that can be tested makes it easier to make sure they work as we want.

Speaking of integration tests, when looking at alternatives for the slow temporal tests, I stumbled upon the t.rename tests. "Since it only renames layers, it should be pretty fast, no?" I said to myself. I was quite surprised to see the size and number of input datasets used for this, and that they use random (non-seeded) data for the values. If there's more to check, that would be in the functions used to test it, not in rename itself. Why not use at most 2x3x4 cells when possible, maybe 2 or 3 different maps, when the data really doesn't matter? Are there a lot of other tests that use overkill test set-ups?

Lastly, for now I don't seem to have any problems with the pytest-based jobs. Since it isn't a custom solution, we ride on a bigger user base and aren't alone with our problems when we have some.
If you expect some functionality and see some bugs, let's discuss that in a separate issue.
It just uses the current directory. That creates trouble with the config file. I don't recall the current behavior, but if that's not useful, let's change it.
As far as layout goes, we moved from lib/python to python/grass.
For development of the package, special things may be needed, so using PYTHONPATH is okay in that case, and it should be enough most of the time. I think the issue is that the same setup is needed for a common user too.
See above. I don't know what you mean here. I think the basic layout is all right. Maybe we need some other files?
I think it is more about whether it is findable, i.e., on path.
Let's have a separate discussion about that.
My guess is that the idea was to have somewhat realistically sized data, and on top of that t.rename just ended up with the same set-up code as all the other tests. A lot of these tests were written even before there was grass.gunittest and any automated execution of all tests; you would just occasionally execute the test suite manually. Later, the tests were just moved to the grass.gunittest framework. Ah, that's actually another feature of grass.gunittest: it detects and runs both Python and Shell test files. Feel free to change the size of the test data in these cases. Some of these temporal tests would be good candidates to migrate to pytest if you need to do a full rewrite for some other reason.
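For instance, something along these lines keeps the generated maps tiny when the values don't matter (just a sketch; the map name is made up):

```sh
# Sketch: shrink the computational region so any maps created by the
# test set-up stay small; the values are arbitrary but deterministic.
g.region rows=3 cols=4 -p
r.mapcalc expression="tiny_input = row() + col()"
```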
There is no analysis for that, but the data management tools are definitely possible candidates.
...which is the idea here (with pytest). It's not that we thought it was a great idea to have all the grass.gunittest code to maintain (and document), including a custom test execution; the problem was that we thought we needed a custom solution.
* Use pytest-xdist in pytest.yml
* Add pip caching in pytest.yml
Using pytest-xdist, it is possible to use multiple cores to run pytest, with many options to manage the work queue.
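For reference, the core of it is roughly this (the exact options used in the workflow may differ):

```sh
# Sketch: -n auto starts one worker per detected CPU; --dist load (the
# default) hands tests out one by one, while --dist loadfile would keep
# all tests from the same file on one worker if they share state.
pip install pytest-xdist
pytest -n auto --dist load
```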
Since there aren't many tests to run with pytest, only about 100, the runtime for the complete pytest workflow goes from 532–537 seconds to 439–451 seconds (about 86–93 seconds faster) for each run. The pytest-only part went from 3:24–3:44 to 2:04–2:19.
Caching Python packages doesn't seem to help a lot, since only about 30 seconds is spent on this.
In my MegaLinter repo, this had a bigger impact, cutting ~10 min off our longest workflow and halving the Python test time. (It automatically chose 4 workers for the 2 cores in CI, but a lot of time is spent idle waiting for the linters to spin up.)