Rethinking looper testing #464
One idea would be to have a test that clones the hello_looper repository and then runs the example pipelines, or something like that. I think we did this for peppy, having it use the …
Ok, I've started a branch to work on this. I have a skeleton with two large tests for using looper with and without pipestat integration. I'll look at the peppy example and the pypiper unit tests for inspiration as I continue.
I have a proof of concept going where I can clone the hello_looper repository into a tempdir and execute the pipestat-configured run successfully.
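A minimal sketch of what that proof of concept might look like as a pytest test, assuming hello_looper lives at github.com/pepkit/hello_looper, that the example's project config sits at `project/project_config.yaml`, and that a `--dry-run` flag is available; those details should be checked against the current repo layout and looper CLI.

```python
import subprocess
import tempfile
from pathlib import Path


def test_clone_and_run_hello_looper():
    with tempfile.TemporaryDirectory() as tmpdir:
        repo = Path(tmpdir) / "hello_looper"
        # Clone the example repository into an isolated temporary directory.
        subprocess.run(
            ["git", "clone", "https://github.com/pepkit/hello_looper.git", str(repo)],
            check=True,
        )
        # Invoke looper from inside the clone, the same way a user following
        # the docs would, so relative paths resolve against the example itself.
        result = subprocess.run(
            ["looper", "run", "--dry-run", "project/project_config.yaml"],
            cwd=repo,
            capture_output=True,
            text=True,
        )
        assert result.returncode == 0, result.stderr
```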
One issue I've run into when using looper with pipestat: I clone the hello_looper repo and run the command, but pytest runs the command from a directory that is not the pipestat folder. I can reproduce this manually. The issue: pointing to the looper config file works for everything except finding the samples. Looper complains that it cannot find the sample data files.
This is because the data path in the pipeline interface is relative rather than absolute, so it gets resolved against the current working directory. One option is to change the pipeline interface so that (see asterisks below):
becomes something like:
In reality I know the data directory is relative to the config file, so I could just use the config file's directory when writing the interface. However, I don't see this as part of the looper namespace that is built for this purpose. It seems like the proper solution is for looper to get the absolute path to the data file, which it currently does not. Here is an example submission script that exhibits the issue.
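Not the submission script itself (not reproduced here), but a small sketch of the workaround described above, assuming a hypothetical helper and illustrative file names: anchor the relative data path at the directory containing the config file instead of at the current working directory.

```python
import os


def resolve_against_config(config_path: str, data_path: str) -> str:
    """Anchor a relative data path at the config file's directory (hypothetical helper)."""
    if os.path.isabs(data_path):
        return data_path
    return os.path.abspath(os.path.join(os.path.dirname(config_path), data_path))


# e.g. resolve_against_config("/tmp/hello_looper/project/project_config.yaml", "../data/frog_1.txt")
# yields "/tmp/hello_looper/data/frog_1.txt", regardless of the current working directory.
```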
The official PEP documentation suggests using derived columns to eliminate paths from the sample table, so my solution above is probably not actually what we want.
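For reference, a rough sketch of what that derived-attribute approach could look like when loaded with peppy; the file names, the `local` source key, and the `$DATA` environment variable are illustrative assumptions, not taken from hello_looper.

```python
import os
from pathlib import Path

import peppy


def build_demo_pep(tmpdir: Path) -> peppy.Project:
    # The sample table holds a source key ("local"), not a path.
    (tmpdir / "sample_table.csv").write_text(
        "sample_name,file\n"
        "frog_1,local\n"
        "frog_2,local\n"
    )
    # The derive modifier maps that key to a path template; $DATA lets the
    # environment (or a test fixture) decide where the data actually lives.
    (tmpdir / "project_config.yaml").write_text(
        "pep_version: 2.0.0\n"
        "sample_table: sample_table.csv\n"
        "sample_modifiers:\n"
        "  derive:\n"
        "    attributes: [file]\n"
        "    sources:\n"
        "      local: \"$DATA/{sample_name}.txt\"\n"
    )
    os.environ["DATA"] = str(tmpdir / "data")
    return peppy.Project(str(tmpdir / "project_config.yaml"))
```

After loading, each sample's `file` attribute should resolve to a path under `$DATA`, so the sample table itself never hard-codes a location.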
Ok, I refactored the pipestat hello_looper example (in a dev branch). Then, in the looper pytest suite, I open the project config and replace the data path placeholder with the test's location. This (sort of) simulates the user using an environment variable to point to the data. This now appears to work in pytest as desired, AND the hello_looper examples work exactly the same as before from a user's perspective.
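That substitution step could look something like the sketch below; the placeholder token, helper name, and file layout are assumptions for illustration.

```python
from pathlib import Path


def point_config_at_data(config_path: Path, data_dir: Path, placeholder: str = "$DATA") -> None:
    """Rewrite the cloned project config so its data placeholder points at the test's data dir."""
    text = config_path.read_text()
    config_path.write_text(text.replace(placeholder, str(data_dir)))


# e.g. point_config_at_data(repo / "project" / "project_config.yaml", repo / "data")
```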
Refactoring of all the tests is now complete, and the PEPs pull from the hello-looper dev branch. I've also added comprehensive tests, but, as discussed, these will remain a work in progress as we add complexity to them.
We seem to have lots of pytests for looper, and even a 'smoketests' subfolder.
What is the distinction between the regular tests and the smoketests? At first glance, they just look like other unit tests.
Anyway, it's clear to me after trying to get looper to work today that our tests are not effective for actually testing looper functionality. I ran into so many little issues that it's obvious we're not testing the way looper is actually used.
I think we need to rethink the way we are testing this package. We should redesign the tests around more holistic use cases, and maybe step away from testing small modular units. That approach just doesn't seem to be working.
A possible test would be: a few basic dummy pipelines that execute something. Run these on some demo datasets, with some expected failures, some re-runs, etc. Basically, the kind of thing a person would actually use looper to do.
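As a very rough sketch of the shape such a test could take — the `demo_project` fixture, the sample and file names, and the exact output and return-code behavior of the looper subcommands are all assumptions to be verified:

```python
import subprocess
from pathlib import Path


def looper(*args: str, cwd: Path) -> subprocess.CompletedProcess:
    """Run the looper CLI inside the demo project directory."""
    return subprocess.run(["looper", *args], cwd=cwd, capture_output=True, text=True)


def test_run_fail_and_rerun(demo_project: Path):
    # First pass: one sample is deliberately broken (e.g. its input file is missing),
    # so we expect looper's output to mention that sample's failure.
    first = looper("run", "project_config.yaml", cwd=demo_project)
    assert "frog_2" in first.stdout  # hypothetical broken sample

    # Repair the broken sample, then resubmit only what failed.
    (demo_project / "data" / "frog_2.txt").write_text("now present\n")
    second = looper("rerun", "project_config.yaml", cwd=demo_project)
    assert second.returncode == 0
```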