Add unit tests #14
I have two sets of reference results for tests now:
Currently the functional test takes about 20 seconds to run on analysis. To get there, I was able to shorten the LRX runtime by adjusting its input parameters. Most of the remaining time is spent in the negative sampling algorithm. I think the only way to shorten this would be to expose some of its internal parameters, like the number of cross-validation folds or the RF parameters that the grid search operates on. I welcome thoughts on this. To run the tests (from the top-level DORA repo directory): `$ pytest`. You will see some warnings that are coming from code inside the
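If the negative sampling case stays slow, a lighter-weight alternative to exposing internal parameters is to tag it with a custom pytest marker so quick runs can skip it. This is a minimal sketch using pytest's standard marker mechanism; the test name is a hypothetical placeholder:

```python
# conftest.py -- register the custom marker so pytest does not warn about it
def pytest_configure(config):
    config.addinivalue_line("markers", "slow: marks tests as slow to run")
```

```python
# test_functional.py (test name is hypothetical)
import pytest

@pytest.mark.slow
def test_negative_sampling_case():
    ...
```

With that in place, `pytest -m "not slow"` deselects the slow case during development, while a bare `pytest` still runs everything.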
@stevenlujpl We discussed that you might add these negative sampling parameters to the config file options (low priority).
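A minimal sketch of what exposing those knobs in the config might look like; the key names (`cv_folds`, `rf_param_grid`) and the `run_negative_sampling` call are hypothetical, and the config is assumed to already be parsed into a dict:

```python
# Hypothetical: pull optional tuning parameters out of the parsed config,
# falling back to the current hard-coded defaults when they are absent.
def get_neg_sampling_params(config):
    return {
        "cv_folds": config.get("cv_folds", 5),          # fewer folds -> faster tests
        "rf_param_grid": config.get("rf_param_grid", {  # smaller grid -> faster search
            "n_estimators": [100, 300],
            "max_depth": [None, 10],
        }),
    }

# Hypothetical call site:
# results = run_negative_sampling(data, **get_neg_sampling_params(config))
```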
I added pytest to GitHub Actions. Just an FYI: any commits to master will show an error if there are no tests found in the repo.
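If that no-tests error is a nuisance before any tests land, one workaround that has been used with pytest (>= 5, where `pytest.ExitCode` exists) is to translate the "no tests collected" exit status into success from `conftest.py`; treat this as a sketch, not a recommendation:

```python
# conftest.py
import pytest

def pytest_sessionfinish(session, exitstatus):
    # pytest exits with NO_TESTS_COLLECTED (code 5) when it finds no tests,
    # which GitHub Actions reports as a failed step; report success instead.
    if exitstatus == pytest.ExitCode.NO_TESTS_COLLECTED:
        session.exitstatus = pytest.ExitCode.OK
```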
I went ahead and merged. @bdubayah, we should be able to have pytest run as an action now, since the repo has pytests to run?
I just did a run-through of all the tests. It looks like the planetary test failed for the negative sampling case. I'm attaching the log file too; the failed case starts on line 782.
On my machine this fails on the first planetary case (demud). I'm wondering if this might still be related to #44. In the most recent PR I didn't add anything to sort the images after they are loaded from the directory, so they are probably still being loaded in a different order across machines. If the order of training data affects results, then we might want to sort the images.
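A minimal sketch of the sorting fix, assuming the loader builds its file list with `os.listdir` (the function name, directory argument, and extension filter are hypothetical); `os.listdir` returns entries in arbitrary, filesystem-dependent order, so an explicit sort makes the training order, and any order-sensitive results, reproducible across machines:

```python
import os

def load_image_paths(image_dir):
    # os.listdir order is arbitrary and varies across filesystems/machines;
    # sorting pins down the order in which the training data is seen.
    return sorted(
        os.path.join(image_dir, f)
        for f in os.listdir(image_dir)
        if f.lower().endswith((".png", ".jpg"))
    )
```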
Hmm, I can't run the tests at all due to lack of tensorflow in our dora-venv at JPL:
That's weird because I can run the
I think the problem was that I was using my (user-installed) pytest, and it doesn't know about tensorflow. I think the solution is to install pytest inside the dora-venv so that it sees the venv's packages.
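A sketch of that fix (the venv path is hypothetical); invoking pytest as `python -m pytest` guarantees the venv's interpreter, and therefore its tensorflow, is the one being used:

```
$ source /path/to/dora-venv/bin/activate   # hypothetical path to the JPL venv
$ python -m pip install pytest             # installs into the venv, not ~/.local
$ python -m pytest                         # runs pytest with the venv's Python
```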
@hannah-rae FYI, I'd like to troubleshoot this further, but cannot proceed until pytest is installed in the DORA venv (@stevenlujpl could you do this?), and also I'm no longer allowed to charge to the JPL DORA account so progress may be slow. :( However, I want to capture here that my first guess is that the neg sampling test is failing because I may not have updated the "correct output" after Steven resolved the random seed setting. That's where I would look first.
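For context on the seed issue, a minimal sketch of the kind of seeding that keeps a test run and its stored "correct output" under identical randomness; how DORA actually sets its seeds isn't shown in this thread, so this is an assumption:

```python
import random
import numpy as np

def fix_seeds(seed=42):
    # Seed every RNG the pipeline might touch so that regenerated reference
    # output and a later test run see the same random draws.
    random.seed(seed)
    np.random.seed(seed)
    try:
        import tensorflow as tf
        tf.random.set_seed(seed)  # TF 2.x API
    except ImportError:
        pass
```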
I was thinking that was probably the issue too. We should be able to look into this on the UMD side and are not blocked by the JPL pytest installation.
@wkiri Sorry about the delayed response. I've installed pytest in the dora-venv.
We can add unit tests for each use case and its corresponding input data type as they become ready; a sketch of that pattern follows.
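A minimal sketch of one way to organize those tests with pytest's `parametrize`; the (use case, data type) pairs reflect algorithms mentioned in this thread but are otherwise placeholders, and `run_use_case` is a hypothetical wrapper around the DORA pipeline:

```python
import pytest

# Hypothetical pairs; extend this list as use cases become ready.
CASES = [
    ("demud", "planetary_images"),
    ("negative_sampling", "planetary_images"),
    ("lrx", "raster"),
]

@pytest.mark.parametrize("use_case,data_type", CASES)
def test_use_case(use_case, data_type):
    # run_use_case (hypothetical) would run the pipeline and return results
    # to compare against stored reference output for that use case.
    result = run_use_case(use_case, data_type)
    assert result is not None
```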