Regression testing #292
This adds the ability to run a series of short python models for testing various aspects of the code. The models that are run are stored (by default) in the directory examples/regress. This is intended as a way to partially address #292.
I've added a routine regression.py to py_progs that is capable of running a series of short models and performing rudimentary checks on whether they completed. The command line looks something like this: regression.py -np 3 py81b, where py81b is the executable you wish to test. By default this sets up a directory to run in, copies all of the .pf files from PYTHON/examples/regress to the new directory, runs the models, and performs a rudimentary check that they ran. There is a switch which allows you to get the .pf files from another directory (so one could imagine running special tests, or tests that take a weekend to run).

In the end, I do think this is something like the right approach (though it could probably be done better with gitlab), and I note that one can go back to older versions of python as long as the .pf files do not have to be changed. The way one tests an old version of the code is to compile that old version, since the executable is the only thing we really need from it. What I am not sure about is how similar this is to code we already have somewhere.

Aside: In creating this, I found that py_error.py does not work in Python 3. I suspect this is true of some other routines there as well. I tried to fix py_error.py in particular, and failed (even though I have done this for a lot of other python routines). @jhmatthews, the problem has to do with the fact that subprocess returns bytestrings rather than ASCII strings. In regression.py I have duplicated the functionality of py_error while avoiding subprocess and grep, which does work in Python 3, and it would be easy to make a standalone version of this to replace py_error, but I have not done this.
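For reference, here is a minimal sketch of the grep-free idea (the *.diag glob pattern and the "Error:" marker are my assumptions here, not necessarily what regression.py actually matches):

```python
# Sketch: count error messages in a run directory without shelling out
# to grep, sidestepping the bytes-vs-str behaviour that subprocess
# introduces in Python 3. The "*.diag" pattern and the "Error:" marker
# are assumptions.
import glob

def count_errors(run_dir="."):
    counts = {}
    for path in glob.glob(f"{run_dir}/**/*.diag", recursive=True):
        # errors="replace" tolerates any stray non-UTF-8 bytes in the logs
        with open(path, errors="replace") as f:
            n = sum(1 for line in f if "Error:" in line)
        if n:
            counts[path] = n
    return counts

if __name__ == "__main__":
    for path, n in sorted(count_errors().items()):
        print(f"{path}: {n} error lines")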
@smangham @Higginbottom @jhmatthews Here is my proposal for how to reorganize the examples folder. At present we have what we call an examples directory, which we are using for at least three purposes:
At present the examples directory has the following subdirectories:

There are also a couple of files directly under examples, and it is not clear where they belong. This needs to be organized better, with a clearer understanding of what should be in each directory.
Note: We currently use Travis to check whether python can still read in certain .pf files. Here I would propose that we only do this on a regular basis for those .pf files which are stored in basic or regression.
@jhmatthews For Travis, I am tempted to use either just "basic", or "basic" plus a "travis" directory for any files we want to check via Travis. What do you think? Whatever we do, I plan to write a little python script, test_travis.py, that will look in the directories we designate for Travis and check that we get through the Travis tests.
@jhmatthews In the end I decided to put all of the files we want to test with Travis into a Travis subdirectory. That way we can keep everything separate. I've modified the .travis.yml file appropriately, and the Travis run was successful.
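For the record, the check itself need not be complicated; something along the lines of the following sketch would do (the examples/travis path, the py executable name, and the exit-code criterion are assumptions rather than what test_travis.py actually does):

```python
# Sketch of a test_travis.py-style check: run the executable on every
# .pf file in the Travis subdirectory and fail if any run returns a
# non-zero exit code. Paths and the executable name are assumptions.
import glob
import subprocess
import sys

def run_travis_checks(executable="py", pf_dir="examples/travis"):
    failures = []
    for pf in sorted(glob.glob(f"{pf_dir}/*.pf")):
        # text=True asks subprocess for str output rather than bytes
        result = subprocess.run([executable, pf],
                                capture_output=True, text=True)
        status = "ok" if result.returncode == 0 else "FAILED"
        print(f"{status}: {pf}")
        if result.returncode != 0:
            failures.append(pf)
    return failures

if __name__ == "__main__":
    sys.exit(1 if run_travis_checks() else 0)
```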
At this point the examples subdirectories have been reorganized as discussed above, so the infrastructure changes are complete. Various parameter files have been moved to the new directories.
The examples directory is now reorganized on dev, as described in earlier comments. The basic subdirectory contains "modern" versions of the simple models we expect new users to explore. There are various README.md files that say what is supposed to go in each directory. There may be more work to do in deciding which .pf files best belong in the individual directories, but that is an ongoing process, so I am going to close this issue for now.
We need a better way to test a larger number of features in python than is currently possible. For example, had we been testing for the failure of the kurucz models in python, we would have discovered this problem long ago.
Currently, we have a small number of example .pf files stored in examples, and in principle one could write a script, or use one of the continuous-deployment features in gitlab, to do the testing each time one pushes a new version of the code.
The question I would like to ask is whether we should simply add additional tests to this as we go forward, to get a more complete idea of how our development affects the code.
Doing this has the advantage that it allows us to add tests as we go along, and it does not require us to change the existing git repository structure. It keeps the test .pf files with a branch, and as long as we run all of the tests regularly we gain increasing confidence that we are maintaining sound code.
The main disadvantage of this approach is that it does not allow us to go back and add new tests to older versions of the code.
An alternative would be to create a separate repository where we somehow keep various versions of the test .pf files, with a directory for each version of the code. In that case, we could go back and create .pf files for the various versions in those directories.

Unless @jhmatthews or @Higginbottom think that we should do otherwise, I plan to start adding some more test .pf files to the examples directory (or perhaps a separate subdirectory of it). Then I plan to write a little script to run these and verify that they have all worked; a sketch of such a script follows. It should be easy to integrate into continuous integration eventually.
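A rough sketch of what that script could look like (the directory names and the exit-code completion check are assumptions; the eventual script may well differ):

```python
# Sketch of a regression driver: copy the test .pf files into a fresh
# run directory, run the executable on each, and record which models
# completed. Directory names and the exit-code check are assumptions,
# and the executable (e.g. py81b) must be on the PATH.
import glob
import os
import shutil
import subprocess

def run_regression(executable, pf_source="examples/regress",
                   run_dir="regress_run"):
    os.makedirs(run_dir, exist_ok=True)
    results = {}
    for pf in glob.glob(f"{pf_source}/*.pf"):
        shutil.copy(pf, run_dir)
        root = os.path.splitext(os.path.basename(pf))[0]
        proc = subprocess.run([executable, root + ".pf"],
                              cwd=run_dir, capture_output=True, text=True)
        results[root] = (proc.returncode == 0)
    return results

if __name__ == "__main__":
    for model, ok in sorted(run_regression("py81b").items()):
        print(f"{model}: {'completed' if ok else 'did not complete'}")
```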