
Design stand-alone test that calls set_surface_forcing and set_interior_forcing #156

Closed
mnlevy1981 opened this issue May 5, 2017 · 3 comments · Fixed by #338

@mnlevy1981 (Collaborator)

Testing for many issues (#41 and #146 stand out) would be easier if we had a simple test that initialized the model, called marbl_instance%set_surface_forcing and marbl_instance%set_interior_forcing, and then wrote the diagnostic output.

This would require a couple of things that we don't have right now:

  1. Datasets (initial conditions, forcing files, etc), and a way to provide them to users (some sort of inputdata repository?)
  2. An IO module that can parse the diagnostics type and write netCDF output (CVMix has one that would be a great starting point)
@mnlevy1981 (Collaborator, Author)

I've been working on this test, and one thing I haven't been happy with is how I've added netCDF to the build. @matt-long pointed out that one possibility is to use Docker to construct a container image with gfortran and netCDF; then updating the Makefile to build should be easy, because the Docker environment will be the same regardless of the host machine. The picture in my head is still pretty vague, but I think having a docker build target in the Makefile and a way for the test suite to know it is running inside the container would work.

Something else that occurred to me while typing this comment is that I could also add some netCDF build options to machines.py, so we could also test on Cheyenne with ifort (for example). Combining these two thoughts, maybe --machine docker would become a new supported option for the python scripts.
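As a rough illustration of the machines.py idea, here is a minimal sketch. Everything in it is hypothetical: the `supported_machines` dictionary, the key names, and the paths are not from the actual MARBL repository; they only illustrate how a `--machine docker` option could sit alongside a site-specific machine like Cheyenne.

```python
# Hypothetical sketch of per-machine netCDF build settings for machines.py.
# The dictionary keys, paths, and function name are illustrative only.

supported_machines = {
    "cheyenne": {
        "compiler": "ifort",
        # Paths that a site-specific module environment might provide
        "netcdf_inc": "/usr/local/netcdf/include",
        "netcdf_lib": "/usr/local/netcdf/lib",
    },
    "docker": {
        # Inside the container image, gfortran and netCDF live in
        # fixed locations, so every host machine gets the same build.
        "compiler": "gfortran",
        "netcdf_inc": "/usr/include",
        "netcdf_lib": "/usr/lib",
    },
}

def netcdf_flags(machine):
    """Return compile/link flags for the requested --machine option."""
    cfg = supported_machines[machine]
    return ["-I" + cfg["netcdf_inc"], "-L" + cfg["netcdf_lib"], "-lnetcdff"]
```

The point of the sketch is that the Makefile would stay machine-agnostic: the python scripts pick the right include/link flags from one table, and "docker" is just another row in that table.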

@mnlevy1981 (Collaborator, Author)

FYI, I set up a kanban-style project board in my fork to track progress and what still needs to be done. I should read through my old comments here and make sure the board covers everything I wanted to do.

As soon as the TravisCI tests are passing (see mnlevy1981#17) I'll submit a pull request. It probably makes sense to split the code review into two pieces -- one for the Fortran changes, and one for the changes on the testing side / updated documentation. Aside from the failed Travis build, the Fortran is ready to be looked at.

@mnlevy1981 (Collaborator, Author)

closed via #338
