
CPPs in CICE6 #288

Closed

apcraig opened this issue Mar 7, 2019 · 37 comments

@apcraig
Contributor

apcraig commented Mar 7, 2019

As part of the bfb cleanup #204, I am reviewing the cpps (because of the REPRODUCIBLE cpp that I am removing). I also found a minor bug in the build related to cpps which is being fixed. Anyway, a few questions arose.

First, there are about 15-20 cpps currently in CICE. I know we need some of them: ncdf, CESMCOUPLED, RASM_MODS, _OPENACC, _OPENMP. But there are several that are probably obsolete and certainly untested. Do we need to clean them up? I am going to leave them there until we decide about each cpp separately. A partial list of the questionable ones is key_oasis*, coupled, CICE_IN_NEMO, NO_F2003, ICE_DA, popcice, ORCA_GRID, etc.

Second, I removed some cpps from the current compile (like NO_R16). Those cpps didn't exist in the code. But it leaves open some questions about how to implement cpps we need. With the ncdf cpp, we are explicitly "turning on" netcdf capability. With a cpp like NO_R16 (if we were to implement it again), we are explicitly turning off a feature. Can we improve some of this nomenclature?

Third, is there anything in particular we need to code around, like integer*8 or real*16, or certain compiler standard levels (Fortran 2003, 2008, etc.)? Right now, none of the test machines are running into any issues. Should we not worry about it, or should we proactively provide a cpp?

Fourth, we treat MPI and netcdf fundamentally differently. With MPI, we create two directories under infrastructure/comm, mpi and serial. These contain the same files, modules, and subroutines, and there is a lot of code overlap. With netcdf, we just cpp it in or out in the code if it exists. I am wondering if we wouldn't benefit from combining the serial and mpi code and cpp'ing out mpi when it isn't turned on. This would allow us to maintain one set of comm routines and give those routines no chance to diverge. I worry that when we fix(ed) or implement(ed) something new (like tripole zippers in halos or global sums or improvements to algorithms), it's done in one set of comm routines but not the other. I know there are pluses and minuses to keeping them separate, just asking.

My recommendations would be that

  • We remove cpps that are clearly obsolete and/or no longer needed and/or very unlikely to work. Each cpp is considered independently. There is no rush to do this, but it would be nice to clean up when we can.
  • We try to move forward with clear nomenclature for cpps in the future, if possible, and we do NOT code around potential problems until we hit them or feel it is absolutely needed. We should use all-caps for cpps and should think about whether we should rename any cpps currently in use (ncdf is the obvious one), although I hate to change things that are already in place.
  • We have a discussion about prototyping a merge of the serial and mpi codes, adding a cpp like NO_MPI to turn off the MPI code in serial mode (a rough sketch of the idea follows).
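
To make the third bullet concrete, here is a minimal sketch (not actual CICE code) of what a merged global sum might look like with a NO_MPI cpp; the kind parameters are assumed to come from CICE's ice_kinds_mod:

    function global_sum_dbl(local) result(gsum)
    #ifndef NO_MPI
       use mpi                     ! Fortran module from the MPI library
    #endif
       use ice_kinds_mod, only: dbl_kind, int_kind
       implicit none
       real (kind=dbl_kind), intent(in) :: local
       real (kind=dbl_kind) :: gsum
    #ifndef NO_MPI
       integer (kind=int_kind) :: ierr
       ! MPI build: reduce the local value across all tasks
       call MPI_Allreduce(local, gsum, 1, MPI_REAL8, MPI_SUM, &
                          MPI_COMM_WORLD, ierr)
    #else
       ! serial build: the local value is the global value
       gsum = local
    #endif
    end function global_sum_dbl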

@eclare108213 @mattdturner @duvivier @dabail10 looking for feedback.

@phil-blain
Member

With respect to CICE_IN_NEMO and ORCA_GRID: these are related to Nemo (the ocean model we use for our coupled system). So they are definitely needed. We will be incorporating CICE6 in our coupled model during the following months, so we might make more use of them. I'm definitely open to renaming them if it improves code consistency.

@apcraig
Contributor Author

apcraig commented Mar 7, 2019

Thanks @phil-blain, I did not mean to suggest that a cpp should be eliminated just because I didn't understand where/how it was used, only that we should consider eliminating cpps that we can't find a user/use for. Good to know that the NEMO/ORCA cpps might still be useful. Happy to keep them as they are currently named too.

@rgrumbine
Contributor

At the moment, I'm having failures to compile on theia and phase3, apparently related to no_I8, which I've tried specifying in different ways (CPPFLAGS -- no_I8, NO_I8, noI8, NOI8; setenv no_I8 true) to no avail; reprosum still fails to compile.

phase2, on the other hand, had no objections and passes all regression tests.

@apcraig
Contributor Author

apcraig commented May 2, 2019

Can you let us know the errors you're getting? The cpp should be noI8 to turn off support for that feature.

@rgrumbine
Contributor

rgrumbine commented May 13, 2019 via email

@apcraig
Contributor Author

apcraig commented May 13, 2019

@rgrumbine , the ice_shr_reprosum code is a small piece of C code. You might want to check that the code is being compiled correctly. The error suggests it has not been compiled. If you are having problems compiling that, let me know. I could take a look at your branch/port to see if I see any problems.

Separately, I will try to verify that -DnoI8 can be placed in the Macros file. That should work.
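
For reference, a sketch of what that would look like, assuming the Macros file collects preprocessor flags in a CPPDEFS variable as the standard ports do:

    CPPDEFS := $(CPPDEFS) -DnoI8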

@rgrumbine
Contributor

rgrumbine commented May 13, 2019 via email

@apcraig
Contributor Author

apcraig commented May 13, 2019

Check if you have the latest code; there were several upgrades to the build in the last couple of weeks to deal with some of these things. I think that fix is already on the trunk.

@rgrumbine
Contributor

rgrumbine commented May 28, 2019 via email

@eclare108213
Contributor

@apcraig did you make a list of all the cpp's in the code now? It has been many years (probably since the v5.0 release) since I went through them and cleaned up, and it's definitely time to do that again. Many of them are to support CICE's use in various coupled systems. My approach would be to create a table listing all of them along with their apparent function, and send that out to the users for their feedback, e.g. ask CESM, the MetOffice, etc., specifically which of theirs we need to keep active in the code. (I generally know which came from where.)

I don't think we should add cpp's unless they are actually needed and we can test them.

I have no qualms about changing what they are called, BUT if the coupled centers are actively using any of them, we need to do it in communication with them.

I'm agnostic about using a separate code directory (like serial) versus cpp's (like netcdf). At one point in CICE's development history, the explicit goal was to reduce the number of cpp's, mainly to make the code more readable and less prone to bugs -- some cpp sets interacted in complex ways. I'm not sure that having separate directories with equivalent subroutines makes it less buggy -- the bugs are of a different sort, like missing entire functionalities that are added to the other directory.

So I agree with all three of @apcraig's recommendations. The question is whether this happens incrementally as the code evolves, based on coding standards that we produce, or if there's a concentrated effort. Maybe a little of both -- (1) do a basic cleanup; (2) based on user/center feedback, develop a master design for cpp use and begin implementing it; (3) have a planning discussion about longer-term code structure (that might involve other aspects like dynC in addition to serial/mpi directories, etc).

@apcraig
Contributor Author

apcraig commented Jul 1, 2020

The CICE CPPs are

General

  • NO_F2003, one occurrence in ice_fileunits.F90 protecting a native Fortran "flush" call
  • ncdf, throughout code, turns on netcdf code, generally required in all builds
  • _OPENMP, automatic flag associated with openmp compilation, turns on openmp features like omp_get_max_threads, omp_get_thread_num, omp_get_num_threads. used in ice_dyn_evp_1d.F90 and ice_reprosum.F90
  • _OPENACC, used in ice_dyn_evp_1d.F90 to support openacc
  • ORCA_GRID, used in ice_read_write.F90 for what seems to be IO with halos
  • CESM1_PIO, used in io_pio2 ice_history_write.F90 to support an older version of PIO used in CESM1 and RASM
  • GPTL, used in io_pio2 ice_pio.F90, needed if pio is built with gptl
  • SERIAL_REMOVE_MPI, used in infrastructure/comm mpi and serial ice_global_reductions.F90 and ice_reprosum.F90 to allow the mpi and serial versions to have identical code with mpi turned off with a cpp. Currently, the cpp is defined in the serial source files, but this is a step towards possibly merging the mpi and serial code. This cpp is never defined by the user right now.
  • noI8, defined in ice_reprosum.F90 in cases where integer*8 is not available
  • NO_R16, defined in ice_global_reductions in cases where real*16 is not available, but there may be other places where r16 has no protection (icepack_kinds, ice_communicate (mpiR16))
  • gather_scatter_barrier, defined in mpi/ice_gather_scatter.F90. This can be converted to a namelist flag; it does not need to be a cpp.
  • key_oasis3, used in mpi/ice_communicate to define the default communicator for oasis coupling. The key_ cpp must be part of the oasis standard.
  • key_oasis3mct, used in mpi/ice_communicate to define the default communicator for oasis coupling. The key_ cpp must be part of the oasis standard.
  • key_oasis4, used in mpi/ice_communicate to define the default communicator for oasis coupling. The key_ cpp must be part of the oasis standard.
  • key_iomput, used in mpi/ice_communicate to define the default communicator for oasis coupling. The key_ cpp must be part of the oasis standard.

Both general and coupling interfaces

  • coupled, used quite a lot in drivers, also in ice_dyn_shared.F90 and ice_communicate.F90; not sure about need/use.
  • CICE_IN_NEMO, mods for coupling with NEMO (similar need to CESMCOUPLED?)
  • CESMCOUPLED, used throughout when coupled with CESM. Is this cpp serving multiple purposes, and/or will we need more or different keywords for different CESM versions?
  • RASM_MODS, used in ice_grid.F90, drivers/mct/cesm1/ice_import_export.F90 to support RASM specific mods

In coupling interfaces

  • popcice, this seems to be a leftover from earlier times that makes standalone CICE not standalone.
  • hadgem3
    • ICE_DA
  • mct/cesm1
    • ESMF_INTERFACE
    • USE_ESMF_LIB
    • USE_ESMF_METADATA
  • nuopc/cmeps
    • NEWCODE
    • USE_ESMF_METADATA
  • nuopc/dmi
    • CICE_DMI

In ice_shr_reprosum86.c

  • FORTRANUNDERSCORE
  • FORTRANDOUBLEUNDERSCORE
  • FORTRANCAPS
  • NAMING
  • x86
  • _FPU_GETCW
  • _FPU_SETCW
  • _FPU_EXTENDED
  • _FPU_DOUBLE

@apcraig
Contributor Author

apcraig commented Jul 1, 2020

My recommendations with regard to current CPPs are as follows. Keep all the cpps as they are now, with the following proposals:

  • remove the popcice cpp and code as if it were never set
  • add NO_I8 as another option for noI8, thereby having both, like "#if defined noI8 || defined NO_I8". Document as NO_I8 but preserve the old version noI8 for backwards compatibility.
  • would love to do something with ncdf like switching it to NO_NETCDF (changing sign and syntax), but everyone must be using -Dncdf right now, so it's hard to justify. Needs more discussion. Maybe stay with positive but move to NETCDF, like "#if defined ncdf || defined NETCDF", or as suggested "#if defined ncdf || !defined NO_NETCDF"?
  • review coupled, move to namelist. Remove coupled from drivers and implement as coupled "on". I think the coupled flag is being carried between caps for no reason. Should check with users.
  • move gather_scatter_barrier to namelist
  • move ORCA_GRID to namelist
  • move CICE_IN_NEMO to namelist after further review
  • move RASM_MODS to namelist
  • move most of CESMCOUPLED to namelist (will need a lot of review) and maybe split into different namelist features.

We should then document the most common cpps in the user guide. We should also have some documentation that says that cpps should be capitalized and only included if they are needed to code around a build issue. They should not be used for run-time options.

We also should try to establish a preference for positive or negative cpps. For instance, "#ifdef I8" expresses the same choice as "#ifndef NO_I8"; NO_I8 is more intuitive as a cpp, but the implementation of a double negative is less clear in code. And generally, the default should be that a cpp is not needed (ncdf does not seem to follow this convention), but it's not always easy to decide what that is.
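
To illustrate the two polarities (a schematic sketch, not actual CICE code):

    ! positive cpp: the feature must be explicitly turned on
    #ifdef I8
          integer*8 :: count
    #endif

    ! negative cpp: the feature is on by default and turned off only
    ! where unavailable; note the double negative "if not NO_I8"
    #ifndef NO_I8
          integer*8 :: count
    #endif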

I think a discussion to merge mpi/serial or different io code should occur separately.

@eclare108213
Contributor

I can think of one user possibly still using the popcice cpp, but her code is so old that I seriously doubt it will be updated to cice6. I think it's fine to drop this one.

For all of the cpp's we plan to get rid of completely or switch to a new format (capitalization or positive/negative), should we treat them the same way we are doing other deprecated code? I.e. add them on the wiki page, first get rid of the -D implementation, then remove the code itself later?

Personally, I prefer positive syntax to negative, just from a psychological point of view, and I definitely don't like double negatives. Logically, the cpp should reverse whatever the default configuration is, although I still consider netcdf (and pio, which is related) to be something external that's added to the code, and therefore must be turned on (not off). But I'm open to others' opinions.

The coupled cpp was originally intended to be a general-use flag that all coupled configurations would need to use, e.g. to catch differences in parameterizations needed for coupled vs stand-alone configurations. Those are probably all in namelist by now (but check). Interesting idea to have 'coupled' itself as a namelist variable, which could then switch on other namelist options as needed. Users could override the effect by turning coupled off and setting things however they want. Not sure having coupled in namelist would be all that useful.

@apcraig
Contributor Author

apcraig commented Jul 1, 2020

Several good points. I agree turning things on is easier, but I think we want to avoid having to turn on things that are pretty much always on. For instance, do we want to have to set I8 all the time except in the rare cases where it is unavailable? Much cleaner for the option to be turning it off with NO_I8 when it's not available. I think one could argue similarly for netcdf and mpi. But there is no right answer. We just have to take it one cpp at a time and decide how we want to implement it. I think a big help would be just to document what we have, even if we don't change anything.

If we do move the cpps to namelist, we will need to review all of that carefully. We'll want to make sure there is nothing that requires a cpp (like a use statement from another model). We also may want to shift to namelist mode with a better set of namelist variables. For instance, we might not use "coupled" or "cesmcoupled"; we might want to divide them into different namelists like "compute_forcing" or "nemo_halo_io" or "write_coszen_on_restart" and similar.

Any non backward compatible changes are going to affect the community. In some ways, I hate to do too much. But there is a bunch of stuff that would be cleaner if it were implemented as namelist instead of cpps. The CESMCOUPLED cpp, for instance, is doing a bunch of stuff that cannot be broken apart at this point. Another model might want some of the CESMCOUPLED mods but not all. That creates its own mess moving forward.

I think the idea of having one namelist set others is interesting, but could get tricky with respect to precedence, and I'm not sure we need it. The fact that a bunch of stuff is set by one flag now doesn't mean it has to stay that way. I think the difficult part will be making the change from cpp to namelist, not setting up the default namelist on a model-by-model basis. We know how to do the latter well. The transition, though, will introduce some risk as models break or aren't set up as expected.

The default may be that we just add some "synonyms" for current cpps (like COUPLED for coupled, NO_I8 for noI8, NETCDF for ncdf) with all synonyms valid. That would be backwards compatible. The next step would be to clean up some relatively lower-risk things like gather_scatter_barrier, ORCA_GRID, and RASM_MODS by moving them to namelist. Moving coupled or CESMCOUPLED to namelist might be something we decide not to do.

@daveh150
Contributor

daveh150 commented Jul 2, 2020 via email

@apcraig
Contributor Author

apcraig commented Jul 2, 2020

@daveh150, it's an interesting question. I suspect we won't be able to see the performance difference. If we had an iterative solver that was taking 80% of the model time, and we were considering adding if statements to it, then that might be an issue. In this case, I don't think we'll notice. We can try to do the performance testing though as we move forward. We had similar concerns when we moved from static to dynamic arrays, and in the end, we couldn't see a performance difference. I think the compilers are pretty good these days.

@dabail10
Contributor

dabail10 commented Jul 9, 2020

I am attempting to remove some of the CESMCOUPLED ifdefs and move these options to the namelist. These are mostly related to the NUOPC cap that will be shared by NOAA/EMC and NCAR.

@apcraig
Contributor Author

apcraig commented Jul 14, 2020

@eclare108213, I am working thru some code changes and testing the CPPs. Some of the CPPs are not working properly, even "ncdf". There is some netcdf code that is not protected by the cpp. I just compiled a standard gx3 CICE case without the ncdf flag, and it built and ran to completion, which is problematic on many levels since it's supposed to read in a netcdf file as the ice_ic. It looks like it sort of opens and reads the file (since netcdf is available on the system and some of the netcdf code is not surrounded by the cpp), but then the initial state is not set up properly: it starts with zero sea ice and then slowly builds some ice until the run ends. I don't think this is what we want. I assume you agree we should fix this and similar issues with other cpps? That would probably mean adding more ifdef ncdf as well as some new checks that would cause the model to abort if ncdf is not set and it's trying to read a netcdf file. This should not be hard. I assume we want to try to protect the user as much as possible, including from things like using "io_netcdf" without the ncdf flag set, right?
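
A rough sketch of the kind of protection that could fix this, in the style of the existing ice_open_nc routine in ice_read_write.F90 (USE_NETCDF stands in for whatever the cpp ends up being named; abort_ice is CICE's abort routine):

    subroutine ice_open_nc(filename, fid)
       use ice_kinds_mod, only: int_kind
       use ice_exit, only: abort_ice
    #ifdef USE_NETCDF
       use netcdf
    #endif
       character (len=*), intent(in) :: filename    ! netcdf file to open
       integer (kind=int_kind), intent(out) :: fid  ! netcdf file id
    #ifdef USE_NETCDF
       integer (kind=int_kind) :: status
       status = nf90_open(trim(filename), NF90_NOWRITE, fid)
       if (status /= nf90_noerr) call abort_ice( &
          'ice_open_nc: cannot open '//trim(filename))
    #else
       ! abort instead of silently doing nothing when netcdf is absent
       call abort_ice('ice_open_nc: USE_NETCDF cpp not defined, '// &
          'cannot read netcdf file '//trim(filename))
    #endif
    end subroutine ice_open_nc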

@eclare108213
Contributor

Yes, this kind of thing should certainly be fixed.

@apcraig
Contributor Author

apcraig commented Jul 14, 2020

Sounds good, I will try to work thru several of the most relevant cpps and make sure they are working as we expect. I will try to add or update some tests where it makes sense.

@apcraig
Contributor Author

apcraig commented Jul 15, 2020

> With respect to CICE_IN_NEMO and ORCA_GRID: these are related to Nemo (the ocean model we use for our coupled system). So they are definitely needed. We will be incorporating CICE6 in our coupled model during the following months, so we might make more use of them. I'm definitely open to renaming them if it improves code consistency.

@phil-blain would you be OK if I made both ORCA_GRID and CICE_IN_NEMO namelist arguments instead of cpps? I'm thinking of adding a couple of logical namelist variables, orca_halo_grid and cice_nemo_coupled; are those OK? Also, CICE_IN_NEMO appears in several drivers, and I'd like to clean that up. @phil-blain @eclare108213, do we have any idea which drivers are being used with NEMO, or maybe none of them are right now? Is the CICE_IN_NEMO implementation usually via the standalone driver as a multiple-executable coupled system? I would propose we get rid of it in mct/cesm1, nuopc/cmeps, and nuopc/dmi and keep it in direct/hadgem3 and standalone/cice. Thoughts?

@phil-blain
Member

Hi Tony,

If they can be converted to namelist arguments, I 100% agree this is a good way forward. Regarding naming, I don't have a strong opinion.

Historically we were using (and are still using for CICE4/NEMO3.6 operational systems) the hadgem driver. It's not a multiple executable system. NEMO code calls subroutines from CICE and a single executable is produced by the NEMO build system.

I think maybe DMI are using these CPPs though... @TillRasmussen?

@JFLemieux73 @dupontf other thoughts about these CPPs? (Should François be involved here?)

@TillRasmussen
Contributor

As a general note, I would prefer to limit the number of CPP flags, and if there is a good way of doing this, I'm for it.

Some of the NEMO_CICE flags can be eliminated, and others have limited influence on production as they are only used when doing a cold start.

It is correct that we at DMI have updated the DMI driver to also include NEMO. I think that it would be good to include a namelist flag for this as well. DMI options are mostly present within the driver; however, this could also be eliminated.

Either one option with integers (0=standalone, 1=cpl_NEMO, 2=cpl_HYCOM, 3=cpl_CESM, ...) or something like that, or alternatively two logicals, cpl_ocn and cpl_atm (see the sketch below).
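
For concreteness, the two alternatives might look like this in a namelist declaration (hypothetical names; kinds from ice_kinds_mod):

    ! option 1: a single integer selector
    integer (kind=int_kind) :: coupled_system  ! 0=standalone, 1=NEMO, 2=HYCOM, 3=CESM, ...

    ! option 2: separate logicals per coupled component
    logical (kind=log_kind) :: cpl_ocn         ! coupled to an ocean model
    logical (kind=log_kind) :: cpl_atm         ! coupled to an atmosphere model

    namelist /setup_nml/ coupled_system, cpl_ocn, cpl_atm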

@phil-blain
Member

Comments from @dupontf:

For the CPPs, I think also that some could be namelist variables. CICE_IN_NEMO could be removed if we write our own driver.

ORCA_GRID must have been added (by the MetOffice?) to have CICE outputs on the same grid as NEMO. It is very "cosmetic" and not very justified at the CICE level. We don't use this CPP.

@TillRasmussen
Contributor

The following flags can be replaced by an optional MPI communicator input, as is done in the nuopc/dmi driver:

  • key_oasis3
  • key_oasis3mct
  • key_oasis4
  • key_iomput

This requires a very small change to the call to CICE_Initialize from NEMO.
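
A sketch of that interface (assuming, as in the current standalone driver, that init_communicate can take an optional communicator):

    subroutine CICE_Initialize(mpi_comm)
       use ice_kinds_mod, only: int_kind
       ! optional communicator handed in by the calling model (NEMO, OASIS, ...)
       integer (kind=int_kind), optional, intent(in) :: mpi_comm

       if (present(mpi_comm)) then
          call init_communicate(mpi_comm)   ! use the caller's communicator
       else
          call init_communicate()           ! default to MPI_COMM_WORLD internally
       endif
       ! ... remainder of initialization ...
    end subroutine CICE_Initialize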

@apcraig
Contributor Author

apcraig commented Jul 17, 2020

@TillRasmussen, I very much like that idea. I am working on a PR now to update some of the CPPs. I propose that a separate PR be implemented to remove the key_ cpps and to update the nuopc/dmi implementation. That would allow testing to be carried out with the nuopc/dmi configuration, which I can't do myself.

@apcraig
Contributor Author

apcraig commented Jul 17, 2020

#490 is a set of proposed changes to update the CPPs and document them. I think the one I don't know what to do with is "coupled". I don't believe CESM, E3SM, or RASM are using that flag. Do we know if anyone is using that flag? Should we consider removing it?

@dabail10
Contributor

We are still using the "coupled" CPP. The main differences are the treatment of the sea surface tilt terms in the dynamics and whether the forcing comes from data files or a coupler. There are some differences in the serial interface as well. The coupled models use MPIserial.

@apcraig
Contributor Author

apcraig commented Jul 17, 2020

OK, I couldn't find the flag in a quick look at the CESM scripts. Are you thinking of "coupled" or "CESMCOUPLED"? I'm asking about "coupled". It seems like we could convert the coupled flag to a few different namelist variables if we really do want to keep those features. Is that how we should proceed?

@eclare108213
Contributor

@dabail10 thanks for the reminder of what the coupled cpp does. @apcraig there are a few things in the code that are needed for stand-alone runs but should never be used in coupled mode, because the coupling physics is different or otherwise better. The sea surface tilt terms are an example of that (it has to do with the Ekman spiral and the depth of the currents coming in from the ocean model, if I remember correctly). So maybe we should keep the coupled functionality around for this type of thing. We can look at whether it should be split out into multiple namelist variables, but my inclination is to not go overboard in that direction, if it's not needed (i.e. they'd all be set the same all the time, not mixed).

@dabail10
Contributor

We use both flags. The "coupled" one is supposed to be more general to other coupled models. However, if other coupled models are not using it, then I would be fine with creating some namelist variables or using a different CPP flag. For the serial interface, I believe this has to be an ifdef, because the standalone serial interface is not compiled with MPI.

For ice_dyn_shared.F90, we could introduce a namelist option like use_tilt, to either use the sea surface tilt terms from the coupler or compute them from the geostrophic currents (see the sketch below).
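
Schematically, based on the existing coupled/uncoupled branches in ice_dyn_shared.F90 (use_tilt is a hypothetical namelist name, and the array indexing is omitted):

    if (use_tilt) then
       ! coupled: use the sea surface slope provided by the ocean model
       strtltx = -gravit*umass*ss_tltx
       strtlty = -gravit*umass*ss_tlty
    else
       ! standalone: approximate the tilt from geostrophic ocean currents
       strtltx = -fm*vocn
       strtlty =  fm*uocn
    endif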

In ice_init.F90, the coupled CPP is used to check whether oceanmixed_ice (the mixed-layer model) is turned on. This is still a good check to have. I'm not sure that a namelist parameter is the right way to go here, though.

In the serial interfaces, maybe we already have an MPI CPP?

I think all of the coupled ifdefs and the appropriate code could be removed from the drivers/caps. This is where the data forcing is read (or not). In the coupled drivers, this can simply be removed.

Dave

@apcraig
Contributor Author

apcraig commented Jul 17, 2020

Following up with some discussion in #490 here. With regard to the coupled mods in the serial comm code, it would be good to clarify if it is ever used. If the "serial MPI" is a separate library that serves as an MPI substitute for models that use MPI but need to run serially without MPI in some cases, then CICE does not benefit at all from having the serial MPI support built in. CICE provides serial and MPI versions of the comm code already. I'm also not sure why any mods would exist in the serial code to enable some fake version of MPI there. Maybe there is something I don't understand.

Just to clarify the use of coupled in the drivers. I think what you're proposing is that in the drivers, the "coupled" flag be removed but the code set to the coupled version. In the standalone CICE, we would also remove the "coupled" flag but set it to the non-"coupled" code. Does that make sense?

@dabail10
Contributor

In terms of the "coupled" flag, I definitely think all of the drivers could be updated to remove the ifdef or ifndef as appropriate. So for the standalone driver, you just keep the code in the ifndef coupled. For the coupled drivers, do the reverse. I think these date back to a time when we all used the same CICE_RunMod.F90, etc.

@TillRasmussen
Contributor

A few comments on the coupled flag:
I think that the sea surface tilt should be a namelist parameter, as Dave suggests. It is similar to the oceanmixed_ice parameter.
In the same routine, is there a reason why we keep cosw and sinw, given that they are parameters set to c1/c0?

communicate: I don't think that the serial communicator should be allowed to use the mpi interface, so I think it should be deleted from here. When we played around with the evp scheme, we tested first using the serial communicator, then the mpi communicator with 1 processor (this should be similar to adding the cpp flag coupled), and finally mpi with multiple processors. These should all yield the same results. Therefore, from my point of view, we don't need the serial "communicator" to include mpi at any time.

@TillRasmussen
Contributor

We might consider removing many of the ncdf/USE_NETCDF cpp flags, as they are not needed. Most routines call ice_open, ice_read, ... and warn/abort if netcdf is not included. This can be fixed by adding an ncdf cpp flag in ice_open that aborts if the model is not compiled with netcdf.
I will look at this when I am near a reasonable internet connection again.

@apcraig
Contributor Author

apcraig commented Jul 22, 2020

> We might consider removing many of the ncdf/USE_NETCDF cpp flags, as they are not needed. Most routines call ice_open, ice_read, ... and warn/abort if netcdf is not included. This can be fixed by adding an ncdf cpp flag in ice_open that aborts if the model is not compiled with netcdf.
> I will look at this when I am near a reasonable internet connection again.

@TillRasmussen, #490 does this to a large degree. I tried to place the USE_NETCDF flags around the locations of the netcdf implementation and added several aborts there if USE_NETCDF is not set. It's possible this could be further improved, but I think we're on the same page.

@apcraig
Contributor Author

apcraig commented Jul 31, 2020

I am going to close this. I believe #490, #496, and CICE-Consortium/Icepack#329 address this issue to a large degree. We have cleaned up a number of CPPs and documented them.

I will open a new issue for the key_ CPPs that might get cleaned up as part of some DMI work. There is also the CESMCOUPLED CPP that is evolving organically.
