
Update to E3SM master #2

Merged
merged 191 commits on Dec 19, 2022

Conversation

eclare108213
Owner

Updates the icepack integration branch in my fork with changes from E3SM master. Fixed one conflict in mpas_seaice_velocity_solver.F. BFB in 6-day D smoke test.

trhille and others added 30 commits February 9, 2022 09:11
Get mostRecentAccessTime stream attribute from framework.
Uses the retreat parameterization from Slater et al. (2019) that
sets retreat length as a function of subglacial discharge and ocean
thermal forcing. Compiles but does not yet run.
We need to activate the ismip6GroundedFaceMelt package when
config_calving = 'ismip6_retreat' in order to use
ismip6_2dThermalForcing and ismip6Runoff in the ISMIP6
retreat parameterization.
Add a functional ismip6 retreat parameterization, active when
config_calving = 'ismip6_retreat'. This uses ismip6Runoff and
ismip6_2dThermalForcing fields from the two most recent time-levels
to calculate a retreat rate based on the parameterization of Slater
et al. (2019) [https://doi.org/10.5194/tc-13-2489-2019]. Currently,
ismip6Runoff and ismip6_2dThermalForcing must have the same xtime
values and be read in a stream called 'ismip-gis'.
Remove check for ringing alarm in mpas_li_calving.F when looking for
ismip6-gis stream. This was not necessary and made for more confusing
code structure.
Add a check that ismip6_retreat is not using two forcings with the same
timestamp. This could result either from an error in creating the input
files or from faulty logic in the routine itself.
Only read ismip6-gis stream when mostRecentAccessTime advances, i.e.,
when clock passes a new input time. Still force a read of two
time-levels on init. This still uses saveActualWhen = .true. to get
forcing times, which will be removed in the near future.
Remove unnecessary reset of mostRecentAccessTime after forcing read of
first two input times from forcing files.
Do not force redundant read of ismip6-gis to get forcings and
mostRecentAccessTime at init time.
This code currently assumes units from external ISMIP6 forcings.
These conversions may need to change when we use fields passed from
the E3SM coupler.
Fix variable descriptions and amend a log message.

Co-authored-by: Matt Hoffman <[email protected]>
Check that ismip6-gis stream is found, config_ismip6_retreat_k is
negative, and that config_front_mass_bal_grounded is none, or else throw
an error. Also add error checks after calls to mpas timekeeping
routines, update comments, and expand on routine description.
ismip6_2dThermalForcingPrevious, ismip6_2dThermalForcingCurrent,
ismip6RunoffPrevious, and ismip6RunoffCurrent are not needed in
input streams in Registry.xml. Remove them.
This ensures that careless setup does not lead to clobbering other
input variables like sfcMassBal that might inadvertently be in the same
input file. Also fix typo that looked for 'ismip-gis' instead of
'ismip6-gis' stream.
Add forcingTimeStamp to Registry and add to restart stream to
prevent ismip6_retreat from throwing an error on restart.
Testing indicates that a halo update is needed either on submergedArea
or on calvingVelocity before calling li_apply_front_ablation_velocity.
Those two halo updates give the same result, so calvingVelocity is updated,
because submergedArea is a local variable. Alternatively, submergedArea
could be added to Registry, but this is simpler. Behavior is still
different across different numbers of processors, even without velocity
solver or advection. This is likely due to an issue within li_apply_front_ablation_velocity,
and will be addressed in another PR.
Use 1950-07-01_00:00:00 as reference time in immutable stream
ismip6-gis. This is the first time in the ISMIP6 forcings files.
Also clean up several calls to mpas_log_write.
Use timestepNumber to prevent resetting forcingTime with
forcingTimeStamp every timestep on a restart. Also use (stream_cursor %
valid) to throw an error if immutable stream 'ismip6-gis' does not
exist in streams.landice.
Merge branch 'trhille/add_ismip6_retreat_parameterization' into develop
At this point this points to a private repo and will need to be moved to
a public repo before merging.
This commit modifies the Makefile to support compiling with or without
the sea level model.  To include it, add "SLM=true" on the make call.
If that is included, the build system expects the SLM submodule to have been
initialized/updated already.  It also activates a cpp directive that
includes calls to the sea level code.
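The conditional-compilation pattern described above can be sketched with a toy Makefile. This is not the actual MALI Makefile: only the "SLM=true" flag name comes from the commit message; the define name USE_SEALEVELMODEL and the file names here are illustrative assumptions.

```shell
# Hypothetical sketch of a make-time SLM switch; only the SLM=true flag
# name comes from the commit message above, the rest is illustrative.
cat > Makefile.demo <<'EOF'
ifeq ($(SLM),true)
  CPPFLAGS += -DUSE_SEALEVELMODEL
endif
$(info CPPFLAGS=$(CPPFLAGS))
all: ;
EOF
make -sf Makefile.demo            # cpp define absent
make -sf Makefile.demo SLM=true   # cpp define present
```

In the real build, the cpp directive would then gate the calls into the sea level code, and the SLM submodule must be initialized/updated before compiling with the flag.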
This was an oversight in the previous Makefile commit
	modified:   src/Registry.xml
                    added config_uplift_method = 'sealevelmodel'
                    added config_slm_coupling_interval

	modified:   src/mode_forward/mpas_li_bedtopo.F
                    added a check and reset alarm for coupling time interval

	modified:   src/mode_forward/mpas_li_core.F
                    added setup of an MPAS alarm for the coupling interval
Since the MALI timestep is finer than the SLM timestep, slmTimeStep is
incremented by 1 each time the MPAS alarm for slmcouplinginterval is ringing.
jonbob and others added 10 commits December 14, 2022 13:31
…t#5356)

Move bottomDepthEdge calculation to single loop over all edges

After E3SM-Project#5195 was merged, the MPAS-Ocean standalone test
ocean/baroclinic_channel/10km/decomp_test failed to match between 4 and
8 partitions, but only for intel optimized. All compass nightly suite
tests passed for gnu debug, gnu optimized, intel debug.

This PR solves the problem by merging the computation of bottomDepthEdge
into a single edge loop. Previously it was split into two loops,
1:nEdgesOwned (with many other calculations) and another from
nEdgesOwned+1:nEdgesArray(4). The intel optimized compiler must have
changed order-of-operations in these two loops for different partitions.

Fixes E3SM-Project#5219
[BFB]
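The order-of-operations explanation above hinges on floating-point addition not being associative: a compiler that regroups a split loop can change the last bits of a sum. A quick sketch (plain awk, unrelated to the MPAS code itself) shows the same three terms giving different machine results when regrouped:

```shell
# Floating-point addition is not associative; regrouping the same terms
# changes the result at machine precision. Illustrative only.
a=$(awk 'BEGIN { printf "%.17g", (0.1 + 0.2) + 0.3 }')
b=$(awk 'BEGIN { printf "%.17g", 0.1 + (0.2 + 0.3) }')
echo "left-to-right grouping: $a"
echo "regrouped:              $b"
[ "$a" != "$b" ] && echo "the two groupings differ at machine precision"
```

Merging the two edge loops into one fixes the partition dependence by pinning a single evaluation order for every edge.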
Add DMPAS-JRA1p5 compset

Adds DMPAS-JRA1p5 compset: D-case with active sea ice only. Also adds PE
layouts for any D-case on Chrysalis and Anvil, S layouts on 20 nodes,
with ~47 and 23 SYPD, respectively.

[BFB]
Fixes the use of uninitialized variables (`Qsur` and `Qsub`) in the computation of the change
of energy due to heat exchange with the environment and advection. Currently, those changes
in energy are assumed to be zero.

Fixes E3SM-Project#5332
[BFB]
Increase ELM PEs in BGC runs on Anvil to avoid OOM issues.

[BFB]
…SM-Project#5345)

Move the fates-specific `satellitephenology` call out of an area that is unreachable when `fates` is active.

Fixes NGEET/fates#953.

[Non-B4B]
…roject#5290)

Refactor of MPAS-Seaice velocity solver

Refactor of MPAS sea ice velocity solver to allow easier testing of new
options and methods, including:
* Move domain reference to within subroutines avoiding messy passing of
  variables
* Move wachspress basis routines to separate module
* Move triangular quadrature rules to new module
* Move some mesh related routines to the mesh module
* Rename block variable to blockPtr
* Rename all pool types to end in Pool

Fixes E3SM-Project#5289

[BFB]
The alias for the 20TR BGC compset is corrected.

[BFB]
@eclare108213 eclare108213 self-assigned this Dec 19, 2022
@eclare108213
Owner Author

@darincomeau @apcraig
Darin and I tried to rebase but that was bringing in commits that should already be in my fork. Here I've done a pull from e3sm and push to the icepack integration branch, which does not show those commits. This appears to be cleaner and less confusing (to me at least) than the rebase. Let me know if you think this is right. I don't want to set off on the wrong foot...

@darincomeau
Collaborator

darincomeau commented Dec 19, 2022

Tagging @jonbob also since he'll be the one integrating this into e3sm/master.

Darin and I tried to rebase but that was bringing in commits that should already be in my fork.

Just to clarify, the rebase replays the commits in your branch on top of e3sm/master, so the commits aren't duplicated, but rather the originals are 'lost' as the history is rewritten.

The code here should be the same as what we did in the rebase, it's just the history that looks very different. Here all of the new e3sm commits are at the top of the history, and this branch's commits are much further down. I don't think it matters too much now as this is a working branch, and we still have the PR records of everything that's going in. When we actually open a PR into e3sm/master, we'll need to do a rebase at that point, so there's a clean history of all the commits we're actually bringing in at the top. That step may involve re-resolving conflicts (I'm not totally sure; actually, I'm pretty sure).

Having said all that, keeping this branch up to date with e3sm/master this way (merge rather than rebase) is good with me.
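The distinction can be seen in a throwaway repo (all names here are illustrative, not this project's branches): a merge keeps the branch's original commits and hashes, while a rebase replays them as new commits with new hashes.

```shell
# Toy repo: merge preserves the original commit hash, rebase rewrites it.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo && git checkout -q -b main
git config user.email "[email protected]" && git config user.name "Demo"
echo base > base.txt && git add base.txt && git commit -qm "base"
git checkout -q -b feature
echo work > work.txt && git add work.txt && git commit -qm "feature work"
git checkout -q main
echo upstream > upstream.txt && git add upstream.txt && git commit -qm "upstream work"
git checkout -q feature
orig=$(git rev-parse HEAD)
git merge -q -m "merge main into feature" main
git merge-base --is-ancestor "$orig" HEAD && echo "merge: original commit survives"
git reset -q --hard "$orig"        # undo the merge
git rebase -q main
[ "$(git rev-parse HEAD)" != "$orig" ] && echo "rebase: commit replayed with a new hash"
```

This is also why rebasing an already-pushed branch needs a force push: the remote still holds the old hashes.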

@eclare108213
Owner Author

After this is merged, I'll update the icepack pointer to the latest version on the E3SM-Project/Icepack cice-consortium/e3sm-icepack-initial-integration branch. @apcraig can I help with the rsnow fix? We're manually commenting out those lines at the moment, would be good to get that into my fork soon.

@eclare108213 eclare108213 requested a review from jonbob December 19, 2022 17:04
@eclare108213
Owner Author

I still have the other branch on which we did the rebase, and we can use that one instead if @jonbob prefers. Fixing a bunch of conflicts at the end of the process would be rather aggravating.

@darincomeau
Collaborator

darincomeau commented Dec 19, 2022

Fixing a bunch of conflicts at the end of the process would be rather aggravating.

I think that this is going to be unavoidable, unfortunately. Even if we rebase now, rebasing later will still involve re-resolving conflicts, as it replays all of the commits. At least that's what I think will happen.
Also, rebasing would require loosening branch protections for the force push, which I completely understand wanting to keep in place.

@jonbob
Collaborator

jonbob commented Dec 19, 2022

@eclare108213 - how many conflicts did you run into? There wasn't much done in the seaice code recently, so it should have been minimal

@darincomeau
Collaborator

@jonbob the conflict arose from the renaming of block to blockPtr from this: E3SM-Project#5290

@apcraig
Collaborator

apcraig commented Dec 19, 2022

I agree with @darincomeau. If you rebase later, the rebase is going to request that the conflicts be resolved again, one commit at a time. The other option is to keep this merge branch going forward and, when you PR to E3SM main, squash merge it. Would that preserve the main history and just add the new changes? Is squash merging allowed/encouraged in E3SM?
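As for the mechanics of the question: a squash merge collapses the whole branch into one new commit on the target, keeping its history linear. A toy sketch (names illustrative, not this project's branches):

```shell
# Toy repo: --squash lands N branch commits as a single commit on the target.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo && git checkout -q -b master
git config user.email "[email protected]" && git config user.name "Demo"
echo base > base.txt && git add base.txt && git commit -qm "base"
git checkout -q -b integration
for i in 1 2 3; do
  echo "$i" > "change$i.txt" && git add "change$i.txt" && git commit -qm "change $i"
done
git checkout -q master
git merge --squash -q integration   # stages the combined diff, commits nothing
git commit -qm "squash: bring in integration branch"
echo "commits on master: $(git rev-list --count HEAD)"   # base + one squash commit
```

Note that the squash commit has no merge parent, so git cannot later tell that the branch was already integrated; whether that trade-off is acceptable is a project-policy question.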

@eclare108213
Owner Author

There was only one small conflict this time, but I imagine there will be others as we bring in other Icepack modules. Adrian is actively changing the dynamics code, which shouldn't conflict much with our changes in Icepack. My preference is pull/push rather than rebase, all things considered, unless this isn't okay with E3SM more generally.

@jonbob
Collaborator

jonbob commented Dec 19, 2022

@eclare108213 - there are no other seaice PRs on the horizon, as far as I know. I'm happy to take care of conflicts when I merge this to master, if you would rather not rebase

@eclare108213
Owner Author

Thanks @jonbob. Then I'll continue down this path, since it's familiar to me and doesn't require --force.

@eclare108213 eclare108213 merged commit 33d6cb0 into eclare108213/seaice/icepack-integration Dec 19, 2022
@njeffery njeffery mentioned this pull request Sep 30, 2023
@eclare108213 eclare108213 deleted the update_E3SM_2 branch November 8, 2023 16:14