
Implementation of CCPP timestep_init and timestep_final phases #534

Merged
merged 11 commits into from
Jan 8, 2021

Conversation

climbfuji
Collaborator

@climbfuji climbfuji commented Dec 18, 2020

This PR contains the following changes:

  • move time vary physics from run to timestep_init
  • remove dependency of time vary physics on GFS DDTs
  • clean up o3 and h2o physics (use existing arrays instead of duplicating them and wasting memory)

Associated PRs:

ufs-community/ufs-weather-model#337
NOAA-EMC/fv3atm#217
#534
NCAR/ccpp-framework#344
NOAA-EMC/GFDL_atmos_cubed_sphere#47

For regression testing information, see ufs-community/ufs-weather-model#337.

@climbfuji climbfuji marked this pull request as ready for review December 28, 2020 17:33
@climbfuji
Collaborator Author

I am opening this PR for review. Although unlikely, I may have to pull in updates from NCAR master if anything gets committed before it is this PR's turn. The code changes themselves, however, should be final.

Collaborator

@tanyasmirnova tanyasmirnova left a comment


Of all the subroutines affected by this PR, I am familiar with gcycle.F90. The changes look good to me.

@SMoorthi-emc
Collaborator

Dom,
I am not sure what changed, and I don't see the updated code.
Besides, I am on leave until the beginning of next year, so maybe I should not be reviewing this PR.
Moorthi

@climbfuji
Collaborator Author

> Dom,
> I am not sure what changed, and I don't see the updated code.
> Besides, I am on leave until the beginning of next year, so maybe I should not be reviewing this PR.
> Moorthi

Absolutely not, I guess you shouldn't even be checking your emails ;-) No hurry at all; this is not yet a PR (just an issue noting the problem and suggesting a solution).

Collaborator

@SMoorthi-emc SMoorthi-emc left a comment


I have looked at the code, but cannot verify if everything is correct.
Nevertheless, I have to trust the CCPP team.

@grantfirl
Collaborator

@climbfuji Maybe I should know this, but for purposes of reviewing, is there any documentation that explains the new timestep_init and timestep_final CCPP phases? What is possible to do within them, what data should they have access to, etc.?

@climbfuji
Collaborator Author

> @climbfuji Maybe I should know this, but for purposes of reviewing, is there any documentation that explains the new timestep_init and timestep_final CCPP phases? What is possible to do within them, what data should they have access to, etc.?

I was looking at the ccpp-framework issues and meeting minutes; there is no good description of these phases. Maybe @gold2718 wrote a proposal back when we first discussed it. In our meeting minutes, it first appears, with limited information, on 06/06/2019, then again on 06/04/2020:

https://github.com/NCAR/ccpp-framework/wiki/CCPP-Framework-Meeting-Minutes-2019-06-06
https://github.com/NCAR/ccpp-framework/wiki/CCPP-Framework-Meeting-Minutes-2020-06-04

In short: timestep_init and timestep_final capture processes that set up or finalize the physics for the current timestep. This is exactly what is in the current time_vary physics group. As with the init and finalize phases, the rule is that no external loops over blocked data structures are allowed, i.e. all columns of an MPI rank must be passed at once. Threading is allowed inside the timestep_init/timestep_final physics, though.

The restriction exists for the same reason as for the init and final phases, namely that such steps likely involve reading data (updated tables, etc.) from disk, or MPI synchronization.
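To make the rules above concrete, here is a minimal sketch of what a timestep_init phase entry point looks like. The scheme name and arguments are hypothetical (real schemes declare their interfaces in an accompanying .meta file); the point is that all columns of the MPI rank arrive at once, and any threading happens inside the scheme:

```fortran
!> Hypothetical sketch of a timestep_init phase entry point.
!! Scheme and argument names are illustrative, not actual ccpp-physics code.
subroutine example_scheme_timestep_init(ncol, nlev, kdt, clim_field, &
                                        errmsg, errflg)
   use machine, only: kind_phys
   implicit none
   integer,          intent(in)    :: ncol, nlev, kdt
   real(kind_phys),  intent(inout) :: clim_field(:,:)
   character(len=*), intent(out)   :: errmsg
   integer,          intent(out)   :: errflg

   integer :: i, k

   ! Initialize CCPP error handling variables
   errmsg = ''
   errflg = 0

   ! All columns of the MPI rank are passed in at once; there is no
   ! outer loop over blocks in the host. Threading inside the scheme
   ! is allowed, e.g.:
!$OMP parallel do private(i,k)
   do k = 1, nlev
      do i = 1, ncol
         ! e.g. time-interpolate climatology data valid for timestep kdt
      end do
   end do
!$OMP end parallel do
end subroutine example_scheme_timestep_init
```

The run phase, by contrast, is called once per block and may be called from within a threaded region in the host.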

@climbfuji
Collaborator Author

@SMoorthi-emc @grantfirl Thanks for reviewing. Note: so far, regression tests have passed against the existing baselines on the following platforms, which means we can be fairly confident that the changes were made correctly. jet.intel is about to finish, as is cheyenne.intel. Denise is running the wcoss tests.

	modified:   tests/RegressionTests_cheyenne.gnu.log
	modified:   tests/RegressionTests_gaea.intel.log
	modified:   tests/RegressionTests_hera.gnu.log
	modified:   tests/RegressionTests_hera.intel.log
	modified:   tests/RegressionTests_orion.intel.log

@climbfuji
Collaborator Author


Update: all regression tests passed on all tier-1 platforms. That's a good indication that we got the code changes right.

Collaborator

@grantfirl grantfirl left a comment


Approved

@climbfuji climbfuji merged commit 9ae4cea into NCAR:master Jan 8, 2021
if (kdt == 1) then
t_2delt = t
t_1delt = t
qv_2delt = qv
Collaborator


This looks nice and clean. Since the RTs passed, I guess there was no need for the max function? Or, is it because there are no RTs using ZC MP anymore?

Collaborator Author


No, we are running plenty of Zhao-Carr MP tests, including restart tests, and they all passed. That means the ICs already have a lower limit larger than that particular qmin (1.0E-10).
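For context, the limiter under discussion is a lower-bound clip on water vapor; schematically it amounts to something like the following (a sketch of the pattern, with illustrative names, not the removed code verbatim):

```fortran
! Sketch of the kind of limiter under discussion (names illustrative):
real(kind=kind_phys), parameter :: qmin = 1.0E-10_kind_phys
! ...
! instead of copying qv directly, enforce a lower bound:
qv_2delt = max(qv, qmin)
```

The tests pass without it only because the incoming fields never fall below qmin in the first place.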

Collaborator


Will that always be the case that ICs have already enforced a minimum value prior to this point in the physics? In other words, was the reason for removing it because it truly isn't necessary to enforce a minimum at this point? For a PR that is not supposed to change the answer, did we just get lucky?

Collaborator Author


I think you are right. Let's fix this with your update of the corresponding SCM code. Thanks for finding this.

! Initialize CCPP error handling variables
errmsg = ''
errflg = 0

if (is_initialized) return

nblks = size(Model%blksz)
Collaborator


I can't find an obvious place where this non-uniform block checking stuff would have gone. Is it magically handled in the CCPP framework now?

Collaborator Author


No, this was a consistency check added a while back when the CCPP static mode was developed. It is no longer needed, because the logic in fv3atm takes care of it for the remainder of ccpp_prebuild.py's lifetime.
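For reference, the removed check verified that the host's blocks were uniform in size; schematically it amounted to something like this (a hypothetical reconstruction using the names from the surrounding code, not the exact removed lines):

```fortran
! Hypothetical sketch of the former non-uniform block-size check:
nblks = size(Model%blksz)
if (nblks > 1) then
   ! all blocks except possibly the last must have the same size
   if (minval(Model%blksz(1:nblks-1)) /= maxval(Model%blksz(1:nblks-1))) then
      errflg = 1
      errmsg = 'Non-uniform block sizes are not supported'
      return
   end if
end if
```

With the check handled on the fv3atm side, the physics no longer needs to duplicate it.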

if (nscyc > 0) then
if (mod(kdt,nscyc) == 1) THEN
call gcycle (me, nthrds, nx, ny, isc, jsc, nsst, tile_num, nlunit, &
input_nml_file, lsoil, lsoil_lsm, kice, idate, ialb, isot, ivegsrc, &
Collaborator


This had to be a ton of fun passing all of these in!

@@ -523,12 +522,14 @@ subroutine radinit( si, NLAY, imp_physics, me )
!> -# Set up control variables and external module variables in
!! module physparam
#if 0
! DH* WHAT IS THIS?
Collaborator


According to GitHub's blame function, this comment has existed since the original creation of the file, which was probably just copied from GFS_radiation_driver.F90 at the time. Maybe the comment is from Rusty Benson, from when he created the IPD?

Collaborator

@bensonr bensonr Jan 8, 2021


It was most likely something carried over from the radiation driver prior to the GFS physics cleanup that culminated in the IPD. I didn't knowingly change any science during the IPD creation.

@@ -16,15 +16,16 @@ end subroutine GFS_suite_interstitial_rad_reset_finalize
!!
subroutine GFS_suite_interstitial_rad_reset_run (Interstitial, Model, errmsg, errflg)

use GFS_typedefs, only: GFS_control_type,GFS_interstitial_type
use machine, only: kind_phys
Collaborator


Why is kind_phys needed in this subroutine now?

Collaborator Author


This is an oversight; it's not needed.

@grantfirl
Collaborator

I finally made my way through this -- really good work. I think that I understand it well enough to transfer the changes to SCM variants.


!--- determine if diagnostics buckets need to be cleared
sec_zero = nint(Model%fhzero*con_hr)
if (sec_zero >= nint(max(Model%fhswr,Model%fhlwr))) then
Collaborator


It's funny that in the latest SCM PR where I added support for diagnostics intervals other than every timestep, I removed the clear calls from the host in favor of what was in the physics_time_vary run. Now, it will need to be added back. Not a big deal, just funny timing.

@climbfuji climbfuji deleted the timestep_init_final branch June 27, 2022 03:26
HelinWei-NOAA pushed a commit to HelinWei-NOAA/ccpp-physics that referenced this pull request Feb 26, 2023
This adds support for 32-bit physics to the FV3, based on prior work on the Neptune model.

Co-authored-by: Dusan Jovic <[email protected]>