
shorten cohort memory arrays to nlevleafmem #769

Closed · rgknox wants to merge 15 commits

Conversation
Conversation

rgknox (Contributor) commented on Aug 9, 2021

Description:

There are two arrays used to track the carbon budget of leaf layers in the plant: ts_net_uptake and year_net_uptake. A few notes about these arrays:

  1. Both arrays are attached to the cohort structure.
  2. Both arrays were dimensioned by nlevleaf, the maximum possible number of leaf layers. Because our leaf-layering scheme is effectively unbounded, that maximum had to be large (i.e. 30); see the declaration sketch after this list.
  3. To track and fill these arrays, we were performing loops of ~30 iterations inside various cohort loops, which is expensive.
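
For illustration, a minimal Fortran sketch of the size change, assuming a derived-type layout; the module name, type name cohort_sketch, and kind parameter r8 are placeholders, while ts_net_uptake, year_net_uptake, nlevleaf, and nlevleafmem are the names discussed in this PR:

```fortran
module cohort_mem_sketch
  implicit none
  integer, parameter :: r8 = selected_real_kind(12)  ! assumed working precision
  integer, parameter :: nlevleaf    = 30              ! upper bound on possible leaf layers
  integer, parameter :: nlevleafmem = 4               ! bottom-of-crown layers actually remembered

  type :: cohort_sketch
     ! before: real(r8) :: ts_net_uptake(nlevleaf), year_net_uptake(nlevleaf)
     real(r8) :: ts_net_uptake(nlevleafmem)    ! net uptake per remembered layer, this timestep
     real(r8) :: year_net_uptake(nlevleafmem)  ! net uptake per remembered layer, accumulated annually
  end type cohort_sketch
end module cohort_mem_sketch
```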

This PR changes the way we track leaf-layer carbon balance by remembering only the 4 lowest layers in the plant's crown instead of all 30. The memory of these layers is used solely for trimming, and trimming only happens at the bottom of the crown, so we need not remember every layer, just the bottom ones. We track 4 layers because that is one more than the number of layers of memory needed to perform the regression in the trimming calculation.
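
A hedged sketch of the bottom-anchored indexing this implies: only layers within nlevleafmem of the plant's lowest layer get a memory slot. The function name memory_slot and the argument nleaf_cur are hypothetical, used only to illustrate the idea, not taken from the PR:

```fortran
! Minimal sketch, assuming a bottom-anchored memory window of size nlevleafmem.
pure function memory_slot(z, nleaf_cur, nlevleafmem) result(imem)
  integer, intent(in) :: z            ! leaf layer index, 1 = top of the crown
  integer, intent(in) :: nleaf_cur    ! number of leaf layers the plant currently has
  integer, intent(in) :: nlevleafmem  ! number of remembered bottom layers (4 in this PR)
  integer :: imem                     ! memory slot (1 = bottom-most layer), or 0 if not remembered
  imem = nleaf_cur - z + 1
  if (imem < 1 .or. imem > nlevleafmem) imem = 0
end function memory_slot
```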

Various changes elsewhere in the code follow from this change in layering, such as how we fuse cohorts whose memory sits in different layers. There is also a distinction between the maximum number of layers a plant could have when on allometry and the number of layers it currently has; the latter may be lower than the maximum if a plant is not replacing leaf tissue lost to turnover.
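
As a purely illustrative sketch of the fusion issue (not the routine changed in this PR): with both windows anchored at each cohort's own bottom layer, one plausible combination is a slot-by-slot average weighted by plant number density. The subroutine name and the weighting choice below are assumptions:

```fortran
! Hedged sketch only: one plausible, number-weighted way to combine two
! bottom-anchored memory windows during cohort fusion.
subroutine fuse_layer_memory(mem_recv, n_recv, mem_donor, n_donor)
  integer, parameter :: r8 = selected_real_kind(12)
  real(r8), intent(inout) :: mem_recv(:)      ! receiver cohort's memory (slot 1 = its bottom layer)
  real(r8), intent(in)    :: mem_donor(:)     ! donor cohort's memory, same bottom-anchored layout
  real(r8), intent(in)    :: n_recv, n_donor  ! plant number densities of the two cohorts
  mem_recv = (mem_recv*n_recv + mem_donor*n_donor) / (n_recv + n_donor)
end subroutine fuse_layer_memory
```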

Fixes: #272
Addresses #644

Collaborators:

@ckoven @glemieux @rosiealice

Expectation of Answer Changes:

Answers should change, but it is unlikely any of our tests will catch the change because trimming only occurs annually.

Checklist:

  • My change requires a change to the documentation.
  • I have updated the in-code documentation .AND. (the technical note .OR. the wiki) accordingly.
  • I have read the CONTRIBUTING document.
  • FATES PASS/FAIL regression tests were run
  • If answers were expected to change, evaluation was performed and provided

Test Results:

TBD

CTSM (or) E3SM (specify which) test hash-tag:

CTSM (or) E3SM (specify which) baseline hash-tag:

FATES baseline hash-tag:

Test Output:

rgknox added the "PR status: Not Ready" label (the author is signaling that this PR is a work in progress and not ready for integration) on Aug 9, 2021
rgknox (Contributor, Author) commented on Aug 12, 2021

To Do:

  1. Evaluate long-term simulations to see how different the results are from base (b4b is not expected)
  2. Benchmark memory and timing changes

rgknox removed the "PR status: Not Ready" label on Dec 12, 2022
rgknox (Contributor, Author) commented on Jan 2, 2023

This branch is now producing nearly indistinguishable results compared to base at a single test site.

[figure: leafmem-v14-base-p1]

Now running long-term gridded smoke tests on Cheyenne.

rgknox (Contributor, Author) commented on Jan 30, 2023

I plan to carve this up into smaller chunks and submit them incrementally, so I am closing this PR (but it will rise again!).

rgknox closed this on Jan 30, 2023
rgknox deleted the trim-mem-updates branch on Oct 31, 2023
Merging this pull request may close the following issue: large share of memory footprint in two variables