SP-mode not working as it should in ELM-FATES #953

Closed
ckoven opened this issue Nov 29, 2022 · 8 comments · Fixed by E3SM-Project/E3SM#5345
Comments

@ckoven
Contributor

ckoven commented Nov 29, 2022

Based on some simulations that @JessicaNeedham has run, we've realized that ELM-FATES isn't running SP-mode properly. What appears to be happening is that LAI is set in the first timestep but then never updates to follow the phenology dataset; it just stays fixed. I just did a run using CTSM-FATES, and SP-mode works as expected there, i.e. LAI follows the satellite dataset. So something in the ELM driver does not seem to be passing the SP-mode LAI data through properly.

@ckoven
Contributor Author

ckoven commented Nov 29, 2022

Just for reference/evidence, attached are Jupyter notebook cells showing timeseries of LAI at a random gridcell from the two runs. The first one, with the flat line, is the ELM-FATES output; the second one, with a periodic curve, is the CTSM-FATES output. Interestingly, the initial values aren't even identical between the two runs either; not sure whether that is relevant.

[Screenshot: LAI timeseries from the ELM-FATES run (flat line)]

[Screenshot: LAI timeseries from the CTSM-FATES run (seasonal cycle)]

@glemieux
Contributor

Nice plots showing the issue, Charlie. I'll prioritize this.

@glemieux
Contributor

@ckoven what's the E3SM hash that you're working from?

@JessicaNeedham
Contributor

@glemieux the problem occurs in E3SM runs that are on c63cce2.

@glemieux
Contributor

I think I've determined part of the issue. The call to SatellitePhenology in elm_drv is in the wrong place and is unreachable when FATES is active:
https://github.com/glemieux/E3SM/blob/3c7d4fd8efb65ab7bf919b7fac92b7aad6e2e51d/components/elm/src/main/elm_driver.F90#L1120-L1124.
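To make the placement issue concrete, here is a minimal standalone sketch of the control-flow problem. The names here (`use_fates`, `doalb`, the stub subroutine) are illustrative only, not the actual elm_driver.F90 code; the point is that a call nested under the non-FATES branch can never execute when FATES is active, and the quick fix hoists the SP-mode call into a branch that is reached in an ELM-FATES run.

```fortran
! Standalone sketch of the control-flow bug (illustrative names only;
! this is not the actual elm_driver.F90 source).
program sp_mode_placement_sketch
  implicit none
  logical, parameter :: use_fates = .true.   ! ELM-FATES run
  logical, parameter :: doalb     = .true.

  ! Buggy placement: the SP-mode phenology update sits inside the branch
  ! taken only when FATES is OFF, so it never runs with FATES on.
  if (.not. use_fates) then
     if (doalb) then
        call satellite_phenology_stub('updating LAI (non-FATES path)')
        ! the FATES SP-mode call lived in here -> unreachable when use_fates = .true.
     end if
  end if

  ! Fixed placement: the FATES SP-mode update gets its own reachable branch
  ! (in the real driver this is also gated on FATES SP mode being enabled).
  if (use_fates) then
     call satellite_phenology_stub('updating LAI for FATES SP mode')
  else if (doalb) then
     call satellite_phenology_stub('updating LAI (non-FATES path)')
  end if

contains

  subroutine satellite_phenology_stub(msg)
    ! Stand-in for the real SatellitePhenology call in the ELM driver.
    character(len=*), intent(in) :: msg
    print *, trim(msg)
  end subroutine satellite_phenology_stub

end program sp_mode_placement_sketch
```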

I have a branch based off of c63cce2 with a quick fix to correct this: https://github.com/glemieux/E3SM/tree/fates-satphen-call-fix. I'm not sure the doalb check is really necessary here, as it isn't used in the interpMonthly call when running FATES SP mode in ELM-FATES, and it was recently removed from CLM-FATES, iirc. Here's a quick plot near the lat/lon you were showing above; the LAI is changing now, but it doesn't quite match the CLM-FATES plot:
[Screenshot: LAI timeseries near the same gridcell from the patched ELM-FATES run]

@rgknox
Contributor

rgknox commented Nov 30, 2022

Nice find, @glemieux; that fix makes sense to me.

@ckoven
Contributor Author

ckoven commented Nov 30, 2022

Thanks @glemieux, that is great news!

@JessicaNeedham
Contributor

Thanks @glemieux! I ran a 50-year SP-mode simulation using your branch, and it looks like the issue is fixed. The attached figure shows the last ten years of the simulation at the same point as in @ckoven's CTSM run. The periodicity and LAI values look good, although the LAI is a bit higher than the CTSM values; however, the two runs use different surface data files, which might explain the difference.

[Screenshot: last ten years of LAI from the 50-year SP-mode run on the fix branch]

@glemieux glemieux moved this from ❕Todo to 🟢 In Progress in FATES issue board Dec 5, 2022
bishtgautam added a commit to E3SM-Project/E3SM that referenced this issue Dec 14, 2022
…ext (PR #5345)

Move the fates-specific `satellitephenology` call out of an area that is unreachable when `fates` is active.

Fixes NGEET/fates#953.

[Non-B4B]
Repository owner moved this from 🟢 In Progress to ✔ Done in FATES issue board Dec 15, 2022