Non-permitted parameter value combinations #420
Comments
@ckoven, could we make use of the parameter checking routine found here: https://github.com/NGEET/fates/blob/master/main/EDPftvarcon.F90#L1603 ? That would be the ideal place to check that parameters, and the quantities derived from them, don't exceed expected ranges. |
@rgknox that's a good idea to put this stuff into the fortran in the parameter checking routine. there are some subtleties that we'll have to figure out, but I think it makes sense. |
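As a minimal sketch of the two kinds of test being discussed, here is the pattern in Python rather than the Fortran where the real check would live: each parameter is tested against its own permitted range, and derived quantities (such as the lai+sai bound worked out later in this thread) are tested against theirs. All function names, parameter names, and ranges below are hypothetical placeholders, not FATES code.

```python
import sys

def check_params(params, ranges, derived_checks):
    """Fail fast at initialization rather than crashing mid-run.

    params: name -> value
    ranges: name -> (lo, hi) permitted range for that single parameter
    derived_checks: list of (description, predicate over params) for
        combinations of parameters that must jointly satisfy a constraint
    """
    for name, (lo, hi) in ranges.items():
        if not (lo <= params[name] <= hi):
            sys.exit(f"{name} = {params[name]} is outside the permitted range [{lo}, {hi}]")
    for description, is_ok in derived_checks:
        if not is_ok(params):
            sys.exit(f"non-permitted parameter combination: {description}")
```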
Not sure if this is a good idea, but could we just limit LAI+SAI to be less than nlevleaf*dinc (and thus slightly change the definitions of one or both of those things)? As in, force the LAI prediction not to go out of bounds? Or is it better that the model actually crashes when it goes over a software, rather than a biological, limitation?
If we started off with some crazy high LAI, then the trimming routine should eventually get it under control, so not crashing probably wouldn't actually result in that many profiles limited by nlevleaf*dinc at equilibrium. The main problem I can see with that is someone setting dinc too small and, say, accidentally limiting LAI to values that might in fact be reasonable. Maybe nlevleaf and dinc should be related to each other to prevent that from happening?
My feeling is that policing the parameter combinations might be too hard, given how many parameters go into all of this.
Would any of this be fixed with a leaf-area-per-unit-volume framework?
|
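One minimal sketch of the idea above that nlevleaf and dinc should be related to each other: derive the layer count from the increment and a chosen maximum supportable lai+sai, instead of setting the two independently. Note that lai_sai_max_supported is a hypothetical quantity used for illustration, not an existing FATES parameter.

```python
import math

def nlevleaf_for(lai_sai_max_supported, dinc):
    """Pick the number of leaf layers so that nlevleaf * dinc is guaranteed
    to hold a prescribed maximum lai+sai."""
    return math.ceil(lai_sai_max_supported / dinc)

# e.g. nlevleaf_for(20.0, 1.0) -> 20, and nlevleaf_for(20.0, 0.5) -> 40,
# so shrinking dinc automatically deepens the array rather than tightening the cap.
```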
@rosiealice I tried that (admittedly in a hackish way) and kept getting energy balance errors, so I stopped pursuing that angle. I do think it may make sense to have this function fail in a less absolute way than it currently does. Perhaps a middle ground is to put in logic so that the model won't let you define traits based around slatop that would exceed the size of the lai+sai array, but if subsequent SLA scaling out to slamax causes the crown of a given cohort to thicken beyond the size of the array, then it just clips the values? |
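A rough sketch of that middle ground, written as Python pseudocode for the logic only (function and variable names are hypothetical; slatop and slamax stand for the top-of-canopy and maximum SLA values discussed above): fail hard at parameter-check time if the slatop-based canopy depth already overflows the array, but clip at run time when SLA scaling toward slamax pushes a cohort past the bound.

```python
def implied_lai_sai(sla, d2bl1, d2ca_coeff, sai_scaler):
    """Per-tree lai+sai implied by the allometry; size-independent when
    fates_allom_blca_expnt_diff = 0 (see the derivation in the issue text)."""
    return (1.0 + sai_scaler) * sla * d2bl1 / d2ca_coeff

def check_or_clip(slatop, slamax, d2bl1, d2ca_coeff, sai_scaler, nlevleaf, dinc):
    bound = nlevleaf * dinc
    # Hard failure: the trait definition itself overflows the canopy arrays.
    if implied_lai_sai(slatop, d2bl1, d2ca_coeff, sai_scaler) > bound:
        raise ValueError("slatop-based lai+sai exceeds nlevleaf * dinc")
    # Soft failure: deep-canopy SLA scaling can overflow, so clip instead of crashing.
    return min(implied_lai_sai(slamax, d2bl1, d2ca_coeff, sai_scaler), bound)
```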
Hi All,
I think we may need to keep track somewhere of parameter relationships that will result in the model crashing or behaving in non-physically-meaningful ways. Lots of the parameters on their own have permitted ranges (which we probably ought to keep track of in parameter-file metadata), but in some cases they relate to each other in ways that can lead to emergent, nonsensical outcomes.
E.g., a case that I've run into recently involves canopy depth. Since tree-level LAI = leaf area / crown area, leaf area = sla * bleaf = sla * fates_allom_d2bl1 * dbh ^ fates_allom_d2bl2, and crown area = fates_allom_d2ca_coefficient * dbh ^ fates_allom_d2bl2, it follows that tree-level LAI = sla * fates_allom_d2bl1 / fates_allom_d2ca_coefficient, which is the same across size classes (assuming fates_allom_blca_expnt_diff = 0; if not, then all bets are off). For the model not to crash, total lai+sai must be <= nlevleaf * dinc, because that product sets the size of the array used by the canopy logic. Since sai is proportional to lai (i.e. sai = fates_allom_sai_scaler * lai), this means that, in order not to crash, (1 + fates_allom_sai_scaler) * sla * fates_allom_d2bl1 / fates_allom_d2ca_coefficient <= nlevleaf * dinc.
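Written out as a quick standalone check, the constraint looks like the Python sketch below; the numbers are arbitrary placeholders for illustration, not FATES defaults.

```python
# Arbitrary illustrative values, not FATES defaults or recommendations.
sla = 0.03            # whichever SLA value applies (slatop, or slamax if scaling matters)
fates_allom_d2bl1 = 0.08
fates_allom_d2ca_coefficient = 0.01
fates_allom_sai_scaler = 0.1
nlevleaf, dinc = 30, 1.0

tree_lai = sla * fates_allom_d2bl1 / fates_allom_d2ca_coefficient  # same for every size class
tree_lai_sai = (1.0 + fates_allom_sai_scaler) * tree_lai           # sai = sai_scaler * lai
assert tree_lai_sai <= nlevleaf * dinc, "canopy arrays would overflow and the model would crash"
```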
In a parameter sensitivity experiment that I'm currently running, which has a pretty arbitrary distribution of permitted fates_allom_d2bl1 values but observed distributions of sla and fates_allom_d2ca_coefficient, some non-negligible fraction of this joint distribution actually goes out of the permitted bounds and the model crashes. Now that I've realized this, I've put some logic into the ensemble generator to test for this condition and pre-exclude those ensemble members, but I'm guessing this is something other people might run into too, so I wanted to note it somewhere. If anyone else has run into, or does run into, similar issues, please let's keep track of them (here or somewhere better, maybe an appendix of the tech note).
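For reference, the pre-exclusion step in an ensemble generator can be a simple mask over the sampled draws. A hedged sketch, assuming the draws are stored as a dict of NumPy arrays keyed by the parameter names above ("sla" here stands for whichever SLA value governs the worst case):

```python
import numpy as np

def keep_feasible(draws, nlevleaf, dinc):
    """Drop ensemble members whose implied per-tree lai+sai would exceed
    nlevleaf * dinc; `draws` maps parameter names to 1-D arrays of samples."""
    implied = ((1.0 + draws["fates_allom_sai_scaler"]) * draws["sla"]
               * draws["fates_allom_d2bl1"] / draws["fates_allom_d2ca_coefficient"])
    ok = implied <= nlevleaf * dinc
    return {name: values[ok] for name, values in draws.items()}
```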