The Climate Modeling Alliance
Software Design Issue 📜
Purpose
By design, running LES configurations in ClimaCore and ClimaAtmos should be a natural extension of our available software tools. The stability and quality of the results then depend on the interaction between our numerical discretization choices and the subgrid-scale turbulence schemes used in the "box" configuration. The LES SGS closures follow the same code design paths as the dynamical core, since all tendencies are supplied as additive functions. The aim is to prioritise running, on GPUs, the LES cases typically reported in intercomparison studies.
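A minimal sketch of the additive-tendency pattern mentioned above, in generic Julia; the function names and the toy right-hand sides are hypothetical and do not reflect the actual ClimaAtmos interface:

```julia
# Each model component contributes its tendency by accumulating into a shared
# state derivative; an LES SGS closure is simply one more additive term.
dycore_tendency!(dY, Y)  = (dY .+= -0.1 .* Y)   # stand-in dynamical-core term
sgs_tendency!(dY, Y)     = (dY .+= -0.01 .* Y)  # stand-in SGS-closure term
forcing_tendency!(dY, Y) = (dY .+= 0.05)        # stand-in case-specific forcing

function total_tendency!(dY, Y)
    fill!(dY, 0)
    for tendency! in (dycore_tendency!, sgs_tendency!, forcing_tendency!)
        tendency!(dY, Y)  # every term is purely additive
    end
    return dY
end

Y = ones(8); dY = similar(Y)
total_tendency!(dY, Y)
```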
Cost/Benefits/Risks
Benefits: Demonstrating the capability of using ClimaCore and ClimaAtmos for LES will also be useful for, e.g., microphysics research and for future nested-domain (or derived-forcing) experiments.
People and Personnel
Components
Support for parameterized tendencies for SGS closures based on a local measure of subgrid-scale turbulence (locality keeps GPU implementations straightforward; see the sketch after this list).
Support for the required LES forcing terms (currently available through the standard ClimaCore/ClimaAtmos tools).
Full compatibility with CUDA extensions: a subset of the assembled LES cases currently contains features that are not robustly GPU compatible (in the forcing-function assemblies rather than in the SGS parameterization).
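As a rough illustration of the locality argument above (plain Julia arrays stand in for ClimaCore fields; the coefficient, filter width, and random placeholder data are illustrative, not tested parameter values):

```julia
# A local closure updates each grid point from data already available at that
# point, so the whole update is one fused broadcast; with CUDA.jl CuArrays the
# identical code compiles to a single pointwise GPU kernel.
function eddy_viscosity!(νₜ, strain_norm, Δ; Cₛ = 0.17)
    @. νₜ = (Cₛ * Δ)^2 * strain_norm
    return νₜ
end

strain_norm = rand(64, 64, 64)            # |S| field; random placeholder data
νₜ = similar(strain_norm)
eddy_viscosity!(νₜ, strain_norm, 100.0)   # Δ ≈ 100 m filter width (illustrative)
```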
Results and Deliverables
The key steps to demonstrate stable LES for canonical test cases in ClimaAtmos are as follows:
Prototype
Demonstrate a Smagorinsky-Lilly type SGS closure compatible with GPU runs (this serves as a simple example of one turbulence-closure choice; future work will extend this to other published local SGS closures; the reference form of the closure is given below, after this list).
Case: BOMEX
Case: RICO (1M microphysics)
[80%] Case: DYCOMS (CPU option available; pending GPU CUDA extensions for integral operators via ClimaComms 0.6)
[50%] Case: ISDAC (CPU option available; pending GPU CUDA extensions for integral operators via ClimaComms 0.6)
GABLS
BOMEX
DYCOMS RF01
TRMM (with 1M)
Close GPU-specific issues that may affect LES tendency assembly (in column-wise computations in ClimaCore).
At this stage, we assume that the derivatives appearing in rate-of-strain measures will be computed from Cartesian transformations of the velocity components. Covariant derivatives (which would allow fully consistent tensor operations) are therefore not strictly necessary for the current task scope.
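For reference, the standard (unstratified) Smagorinsky-Lilly closure referred to in the list above takes the form below, with the rate of strain built from Cartesian velocity gradients as noted; the exact ClimaAtmos implementation, including any buoyancy or stability correction, may differ.

$$S_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right), \qquad |S| = \sqrt{2\, S_{ij} S_{ij}}, \qquad \nu_t = (C_s \Delta)^2 |S|, \qquad \tau_{ij} = -2\, \nu_t\, S_{ij},$$

where $C_s$ is the Smagorinsky coefficient and $\Delta$ is the filter width, typically tied to the grid spacing.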
Production
Documentation for LES closures with tested parameter ranges
CPU scaling assessment for typical high-resolution configurations
GPU scaling assessment for typical high-resolution configurations
Paper draft
Additional work:
Covariant derivatives with Christoffel symbols. While not strictly necessary in the orthogonal grid system, assuming we can transform variables to Cartesian coordinates, having this capability within ClimaCore would complete our list of available operations. This requires updates to the Operators module in ClimaCore and any supporting CUDA extensions.
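For context, the operation such an extension would provide is the textbook covariant derivative of a contravariant vector field (a statement of the general formula, not of an existing ClimaCore operator):

$$\nabla_j u^i = \frac{\partial u^i}{\partial \xi^j} + \Gamma^i_{jk}\, u^k,$$

where $\Gamma^i_{jk}$ are the Christoffel symbols of the coordinate system. In an orthogonal Cartesian box the Christoffel symbols vanish, which is why the Cartesian-transformation route described under Prototype is sufficient for the current scope.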
Other information:
(Current estimates show a 1:3 wall-clock-to-simulated-time ratio, i.e. roughly one day of wall-clock time for three days of simulated time; this will be optimised and systematically tested as part of the documentation effort.)
Longer-term context:
This is a prerequisite to coupling exercises with ocean and land-model components.
Task Breakdown And Schedule
SDI Revision Log
CC
@akshaysridhar - Could I add DYCOMS RF02 to the list of cases? It's the one that has drizzle. I think we have all the setups needed, since this case used to be run for TurbulenceConvection.jl; I would just have to put it together and add it to the CI.