Current best practice for block structured AMR #1588

Open
pgrete opened this issue Jan 12, 2024 · 1 comment

I'm in the process of adding support for openPMD output to our block-structured AMR code Parthenon (https://github.com/parthenon-hpc-lab/parthenon).

In light of the open PR on the standard for mesh refinement (openPMD/openPMD-standard#252), I'm wondering what the current best practice is (also with regard to achieving good performance at scale).

In our case, each rank owns a variable number of meshblocks (though their size is fixed), each carrying a (potentially variable) number of variables, and those blocks can sit at different refinement levels (depending on the chosen ordering of blocks).

The most straightforward approach would be to create one mesh record per block and variable.
Alternatively, I imagine pooling records by level (so that the coordinate information, i.e. dx, is shared) to increase the size of the output buffer.
Are there other approaches/recommendations?
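
For concreteness, here is roughly what I have in mind for the first option, as a sketch using the openPMD-api C++ interface (record names, block ids, extents, and dx values below are invented, not what we actually use):

```cpp
#include <openPMD/openPMD.hpp>

#include <cstdint>
#include <memory>
#include <string>
#include <vector>

// Sketch of option 1: one mesh record per (block, variable).
// Block ids, extents, and dx values are invented for illustration.
void writePerBlock(openPMD::Series &series, uint64_t step)
{
    auto it = series.writeIterations()[step];

    std::vector<int> const myBlockIds = {42, 43}; // blocks owned by this rank
    openPMD::Extent const blockExtent = {16, 16, 16};
    size_t const nCells = 16 * 16 * 16;

    for (int gid : myBlockIds) {
        auto mesh = it.meshes["density_block_" + std::to_string(gid)];
        mesh.setGridSpacing(std::vector<double>{0.01, 0.01, 0.01});      // this block's dx
        mesh.setGridGlobalOffset(std::vector<double>{0., 0., 0.});       // this block's corner
        mesh.setAxisLabels({"z", "y", "x"});

        auto rc = mesh[openPMD::MeshRecordComponent::SCALAR];
        rc.resetDataset(openPMD::Dataset(openPMD::Datatype::DOUBLE, blockExtent));

        // each record holds exactly one block, so the chunk covers the whole dataset
        auto data = std::shared_ptr<double>(
            new double[nCells], [](double const *p) { delete[] p; });
        rc.storeChunk(data, openPMD::Offset{0, 0, 0}, blockExtent);
    }
    series.flush();
}
```

The obvious downside is that every record/dataset is then quite small (a single block), which leads to the performance question below.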

And what's the impact on performance when we write one chunk per block (which at the API level would effectively be a serial "write", as each block/variable combination is unique)?
Are the actual writes on flush optimized/pooled/...?

For reference, our current HDF5 output writes the data of all blocks in parallel for each variable, using the corresponding offsets.
The coordinate information is stored separately, so that this large output buffer does not need to handle differing dx.
This approach is currently not compatible with the openPMD standard for meshes with varying dx, since each record is tightly tied to a fixed set of coordinates, correct?

Thanks,

Philipp

Software Environment:

  • version of openPMD-api: 0.15.2
  • installed openPMD-api via: from source

BenWibking commented Jan 12, 2024

I think all existing AMR codes that use openPMD have one field per variable per AMR level. This is true for WarpX, Quokka, and CarpetX (e.g., https://github.com/quokka-astro/quokka/blob/development/src/openPMD.cpp#L108 and https://github.com/eschnett/CarpetX/blob/main/CarpetX/src/io_openpmd.cxx). There's a full list of simulation codes here: https://github.com/openPMD/openPMD-projects?tab=readme-ov-file#scientific-simulations
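
Roughly, that layout looks like the following with the openPMD-api C++ interface (just a sketch, not copied from any of those codes; record names, level extents, block placement, and dx are made up):

```cpp
#include <openPMD/openPMD.hpp>

#include <cstdint>
#include <memory>
#include <string>
#include <vector>

// Sketch of the "one field per variable per AMR level" layout:
// each level gets its own record with its own gridSpacing, and every
// locally owned block is stored as one chunk inside that level's dataset.
void writePerLevel(openPMD::Series &series, uint64_t step)
{
    auto it = series.writeIterations()[step];

    int const numLevels = 2;
    double dx = 0.02; // level-0 spacing; halves on each finer level

    for (int lev = 0; lev < numLevels; ++lev) {
        auto mesh = it.meshes["density_lvl" + std::to_string(lev)];
        mesh.setGridSpacing(std::vector<double>{dx, dx, dx});
        mesh.setGridGlobalOffset(std::vector<double>{0., 0., 0.});
        mesh.setAxisLabels({"z", "y", "x"});

        // one dataset spanning the full index space of this level ...
        uint64_t const n = 64ull << lev;
        auto rc = mesh[openPMD::MeshRecordComponent::SCALAR];
        rc.resetDataset(openPMD::Dataset(openPMD::Datatype::DOUBLE, {n, n, n}));

        // ... and one chunk per locally owned 16^3 block, placed at the block's
        // cell offset within that level (here: a single block at the origin)
        auto data = std::shared_ptr<double>(
            new double[16 * 16 * 16], [](double const *p) { delete[] p; });
        rc.storeChunk(data, openPMD::Offset{0, 0, 0}, openPMD::Extent{16, 16, 16});

        dx *= 0.5;
    }
    series.flush();
}
```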

Because ADIOS2 has built-in sparsity support, any part of the domain that is not covered by grids can simply be left unwritten without any penalty.
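
Picking the ADIOS2 backend is just a matter of the file name extension; for example (assuming openPMD-api was built with MPI support, and with a made-up file name):

```cpp
#include <openPMD/openPMD.hpp>

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    {
        // ".bp" selects the ADIOS2 backend; parts of a level-wide dataset that
        // never receive a storeChunk() call are simply not written to disk.
        openPMD::Series series(
            "parthenon_%T.bp", openPMD::Access::CREATE, MPI_COMM_WORLD);
        // ... declare records and storeChunk() only the blocks this rank owns ...
    }
    MPI_Finalize();
    return 0;
}
```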

AFAIK the writes will be pooled on flush in order to minimize filesystem metadata overhead (this is what the docs say: https://openpmd-api.readthedocs.io/en/latest/usage/firstwrite.html#flush-chunk).

Since I discussed this with @ax3l a while ago regarding both Quokka and Parthenon, he might be able to chime in, particularly on what the analysis tools assume about AMR ;)
