Commit
Fix: Avoid `full_like` function in Chunking
When we prepare chunked reads, we assume a single chunk for all backends but ADIOS2. When preparing the returned data, we use `data = np.full_like(record_component, np.nan)`. It turns out that numpy seems to trigger a `__getitem__` access or a full copy of our `record_component` at this point, which causes a severe slowdown. This was first seen for particles, but it affects every read where we do not slice a subset.

Co-authored-by: AlexanderSinn <[email protected]>
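
For context, a minimal sketch of the workaround, assuming `record_component` is a lazy, array-like object that exposes `shape` and `dtype` metadata without reading data (the helper name `allocate_nan_buffer` is hypothetical and not taken from the actual diff):

```python
import numpy as np


def allocate_nan_buffer(record_component):
    """Allocate a NaN-filled result buffer without touching the component's data.

    Avoids ``np.full_like(record_component, np.nan)``, which coerces its first
    argument to an array and can thereby trigger ``__getitem__`` (i.e. a full
    read/copy) on a lazy record component. Only the ``shape`` and ``dtype``
    metadata attributes are accessed here.
    """
    return np.full(record_component.shape, np.nan, dtype=record_component.dtype)
```

Because only metadata attributes are read, no backend I/O is issued until the caller explicitly slices or loads the component.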