Opening a dataset containing a very large array with object dtype is much slower than it should be. I initially noticed this when working with a dataset spanned by around 24000 netCDF files. I have been loading them through kerchunk references with a consolidated metadata key, so I expected opening to be fairly quick, but it actually took several minutes. It turned out that all of this time was spent on one variable consisting of strings; after dropping it, the whole dataset opens in seconds. Sharing that dataset would be difficult, so instead I will illustrate the problem with a simple, easy-to-reproduce example using the latest released versions of xarray and zarr:
```python
import numpy as np
import xarray as xr

str_array = np.arange(100000000).astype(str)
ds = xr.DataArray(dims=('x',), data=str_array).to_dataset(name='str_array')
ds['str_array'] = ds.str_array.astype('O')  # Needs to actually be object dtype to show the problem
ds.to_zarr('str_array.zarr')
```

```python
%time xr.open_zarr('str_array.zarr/')
```

```
CPU times: user 8.24 s, sys: 5.23 s, total: 13.5 s
Wall time: 12.9 s
```
I did some digging and found that pretty much all of the time was spent on the check done by `contains_cftime_datetimes` in `xarray/core/common.py` (line 1793 at commit d385e20).
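For context, the eager code path looks roughly like this (a paraphrased sketch of the current behaviour, not the exact source at d385e20): the public helper passes `var.data` straight to the private check, and that attribute access is what materializes the whole lazy array.

```python
# Paraphrased sketch of the current (pre-fix) behaviour -- not the exact xarray source.
def contains_cftime_datetimes(var) -> bool:
    """Check if an xarray.Variable contains cftime.datetime objects"""
    # Accessing var.data converts the lazily indexed backend array into an
    # in-memory array, so every chunk is read before any element is inspected.
    return _contains_cftime_datetimes(var.data)
```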
This operation is not lazy and ends up requiring every single chunk of this variable to be read, all just to check the very first element of the array. A quick fix I tried is updating `contains_cftime_datetimes` to do the following:
```python
def contains_cftime_datetimes(var) -> bool:
    """Check if an xarray.Variable contains cftime.datetime objects"""
    if var.dtype == np.dtype("O") and var.size > 0:
        ndims = len(var.shape)
        first_idx = np.zeros(ndims, dtype='int32')
        array = var[*first_idx].data
        return _contains_cftime_datetimes(array)
    else:
        return False
```
This drastically reduced the time to open the dataset, as expected.
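As an aside on the snippet above, the same first-element index could be built as a plain tuple, which avoids the intermediate NumPy array and the starred subscript (the latter requires Python 3.11+). This is only an illustrative alternative, not necessarily what the PR should end up using:

```python
# Illustrative alternative for building the first-element index (sketch only):
first_idx = (0,) * var.ndim      # e.g. (0, 0, 0) for a 3-D variable
array = var[first_idx].data      # still loads only the chunk containing that element
```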
I would like to make a PR with this change, but I realize it could affect every backend, and although I have been using xarray for many years this would be my first contribution, so I would like to briefly discuss it in case there are better ways to address the issue. Thanks!
Feel free to start working on that PR. 👍 It looks like `_contains_cftime_datetimes` tries to do something similar to your solution, so I think the change should go there.
Great, thanks! It's actually the `var.data` attribute access itself that triggers the loading, which is why I needed to put the change where I did, but I see your point: I should probably update `_contains_cftime_datetimes` as well, since selecting the first element again there would be redundant. In any case, I'll get to work on preparing a PR for this.
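To make that concrete, this is roughly the difference (an illustrative sketch of the lazy-backend behaviour described above, not exact xarray internals):

```python
# Illustrative sketch only: `var` is an object-dtype Variable backed by a lazy
# (not yet dask-chunked) backend array, as seen while a zarr store is being decoded.
eager = var.data                   # materializes the whole array; every chunk is read
first = var[(0,) * var.ndim]       # lazy selection of just the first element
sample = first.data                # reading this touches only the chunk holding element 0
```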