Fix regression: IndexVariable.copy(deep=True) casts dtype=U to object #3095

Merged · 7 commits into pydata:master from object_index · Aug 2, 2019

Conversation

@crusaderky (Contributor) commented Jul 11, 2019
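
For context, the regression named in the title can be reproduced roughly as follows; this is an illustrative sketch, with the "before" behaviour inferred from the title and the linked issue rather than copied from the PR:

>>> import xarray
>>> var = xarray.IndexVariable('x', ['a', 'b', 'c'])
>>> var.dtype
dtype('<U1')
>>> var.copy(deep=True).dtype    # the regression: silently cast to object
dtype('O')

With this fix, the deep copy preserves dtype('<U1').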

codecov bot commented Jul 11, 2019

Codecov Report

Merging #3095 into master will decrease coverage by 0.09%.
The diff coverage is 90%.

@@            Coverage Diff            @@
##           master    #3095     +/-   ##
=========================================
- Coverage   95.75%   95.65%   -0.1%     
=========================================
  Files          63       63             
  Lines       12804    12807      +3     
=========================================
- Hits        12260    12251      -9     
- Misses        544      556     +12

        return ('%s(array=%r, dtype=%r)'
                % (type(self).__name__, self.array, self.dtype))

    def copy(self, deep: bool = True) -> 'PandasIndexAdapter':
        obj = object.__new__(PandasIndexAdapter)
Member

Is it possible to use the constructor directly here rather than object.__new__?

If not, please add a comment explaining why :)
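A generic illustration of the difference (not from the diff): object.__new__ allocates an instance without running __init__, so every attribute must then be filled in by hand.

class Point:
    def __init__(self, x, y):
        print("validating...")       # expensive checks would go here
        self.x, self.y = x, y

p = object.__new__(Point)    # no "validating..." printed: __init__ is skipped
p.x, p.y = 1, 2              # attributes must be assigned manually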

@crusaderky (Contributor Author) Jul 12, 2019

I can use __init__, but it's going to be needlessly slower. If you prefer, I can add a fastpath=False parameter to __init__, which is the pattern already used elsewhere.
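
For reference, the fastpath idiom being alluded to looks roughly like the sketch below; the class and attribute names are illustrative, not the actual PandasIndexAdapter code:

import numpy as np

class ArrayWrapperSketch:
    """Illustrative only: the general fastpath idiom, not xarray's code."""

    def __init__(self, array, dtype=None, fastpath=False):
        if fastpath:
            # The caller guarantees `array` and `dtype` are already in
            # canonical form, so skip the (potentially costly) coercion below.
            self.array = array
            self.dtype = dtype
        else:
            self.array = np.asarray(array)
            self.dtype = dtype if dtype is not None else self.array.dtype

    def copy(self):
        # Internal call sites pass fastpath=True to avoid re-validating
        # data that is already known to be valid.
        return type(self)(self.array.copy(), self.dtype, fastpath=True)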

Member

I would rather stick with __init__ unless we can find an example using xarray's public APIs where performance is meaningfully affected. Until then it's probably premature optimization.

@crusaderky (Contributor Author)

Done

xarray/core/indexing.py (outdated, resolved)
    def copy(self, deep: bool = True) -> 'PandasIndexAdapter':
        # Not the same as just writing `self.array.copy(deep=deep)`, as
        # shallow copies of the underlying numpy.ndarrays become deep ones
        # upon pickling
Member

I don't quite follow this comment -- how does pickling relate to this new method?

@crusaderky (Contributor Author) Jul 15, 2019

pandas.Index.copy(deep=False) creates new identical views of the underlying numpy arrays.
A numpy view becomes a real, deep-copied numpy array upon pickling. This is by design, to prevent people from accidentally sending a huge numpy array over the network or IPC when they just want to send a slice of it. Crucially though, this (IMHO improvable) design makes no distinction when the base array is the same size as, or smaller than, the view.
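
A quick way to see that behaviour (an illustrative snippet, not part of this PR):

import pickle
import numpy as np

base = np.arange(500_000)
view = base[:]                   # "shallow copy": a view over the same buffer
assert view.base is base

# The same object pickled twice is stored once and referenced twice...
print(len(pickle.dumps((base, base))))   # roughly 4 MB with a default int64 dtype
# ...whereas a view is serialised as an independent, fully materialised
# array, so the payload roughly doubles and the sharing is lost on unpickling.
print(len(pickle.dumps((base, view))))   # roughly 8 MB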

Back to the method at hand: if two xarray.DataArray/Dataset objects share the same underlying pandas.Index (which is the whole intent of copy(deep=False)), then when they are pickled together they continue sharing the same data. This is already true when you shallow-copy an xarray.Variable.
But if instead of

array = self.array.copy(deep=True) if deep else self.array

we wrote

array = self.array.copy(deep=deep)

which would be tempting as it is much more readable, everything would behave the same at first, until you pickle the original index and its shallow copy together, at which point the copy would automatically and silently become a deep one, doubling RAM/disk usage when storing or unpickling.

>>> idx = xarray.IndexVariable('x', list(range(500000)))._data.array
>>> len(pickle.dumps((idx, idx)))                                                                                                       
4000281
>>> len(pickle.dumps((idx, idx.copy(deep=False))))                                                                                     
8000341

This was a real problem I faced a couple of years ago where I had to dump to disk (for forensic purposes) about 10,000 intermediate steps of a Monte Carlo simulation, each step being a DataArray or Dataset. The data variables were all dask-backed, so they were fine. But among the indices I had a 'scenario' dimension with 500,000 points, dtype='<U13'.

Before pickling: 13 * 500,000 = 6.2 MB, with the 10,000 shallow copies of it being just views
After pickling: 13 * 500,000 * 10,000 = I needed a new hard disk!

That problem was from a couple of years ago and has since been fixed; my (apparently overcomplicated) code above prevents it from showing up again.
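
Putting the pieces together, the method under review presumably ends up roughly as follows (a sketch assembled from the snippets quoted in this thread, not the exact merged diff):

    def copy(self, deep: bool = True) -> 'PandasIndexAdapter':
        # Not the same as just writing `self.array.copy(deep=deep)`, as
        # shallow copies of the underlying numpy.ndarrays become deep ones
        # upon pickling
        array = self.array.copy(deep=True) if deep else self.array
        return PandasIndexAdapter(array, self._dtype)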

P.S. this, by the way, is a big problem with xarray.align when you are dealing with mixed numpy+dask arrays

>>> a = xarray.DataArray(da.ones(500000, chunks=10000), dims=['x'])                                                                     
>>> b = xarray.concat([a] + [xarray.DataArray(i) for i in range(1000)], dim='y')                                                        
>>> c = b.sum()
>>> c.compute()
<xarray.DataArray ()>
array(2.497505e+11)
# Everything is fine so far (not very fast due to the dask chunks of size 1 but still)...
>>> client = distributed.Client()
>>> c.compute()

Watch the RAM usage of your dask client rocket to 8 GB, and several GBs worker-side too.
This will take a while because, client side, you just created 3.8 GB (500000 * 8 * 1000) worth of pickle file and are now sending it over the network.

[EDIT] Never mind this last example. The initial concat() already sends the RAM usage to 4 GB, meaning it's no longer calling numpy.broadcast_to under the hood as it used to do...

Member

Thanks for the explanation, this makes complete sense.

@crusaderky (Contributor Author)

@shoyer is there anything outstanding on this?

@crusaderky (Contributor Author)

@shoyer ping

@crusaderky (Contributor Author)

@shoyer this has now been waiting for 16 days; is there anybody who could pick up the review in your absence?

@shoyer (Member) left a comment

Sorry for the delay here!

As you can tell, I've been pretty distracted from code reviews for xarray recently.

@crusaderky (Contributor Author)

@shoyer ready for merge

@max-sixty merged commit dba85bd into pydata:master on Aug 2, 2019
@max-sixty (Collaborator)

Thanks @crusaderky !

@crusaderky deleted the object_index branch on August 2, 2019 at 14:37
dcherian added a commit to yohai/xarray that referenced this pull request Aug 3, 2019
* master: (68 commits)
  enable sphinx.ext.napoleon (pydata#3180)
  remove type annotations from autodoc method signatures (pydata#3179)
  Fix regression: IndexVariable.copy(deep=True) casts dtype=U to object (pydata#3095)
  Fix distributed.Client.compute applied to DataArray (pydata#3173)
  More annotations in Dataset (pydata#3112)
  Hotfix for case of combining identical non-monotonic coords (pydata#3151)
  changed url for rasterio network test (pydata#3162)
  to_zarr(append_dim='dim0') doesn't need mode='a' (pydata#3123)
  BUG: fix+test groupby on empty DataArray raises StopIteration (pydata#3156)
  Temporarily remove pynio from py36 CI build (pydata#3157)
  missing 'about' field (pydata#3146)
  Fix h5py version printing (pydata#3145)
  Remove the matplotlib=3.0 constraint from py36.yml (pydata#3143)
  disable codecov comments (pydata#3140)
  Merge broadcast_like docstrings, analyze implementation problem (pydata#3130)
  Update whats-new for pydata#3125 and pydata#2334 (pydata#3135)
  Fix tests on big-endian systems (pydata#3125)
  XFAIL tests failing on ARM (pydata#2334)
  Add broadcast_like. (pydata#3086)
  Better docs and errors about expand_dims() view (pydata#3114)
  ...
dcherian added a commit to dcherian/xarray that referenced this pull request Aug 6, 2019
* master:
  enable sphinx.ext.napoleon (pydata#3180)
  remove type annotations from autodoc method signatures (pydata#3179)
  Fix regression: IndexVariable.copy(deep=True) casts dtype=U to object (pydata#3095)
  Fix distributed.Client.compute applied to DataArray (pydata#3173)

Successfully merging this pull request may close these issues.

REGRESSION: copy(deep=True) casts unicode indices to object