Stateful user-defined accessors #3268
Demonstration of the circular reference issue:

```python
import gc
import weakref

import xarray


@xarray.register_dataset_accessor('foo')
class Foo:
    def __init__(self, obj):
        self.obj = obj


ds = xarray.Dataset()
w = weakref.ref(ds)
print("No accessor, in scope:", w() is not None)
del ds
print("No accessor, descoped:", w() is not None)

ds = xarray.Dataset()
ds.foo
w = weakref.ref(ds)
print("with accessor, in scope:", w() is not None)
del ds
print("with accessor, descoped:", w() is not None)
gc.collect()
print("with accessor, after gc pass:", w() is not None)
```

Output:

```
No accessor, in scope: True
No accessor, descoped: False
with accessor, in scope: True
with accessor, descoped: True
with accessor, after gc pass: False
```

---
Interesting, thanks! As an accessor maintainer, I can assure you that at least one accessor implementation is storing state ;-). But this state is derived from the xarray object itself: for example, we derive georeferencing information and store the names of the coordinate variables we know are going to be useful to us later. That is, every new call to

Getting rid of the caching logic would mean some performance loss for us, yes, but I don't know if it's "worse" than the circular reference issue you describe or not.

---
@fmaussion could you change your accessor code to store its state in

---
The circular reference issue could also be worked around in a user-friendly way by having the decorator automatically add methods to the decorated class, copying the design of

```python
import weakref


class C:
    def __init__(self, owner):
        self._owner = weakref.ref(owner)
        if hasattr(self, "__post_init__"):
            self.__post_init__()

    @property
    def owner(self):
        out = self._owner()
        if out is None:
            raise AttributeError("Orphaned accessor")
        return out
```

This would also allow for shallow copies to change the pointer.

---
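To make the weakref idea concrete, here is a minimal self-contained sketch (plain Python, not xarray's actual API; `WeakAccessor` and `Owner` are illustrative names) showing that a weak back-reference avoids the cycle, so the owner is freed by plain refcounting rather than waiting for a gc pass:

```python
import weakref


class WeakAccessor:
    """Accessor holding only a weak reference to its owner (illustrative)."""

    def __init__(self, owner):
        self._owner = weakref.ref(owner)

    @property
    def owner(self):
        out = self._owner()
        if out is None:
            raise AttributeError("Orphaned accessor")
        return out


class Owner:
    pass


obj = Owner()
acc = WeakAccessor(obj)
assert acc.owner is obj      # works while the owner is alive

w = weakref.ref(obj)
del obj
# No cycle: the owner is collected immediately by refcounting,
# without waiting for a gc pass.
assert w() is None

try:
    acc.owner
except AttributeError as e:
    print(e)                 # Orphaned accessor
```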
Yes, although I never bothered to get this done (my library is quite niche and not meant to become widely used), but if we remove the caching mechanism in xarray, it would be one more incentive to make the switch.

Yes. In almost all use cases I know of this won't happen. Typically, users have to create (or open) an xarray object that salem understands before calling the accessor anyway.

No. I didn't think about that before today ;-). But you are right: if users of salem's accessor change the coordinates after calling the accessor, then bad things might happen. Experience shows that this rarely happens (it never happened to me), but I can see how this could backfire. Altogether, these are good arguments to remove the caching mechanism, I believe.

---
It isn't just methods that use

Accessors are also not preserved when indexing a DataArray out of a Dataset (#3205), cc @djhoese. I had not contemplated the issue of circular references, which I agree is not ideal. If we had realized that when creating accessors in the first place, we might have chosen a different design, but there are a number of users who rely upon it.

I like the look of this solution. It will still require users to update their code to avoid circular references (e.g., by removing their own

---
I don't mind needing to update our accessor code. My only request: don't ship a version that suddenly breaks it, leaving us working only on either the old version or the new version. 😉

---
Hi all,

Starting from a class with the core of the code, and an accessor to implement the user API:

```python
import xarray


class BaseEstimator:
    def fit(self, this_ds, x=None):
        # Do something with this_ds:
        x = x ** 2
        # and create a new array with results:
        da = xarray.DataArray(x).rename('fit_data')
        # Return results:
        return da

    def transform(self, this_ds, **kw):
        # Do something with this_ds:
        val = kw['y'] + this_ds['fit_data']
        # and create a new array with results:
        da = xarray.DataArray(val).rename('trf_data')
        # Return results:
        return da


@xarray.register_dataset_accessor('my_accessor')
class Foo:
    def __init__(self, obj):
        self.obj = obj
        self.added = list()

    def add(self, da):
        self.obj[da.name] = da
        self.added.append(da.name)
        return self.obj

    def clean(self):
        # Iterate over a copy: removing from a list while iterating it
        # skips elements.
        for v in list(self.added):
            self.obj = self.obj.drop(v)
            self.added.remove(v)
        return self.obj

    def fit(self, estimator, **kw):
        this_da = estimator.fit(self.obj, **kw)
        return self.add(this_da)

    def transform(self, estimator, **kw):
        this_da = estimator.transform(self.obj, **kw)
        return self.add(this_da)
```

Now if we consider this workflow:

```python
ds = xarray.Dataset()
ds['ext_data'] = xarray.DataArray(1.)
my_estimator = BaseEstimator()
ds = ds.my_accessor.fit(my_estimator, x=2.)
print("Before clean:")
print("xr.DataSet var :", list(ds.data_vars))
print("accessor.obj var:", list(ds.my_accessor.obj.data_vars))
print("\nAfter clean:")
# ds.my_accessor.clean()  # This does nothing to ds, but cleans accessor.obj
# ds = ds.my_accessor.clean()  # Cleaning ok for both ds and accessor.obj
ds_clean = ds.my_accessor.clean()  # Cleans the new ds; does nothing to ds as expected, but cleans accessor.obj
print("xr.DataSet var :", list(ds.data_vars))
print("accessor.obj var :", list(ds.my_accessor.obj.data_vars))
print("Cleaned xr.DataSet var:", list(ds_clean.data_vars))
```

we get the following output:

```
Before clean:
xr.DataSet var : ['ext_data', 'fit_data']
accessor.obj var: ['ext_data', 'fit_data']

After clean:
xr.DataSet var : ['ext_data', 'fit_data']
accessor.obj var : ['ext_data']
Cleaned xr.DataSet var: ['ext_data']
```

The issue is clear here: the dataset has the 'fit_data' variable but the accessor object does not: they've been "disconnected", and that is not apparent to users. So if users later proceed to run the "transform":

```python
ds.my_accessor.transform(my_estimator, y=2.)
```

they get a KeyError, because 'fit_data' is not in the accessor's object, although it still appears in the list of the ds variables, which is more than confusing.

Sorry for this long post, I'm not sure it's relevant to this issue but it seems so to me.

---
@gmaze

```python
def clean(self):
    return self.obj.drop(self.added)
```

The new dataset returned will have no accessor cache, and will recreate an instance on the fly on first access.

---
@crusaderky thanks for the explanation, that's a solution to my problem. Although, I understand that since the accessor will be created from scratch, a dataset copy won't propagate the accessor properties (in this case, the list of added variables):

```python
ds = xarray.Dataset()
ds['ext_data'] = xarray.DataArray(1.)
my_estimator = BaseEstimator()  # With the "clean" method from @crusaderky
ds.my_accessor.fit(my_estimator, x=2.)
ds.my_accessor.transform(my_estimator, y=3.)
ds2 = ds.copy()
ds = ds.my_accessor.clean()
ds2 = ds2.my_accessor.clean()
print(ds.data_vars)
print(ds2.data_vars)
```

gives:

```
Data variables:
    ext_data  float64 1.0
Data variables:
    ext_data  float64 1.0
    fit_data  float64 4.0
    trf_data  float64 7.0
```

"Cleaning" the dataset works as expected, but the copy (ds2) has an empty list of added variables, so the "clean" method doesn't have the expected result. We see the same behavior with a deep copy.

Would it make sense for the xr.Dataset.copy() method to also return a copy of the accessors?

---
It's been discussed above in this same thread. It's impossible without breaking the accessor API, as it would require you (the accessor developer) to define a copy method.

Have you considered storing a flag in

```python
def add(self, da):
    da.attrs["cleanable"] = True
    self.obj[da.name] = da
    return self.obj

def clean(self):
    return self.obj.drop([
        k for k, v in self.obj.variables.items()
        if v.attrs.get("cleanable")
    ])
```

---
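A quick demonstration of why an attrs-based flag is copy-safe where accessor state is not: the flag lives on the variable itself, so it travels through `Dataset.copy()` even though the copy gets a brand-new accessor instance. This is a sketch under the assumption of a recent xarray (it uses `drop_vars`, the modern spelling of `drop`; the accessor name `flagged` is illustrative):

```python
import xarray


@xarray.register_dataset_accessor("flagged")
class Flagged:
    def __init__(self, obj):
        self.obj = obj

    def add(self, da):
        # State lives on the variable itself, not on the accessor instance
        da.attrs["cleanable"] = True
        self.obj[da.name] = da
        return self.obj

    def clean(self):
        return self.obj.drop_vars([
            k for k, v in self.obj.variables.items()
            if v.attrs.get("cleanable")
        ])


ds = xarray.Dataset()
ds["ext_data"] = xarray.DataArray(1.0)
ds.flagged.add(xarray.DataArray(4.0).rename("fit_data"))

# The copy has a fresh accessor instance, but the flags travel with the data:
ds2 = ds.copy()
print(list(ds2.flagged.clean().data_vars))  # ['ext_data']
```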
Ok, I get it. Probably the accessor is not the best solution in my case. But why is asking accessor developers to define a copy method an issue? It wouldn't be mandatory, but only required in situations where propagating functional information could be useful. Sorry if that's a naive question for you guys.

---
Thing is, the whole behavior is undefined. Dataset.copy() will create a copy of everything up to and excluding the numpy arrays. In order to allow you to retain accessor state, we'd need to plant a hook in it and invoke some agreed duck-type API on your object that basically states: "I called copy(), and this is the new object I created; please create a copy of yourself accordingly, making extra sure you don't retain references to components of the previous object."

---
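For illustration only, such a hook might look like the sketch below. `FakeDataset`, `StatefulAccessor`, and `__accessor_copy__` are hypothetical names; nothing like this exists in xarray, which is exactly the point being made:

```python
class FakeDataset:
    """Stand-in for xarray.Dataset, only to illustrate the hypothetical hook."""

    def __init__(self):
        self._accessors = {}

    def copy(self):
        new = FakeDataset()
        # The hypothetical hook: let each cached accessor re-bind its
        # state to the freshly copied object.
        for name, acc in self._accessors.items():
            if hasattr(acc, "__accessor_copy__"):
                new._accessors[name] = acc.__accessor_copy__(new)
        return new


class StatefulAccessor:
    def __init__(self, obj):
        self.obj = obj
        self.added = []

    def __accessor_copy__(self, new_obj):
        # Build a fresh instance bound to the new object, making sure
        # not to retain references to components of the old one.
        new = StatefulAccessor(new_obj)
        new.added = list(self.added)
        return new


ds = FakeDataset()
ds._accessors["acc"] = StatefulAccessor(ds)
ds._accessors["acc"].added.append("fit_data")

ds2 = ds.copy()
print(ds2._accessors["acc"].added)       # ['fit_data']
print(ds2._accessors["acc"].obj is ds2)  # True
```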
Alright, I think I get it, thanks for the clarification @crusaderky

---
If anybody decorates a stateful class with `@register_dataarray_accessor` or `@register_dataset_accessor`, the instance will lose its state on any method that invokes `_to_temp_dataset`, as well as on a shallow copy.

While in the case of `_to_temp_dataset` it could be possible to spend (substantial) effort to retain the state, in the case of copy() it's impossible without modifying the accessor duck API, as one would need to tamper with the accessor instance in place and modify the pointer back to the DataArray/Dataset.

This issue is so glaring that it makes me strongly suspect that nobody saves any state in accessor classes. This kind of use would also be problematic in practical terms, as the accessor object would have a hard time realising when its own state is no longer coherent with the referenced DataArray/Dataset.

This design also carries the problem that it introduces a circular reference in the DataArray/Dataset. This means that, after someone invokes an accessor method on their DataArray/Dataset, the whole object - including the numpy buffers! - won't be instantly collected when it's dereferenced by the user; it will instead have to wait for the next `gc` pass. This could cause huge increases in RAM usage overnight in a user application, which would be very hard to trace back to a change that just added a custom method.

Finally, with #3250, this statefulness forces us to increase the RAM usage of all datasets and dataarrays by an extra slot, for all users, even though this feature is quite niche.

Proposed solution

Get rid of accessor caching altogether, and just recreate the accessor object from scratch every time it is invoked. In the documentation, clarify that the `__init__` method should not perform anything computationally intensive.
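The proposed cache-free behavior can be sketched with a plain (non-data) descriptor; the names below are illustrative, not xarray internals:

```python
class AccessorProperty:
    """Recreate the accessor on every access: no caching, no circular
    reference, no extra slot on the host object (illustrative sketch)."""

    def __init__(self, accessor_cls):
        self._cls = accessor_cls

    def __get__(self, obj, owner=None):
        if obj is None:
            return self._cls
        return self._cls(obj)  # fresh instance on every access


class Host:  # stand-in for Dataset/DataArray
    pass


class GeoAccessor:
    def __init__(self, obj):
        # Keep __init__ cheap: just store the pointer; derive anything
        # expensive lazily in properties/methods instead.
        self.obj = obj


Host.geo = AccessorProperty(GeoAccessor)

h = Host()
print(h.geo is h.geo)  # False: a new accessor instance per access
print(h.geo.obj is h)  # True
```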