[Refactor] TensorClass drop _tensordict argument in constructor #175
Conversation
Documentation needs to be updated with the new constructor signature and its arguments.
tensordict/prototype/tensorclass.py (Outdated)
     ]
     wrapper.__signature__ = init_sig.replace(parameters=params + new_params)

     return wrapper


-def _build_from_tensordict(cls, tensordict):
-    return cls(_tensordict=tensordict)
+def _from_tensordict(cls, tensordict):
I suggest adding a copy parameter to allow either storing just the reference or making a deep copy.
Can you say a bit more about how you imagine this working?
see here.
tensordict/prototype/tensorclass.py (Outdated)
def _from_tensordict(cls, tensordict):
    tc = cls(**tensordict, batch_size=tensordict.batch_size)
    tc.__dict__["tensordict"] = tensordict
    return tc
Suggested change:

-def _from_tensordict(cls, tensordict):
-    tc = cls(**tensordict, batch_size=tensordict.batch_size)
-    tc.__dict__["tensordict"] = tensordict
-    return tc
+def _from_tensordict(cls, tensordict, copy=True):
+    # TODO assert tensordict is tensordict
+    if copy:
+        tc = cls(**tensordict, batch_size=tensordict.batch_size)
+    else:
+        input_dict = {key: None for key in tensordict.keys()}
+        init(self, **input_dict, batch_size=tensordict.batch_size)
+        self.tensordict = _tensordict
+    tc.__dict__["tensordict"] = tensordict
+    return tc
init here is the dataclass' __init__? I see where you're going with this, it's just a bit awkward because currently the internals of _from_tensordict don't have access to init. We'll have to rewrite it as a wrapper of some sort.
Wouldn't calling cls.__init__ be enough?
But that's the tensorclass' init method, right? So that's effectively what I'm currently doing anyway with
tc = cls(**tensordict, batch_size=tensordict.batch_size)
Oh, my bad... in this PR we are still overwriting cls instead of using datacls. You are right then, we need to do a wrapper like the other functions.
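A rough sketch of what such a wrapper could look like, capturing the plain dataclass __init__ in a closure before the decorator overwrites it. The names _wrap_from_tensordict and datacls_init are hypothetical, the copy branch follows the suggestion above, and it assumes (as that suggestion does) that the dataclass __init__ accepts a batch_size keyword:

def _wrap_from_tensordict(datacls_init):
    # datacls_init: the original dataclass __init__, captured in a closure
    # before the tensorclass decorator replaces cls.__init__ (hypothetical name)
    def _from_tensordict(cls, tensordict, copy=True):
        if copy:
            # current behaviour: rebuild the contents entry by entry
            tc = cls(**tensordict, batch_size=tensordict.batch_size)
        else:
            # skip reconstruction: run the plain dataclass __init__ with
            # placeholder values, then store a reference to the input
            tc = cls.__new__(cls)
            placeholders = {key: None for key in tensordict.keys()}
            datacls_init(tc, **placeholders, batch_size=tensordict.batch_size)
        tc.__dict__["tensordict"] = tensordict
        return tc

    return classmethod(_from_tensordict)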
f"Keys from the tensordict ({set(tensordict.keys())}) must " | ||
f"correspond to the class attributes ({expected_keys})." | ||
) | ||
tc = cls(**tensordict, batch_size=tensordict.batch_size) |
Isn't this a bit of an overkill?
IIUC we deconstruct the tensordict, reconstruct another one, then replace it with the old one. Since the construction is quite time-consuming, do you think we can find another solution?
I had the very same objection, but @tcbegley measured the overhead and it seems negligible. Do you have a specific test case in mind that we could try, to double-check the benchmark?
The overhead is mainly due to the checks, so if we could do the checks only once it'd be cool (TensorDict(dictionary, batch_size, device, _run_checks=False)).
Given that the tensordict that is provided has presumably already been checked, we should be good, no?
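As a sketch, the fast path could look roughly like this. Only the _run_checks keyword is taken from the call quoted above; the classmethod body and the items() round-trip are assumptions, not the PR's actual code:

from tensordict import TensorDict

def _from_tensordict(cls, tensordict):
    # the provided tensordict has presumably already been validated,
    # so the internal copy can be built with the checks disabled
    tc = cls.__new__(cls)
    tc.__dict__["tensordict"] = TensorDict(
        {key: value for key, value in tensordict.items()},
        batch_size=tensordict.batch_size,
        device=tensordict.device,
        _run_checks=False,  # skip the costly consistency checks
    )
    return tc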
Off-topic: why the _ prefix in a parameter name? It should be used only for protected methods/variables.
Exactly for that: we don't want users to call that. We may refactor this in the future to hide it a bit more.
You should use that only if you build a tensordict out of another tensordict and the checks have already been done, which will only happen with dedicated methods.
I'm open to suggestions to make it cleaner :)
See this example:
td = TensorDict({"a": torch.randn(3), "b": torch.randn(2)}, [3], _run_checks=False) # passes
td[2] # breaks
The code breaks where it should not (it should break earlier).
If we make that arg public, we're telling users that constructing the TD like this is ok-ish, which it isn't.
Since we initialise an empty tensordict in the constructor, and then populate it via __setattr__ from inside the dataclass' constructor, I think we will also need to set _run_checks=False inside there to really reduce overhead. Is it ok to always have that turned off, though? Presumably we still want checks when running something like tc.X = torch.rand(100, 10).
> Since we initialise an empty tensordict in the constructor, and then populate it via __setattr__ from inside the dataclass' constructor, I think we will also need to set _run_checks=False inside there to really reduce overhead. Is it ok to always have that turned off, though? Presumably we still want checks when running something like tc.X = torch.rand(100, 10).
Yep, in that case we need to run it.
> Exactly for that: we don't want users to call that. We may refactor this in the future to hide it a bit more. You should use that only if you build a tensordict out of another tensordict and the checks have been done, which will only happen with dedicated methods. I'm open to suggestions to make it cleaner :)
Couldn't we simply store a self._checked boolean attribute that gets set to True the first time, and get rid of that constructor argument?
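A rough sketch of that idea; the class and its checks are illustrative stand-ins for TensorDict's real validation, not the library's code:

class TensorDictSketch:
    # Hypothetical: validation state lives on the instance instead of
    # being toggled by a _run_checks constructor argument.
    def __init__(self, source, batch_size):
        self.source = source
        self.batch_size = batch_size
        self._checked = False

    def _validate(self):
        if self._checked:
            return  # checks already ran once for this instance; skip them
        for tensor in self.source.values():
            # stand-in for TensorDict's real shape/device checks
            assert list(tensor.shape[: len(self.batch_size)]) == list(self.batch_size)
        self._checked = True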
That could be an option; I should go through the tensordict.py file and check whether that would work in all cases.
Merging this, we can add the _run_checks later if appropriate.
Description
This PR refactors the generated init method of tensorclasses. The motivation was to try and make the signature less confusing.
Previously batch_size was a positional or keyword argument with default value None; however, usually if batch_size was not specified, this would lead to a ValueError, so during normal instantiation batch_size was a required argument.

However, batch_size was not required if the user constructed a tensorclass by passing a TensorDict via the _tensordict argument. Hence it was not possible for us to make batch_size a required argument.

The changes:
- batch_size is now a required keyword-only argument. If you don't specify it, Python will raise a TypeError.
- We drop the _tensordict argument in tensorclass constructors. To construct a tensorclass from an existing TensorDict, use tc = MyTensorClass.from_tensordict(td) (see the usage sketch below).
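A minimal usage sketch of the new API. The from_tensordict call and the required keyword-only batch_size come from the description above; the import path mirrors the file touched in this PR (tensordict/prototype/tensorclass.py) and should be treated as an assumption:

import torch
from tensordict import TensorDict
from tensordict.prototype import tensorclass

@tensorclass
class MyTensorClass:
    X: torch.Tensor
    y: torch.Tensor

# batch_size is now keyword-only and required; omitting it raises a TypeError
data = MyTensorClass(X=torch.rand(10, 3), y=torch.rand(10), batch_size=[10])

# constructing from an existing TensorDict goes through the new classmethod
td = TensorDict({"X": torch.rand(10, 3), "y": torch.rand(10)}, batch_size=[10])
data2 = MyTensorClass.from_tensordict(td)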
.I've opened this as a separate PR to get specific feedback on whether we want to adopt this approach.
CC @apbard