
[Refactor] TensorClass drop _tensordict argument in constructor #175

Merged 5 commits into tensorclass-post-init on Jan 23, 2023

Conversation

tcbegley (Contributor)

Description

This PR refactors the generated __init__ method of tensorclasses. The motivation was to make the signature less confusing.

Previously, batch_size was a positional-or-keyword argument with default value None. In practice, omitting batch_size usually led to a ValueError, so during normal instantiation it was effectively required.

However, batch_size was not required when the user constructed a tensorclass by passing a TensorDict via the _tensordict argument. That is why we could not declare batch_size a required argument.

The changes:

  • batch_size is now a required keyword-only argument. If you don't specify it, Python will raise a TypeError.
  • There is no longer a _tensordict argument in tensorclass constructors.
  • Constructing a tensorclass from a tensordict is still possible, but now via a classmethod: tc = MyTensorClass.from_tensordict(td).
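
To illustrate, here is a minimal sketch of the new API. The class and field names are hypothetical, and the import path assumes the prototype location this PR touches (tensordict/prototype/tensorclass.py):

    import torch
    from tensordict import TensorDict
    from tensordict.prototype import tensorclass

    @tensorclass
    class MyTensorClass:
        X: torch.Tensor
        y: torch.Tensor

    # batch_size is now a required keyword-only argument
    tc = MyTensorClass(X=torch.rand(3, 4), y=torch.rand(3), batch_size=[3])

    # MyTensorClass(X=torch.rand(3, 4), y=torch.rand(3))  # raises TypeError

    # Building from an existing TensorDict replaces the removed _tensordict argument
    td = TensorDict({"X": torch.rand(3, 4), "y": torch.rand(3)}, batch_size=[3])
    tc2 = MyTensorClass.from_tensordict(td)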

I've opened this as a separate PR to get specific feedback on whether we want to adopt this approach.

CC @apbard

@facebook-github-bot added the CLA Signed label (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Jan 20, 2023.
@apbard (Contributor) left a comment:

The documentation needs to be updated with the new constructor signature and arguments.

     ]
     wrapper.__signature__ = init_sig.replace(parameters=params + new_params)

     return wrapper


-def _build_from_tensordict(cls, tensordict):
-    return cls(_tensordict=tensordict)
+def _from_tensordict(cls, tensordict):
Contributor:
I suggest adding a copy parameter to allow either storing a reference or making a deep copy.

tcbegley (Author):
Can you say a bit more about how you imagine this working?

Contributor:
see here.

tensordict/prototype/tensorclass.py (outdated, resolved)
Comment on lines 203 to 206
def _from_tensordict(cls, tensordict):
    tc = cls(**tensordict, batch_size=tensordict.batch_size)
    tc.__dict__["tensordict"] = tensordict
    return tc
Contributor:

Suggested change:

-def _from_tensordict(cls, tensordict):
-    tc = cls(**tensordict, batch_size=tensordict.batch_size)
-    tc.__dict__["tensordict"] = tensordict
-    return tc
+def _from_tensordict(cls, tensordict, copy=True):
+    # TODO assert tensordict is tensordict
+    if copy:
+        tc = cls(**tensordict, batch_size=tensordict.batch_size)
+    else:
+        input_dict = {key: None for key in tensordict.keys()}
+        init(self, **input_dict, batch_size=tensordict.batch_size)
+        self.tensordict = _tensordict
+    tc.__dict__["tensordict"] = tensordict
+    return tc

tcbegley (Author):
init here is the dataclass' __init__?

I see where you're going with this, it's just a bit awkward because currently the internals of _from_tensordict don't have access to init. We'll have to rewrite it as a wrapper of some sort.

Contributor:
Wouldn't calling cls.__init__ be enough?

tcbegley (Author):
But that's the tensorclass' __init__ method, right? So that's effectively what I'm currently doing anyway with

    tc = cls(**tensordict, batch_size=tensordict.batch_size)

Contributor:
Oh, my bad... in this PR we are still overwriting cls instead of using datacls. You are right then; we need to write a wrapper like the other functions.
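
To make the wrapper idea concrete, a rough sketch follows. It is illustrative only, not the code that was merged; the wrapper name is hypothetical, and the point is simply that closing over the dataclass-generated __init__ (as the other generated functions in this file do) gives the non-copying branch access to it:

    def _from_tensordict_wrapper(init):
        # `init` is the dataclass-generated __init__, captured by closure.
        def _from_tensordict(cls, tensordict, copy=True):
            if copy:
                # re-run the full constructor, rebuilding the internal tensordict
                tc = cls(**tensordict, batch_size=tensordict.batch_size)
            else:
                # initialise fields to None, then attach the existing
                # tensordict by reference instead of rebuilding it
                tc = cls.__new__(cls)
                init(tc, **{key: None for key in tensordict.keys()},
                     batch_size=tensordict.batch_size)
            tc.__dict__["tensordict"] = tensordict
            return tc
        return classmethod(_from_tensordict)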

f"Keys from the tensordict ({set(tensordict.keys())}) must "
f"correspond to the class attributes ({expected_keys})."
)
tc = cls(**tensordict, batch_size=tensordict.batch_size)
Contributor:
Isn't this a bit overkill? IIUC we deconstruct the tensordict, reconstruct another one, then replace it with the old one. Since construction is quite time-consuming, do you think we can find another solution?

Contributor:
My very same objection, but @tcbegley measured the overhead and it seems negligible. Do you have a specific test case in mind that we could try, to double-check the benchmark?

Contributor:
The overhead is mainly due to the checks, so it would be good if we could run the checks only once (TensorDict(dictionary, batch_size, device, _run_checks=False)).
Given that the tensordict that is provided has presumably already been checked, we should be good, no?
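
As a sketch of that suggestion: since the incoming tensordict was validated when it was first built, the rebuilt internal TensorDict could skip re-validation. The helper name below is hypothetical; only the _run_checks flag itself is taken from the comment above:

    from tensordict import TensorDict

    def _rebuild_without_checks(tensordict):
        # The source tensordict already passed its shape/device checks,
        # so skip repeating them while rebuilding the internal copy.
        return TensorDict(
            {key: value for key, value in tensordict.items()},
            batch_size=tensordict.batch_size,
            _run_checks=False,
        )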

Contributor:
Off-topic: why the _ in a parameter name? It should be used only for protected methods/variables.

Contributor:
Exactly for that reason: we don't want users to call it. We may refactor this in the future to hide it a bit more.
You should use it only if you build a tensordict out of another tensordict whose checks have already been done, which will only occur within dedicated methods.
I'm open to suggestions to make it cleaner :)

@vmoens (Contributor), Jan 23, 2023:
See this example:

    td = TensorDict({"a": torch.randn(3), "b": torch.randn(2)}, [3], _run_checks=False)  # passes
    td[2]  # breaks

The code breaks where it should not (it should break earlier).
If we make that arg public, we're telling users that it would be ok-ish to construct the TD like this, which it isn't.

tcbegley (Author):
Since we initialise an empty tensordict in the constructor, and then populate it via __setattr__ from inside the dataclass' constructor, I think we will also need to set _run_checks=False in there to really reduce overhead. Is it ok to always have that turned off, though? Presumably we still want checks when running something like

    tc.X = torch.rand(100, 10)

Contributor:

> Since we initialise an empty tensordict in the constructor, and then populate it via __setattr__ from inside the dataclass' constructor, I think we will also need to set _run_checks=False in there to really reduce overhead. Is it ok to always have that turned off, though? Presumably we still want checks when running something like tc.X = torch.rand(100, 10)

Yep, in that case we need to run it.

Contributor:

> Exactly for that reason: we don't want users to call it. We may refactor this in the future to hide it a bit more. You should use it only if you build a tensordict out of another tensordict whose checks have already been done, which will only occur within dedicated methods. I'm open to suggestions to make it cleaner :)

Couldn't we simply store a self._checked boolean attribute that gets set to True the first time, and get rid of that constructor argument?
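
A minimal toy sketch of that alternative (class and method names are hypothetical; the point is that validation state lives on the instance, so no constructor flag is needed):

    class CheckedOnce:
        def __init__(self, data):
            self.data = data
            self._checked = False  # no _run_checks constructor argument

        def _maybe_check(self):
            # run the (potentially expensive) validation at most once
            if not self._checked:
                assert isinstance(self.data, dict)  # stand-in for real checks
                self._checked = True

        def __getitem__(self, key):
            self._maybe_check()  # every access path funnels through the check
            return self.data[key]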

Contributor:
That could be an option. I should go through the tensordict.py file and check whether that would work in all cases.

@vmoens added the Refactor label (refactoring code - not a new feature) on Jan 23, 2023.
@vmoens (Contributor) commented on Jan 23, 2023:

Merging this; we can add the _run_checks change later if appropriate.

@vmoens merged commit 3cf2703 into tensorclass-post-init on Jan 23, 2023.