Merge Tensor and Variable types. #28287
Conversation
This PR eliminates the static (but not dynamic) distinction between Tensor and Variable: every Variable is a Tensor, with no need to static_cast or call the Variable constructor. The dynamic distinction will be eliminated in a later diff.

To do this, I need Tensor to have API parity with Variable. Thanks to the efforts of Will Feng and others, most of the hard work has already been done; I just dump all public methods on Variable into Tensor. After doing this, the implementations migrate to a few places:

- Some previously inline implementations only reference TensorImpl. These can stay inline in TensorBody.h.
- Some previously inline implementations reference AutogradMeta. For the time being, AutogradMeta continues to live in variable.h; thus, these implementations must move out-of-line, into Tensor.cpp.
- However, there are also some template methods. Those methods are retained in variable.h.
- Some previous implementations are defined in native_functions.yaml. In this case, I don't define them explicitly in Tensor; instead they are placed in VariableTypeManual.cpp. Doing this would have deleted their documentation, so that documentation was moved into native_functions.yaml.
- All out-of-line implementations that don't fall under the previous categories are put in Tensor.cpp.
- Private inline methods got turned into non-method helper functions. There was only one of these, _create_cpp_hook.

I had to add a number of new forward declarations (and sometimes full declarations) to Tensor.h. One API difference is that all Variable methods now have const, so we no longer have faux const-correctness (see zdevito/ATen#27 for the back story).

I would have preferred to eliminate the dynamic distinction first, but I wanted inline access to AutogradMeta in Tensor, and the AutogradMeta struct references Variable (furthermore, I cannot make it reference Tensor, as we return Variable by mutable reference from grad() to support the "x.grad() = ..." idiom; see the sketch below).

Signed-off-by: Edward Z. Yang <[email protected]>
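As a rough illustration of the end state, here is a minimal C++ sketch assuming the post-merge C++ frontend described above; this is my illustration, not code from the diff:

```cpp
#include <torch/torch.h>

void example() {
  // Post-merge, autograd state is queried directly on at::Tensor; pre-merge,
  // these members lived on torch::autograd::Variable, and reaching them from
  // a Tensor required a static_cast or a Variable construction.
  at::Tensor t = torch::ones({2, 2}, torch::requires_grad());
  bool needs_grad = t.requires_grad();

  // Per the description above, grad() hands back the gradient by mutable
  // reference, which is what keeps the "x.grad() = ..." idiom working:
  t.grad() = torch::zeros({2, 2});

  // Variable is now just another name for Tensor:
  torch::autograd::Variable v = t;  // no Variable(t) constructor needed
  (void)needs_grad;
  (void)v;
}
```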
```cpp
  return Tensor(self_impl_copy);
}

/// NOTE: `var.variable_data()` in C++ has the same semantics as `tensor.data`
```
Does that mean that `var.variable_data()` is the same as `var.detach()`?
cc @yf225 this is just preexisting
I think the only difference between `var.variable_data()` (aka `tensor.data` in Python) and `var.detach()` (aka `tensor.detach()` in Python) is that the former doesn't share the version counter, but the latter does. A quick sketch below illustrates this.
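To make the distinction concrete, here is a minimal C++ sketch, assuming the `variable_data()` method discussed in this thread and the `_version()` accessor touched later in this stack (#29667); the expected outputs are under those assumptions, not verified against this exact revision:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  auto base = torch::ones({2, 2}, torch::requires_grad());
  auto x = base * 2;           // non-leaf, so in-place mutation is permitted

  auto d = x.detach();         // shares x's version counter
  auto v = x.variable_data();  // fresh version counter (tensor.data semantics)

  x.mul_(2);  // an in-place op bumps the counter shared by x and d

  std::cout << x._version() << " "   // 1
            << d._version() << " "   // 1 (shared with x)
            << v._version() << "\n"; // 0 (not shared)
}
```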
This diff is now rebased past my other changes!
CircleCI build failures summary

As of commit 658d692:

Here are the reasons each build failed:

This comment was automatically generated by Dr. CI. Please report bugs/suggestions on the GitHub issue tracker. This comment has been revised 7 time(s).
Summary: Pull Request resolved: #29667

Some previous implementations are defined in native_functions.yaml. In this case, I don't define them explicitly in Tensor; instead they are placed in VariableTypeManual.cpp. Doing this would have deleted their documentation, so that documentation was moved into native_functions.yaml.

This also replaces `current_version` with just `_version` (a tiny sketch of the renamed accessor follows).

This is a carved-out portion of #28287, rebased past the Tensor-Variable merge.

Signed-off-by: Edward Z. Yang <[email protected]>

Test Plan: Imported from OSS

Differential Revision: D18504934

Pulled By: ezyang

fbshipit-source-id: be7adf45b637daffe2b0b1631eb31d967525fc31
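For reference, a minimal sketch of the renamed accessor, assuming the `_version()` method spelling named in the commit message above:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  auto t = torch::zeros({3});
  std::cout << t._version() << "\n";  // 0: no in-place ops yet
  t.add_(1);                          // each in-place op bumps the counter
  std::cout << t._version() << "\n";  // 1
}
```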
Stack from ghstack:

This PR eliminates the static distinction between Tensor and Variable. Every Variable is a Tensor; there is no need to static_cast or call the Variable constructor.

To do this, I need Tensor to have API parity with Variable. I have already moved most of the methods I don't want in Tensor off Variable. These implementations are all placed in Tensor.cpp.

One API difference is that all Variable methods now have const, so we no longer have faux const-correctness (see zdevito/ATen#27 for the back story). A sketch of what this means in practice follows.
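A minimal sketch of the "faux const-correctness" point (my illustration, not code from the diff): Tensor is a reference-counted handle to TensorImpl, so a const handle never protected the underlying data, and the merged API stops pretending otherwise by marking every method const.

```cpp
#include <torch/torch.h>

void example(const at::Tensor& t) {
  // Both of these compile even though t is a const reference: the methods
  // are const on the Tensor handle, but they mutate the shared TensorImpl.
  t.requires_grad_(true);  // flips autograd metadata
  t.zero_();               // writes through to the underlying storage
}
```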
This diff is BC breaking in a few ways:

- Because torch::autograd::Variable is now just an alias of at::Tensor, ADL for `torch::autograd` functions no longer works; you have to explicitly qualify them with `torch::autograd` (examples: `torch/nn/parallel/data_parallel.h`). A sketch of the failure mode follows this list.
- Because Variable and Tensor are now the same type, code which assumes that they are different types (e.g., for the purposes of templating, or enable_if checks) will not work until you delete the (now) redundant overload/specialization (examples: `torch/nn/modules/container/any.h`, `torch/csrc/utils/pybind.h`).
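To illustrate the ADL breakage with self-contained stand-ins (hypothetical names, not the real torch headers): a type alias contributes no associated namespaces, so free functions living in torch::autograd stop being found once Variable is an alias of at::Tensor.

```cpp
#include <iostream>

namespace at {
struct Tensor { int id = 0; };  // stand-in for the real at::Tensor
}

namespace torch { namespace autograd {
using Variable = at::Tensor;  // post-merge: Variable is a mere alias

// Hypothetical helper standing in for any free function in this namespace.
inline void describe(const Variable& v) {
  std::cout << "variable " << v.id << "\n";
}
}}  // namespace torch::autograd

int main() {
  torch::autograd::Variable v;
  // describe(v);                // error: ADL only searches at:: (the alias's
  //                             // underlying namespace), never torch::autograd
  torch::autograd::describe(v);  // explicit qualification is now required
}
```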
Some other notes:

- I'm not sure what was going on with the old template implementation of `extract_vars`, but I couldn't get the SFINAE version to work. Replacing it with an overloading-based version made it work (a hypothetical sketch of the pattern follows).
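A hypothetical sketch of the overloading pattern described in that note (the names and shapes here are mine, not the actual torch sources): plain overload resolution, rather than enable_if/SFINAE, separates Variable arguments from everything else.

```cpp
#include <vector>

struct Variable {};  // stand-in for the real Variable/Tensor type
using variable_list = std::vector<Variable>;

// The exact-match overload collects Variable arguments...
inline void collect(variable_list& out, const Variable& v) {
  out.push_back(v);
}

// ...while the template overload silently ignores everything else; overload
// resolution prefers the non-template for Variables.
template <typename T>
void collect(variable_list&, const T&) {}

template <typename... Args>
variable_list extract_vars(Args&&... args) {
  variable_list out;
  (collect(out, args), ...);  // C++17 fold over the argument pack
  return out;
}

// usage: extract_vars(Variable{}, 42, Variable{}) yields 2 collected Variables
```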
Signed-off-by: Edward Z. Yang [email protected]
Differential Revision: D18571426