Enable support for LTC Input/Output Mapping #764
Conversation
With the aten::detach op enabled, tensors were being copied, which resulted in incorrect aliasing.
As part of this change, TorchMlirComputation has been moved to the end of mlir_lowering_context.h so that it can access some new structs in TorchMlirLoweringContext.
@silvasean Keeping a map of aliased output -> input tensors is a feature that we're planning to use internally at Cerebras -- is this something you'd like us to keep in the generic …
Force-pushed from ac6fca3 to 52a0b23.
Isn't that something that upstream already provides or could be moved there? It definitely seems like a generally useful feature and not specific to MLIR or Cerebras in any way.
Upstream we're just calling a method in the …
Oh got it. Let's keep the functionality in there, as it seems generally useful.
python/torch_mlir/csrc/base_lazy_backend/mlir_lowering_context.h (review comment: outdated, resolved)
* Save InputOutputAliases to TorchMlirComputation
* Implement GetResultShape for TorchMlirLoweringContext
* Use optional return type for GetResultShape
* Remove support for aten::detach (with this op enabled, tensors were being copied, which resulted in incorrect aliasing)
* Add newline before printing I/O alias mapping
* Change printout to use "Input param" as the label instead of "Input"
* Remove shape inference function for aten::detach
* Move implementation of SetUpAlias to MlirLoweringContext (as part of this change, TorchMlirComputation has been moved to the end of mlir_lowering_context.h so that it can access some new structs in TorchMlirLoweringContext)
* Use updated PyTorch API
* Remove GetResultShape

Complements this upstream PyTorch PR: pytorch/pytorch#75828

This PR adds support for mapping input and output tensors that alias each other (e.g., it maps an input weight parameter to the same tensor in the output after a training iteration).

MLIR:

```mlir
func @graph(%arg0: !torch.vtensor<[1,5],f32>, %arg1: !torch.vtensor<[1],si64>, ..., %arg6: !torch.vtensor<[10,5],f32>, %arg7: !torch.vtensor<[10],f32>, ...) {
  ...
  return %arg0, %arg1, %17, %23, ... : !torch.vtensor<[1,5],f32>, !torch.vtensor<[1],si64>, !torch.vtensor<[10,5],f32>, !torch.vtensor<[10],f32>, ...
}
```

Input/Output Alias Mapping:

```
Output: 0 -> Input: 0
Output: 1 -> Input: 1
Output: 2 -> Input: 6
Output: 3 -> Input: 7
```

The aten::detach op has also been disabled in this PR to fix the issue of tensors not aliasing properly due to copying.
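For orientation, here is a minimal sketch of the alias bookkeeping described above. The names (`Computation`, `InputOutputAlias`, `set_up_alias`) are simplified stand-ins for the C++ `TorchMlirComputation`/`SetUpAlias` machinery, not the actual torch-mlir signatures:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class InputOutputAlias:
    output_index: int  # position of the tensor in the graph's returned tuple
    param_number: int  # position of the aliased tensor in the parameter list


@dataclass
class Computation:
    """Stand-in for TorchMlirComputation: owns the recorded aliases."""
    aliases: List[InputOutputAlias] = field(default_factory=list)

    def set_up_alias(self, output_index: int, param_number: int) -> None:
        # Called during lowering when an output is found to share storage
        # with an input parameter (e.g. a weight updated in place).
        self.aliases.append(InputOutputAlias(output_index, param_number))

    def print_aliases(self) -> None:
        # Mirrors the printout added in this PR.
        print("\nInput/Output Alias Mapping:")
        for a in self.aliases:
            print(f"Output: {a.output_index} -> Input param: {a.param_number}")


c = Computation()
for out_idx, in_idx in [(0, 0), (1, 1), (2, 6), (3, 7)]:
    c.set_up_alias(out_idx, in_idx)
c.print_aliases()
```

A backend can then consult this map at execution time and reuse the input buffer for the corresponding output instead of allocating a new one and copying.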
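As context for where such aliases come from, here is a hedged sketch of the lazy-tensor training step this feature targets. The backend-initialization calls follow the upstream LTC TorchScript-backend examples of the same era and are an assumption here; torch-mlir's LTC backend registers the `lazy` device through its own path:

```python
import torch
import torch._lazy
import torch._lazy.ts_backend

# Assumption: initialize the reference TorchScript lazy backend so the
# "lazy" device exists (torch-mlir's backend has its own init path).
torch._lazy.ts_backend.init()

model = torch.nn.Linear(5, 10).to(device="lazy")
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(1, 5, device="lazy")
target = torch.randint(0, 10, (1,), device="lazy")

# One training step. The traced graph takes the parameters as inputs, and
# optimizer.step() updates them in place, so each updated parameter shows
# up in the graph's outputs aliasing an input (e.g. "Output: 2 -> Input: 6"
# for the [10,5] weight above) rather than being copied.
loss = torch.nn.functional.cross_entropy(model(x), target)
loss.backward()
optimizer.step()
torch._lazy.mark_step()  # cut the trace here and execute the graph
```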
cc: @antoniojkim @ke1337