
[TOSA] Fix aten.view and aten.slice.tensor #1768

Closed · wants to merge 1 commit

Conversation

@AmosLewis (Collaborator) commented Jan 3, 2023

Found this error in nod-ai/SHARK-Studio#494. This patch:

- Handles a -1 entry in the aten.view size list (the size is inferred from the other dimensions)
- Handles a negative start index (e.g. -1) in aten.slice.Tensor

Slice op explanation: https://cran.r-project.org/web/packages/torch/vignettes/indexing.html
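
The slice fix follows PyTorch indexing semantics: a negative start counts back from the end of the sliced dimension. A minimal sketch of the normalization (illustrative only; names such as start, dim, and selfType follow the TorchToTosa lowering touched here):

// Wrap a negative start around the sliced dimension: start = -1 on a
// dim of size 128 becomes 127, as in the tosa.slice output below.
if (start < 0)
  start += selfType.getShape()[dim];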

@AmosLewis (Collaborator, Author) commented Jan 17, 2023

Test case: aten.slice.Tensor with start = -1, lowered with -convert-torch-to-tosa:

func.func @torch.aten.slice(%arg0: !torch.vtensor<[1,128,2],f32>) -> !torch.vtensor<[1,1,2],f32> {
  %int-1 = torch.constant.int -1
  %int1 = torch.constant.int 1
  %int0 = torch.constant.int 0
  %0 = torch.aten.slice.Tensor %arg0, %int1, %int-1, %int0, %int1 : !torch.vtensor<[1,128,2],f32>, !torch.int, !torch.int, !torch.int, !torch.int -> !torch.vtensor<[1,1,2],f32>
  return %0 : !torch.vtensor<[1,1,2],f32>
}

➜ ~ torch-mlir-opt -convert-torch-to-tosa /tmp/slice.mlir

module {
  func.func @torch.aten.slice(%arg0: !torch.vtensor<[1,128,2],f32>) -> !torch.vtensor<[1,1,2],f32> {
    %0 = torch_c.to_builtin_tensor %arg0 : !torch.vtensor<[1,128,2],f32> -> tensor<1x128x2xf32>
    %int-1 = torch.constant.int -1
    %int1 = torch.constant.int 1
    %int0 = torch.constant.int 0
    %1 = "tosa.slice"(%0) {size = array<i64: 1, 1, 2>, start = array<i64: 0, 127, 0>} : (tensor<1x128x2xf32>) -> tensor<1x1x2xf32>
    %2 = torch_c.from_builtin_tensor %1 : tensor<1x1x2xf32> -> !torch.vtensor<[1,1,2],f32>
    return %2 : !torch.vtensor<[1,1,2],f32>
  }
}

@AmosLewis (Collaborator, Author) commented:

Test case: aten.view with a -1 size in the shape list:
func.func @torch.aten.view(%arg0: !torch.vtensor<[1,128],si64>) -> !torch.vtensor<[1,128],si64> {
  %int-1 = torch.constant.int -1
  %int128 = torch.constant.int 128
  %0 = torch.prim.ListConstruct %int-1, %int128 : (!torch.int, !torch.int) -> !torch.list<int>
  %1 = torch.aten.view %arg0, %0 : !torch.vtensor<[1,128],si64>, !torch.list<int> -> !torch.vtensor<[1,128],si64>
  return %1 : !torch.vtensor<[1,128],si64>
}

➜ ~ torch-mlir-opt -convert-torch-to-tosa /tmp/view_torchbackend.mlir

module {
  func.func @torch.aten.view(%arg0: !torch.vtensor<[1,128],si64>) -> !torch.vtensor<[1,128],si64> {
    %0 = torch_c.to_builtin_tensor %arg0 : !torch.vtensor<[1,128],si64> -> tensor<1x128xi64>
    %int-1 = torch.constant.int -1
    %int128 = torch.constant.int 128
    %1 = torch.prim.ListConstruct %int-1, %int128 : (!torch.int, !torch.int) -> !torch.list<int>
    %2 = "tosa.reshape"(%0) {new_shape = array<i64: 1, 128>} : (tensor<1x128xi64>) -> tensor<1x128xi64>
    %3 = torch_c.from_builtin_tensor %2 : tensor<1x128xi64> -> !torch.vtensor<[1,128],si64>
    return %3 : !torch.vtensor<[1,128],si64>
  }
}

@AmosLewis (Collaborator, Author) commented:

The same test cases run through the full torch-backend-to-tosa-backend pipeline:

➜  ~ torch-mlir-opt -pass-pipeline='builtin.module(torch-backend-to-tosa-backend-pipeline)'   /tmp/view_torchbackend.mlir
module {
  func.func @torch.aten.view(%arg0: tensor<1x128xi64>) -> tensor<1x128xi64> {
    return %arg0 : tensor<1x128xi64>
  }
}

➜  ~ torch-mlir-opt -pass-pipeline='builtin.module(torch-backend-to-tosa-backend-pipeline)'   /tmp/slice.mlir            
module {
  func.func @torch.aten.slice(%arg0: tensor<1x128x2xf32>) -> tensor<1x1x2xf32> {
    %0 = "tosa.slice"(%arg0) {size = array<i64: 1, 1, 2>, start = array<i64: 0, 127, 0>} : (tensor<1x128x2xf32>) -> tensor<1x1x2xf32>
    return %0 : tensor<1x1x2xf32>
  }
}

@AmosLewis force-pushed the slice branch 2 times, most recently from b0a5f34 to 7feb66a (January 18, 2023 01:19)
@AmosLewis requested a review from ramiro050 (January 18, 2023 04:01)
@AmosLewis force-pushed the slice branch 6 times, most recently from 4232ba5 to e18993c (January 19, 2023 01:21)
@@ -3061,13 +3079,18 @@ LogicalResult ConvertAtenOp<AtenSliceTensorOp>::matchAndRewrite(
if (!matchPattern(op.getStart(), m_TorchConstantInt(&start)))
return rewriter.notifyMatchFailure(op, "start must be a Scalar constant");

if (start < 0)
Collaborator:
Can you add an e2e test that checks these changes?

// CHECK: %[[VAL_6:.*]] = torch_c.from_builtin_tensor %[[VAL_5]] : tensor<1x1x2xf32> -> !torch.vtensor<[1,1,2],f32>
// CHECK: return %[[VAL_6]] : !torch.vtensor<[1,1,2],f32>
// CHECK: }
func.func @torch.aten.slice(%arg0: !torch.vtensor<[1,128,2],f32>) -> !torch.vtensor<[1,1,2],f32> {
Collaborator:

Not needed

@@ -2645,6 +2645,24 @@ LogicalResult ConvertAtenOp<AtenViewOp>::matchAndRewrite(
return rewriter.notifyMatchFailure(op,
"size must consist of Scalar constants");

// the size -1 is inferred from the other dimensions
Collaborator:
You need to first make sure there is at most one -1 in the list and return a notifyMatchFailure if that is not the case
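
For example, a minimal sketch of that guard (assuming outShape is the vector of requested sizes and llvm/ADT/STLExtras.h is available for llvm::count):

// Reject size lists with more than one -1, which PyTorch also disallows.
if (llvm::count(outShape, -1) > 1)
  return rewriter.notifyMatchFailure(
      op, "at most one dimension in the size list may be -1");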

Collaborator (Author):
Done

}
for (size_t i = 0; i < outShape.size(); i++) {
if (outShape[i] < 0) {
outShape[i] = totalSize / otherSize;
Collaborator:
I would add a break after this line to make it very clear that this is expected to run only once.
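
That is, something like this sketch (totalSize and otherSize are the products of the input dims and the non-negative requested dims, as in the patch):

for (size_t i = 0; i < outShape.size(); i++) {
  if (outShape[i] < 0) {
    // Infer the single -1 entry from the remaining dimensions.
    outShape[i] = totalSize / otherSize;
    break; // the earlier at-most-one -1 check means this runs once
  }
}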

Collaborator (Author):
Done

-  if (start < 0)
-    return rewriter.notifyMatchFailure(op, "Currently unsupported: start < 0");
+  if (start < 0) {
+    start = start + selfType.getShape()[dim];
Collaborator:
You need to check that it is positive after this
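
Something along these lines (sketch):

start += selfType.getShape()[dim];
// A start that is still negative after wrapping (e.g. -200 on a dim of
// size 128) is out of range, so fail to match instead of miscompiling.
if (start < 0)
  return rewriter.notifyMatchFailure(op, "start is out of bounds");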

Collaborator (Author):
Done


int64_t end;
if (!matchPattern(op.getEnd(), m_TorchConstantInt(&end)))
return rewriter.notifyMatchFailure(op, "end must be a Scalar constant");

if (end <= 0) {
Collaborator:
I don't think this is correct. For the end == 0 case, end should remain zero. Make sure to also e2e test this edge case
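
A sketch of the intended behavior (only a strictly negative end wraps; end == 0 keeps the slice empty, matching PyTorch):

// end == 0 must stay 0 so that [start, 0) yields an empty slice;
// only a negative end counts back from the end of the dimension.
if (end < 0)
  end += selfType.getShape()[dim];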

@AmosLewis (Collaborator, Author) commented:
Split the aten.view op changes into a new patch: #1815

@AmosLewis closed this Jan 31, 2023
@AmosLewis deleted the slice branch January 19, 2024