[torch-dialect] emit aten.as_strided op #2280

Open · wants to merge 6 commits into main from wjw-emit-aten.as_strided-and-convert
Conversation

@Vremold (Collaborator) commented Jul 3, 2023

This patch is an extension of PR #1742. The difference is that this patch decomposes aten.as_strided directly in the torch dialect, rather than relying on the dialect conversion pass to handle it. It also adds a folder for aten.as_strided for the case where the op does nothing.
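For context, aten.as_strided reinterprets a tensor's underlying storage with an explicit size, stride, and storage offset; the "does nothing" case the folder targets is essentially when those values match the input's own metadata. A minimal PyTorch sketch of the two situations (the variable names are illustrative only, not from the patch):

```python
import torch

x = torch.arange(12, dtype=torch.float32).reshape(3, 4)  # contiguous: size (3, 4), stride (4, 1)

# Identity case: size, stride, and storage offset all match the input's own
# metadata, so as_strided is a no-op view that a folder could eliminate.
y = torch.as_strided(x, size=(3, 4), stride=(4, 1), storage_offset=0)
assert torch.equal(x, y)

# Non-trivial case: the same storage reinterpreted with different metadata;
# this is the case the decomposition has to handle.
z = torch.as_strided(x, size=(2, 2), stride=(1, 4))
```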

@Vremold requested review from ramiro050 and AmosLewis on July 3, 2023 09:26
@Vremold force-pushed the wjw-emit-aten.as_strided-and-convert branch from 8a0b2f2 to 1e04260 on July 5, 2023 02:56
@Vremold (Collaborator, Author) commented Jul 5, 2023

My CI runs have failed several times. Could you please help me figure out why, @ramiro050? Thanks!

@Vremold force-pushed the wjw-emit-aten.as_strided-and-convert branch from fb8e9de to c8ea0a0 on July 16, 2023 13:15
@mgehre-amd (Contributor) commented Jul 17, 2023

The tests hang when a test case crashes. I see the following in the CI output:

python: /main_checkout/torch-mlir/externals/llvm-project/llvm/../mlir/include/mlir/IR/StorageUniquerSupport.h:180: static ConcreteT mlir::detail::StorageUserBase<mlir::torch::Torch::ValueTensorType, mlir::torch::Torch::BaseTensorType, mlir::torch::Torch::detail::ValueTensorTypeStorage, mlir::detail::TypeUniquer>::get(MLIRContext *, Args...) [ConcreteT = mlir::torch::Torch::ValueTensorType, BaseT = mlir::torch::Torch::BaseTensorType, StorageT = mlir::torch::Torch::detail::ValueTensorTypeStorage, UniquerT = mlir::detail::TypeUniquer, Traits = <>, Args = <std::optional<llvm::ArrayRef<long>>, mlir::Type>]: Assertion `succeeded(ConcreteT::verify(getDefaultDiagnosticEmitFn(ctx), args...))' failed.

Does ./tools/e2e_test.sh --config=make_fx_tosa -v succeed locally for you?

@Vremold (Collaborator, Author) commented Jul 17, 2023

> The tests hang when a test case crashes. I see the ValueTensorType verification assertion failure in the CI output. Does ./tools/e2e_test.sh --config=make_fx_tosa -v succeed locally for you?

Thanks for the reminder. I found the mistakes in the code and fixed them in the latest commit.

@ramiro050 (Collaborator) commented

This is a tricky op to support in torch-mlir because it interacts with the storage of the tensor directly, and at the torch-mlir level there is no concept of storage. See discussion here: #1742 (comment)

How are you encountering this op?
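To spell out why storage matters here: as_strided addresses the input's underlying storage rather than its logical view, so it can reach elements that the input tensor itself does not expose. A small illustrative PyTorch sketch (not from the PR; the names are hypothetical):

```python
import torch

base = torch.arange(12).reshape(3, 4)  # 12 storage elements, stride (4, 1)
view = base[:, :2]                     # logical view: size (3, 2), shares base's storage

# as_strided operates on view's *storage*, so it can reach elements outside
# view's logical (3, 2) window -- here it reconstructs all of base.
leaked = view.as_strided((3, 4), (4, 1))
assert torch.equal(leaked, base)
```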

@Vremold (Collaborator, Author) commented Jul 19, 2023

> This is a tricky op to support in torch-mlir because it interacts with the storage of the tensor directly, and at the torch-mlir level there is no concept of storage. See discussion here: #1742 (comment)
>
> How are you encountering this op?

I understand your concern about as_strided operating on a non-contiguous tensor. I encountered this op in a TorchScript model, but the original torch model is missing, so I'm not sure whether it comes from the original torch model or was just created by torch.jit.

@Vremold (Collaborator, Author) commented Jul 19, 2023

So is there any possibility of supporting this op in Torch-MLIR, if it does come from a PyTorch model?

@mgehre-amd (Contributor) commented

I also remember seeing this op come from dynamo/make_fx decompositions.
If the op is not safe to handle in general, can we statically analyze the cases where it is safe to handle?
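One candidate for such a static check, offered only as a sketch and not as anything this PR implements: treat as_strided as trivially safe when its size, stride, and storage offset describe exactly the input's own contiguous layout, i.e. the op is a pure no-op view (the same case the folder in this patch targets). The helper below is hypothetical and assumes a contiguous input:

```python
from typing import Optional, Sequence


def contiguous_strides(size: Sequence[int]) -> list:
    """Row-major (contiguous) strides for a tensor of the given size."""
    strides = [1] * len(size)
    for i in range(len(size) - 2, -1, -1):
        strides[i] = strides[i + 1] * size[i + 1]
    return strides


def as_strided_is_noop(
    input_size: Sequence[int],
    size: Sequence[int],
    stride: Sequence[int],
    storage_offset: Optional[int],
) -> bool:
    """True when as_strided merely reproduces the input's own contiguous view."""
    return (
        storage_offset in (None, 0)
        and list(size) == list(input_size)
        and list(stride) == contiguous_strides(input_size)
    )


# Example: as_strided(x, (3, 4), (4, 1), 0) on a contiguous (3, 4) input is a no-op.
assert as_strided_is_noop((3, 4), (3, 4), (4, 1), 0)
```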

@ramiro050 (Collaborator) commented

> If the op is not safe to handle in general, can we statically analyze the cases where it is safe to handle?

Yeah, that would be the approach to take. My guess is that this op is being generated by PyTorch from decomposing a higher-level op, so we need to identify the pattern and fold it back into the high-level op.
