torch.aten.unfold #849
renxida pushed a commit to llvm/torch-mlir that referenced this issue on Oct 8, 2024:
# Description

Implementation of the op for `torch.aten.unfold`: [TorchToLinalg Op Support #347](nod-ai/SHARK-ModelDev#849)

Documentation of the op can be found here: [PyTorch Docs](https://pytorch.org/docs/stable/generated/torch.Tensor.unfold.html)

For this op, we apply a sliding window of some `size` along a single `dimension`, with `step` between iterations.

`Declaration: aten::unfold(Tensor(a) self, int dimension, int size, int step) -> Tensor(a)`

The resulting unfolded tensor changes the extent of `dimension` to the number of blocks the sliding window extracts, and appends an additional dimension of length `size` (the number of columns of the output tensor corresponds directly to the size of the sliding window). So for a tensor of rank 3 (A x B x C), with dimension = 1, size = 2, and step = 2:

(A x B x C) |=> (A x (B - size) // step + 1 x C x size)

After extracting each window from the input tensor, we insert the (1 x size) slice into the output tensor. We can simplify this by mapping the output indices back to the input indices, as done in the official implementation: [PyTorch Code](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/lowering.py#L1694)
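As an illustration of the semantics described above (not part of the patch itself), here is a minimal PyTorch sketch showing the output shape formula and the output-to-input index mapping for the rank-3 example; the concrete sizes A, B, C are arbitrary placeholders:

```python
import torch

# Example: rank-3 tensor (A x B x C), unfolding along dimension = 1
A, B, C = 2, 6, 3
dimension, size, step = 1, 2, 2

x = torch.arange(A * B * C, dtype=torch.float32).reshape(A, B, C)

# torch.Tensor.unfold slides a window of length `size` along `dimension`
out = x.unfold(dimension, size, step)

# The unfolded dimension becomes the number of windows, and a trailing
# dimension of length `size` is appended.
num_windows = (B - size) // step + 1
assert out.shape == (A, num_windows, C, size)

# Same result expressed via the index mapping:
# out[a, w, c, k] == x[a, w * step + k, c]
for a in range(A):
    for w in range(num_windows):
        for c in range(C):
            for k in range(size):
                assert out[a, w, c, k] == x[a, w * step + k, c]
```

The index-mapping view is what makes a direct lowering straightforward: each output element reads exactly one input element, so no intermediate slice extraction/insertion is needed.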
Support added here: llvm/torch-mlir#3772
Assigned to @stbaione