
Cast number to float when shape function takes Scalar arg #1978

Merged 1 commit into llvm:main from fix-handling-number on Mar 28, 2023

Conversation

ramiro050 (Collaborator) commented:
To keep things simple in shape functions, `Scalar` inputs are
considered `float`s. This means that when inserting the shape
functions into the IR, we must cast any `!torch.number`s into `float`s
so that the operand type matches the expected type in the shape
function. This commit adds the cast from `Scalar` to `float`.
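
For illustration, here is a minimal sketch (not the actual torch-mlir shape library source; the function name and signature are hypothetical) of a shape function whose `Scalar` operand is declared as `float`, per the convention described above:

```python
from typing import List

# Minimal sketch, assuming the convention described above: a shape function
# for an op that takes a Scalar operand declares that operand as `float`,
# so a `!torch.number` operand must be cast to `!torch.float` before the
# call to the shape function is inserted into the IR.
def add_scalar_shape(self: List[int], other: float, alpha: float = 1) -> List[int]:
    # Scalar operands do not affect the result shape; only the input
    # tensor's shape is propagated.
    return list(self)
```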
ramiro050 merged commit d803ab4 into llvm:main on Mar 28, 2023
ramiro050 deleted the fix-handling-number branch on March 28, 2023 at 16:30
ramiro050 added a commit to ramiro050/torch-mlir that referenced this pull request on Apr 24, 2023
ramiro050 added a commit that referenced this pull request on Apr 25, 2023
gpetters94 pushed a commit to gpetters94/mlir-npcomp that referenced this pull request on May 10, 2023
gpetters94 pushed a commit to gpetters94/mlir-npcomp that referenced this pull request on Jul 7, 2023
gpetters94 pushed a commit to gpetters94/mlir-npcomp that referenced this pull request on Jul 7, 2023