Add support for mv. #1444

Closed · wants to merge 1 commit
Conversation

dellis23 (Collaborator)

See #1423.

Please let me know if there are better places to put anything or conventions I missed. One thing I'm unclear on: I see in the codebase a lot of explicit __init__s with calls to super that do nothing else. Is there a reason for this I'm missing?
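For reference, this is the shape of the pattern in question; the module name and body here are made up for illustration. (If a subclass defines no `__init__` at all, `torch.nn.Module.__init__` runs automatically, so an override that only calls `super().__init__()` is redundant.)

```python
import torch

class ExampleMvModule(torch.nn.Module):
    def __init__(self):
        # The pattern in question: an explicit __init__ whose only
        # statement is the call to the base-class initializer.
        # Dropping the override entirely has the same effect.
        super().__init__()

    def forward(self, m, v):
        # torch.mv: (M, N) matrix times (N,) vector -> (M,) vector.
        return torch.mv(m, v)
```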

Also, I ran into issues with what I assume is an op that was deleted (frobenius norm). I had to do a lot of manual surgery to get things working again. Is there a better procedure for this? I can do that in a separate commit if so and rebase this off of it so this is cleaner.

Also: should I do the same for other dialects? I see a number of other dialects for matmul, but I only see TorchToLinalg in the PR I was referencing. I figured it would be good to get feedback on this first before doing it.

I tried to move the new test case to its own file, but I'm getting an import error when I add an entry to the import list in test_suite/__init__.py:

ImportError: cannot import name 'mv' from 'torch_mlir_e2e_test.test_suite' (/usr/local/google/home/danielellis/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/test_suite/__init__.py)

I really have no clue what it could be. The imports are exactly the same as in the other files in the directory and don't appear to be circular. The file is definitely present on disk. I also tried changing the name of the file; I get the same error as when I put in a bogus name. Is there some magic I'm missing?

@@ -415,6 +415,42 @@ class ConvertAtenMatmulOp : public OpConversionPattern<AtenMatmulOp> {
 };
 } // namespace

+namespace {
+class ConvertAtenMvOp : public OpConversionPattern<AtenMvOp> {
Collaborator

I think it would make more sense to add a decomposition of this op into AtenMatmulOp, since that op performs this same handling:

// Third Case: Matrix-Vec Multiplication.

The decomposition would happen in the DecomposeComplexOps.cpp file
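For reference, such a decomposition might look roughly like the sketch below. This is not from the PR; the accessor names (`self()`, `vec()`) and registration details are assumptions based on the other patterns in `DecomposeComplexOps.cpp`.

```cpp
namespace {
// Sketch: decompose aten.mv into aten.matmul, which already handles
// the matrix-vector case ("Third Case" in ConvertAtenMatmulOp).
class DecomposeAtenMvOp : public OpRewritePattern<AtenMvOp> {
public:
  using OpRewritePattern::OpRewritePattern;
  LogicalResult matchAndRewrite(AtenMvOp op,
                                PatternRewriter &rewriter) const override {
    Value lhs = op.self(); // (M, N) matrix
    Value rhs = op.vec();  // (N,) vector
    // matmul with a 1-D right operand performs a matrix-vector product.
    rewriter.replaceOpWithNewOp<AtenMatmulOp>(op, op.getType(), lhs, rhs);
    return success();
  }
};
} // namespace
```

The pattern would also need to be registered alongside the other decompositions in that file (roughly `patterns.add<DecomposeAtenMvOp>(context);` plus marking `AtenMvOp` illegal), after which any backend that already lowers `AtenMatmulOp` picks up `aten.mv` for free.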

ramiro050 (Collaborator)

> One thing I'm unclear on: I see in the codebase a lot of explicit `__init__`s with calls to super that do nothing else. Is there a reason for this I'm missing?

This is probably due to someone adding the `__init__` on a test at some point and everyone else copying it to match the style. If they are not needed, we can remove all of them in a separate PR.

> Also, I ran into issues with what I assume is an op that was deleted (frobenius norm). I had to do a lot of manual surgery to get things working again. Is there a better procedure for this? I can do that in a separate commit if so and rebase this off of it so this is cleaner.

Can you elaborate more on the error you were seeing? This op should be working.

> should I do the same for other dialects?

Adding support for other dialects is always optional. The expectation is that people simply add what they need. (However, if you add it as a decomposition, other dialects that already support AtenMatmulOp will get your op for free!)

> I tried to move the new test case to its own file, but I'm getting an import error when I add an entry to the import list in test_suite/__init__.py

The main file is in a different directory than the test suite, so torch_mlir_e2e_test is imported from the Python bindings generated when you build torch-mlir, rather than from the torch_mlir_e2e_test directory in the source code. You need to rebuild torch-mlir to have your file copied over to the Python bindings directory.
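Concretely, the registration entry would look something like the sketch below (`mv` being the hypothetical new test module; the exact contents of `register_all_tests` vary across versions):

```python
# python/torch_mlir_e2e_test/test_suite/__init__.py (source tree)

def register_all_tests():
    """Import all test modules so their @register_test_case decorators run."""
    from . import matmul
    from . import mv  # new entry; only importable after a rebuild copies
                      # mv.py into the generated python_packages tree,
                      # which is where this package is actually loaded from
```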

dellis23 (Collaborator, Author)

> Can you elaborate more on the error you were seeing? This op should be working.

When I ran one of the scripts (I believe it was update_torch_ods.sh), it removed AtenFrobeniusNormDimOp from the output file. Then when I tried building / continuing on, I kept getting compilation errors. I resolved those by removing the various parts related to that op.

@@ -399,7 +400,6 @@ def emit_with_mutating_variants(key, **kwargs):
 emit("aten::nll_loss_backward : (Tensor, Tensor, Tensor, Tensor?, int, int, Tensor) -> (Tensor)")
 emit("aten::bincount : (Tensor, Tensor?, int) -> (Tensor)")
 emit("aten::linalg_vector_norm : (Tensor, Scalar, int[]?, bool, int?) -> (Tensor)")
-emit("aten::frobenius_norm.dim : (Tensor, int[], bool) -> (Tensor)")
Collaborator

Did you remove this line before or after running ./build_tools/update_torch_ods.sh? This is the line that generates the ODS for the op. Also, when I run things locally on main, I don't see this op disappear.

dellis23 (Collaborator, Author)

I hadn't removed anything; I didn't have any good reason to pick on frobenius :P

Perhaps it was something temporary with us running against PyTorch head? If it's working fine for you at head now, I'll try clearing it all out and rerunning once I'm able to move over to the decomposition approach.

Collaborator

Ah, maybe it was a temporary thing with PyTorch. There is a commit that makes changes to frobenius_norm that would appear when running update_torch_ods.sh: pytorch/pytorch@1c0f0b3

The commit has been merged into PyTorch and reverted twice, so this might come back in the future.

dellis23 (Collaborator, Author) commented Oct 3, 2022

Closing this in favor of #1453

@dellis23 dellis23 closed this Oct 3, 2022
@powderluv powderluv deleted the add-mv-op branch July 19, 2023 19:56