Support pytorch extensions #895
Conversation
@silvasean Since you originally wrote most of these pieces, I'd appreciate an opinion.
drive-by thank you for the PR :D
This looks awesome. Before committing this, I would like to somehow make it load-bearing for the project so it gets tested/etc. To do that, I think we should add a minimal custom op end-to-end, which I think would consist of:
I think it could be as simple as a custom unary op -- something for folks to follow the bread crumbs when adding their own. I think the QPyTorch folks are going to want to follow this too.
I wrote and tested all of that (in fact, it's literally written for a unary no-op---seems we had the same idea), but two things stopped me from including it in the PR:
If you folks don't mind, I don't mind. Edit: this was my working branch that includes all of the non-torch-extension pieces of the code. I extracted this PR from that branch.
What form does the custom op take? Is it just a Python module? Or is there C++ code involved?
Yeah, I'm sympathetic to this. However, PyTorch's op list is already pretty "unclean" and has all sorts of random stuff with varying levels of "core-ness" vs "custom-ness". I don't think that
Sounds good. I'll start adding the rest of the code into the PR.
My example was a python wrapper around a C++ extension, similar to the tutorial from the PyTorch docs. It's small, but it does include a cmake file, requires compilation, etc. The short version is:

Python side:

```python
import os

import torch

# Register custom.nop as a side-effect of importing this module.
current_dir = os.path.dirname(os.path.abspath(__file__))
lib = os.path.join(*[current_dir, os.pardir, os.pardir, 'custom_nop', 'libnop.so'])
torch.ops.load_library(lib)
```

C++ side:

```cpp
#include <torch/script.h> // One-stop header.

#include <iostream>

torch::Tensor nop(torch::Tensor t) {
  std::cout << "no-op executed\n";
  return t;
}

TORCH_LIBRARY(custom, m) {
  m.def("nop(Tensor t) -> Tensor");
  m.impl("nop", &nop);
}
```
Ah, ok. Then I think this can be a standalone build inside build_tools. Or for simplicity, if there is a way to add this as a subdirectory of the existing build, perhaps beside the jit ir importer, that would be convenient and avoid a manual step to build build_tools/custom_op_example.
All set for merge, I think. Happy to take any last comments or requests.
Awesome, thanks Bob!!
Can you squash the history before merging to main?
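For anyone following along, one way to squash a feature branch down to a single commit is a soft reset to the merge base followed by a fresh commit. The demo below does this in a throwaway repo (branch names are placeholders, not this PR's actual branches; on a real PR you would finish with a force-push):

```shell
# Demo: squash a feature branch's commits into one, in a throwaway repo.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > f && git add f && git commit -qm "base"
git branch -m main
git checkout -qb my-feature
echo one > f && git commit -qam "wip 1"
echo two > f && git commit -qam "wip 2"
# Squash: move the branch tip back to the merge base (keeping the final
# tree staged), then record everything as one commit.
git reset --soft "$(git merge-base my-feature main)"
git commit -qm "Support pytorch extensions"
git rev-list --count main..my-feature   # prints 1
```

After squashing a real branch, `git push --force-with-lease` updates the PR without clobbering anyone else's pushes.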
PyTorch allows new operators to be registered dynamically in modules. Torch-mlir already makes it fairly straightforward to add support for new operators, and this commit just extends that support to allow new PyTorch ops to come from an external module. This does *not* allow ops to be dynamically loaded into torch-mlir. Torch-mlir must still be compiled with support built-in. Add a `_torch_mlir_custom_op_example` subpackage to `torch_mlir` which registers a demonstration op. It will not be imported by default when importing torch_mlir. It's strictly for testing and documentation. Add an end-to-end test for the `torch_mlir_custom_op_example::identity` op. With all these changes, we should now be actively testing PyTorch extension support with all future patches.
Done. I don't have write access to the repo, so I'll need someone to hit the button. Thanks!
Just invited you. Can't believe you didn't have it!
Signed-off-by: Tong Chen <[email protected]>
PyTorch allows new operators to be registered dynamically in modules. Torch-mlir already makes it fairly straightforward to add support for new operators, and this commit just extends that support to allow new PyTorch ops to come from an external module.
Previously, there was no hook to allow extensions to be loaded before querying the op registry, so extension ops would never be generated. Moreover, a bug caused ops imported from an extension to be impossible to support via shape inference.
This does not allow ops to be dynamically loaded into torch-mlir. Torch-mlir must still be compiled with support built-in.
This should make tasks like this one in #462 (and to a lesser extent #504) somewhat easier.
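The "hook to allow extensions to be loaded before querying the op registry" might look roughly like the sketch below. The helper name and signature here are hypothetical, for illustration only, not torch-mlir's actual API:

```python
import importlib

def load_extensions(module_names):
    """Hypothetical hook: import extension modules purely for their
    op-registration side effects, before the op registry is queried."""
    for name in module_names:
        importlib.import_module(name)

# In a flow like this PR's, something along the lines of
#   load_extensions(["torch_mlir._torch_mlir_custom_op_example"])
# would run the extension's TORCH_LIBRARY registrations before the
# generator walks the registered op schemas.
```

The key property is ordering: the imports must happen for their side effects before any code enumerates the registry, or extension ops are silently missing from the generated support.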