Qualcomm AI Engine Direct - Enable custom operator #8726
base: main
Conversation
Summary:
- Support registering op packages in the QNN backend
- Add an example script to run a torch custom op with a QNN op package
- Allow an op package to override a torch built-in operator
- Add an op package example
- Modify the flags of dlopen for the QNN library
- Generate the custom op based on the meta and _schema.arguments of torch.fx.Node
- Add a README for the custom op
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/8726
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures, 1 Unrelated Failure as of commit 064d12f with merge base 5a594a7.
NEW FAILURES - The following jobs have failed:
BROKEN TRUNK - The following job failed but was present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This PR needs a
Hi @cccclai, this PR is to support custom kernels in the QNN backend.
```python
@impl(my_op_lib, "mul3", dispatch_key="CompositeExplicitAutograd")
def mul3_impl(a: torch.Tensor) -> torch.Tensor:
```
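For context, the quoted fragment can be expanded into a small self-contained registration sketch. The library name `my_op_lib` and the "multiply by 3" semantics are taken from the snippet; the definition and eager implementation below are an illustrative reconstruction, not the exact code from this PR:

```python
import torch
from torch.library import Library, impl

# Define an illustrative operator library with a single op "mul3".
my_op_lib = Library("my_op_lib", "DEF")
my_op_lib.define("mul3(Tensor a) -> Tensor")

@impl(my_op_lib, "mul3", dispatch_key="CompositeExplicitAutograd")
def mul3_impl(a: torch.Tensor) -> torch.Tensor:
    # Reference (eager) implementation; at runtime a QNN op package
    # would supply the accelerated kernel for this op.
    return a * 3.0

x = torch.ones(2)
y = torch.ops.my_op_lib.mul3(x)
print(y)
```

Once registered this way, the op appears under `torch.ops.my_op_lib.mul3` (with overload `my_op_lib.mul3.default`), which is the fully qualified name the backend can later match against.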
I may not know enough details, but can you tell me how the QNN backend knows this custom op can be consumed? Previously I was thinking QNN custom ops could be registered in a specific namespace, but this looks a bit different to me.
Certainly. QNN uses the op package mechanism to create custom ops. Once the op package is prepared, it is registered via qnn_backend_register_op_package. In the op builder, you provide the corresponding op_package_name, qnn_op_type_name, and schema to create a QNN node.
In ExecuTorch, through compile_spec, you pass QnnExecuTorchOpPackageOptions, which includes the op package info and custom_op_name. Using the custom_op_name, such as my_op_lib.mul3.default, a CustomOp builder is created to consume the custom op.
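As a rough illustration of that flow, the sketch below models how the pieces described above fit together. The field names here are a simplified stand-in for the real QnnExecuTorchOpPackageOptions schema (which this comment only describes, not spells out), so treat every name other than custom_op_name, op_package_name, and qnn_op_type_name as hypothetical:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OpPackageInfo:
    # Illustrative fields: the package being registered and the
    # QNN op type name it provides, as mentioned in the comment.
    op_package_name: str
    qnn_op_type_name: str

@dataclass
class OpPackageOptions:
    # Simplified stand-in for QnnExecuTorchOpPackageOptions:
    # maps a torch custom op name to its QNN op package info.
    custom_op_name: str
    op_package_infos: List[OpPackageInfo] = field(default_factory=list)

# The backend would look up the op builder by this fully qualified name.
opts = OpPackageOptions(
    custom_op_name="my_op_lib.mul3.default",
    op_package_infos=[OpPackageInfo("MyOpPackage", "Mul3")],
)
print(opts.custom_op_name)
```

The key point is the lookup key: the fully qualified torch op name (namespace.op.overload) is what ties the graph node to the op package that can execute it.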
Summary:
Reproduce commands: