(//core): Align with prim::Enter in module fallback #991
Conversation
There are some changes that do not conform to C++ style guidelines:
diff --git a/workspace/core/lowering/register_trt_placeholder_ops.cpp b/tmp/changes.txt
index 5ba8171..17d7d3f 100644
--- a/workspace/core/lowering/register_trt_placeholder_ops.cpp
+++ b/tmp/changes.txt
@@ -10,7 +10,10 @@ c10::AliasAnalysisKind aliasAnalysisFromSchema() {
RegisterOperators trt_placeholder_ops_reg({
/// Op marks a Tensor to be conveted from an Torch Tensor
/// to a TRT constant Tensor
- Operator("trt::const(Tensor val) -> Tensor", [](Stack& stack) { /*noop*/ }, aliasAnalysisFromSchema()),
+ Operator(
+ "trt::const(Tensor val) -> Tensor",
+ [](Stack& stack) { /*noop*/ },
+ aliasAnalysisFromSchema()),
});
} // namespace jit
ERROR: Some files do not conform to style guidelines
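Editor's note: the reformatting the linter asks for amounts to breaking the Operator registration across several lines. Below is a minimal sketch of how the full file would read once formatted, assuming the include path, the torch::jit namespace wrapper, and the FROM_SCHEMA return value match the hunk context above; it is an illustration, not the actual file contents.

#include "torch/csrc/jit/runtime/custom_operator.h"

namespace torch {
namespace jit {

c10::AliasAnalysisKind aliasAnalysisFromSchema() {
  return c10::AliasAnalysisKind::FROM_SCHEMA;
}

RegisterOperators trt_placeholder_ops_reg({
    /// Op marks a Tensor to be converted from a Torch Tensor
    /// to a TRT constant Tensor
    Operator(
        "trt::const(Tensor val) -> Tensor",
        [](Stack& stack) { /*noop*/ },
        aliasAnalysisFromSchema()),
});

} // namespace jit
} // namespace torch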
Code conforms to Python style guidelines
@andi4191 Can you briefly explain what these changes are doing?
@peri044: This PR uses a compilation_context node for each prim::Enter and prim::Exit in the segmented blocks to align with the API update in torch; prim::Enter expects a Tensor as its argument. This issue was identified in DLFW staging. @narendasan: The changes should be fine for the master branch, right? Or should we keep them restricted to the release/ngc/* branch for now?
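Editor's note: to make the described change concrete, here is a minimal, illustrative sketch using the TorchScript C++ IR API of creating a prim::Enter/prim::Exit pair whose context argument is a Tensor-typed value. The helper name, the way the context value is obtained, and the node placement are assumptions for illustration; this is not the PR's actual implementation.

// Illustrative sketch only (hypothetical helper, not the PR's code).
#include <torch/csrc/jit/ir/ir.h>

// `ctx_tensor` is assumed to be a Tensor-typed Value already present in `g`,
// e.g. a constant inserted earlier by the lowering pass.
void wrapSegmentWithContext(torch::jit::Graph& g, torch::jit::Value* ctx_tensor) {
  // prim::Enter consumes the Tensor-typed context value and yields one output.
  torch::jit::Node* enter_node = g.create(c10::prim::Enter, {ctx_tensor}, /*num_outputs=*/1);
  g.prependNode(enter_node);

  // prim::Exit closes the same context; the real pass would place the pair
  // around each segmented block rather than around the whole graph.
  torch::jit::Node* exit_node = g.create(c10::prim::Exit, {ctx_tensor}, /*num_outputs=*/0);
  g.appendNode(exit_node);
}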
/nvidia-ci
Signed-off-by: Anurag Dixit <[email protected]>
0a738cb to b8c398a
@narendasan @peri044: Verified this fix works on ToT. Please review.
Code conforms to Python style guidelines
Code conforms to C++ style guidelines
LGTM
Signed-off-by: Anurag Dixit [email protected]
Description
Aligned with the prim::Enter API update in module fallback.
Fixes # (issue)
Type of change
Please delete options that are not relevant and/or add your own.
Checklist: