
Fix LTC Decoupling #815

Conversation

antoniojkim
Collaborator

We previously had to merge in some temporary "hacks" to decouple LTC from the TS backend. Now that the proper fix has been merged into PyTorch master, we need to fix up this branch to use those changes.

Also added a PyTorch submodule for codegen purposes. Previously we assumed that a pytorch clone was located in a specific place relative to torch_mlir. However, that assumption does not hold in general; adding PyTorch as a submodule fixes this.
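A minimal sketch of the submodule mechanics described above. The path `externals/pytorch` is a hypothetical example, not necessarily the path used in this PR; the sketch just shows the `.gitmodules` entry that `git submodule add` records, which is what gives codegen a fixed relative path to a pytorch checkout instead of a guessed sibling clone:

```shell
# Write the .gitmodules entry that `git submodule add` would record.
# NOTE: the submodule path "externals/pytorch" is an illustrative assumption.
cat > .gitmodules <<'EOF'
[submodule "externals/pytorch"]
	path = externals/pytorch
	url = https://github.com/pytorch/pytorch.git
EOF

# Build or codegen tooling can then resolve the checkout location from the
# recorded entry rather than assuming where a pytorch clone lives:
git config -f .gitmodules submodule.externals/pytorch.url
```

In a real checkout, `git submodule update --init` would then clone the repository and pin it to the recorded revision.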

CC: @ke1337 @wconstab

antoniojkim self-assigned this Apr 29, 2022
@henrytwo
Member

Do we need to wait for pytorch/pytorch#76535 to land before merging this in?

@antoniojkim
Collaborator Author

Do we need to wait for pytorch/pytorch#76535 to land before merging this in?

Not necessarily. I've pinned my development branch as the PyTorch submodule, so it will work even before that PR lands. I can always follow up and change the submodule to point at the latest master once it does land.


virtual TorchMlirOpVector
Lower(TorchMlirFunction function, TorchMlirLoweringContext* loctx) const;

private:
// The hash of the dag WITH size info. Used for shape caching
Member


nit: missing period at end of line

@antoniojkim
Collaborator Author

@silvasean Gentle reminder to please review this when you can

namespace torch {
namespace lazy {

DimensionNode::DimensionNode(OpKind op, OpList operands, hash_t hash_seed)
Contributor


Is there any doc on the strategy here for dynamic shapes?

Collaborator Author


There is, but it's very much a work in progress, I believe. This class is just a placeholder introduced for now. We'll need to make further changes to it once they finalize the design and implementation in PyTorch master.

antoniojkim merged commit 6ab5f01 into llvm:torch_mlir_ltc_backend May 3, 2022
antoniojkim added a commit that referenced this pull request May 26, 2022
* Initial changes

* Fix up native functions

* Further fix decoupling

* Remove unnecessary ops

* Formatting and copyright banners

* Add pytorch submodule
antoniojkim added a commit that referenced this pull request May 26, 2022
antoniojkim added a commit that referenced this pull request Jun 30, 2022
antoniojkim added a commit that referenced this pull request Jun 30, 2022
antoniojkim added a commit that referenced this pull request Jul 5, 2022
antoniojkim added a commit that referenced this pull request Jul 7, 2022
henrytwo pushed a commit that referenced this pull request Jul 8, 2022
henrytwo pushed a commit that referenced this pull request Jul 8, 2022
henrytwo pushed a commit that referenced this pull request Jul 12, 2022
antoniojkim added a commit that referenced this pull request Jul 15, 2022
antoniojkim added a commit that referenced this pull request Jul 19, 2022
antoniojkim added a commit that referenced this pull request Jul 22, 2022
henrytwo pushed a commit that referenced this pull request Jul 29, 2022
henrytwo pushed a commit that referenced this pull request Jul 29, 2022
henrytwo pushed a commit that referenced this pull request Jul 30, 2022
qedawkins pushed a commit to nod-ai/torch-mlir that referenced this pull request Oct 3, 2022
qedawkins pushed a commit to nod-ai/torch-mlir that referenced this pull request Oct 3, 2022
* readme to be kept in sync; it looks like on Windows the softlink is transformed into something else (llvm#810)

Signed-off-by: Alexandre Eichenberger <[email protected]>
Signed-off-by: Yasushi Negishi <[email protected]>

* Windows instrumentation support (llvm#801)

* Windows instrumentation support

Signed-off-by: Nathaniel McVicar <[email protected]>

* Normalize memory reporting to kb

Signed-off-by: Nathaniel McVicar <[email protected]>

Co-authored-by: Kevin O'Brien <[email protected]>
Co-authored-by: Alexandre Eichenberger <[email protected]>
Co-authored-by: chentong319 <[email protected]>
Signed-off-by: Yasushi Negishi <[email protected]>

* Test expected errors for shape inference (llvm#805)

* LIT tests for error in shape inference

Signed-off-by: Tung D. Le <[email protected]>

* Add more ops

Signed-off-by: Tung D. Le <[email protected]>

* Tests RNNOps

Signed-off-by: Tung D. Le <[email protected]>

* Add tests for more ops

Signed-off-by: Tung D. Le <[email protected]>

* Undo a change

Signed-off-by: Tung D. Le <[email protected]>

* Capitalize column

Signed-off-by: Tung D. Le <[email protected]>

Co-authored-by: chentong319 <[email protected]>
Signed-off-by: Yasushi Negishi <[email protected]>

* Set alignment for LLVM GlobalOp (llvm#812)

Signed-off-by: Tung D. Le <[email protected]>
Signed-off-by: Yasushi Negishi <[email protected]>

* z/OS instrumentation support (llvm#813)

Signed-off-by: Steven Royer <[email protected]>
Signed-off-by: Yasushi Negishi <[email protected]>

* ONNX backend test  for constant inputs (llvm#815)

* Test the first input as constant

Signed-off-by: Tung D. Le <[email protected]>

* Set constant inputs by IMPORTER_FORCE_CONSTANT

Signed-off-by: Tung D. Le <[email protected]>

* Add a new check-backend-constant

Signed-off-by: Tung D. Le <[email protected]>

* Add constant tests to Jenkins; Add doc for constant tests

Signed-off-by: Tung D. Le <[email protected]>

* Clean up

Signed-off-by: Tung D. Le <[email protected]>

* Typos

Signed-off-by: Tung D. Le <[email protected]>

* Only compile models in the backend run for constant test

Signed-off-by: Tung D. Le <[email protected]>

* Fix a wrong key

Signed-off-by: Tung D. Le <[email protected]>

* Rename test_to_enable_static_dynamic to test_to_enable_dict

Signed-off-by: Tung D. Le <[email protected]>

* Update on test failed

Signed-off-by: Tung D. Le <[email protected]>

Co-authored-by: Alexandre Eichenberger <[email protected]>
Signed-off-by: Yasushi Negishi <[email protected]>

* Prepare to upgrade squeeze to OpSet 13 (llvm#804)

* pass compile

Signed-off-by: Tong Chen <[email protected]>

* control and test

Signed-off-by: Tong Chen <[email protected]>

* fix

Signed-off-by: Tong Chen <[email protected]>

* format

Signed-off-by: Tong Chen <[email protected]>

* format

Signed-off-by: Tong Chen <[email protected]>

* docs

Signed-off-by: Tong Chen <[email protected]>

* remove debug print

Signed-off-by: Tong Chen <[email protected]>

Co-authored-by: Kevin O'Brien <[email protected]>
Co-authored-by: Alexandre Eichenberger <[email protected]>
Signed-off-by: Yasushi Negishi <[email protected]>

* Add the data_layout attribute to ModuleOp to denote endianness (llvm#819)

* Annotate ModuleOp with endian information

Signed-off-by: Tung D. Le <[email protected]>

* Enable failed end-to-end tests caused by constant inputs

Signed-off-by: Tung D. Le <[email protected]>

* Add lit tests for ModuleOp

Signed-off-by: Tung D. Le <[email protected]>
Signed-off-by: Yasushi Negishi <[email protected]>

* Add optimization rules for unnecessary reshape ops.

Signed-off-by: Yasushi Negishi <[email protected]>

* Fix compile errors

Signed-off-by: Yasushi Negishi <[email protected]>

* Fix issues of RemoveIdentityReshapePattern

Signed-off-by: Yasushi Negishi <[email protected]>

* Add test cases for CombinedReshapePattern and RemoveIdentityReshapePattern.

Signed-off-by: Yasushi Negishi <[email protected]>

* Update utils/gen_onnx_mlir.py to generate src/Dialect/ONNX/ONNXOps.td.inc correctly.

Signed-off-by: Yasushi Negishi <[email protected]>

* Fix link errors.

Signed-off-by: Yasushi Negishi <[email protected]>

* Fix build errors on Jenkins

Signed-off-by: Yasushi Negishi <[email protected]>

Co-authored-by: Alexandre Eichenberger <[email protected]>
Co-authored-by: NathanielMcVicar <[email protected]>
Co-authored-by: Kevin O'Brien <[email protected]>
Co-authored-by: chentong319 <[email protected]>
Co-authored-by: Tung D. Le <[email protected]>
Co-authored-by: Steven Royer <[email protected]>