[WIP][TORCH][MLIR] Add test module for fairseq xlmr model #566
I am getting the following error:
|
If I change the kwargs to I get the following error message:
|
After making the kwargs changes mentioned in #566 (comment) and removing the bpe and load_checkpoint_heads arguments from the hub_utils.from_pretrained call, I again get this message:
|
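For reference, here is a minimal sketch of the call under discussion, based on how fairseq's XLMRModel.from_pretrained forwards its kwargs to hub_utils.from_pretrained upstream; the bpe and load_checkpoint_heads arguments are the ones removed in the experiment above. This is a sketch of the upstream loading path, not code from this PR:

```python
# Sketch of fairseq's XLM-R loading path, following the upstream
# XLMRModel.from_pretrained implementation; not code from this PR.
from fairseq import hub_utils
from fairseq.models.roberta import XLMRModel
from fairseq.models.roberta.hub_interface import RobertaHubInterface

# Downloads the pretrained checkpoint named in XLMRModel.hub_models().
x = hub_utils.from_pretrained(
    model_name_or_path="xlmr.base",
    checkpoint_file="model.pt",
    data_name_or_path=".",
    archive_map=XLMRModel.hub_models(),
    bpe="sentencepiece",          # kwarg removed in the experiment above
    load_checkpoint_heads=True,   # kwarg removed in the experiment above
)
model = RobertaHubInterface(x["args"], x["task"], x["models"][0])
```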
This commit adds a test module for fairseq's xlmr model.
The original code is available at:
https://github.com/pytorch/fairseq/blob/main/fairseq/models/roberta/model_xlmr.py
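For context, a minimal sketch of how the XLM-R model is typically loaded and exercised, following the torch.hub entry point documented in fairseq's README; this is illustrative only, not the test module added by this PR:

```python
# Illustrative only: load XLM-R via fairseq's documented torch.hub
# entry point and run a forward pass of the kind a test module exercises.
import torch

# Downloads the pretrained xlmr.base checkpoint on first use.
xlmr = torch.hub.load("pytorch/fairseq", "xlmr.base")
xlmr.eval()

tokens = xlmr.encode("Hello world!")          # SentencePiece-encode a string
with torch.no_grad():
    features = xlmr.extract_features(tokens)  # last-layer hidden states
print(features.shape)  # expected: [1, num_tokens, 768] for xlmr.base
```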