
[WIP][TORCH][MLIR] Add test module for fairseq xlmr model #566

Closed

Conversation

vivekkhandelwal1
Collaborator

@vivekkhandelwal1 vivekkhandelwal1 commented Feb 8, 2022

This commit adds a test module for fairseq's xlmr model.
The original code is available at:
https://github.com/pytorch/fairseq/blob/main/fairseq/models/roberta/model_xlmr.py

@vivekkhandelwal1 vivekkhandelwal1 changed the title This commit adds a test module for fairseq's xlmr model. [WIP][TORCH][MLIR] Add test module for fairseq xlmr model Feb 8, 2022
@vivekkhandelwal1
Collaborator Author

I am getting the following error:


Unexpected outcome summary:

****** Failed tests - 1 tests
    FAIL - "FairseqXlmrModule_basic"
        Compilation error: Traceback (most recent call last):
          File "/home/vivek/work/02_07/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/torchscript/framework.py", line 303, in run_tests
            compiled = config.compile(test.program_factory())
          File "/home/vivek/work/02_07/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/torchscript/configs/linalg_on_tensors_backend.py", line 37, in compile
            module = convert_torchscript_module_to_torch_backend_contract_mlir(
          File "/home/vivek/work/02_07/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/torchscript/configs/utils.py", line 60, in convert_torchscript_module_to_torch_backend_contract_mlir
            scripted = torch.jit.script(program)
          File "/home/vivek/work/02_07/mlir_venv/lib/python3.8/site-packages/torch/jit/_script.py", line 1265, in script
            return torch.jit._recursive.create_script_module(
          File "/home/vivek/work/02_07/mlir_venv/lib/python3.8/site-packages/torch/jit/_recursive.py", line 454, in create_script_module
            return create_script_module_impl(nn_module, concrete_type, stubs_fn)
          File "/home/vivek/work/02_07/mlir_venv/lib/python3.8/site-packages/torch/jit/_recursive.py", line 466, in create_script_module_impl
            method_stubs = stubs_fn(nn_module)
          File "/home/vivek/work/02_07/mlir_venv/lib/python3.8/site-packages/torch/jit/_recursive.py", line 735, in infer_methods_to_compile
            stubs.append(make_stub_from_method(nn_module, method))
          File "/home/vivek/work/02_07/mlir_venv/lib/python3.8/site-packages/torch/jit/_recursive.py", line 66, in make_stub_from_method
            return make_stub(func, method_name)
          File "/home/vivek/work/02_07/mlir_venv/lib/python3.8/site-packages/torch/jit/_recursive.py", line 51, in make_stub
            ast = get_jit_def(func, name, self_name="RecursiveScriptModule")
          File "/home/vivek/work/02_07/mlir_venv/lib/python3.8/site-packages/torch/jit/frontend.py", line 264, in get_jit_def
            return build_def(parsed_def.ctx, fn_def, type_line, def_name, self_name=self_name, pdt_arg_types=pdt_arg_types)
          File "/home/vivek/work/02_07/mlir_venv/lib/python3.8/site-packages/torch/jit/frontend.py", line 315, in build_def
            build_stmts(ctx, body))
          File "/home/vivek/work/02_07/mlir_venv/lib/python3.8/site-packages/torch/jit/frontend.py", line 137, in build_stmts
            stmts = [build_stmt(ctx, s) for s in stmts]
          File "/home/vivek/work/02_07/mlir_venv/lib/python3.8/site-packages/torch/jit/frontend.py", line 137, in <listcomp>
            stmts = [build_stmt(ctx, s) for s in stmts]
          File "/home/vivek/work/02_07/mlir_venv/lib/python3.8/site-packages/torch/jit/frontend.py", line 287, in __call__
            return method(ctx, node)
          File "/home/vivek/work/02_07/mlir_venv/lib/python3.8/site-packages/torch/jit/frontend.py", line 528, in build_Assign
            rhs = build_expr(ctx, stmt.value)
          File "/home/vivek/work/02_07/mlir_venv/lib/python3.8/site-packages/torch/jit/frontend.py", line 287, in __call__
            return method(ctx, node)
          File "/home/vivek/work/02_07/mlir_venv/lib/python3.8/site-packages/torch/jit/frontend.py", line 724, in build_Call
            raise NotSupportedError(kw_expr.range(), 'keyword-arg expansion is not supported')
        torch.jit.frontend.NotSupportedError: keyword-arg expansion is not supported:
          File "/home/vivek/work/02_07/torch-mlir/e2e_testing/torchscript/model_xlmr.py", line 42
                    bpe="sentencepiece",
                    load_checkpoint_heads=True,
                    **kwargs
                      ~~~~~~ <--- HERE
                )
                return RobertaHubInterface(x["args"], x["task"], x["models"][0])



Summary:
    Failed: 1
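The `NotSupportedError` above comes from TorchScript's Python frontend, which parses the function's source into an AST and rejects any call site that uses `**` expansion (in the `ast` module, that is a `keyword` node with `arg=None`). A stdlib-only sketch of the same check — not torch-mlir's or PyTorch's actual code, and the wrapper function below is a hypothetical stand-in:

```python
import ast

def find_kwarg_expansions(source: str):
    """Return line numbers of call sites that use **kwargs expansion."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            for kw in node.keywords:
                # A ** expansion is represented as a keyword with arg=None.
                if kw.arg is None:
                    hits.append(node.lineno)
    return sorted(set(hits))

# Shaped like the failing fairseq call; we only parse it, never run it.
src = """
def from_pretrained_wrapper(**kwargs):
    return hub_utils.from_pretrained(
        bpe="sentencepiece",
        load_checkpoint_heads=True,
        **kwargs,
    )
"""
print(find_kwarg_expansions(src))  # → [3]
```

This is why scripting fails even though the code runs fine in eager mode: the rejection happens at parse time, before any tensor work starts.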

@vivekkhandelwal1
Collaborator Author

vivekkhandelwal1 commented Feb 8, 2022

If I change the kwargs to:
kwargs = {"data" : None, "user_dir" : None, "bpe_codes" : None, "sentencepiece_model" : None, "bpe_merges" : None, "bpe_vocab" : None, "xlmr.base" : None, "bpe" : "sentencepiece", "load_checkpoint_heads" : True}

I get the following error message:

    FAIL - "FairseqXlmrModule_basic"
        Compilation error: Traceback (most recent call last):
          File "/home/vivek/work/02_07/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/torchscript/framework.py", line 302, in run_tests
            golden_trace = generate_golden_trace(test)
          File "/home/vivek/work/02_07/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/torchscript/framework.py", line 292, in generate_golden_trace
            test.program_invoker(tracer, TestUtils())
          File "/home/vivek/work/02_07/torch-mlir/e2e_testing/torchscript/model_xlmr.py", line 48, in FairseqXlmrModule_basic
            module.forward()
          File "/home/vivek/work/02_07/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/torchscript/framework.py", line 272, in __call__
            output = self.__wrapped__(*args, **kwargs)
          File "/home/vivek/work/02_07/torch-mlir/e2e_testing/torchscript/model_xlmr.py", line 30, in forward
            x = hub_utils.from_pretrained(
        TypeError: from_pretrained() got multiple values for keyword argument 'bpe'
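This second failure is plain Python call semantics, not TorchScript: `bpe` reaches `from_pretrained` twice, once as an explicit keyword and once inside the expanded dict. A minimal reproduction, using a hypothetical stand-in for fairseq's `hub_utils.from_pretrained` and a simplified version of the kwargs dict above:

```python
def from_pretrained(model_name_or_path, bpe="gpt2",
                    load_checkpoint_heads=False, **extra):
    # Hypothetical stand-in; the real signature lives in fairseq's hub_utils.
    return {"bpe": bpe, "load_checkpoint_heads": load_checkpoint_heads, **extra}

# Simplified version of the kwargs dict from the comment above.
kwargs = {"bpe": "sentencepiece", "load_checkpoint_heads": True}

msg = ""
try:
    # 'bpe' arrives twice: once explicitly, once inside **kwargs.
    from_pretrained("xlmr.base", bpe="sentencepiece", **kwargs)
except TypeError as e:
    msg = str(e)
    print(msg)  # ... got multiple values for keyword argument 'bpe'

# Fix: pass each argument exactly once, here only via the dict.
result = from_pretrained("xlmr.base", **kwargs)
```

Dropping the explicit `bpe=` and `load_checkpoint_heads=` from the call, as in the next comment, resolves this particular `TypeError` (though the `**kwargs` scripting problem then resurfaces inside fairseq itself).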

@vivekkhandelwal1
Collaborator Author

vivekkhandelwal1 commented Feb 8, 2022

After changing the kwargs as mentioned in #566 (comment) and removing the `bpe` and `load_checkpoint_heads` arguments from the `hub_utils.from_pretrained` call, I again get this error:

            raise NotSupportedError(kw_expr.range(), 'keyword-arg expansion is not supported')
        torch.jit.frontend.NotSupportedError: keyword-arg expansion is not supported:
          File "/home/vivek/work/02_07/torch-mlir/e2e_testing/torchscript/model_xlmr.py", line 40
                     "xlmr.xxl": "http://dl.fbaipublicfiles.com/fairseq/models/xlmr/xlmr.xxl.tar.gz",
                   },
                    **kwargs
                      ~~~~~~ <--- HERE
                )
                return RobertaHubInterface(x["args"], x["task"], x["models"][0])

@vivekkhandelwal1 vivekkhandelwal1 deleted the fairseq-xlmr branch February 10, 2022 06:26
qedawkins pushed a commit to nod-ai/torch-mlir that referenced this pull request Oct 3, 2022
* Support opaque attribute and alignment in krnl.global

Signed-off-by: Tung D. Le <[email protected]>

* Edit lit tests

Signed-off-by: Tung D. Le <[email protected]>

* Use StringAttr to lower to LLVM

Signed-off-by: Tung D. Le <[email protected]>

* Normalize KrnlGlobalOp

Signed-off-by: Tung D. Le <[email protected]>

* Elide opaque attribute in krnl.global

Signed-off-by: Tung D. Le <[email protected]>

* assert condition

Signed-off-by: Tung D. Le <[email protected]>

* Default offset

Signed-off-by: Tung D. Le <[email protected]>

Co-authored-by: Alexandre Eichenberger <[email protected]>
rsuderman pushed a commit that referenced this pull request May 17, 2024
…ering (#3351)

Addresses [Shark-Turbine
#196](nod-ai/SHARK-TestSuite#196)

Related tracker [Shark-Turbine
#566](nod-ai/SHARK-ModelDev#566)

Related onnx.Resize issues [Shark-Turbine
#616](nod-ai/SHARK-ModelDev#616)
BaneTrifa pushed a commit to BaneTrifa/torch-mlir that referenced this pull request May 24, 2024
…ering (llvm#3351)

Addresses [Shark-Turbine
llvm#196](nod-ai/SHARK-TestSuite#196)

Related tracker [Shark-Turbine
llvm#566](nod-ai/SHARK-ModelDev#566)

Related onnx.Resize issues [Shark-Turbine
llvm#616](nod-ai/SHARK-ModelDev#616)
vivekkhandelwal1 pushed a commit that referenced this pull request Jun 3, 2024
This addresses 7 of the model failures I'm seeing in the test suite. See
[Shark-Turbine issue
#566](nod-ai/SHARK-ModelDev#566).

Need the op `linalg.conv_2d_ngchw_gfchw_q` to be added upstream
before merging this. See [llvm-project PR #92136
](llvm/llvm-project#92136).

A small additional expansion to operand quantization is included in this
patch to address a model failure that occurs when unblocking the
quantized group convolutions in one of these onnx models.
sjarus pushed a commit to sjarus/torch-mlir that referenced this pull request Jun 6, 2024
…ering (llvm#3351)

Addresses [Shark-Turbine
llvm#196](nod-ai/SHARK-TestSuite#196)

Related tracker [Shark-Turbine
llvm#566](nod-ai/SHARK-ModelDev#566)

Related onnx.Resize issues [Shark-Turbine
llvm#616](nod-ai/SHARK-ModelDev#616)
sjarus pushed a commit to sjarus/torch-mlir that referenced this pull request Jun 6, 2024
This addresses 7 of the model failures I'm seeing in the test suite. See
[Shark-Turbine issue
llvm#566](nod-ai/SHARK-ModelDev#566).

Need the op `linalg.conv_2d_ngchw_gfchw_q` to be added upstream
before merging this. See [llvm-project PR #92136
](llvm/llvm-project#92136).

A small additional expansion to operand quantization is included in this
patch to address a model failure that occurs when unblocking the
quantized group convolutions in one of these onnx models.