This repository has been archived by the owner on Jan 20, 2023. It is now read-only.

None of the attackers or interpreters work for the Transformer QA model. #679

Closed
codeviking opened this issue Dec 23, 2020 · 0 comments · Fixed by allenai/allennlp-models#249

@codeviking
Contributor

Task: Reading Comprehension
Model: Transformer QA
Feature: All model attacks and interpretations.

Issue

Trying to run an attack or interpretation for the Transformer QA model results in a blank white page (which is what happens when an error is thrown in the demo).

I believe this is because Transformer QA doesn't support these types of requests.
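
For reference, the failure can probably be reproduced against the API directly. This is only a sketch: the `/attack/<name>` route and the payload shape are assumptions based on how the demo's other reading comprehension models are invoked, not verified against the live service.

```python
# Hypothetical reproduction sketch. The attack route and payload shape
# are assumptions, not taken from the demo's documentation.
import requests

resp = requests.post(
    "https://demo.allennlp.org/api/transformer-qa/attack/hotflip",
    json={
        "inputs": {
            "passage": "AllenNLP is an open-source NLP research library.",
            "question": "What is AllenNLP?",
        }
    },
)

# Expected: a non-2xx status, since Transformer QA doesn't register any
# attackers; the front-end surfaces this error as a blank white page.
print(resp.status_code)
print(resp.text[:200])
```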

@codeviking added this to the TugBoat Support milestone Dec 23, 2020
codeviking added a commit that referenced this issue Dec 23, 2020
Right now the `/info` route for each model returns `null` for the
`attackers` and `interpreters` if they're not specified. For
instance:

```
❯ curl -s https://demo.allennlp.org/api/bidaf | jq 'del(.model_card_data)'
{
  "allennlp": "1.3.0",
  "archive_file": "https://storage.googleapis.com/allennlp-public-models/bidaf-model-2020.03.19.tar.gz",
  "attackers": null,
  "id": "bidaf",
  "interpreters": null,
  "overrides": null,
  "predictor_name": "reading_comprehension",
  "pretrained_model_id": "rc-bidaf",
  "use_old_load_method": false
}
```

This is incorrect: when they're not specified, the default
interpreters and attackers are loaded instead. This PR fixes that
by giving these fields explicit default values.

```
❯ curl -s http://localhost:8080/api/bidaf/ | jq 'del(.model_card_data)'
{
  "allennlp": "1.3.0",
  "archive_file": "https://storage.googleapis.com/allennlp-public-models/bidaf-model-2020.03.19.tar.gz",
  "attackers": [
    "hotflip",
    "input_reduction"
  ],
  "id": "bidaf",
  "interpreters": [
    "simple_gradient",
    "smooth_gradient",
    "integrated_gradient"
  ],
  "overrides": null,
  "predictor_name": "reading_comprehension",
  "pretrained_model_id": "rc-bidaf",
  "use_old_load_method": false
}
```
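
A minimal sketch of how the fix could work (the `ModelInfo` shape and the `with_defaults` helper are illustrative names, not the actual allennlp-demo code):

```python
# Sketch only: treat a missing field (None) as "use the defaults",
# while preserving any value the model's config set explicitly.
from dataclasses import dataclass
from typing import List, Optional

DEFAULT_ATTACKERS = ["hotflip", "input_reduction"]
DEFAULT_INTERPRETERS = ["simple_gradient", "smooth_gradient", "integrated_gradient"]

@dataclass
class ModelInfo:
    id: str
    attackers: Optional[List[str]] = None
    interpreters: Optional[List[str]] = None

def with_defaults(info: ModelInfo) -> ModelInfo:
    # None means the config didn't specify the field, so the server
    # falls back to the defaults; an explicit [] is kept as-is, which
    # is how a model opts out of these features entirely.
    if info.attackers is None:
        info.attackers = list(DEFAULT_ATTACKERS)
    if info.interpreters is None:
        info.interpreters = list(DEFAULT_INTERPRETERS)
    return info
```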

I also tested a model that overrides the field by assigning both
fields to an empty list:

```
❯ cat allennlp_demo/transformer_qa/model.json
{
    "id": "transformer-qa",
    "pretrained_model_id": "rc-transformer-qa",
    "attackers": [],
    "interpreters": []
}
```
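
Note that in a scheme like the sketch above, an explicit `[]` is distinct from an unspecified (`null`) field: only the unset case is replaced with the defaults, so transformer-qa's empty lists survive to the `/info` response and signal that these features are disabled for the model.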

This change will let us use these fields in the front-end to toggle
attackers and interpreters on and off on a per-model basis, so we can
resolve issues like [this one](#679) dynamically, rather than
hard-coding things to be on or off for each demo.
codeviking added a commit that referenced this issue Dec 23, 2020
…rs. (#680)

codeviking added a commit that referenced this issue Dec 23, 2020
This uses the list of `interpreters` in the `ModelInfo` returned
for the current model to toggle the interpreters that are displayed.

After this change, when Transformer QA is selected, the option to
interpret the results no longer appears (which avoids part of this
bug: #679).

On the other hand, when models like BiDAF or ELMo-BiDAF are selected,
the option to invoke each of the interpreters remains (since they're
supported for those models).
@AkshitaB self-assigned this Jan 8, 2021