Replace MAML with an actually correct implementation #328

Closed
Chillee opened this issue Mar 23, 2021 · 1 comment
Chillee commented Mar 23, 2021

dragen1860/MAML-Pytorch#59

@zou3519 zou3519 self-assigned this Apr 7, 2021
zou3519 added a commit to zou3519/benchmark that referenced this issue Apr 7, 2021
This is related to pytorch#328. This PR adds an actually correct
implementation of MAML to the repo. The previous implementation doesn't
compute higher-order gradients where it is supposed to.

I'm not familiar with how torchbench works, so please let me know if
there are additional files that need to be modified.

Test Plan:

Ran the following:
```
python test.py -k test_maml_omniglot_example_cpu
python test.py -k test_maml_omniglot_eval_cpu
python test.py -k test_maml_omniglot_train_cpu
```

Future work:
- Delete the maml example that is currently in this repo (or rename it
to make it clear that it's doing something different from the paper that
it is trying to reproduce).
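
For context: real (second-order) MAML requires the outer loss to backpropagate through the inner-loop adaptation step, which is the part the issue says is missing. Below is a minimal, hypothetical PyTorch sketch of that pattern; it is not the repo's actual maml or maml_omniglot code, and the model, shapes, and hyperparameters are invented for illustration.
```
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy setup; not the torchbench model.
model = nn.Linear(10, 5)
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 0.4

x_support, y_support = torch.randn(8, 10), torch.randint(0, 5, (8,))
x_query, y_query = torch.randn(8, 10), torch.randint(0, 5, (8,))

# Inner loop: adapt to the support set.
support_loss = F.cross_entropy(model(x_support), y_support)

# create_graph=True keeps the adapted weights a differentiable function
# of the original weights, so the outer loss can produce second-order
# gradients. Dropping it (or detaching the update) reduces the meta-step
# to a first-order approximation.
grads = torch.autograd.grad(support_loss, model.parameters(), create_graph=True)
adapted = [w - inner_lr * g for w, g in zip(model.parameters(), grads)]

# Outer loop: evaluate the adapted weights on the query set, using the
# functional form so the adapted weights stay in the graph.
logits = F.linear(x_query, adapted[0], adapted[1])
query_loss = F.cross_entropy(logits, y_query)

meta_opt.zero_grad()
query_loss.backward()  # backprops through the inner update (second-order)
meta_opt.step()
```
In the broken version described above, this second-order path is effectively missing, so the meta-gradient only reflects a first-order approximation of MAML.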
zou3519 added a commit to zou3519/benchmark that referenced this issue May 24, 2021
zou3519 added a commit to zou3519/benchmark that referenced this issue May 26, 2021
zou3519 added a commit to zou3519/benchmark that referenced this issue Sep 21, 2022
Fixes pytorch#328

Here's some context:
- We discovered that the implementation doesn't actually use MAML
(dragen1860/MAML-Pytorch#59)
- We filed pytorch#328
- We added maml_omniglot pytorch#349
as the correct version of the maml model.
- We didn't delete the maml model (because I was worried that it was
doing a "different" type of MAML that I hadn't seen before but that is
still valid).

The last step to resolve this issue is to delete the incorrect MAML
example, unless we have reasons to keep it around.
xuzhao9 commented Oct 2, 2022

Closed; we are keeping this "incorrect" model implementation because it is useful for dynamo correctness testing.

@xuzhao9 xuzhao9 closed this as completed Oct 2, 2022