More on Reproducing CIFAR10 supervised results #10
@vikasverma1077 or @alexmlamb - any thoughts?
Hi @drcdr, thanks for your interest. Unfortunately, I do not have time to go through the details of your experiments at the moment. I would recommend using the same packages as in the README and reproducing the results first. Several people have reproduced the results, so I am pretty sure it will work for you as well. Answers to your questions:
Thanks for taking the time to look into it. It's good that you got similar results for Manifold Mixup on the PreActResNet architectures. Also, if you fixed the data loader for a newer PyTorch version, can you open a pull request for that? I think other users would benefit from that change.
I'd have to check, but I wonder if the choice of layers to mix in could be set incorrectly for WRN? The paper says: "When using Manifold Mixup, we selected the layer to perform mixing uniformly at random from a set of eligible layers. In all our experiments, for the PreActResNets architectures, the eligible layers for mixing in Manifold Mixup were: the input layer, the output from the first resblock, and the output from the second resblock. For Wide-ResNet-20-10 architecture, the eligible layers for mixing in Manifold Mixup were: the input layer and the output from the first resblock." So maybe the code is mixing in too many layers for WRN? I haven't investigated closely.
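For reference, here is a minimal sketch of that layer-selection scheme, assuming the network has been split into a list of sequential blocks (the names and structure below are illustrative, not the repository's actual code):

```python
import random
import torch

def manifold_mixup_forward(blocks, x, y, alpha=2.0, eligible_layers=(0, 1, 2)):
    """Forward pass that mixes at one layer chosen uniformly at random.

    eligible_layers indexes where mixing may occur, e.g.
      PreActResNet: (0, 1, 2) -> input, output of resblock 1, output of resblock 2
      WRN-28-10:    (0, 1)    -> input, output of resblock 1 (per the paper)
    """
    k = random.choice(eligible_layers)                      # pick the mixing layer
    lam = torch.distributions.Beta(alpha, alpha).sample()   # mixing coefficient
    perm = torch.randperm(x.size(0))                        # pairing within the batch

    h = x
    for i, block in enumerate(blocks):
        if i == k:                                          # mix only at the chosen layer
            h = lam * h + (1 - lam) * h[perm]
        h = block(h)
    # the loss would then be: lam * criterion(h, y) + (1 - lam) * criterion(h, y[perm])
    return h, y, y[perm], lam
```

If the eligible set used for WRN were larger than (input, resblock 1), that could be the kind of mismatch described above.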
Thanks guys, I'm trying to figure out where to go next. Trying a model or two in torchvision 0.2.1 seems like a good idea, given what you both have said; I would just need some time. I could maybe try and diff these 3 models in the two torchvisions, but I suppose that's not 100% conclusive either. I'm trying to think through the relatively high variability between Best and End, and what that means. But since the primary goal here was reproducibility, I suppose I should focus on that first. I'll try and repost in a few days. Thanks!
Can you clarify which of the results in the table you posted are from your experiments and which are taken from the paper?
Yes. The first three columns (Header, Err μ, Err σ) are taken from the first two columns of Table 1(a) in the paper. The rest of the columns refer to my experiments.
What is the difference between "Manifold Mixup (α = 2)" and "Manifold Mixup (α = 2), but not mixup_hidden" for the WRN results?
- "Manifold Mixup (α = 2)": I ran the command line as given on README.md for "Manifold mixup WRN-28-10".
- "Manifold Mixup (α = 2), but not mixup_hidden": I accidentally used '--train mixup' instead of '--train mixup_hidden', but otherwise the same as "Manifold Mixup (α = 2)".
I've run the first two experiments on WRN28_10, using the same packages as in the README. Results for Best Error:
Also, I compared the printouts of the WRN model from both torchvision 0.2.1 and 0.3; they are identical.
Update
Here is a table of Test Error results, with updates from using the packages listed in the README (columns K-O).
Here's the plot of Test Error vs. Epoch:

Summary
I think this issue could be kept open to track (1) [WRN28 MM worse] and possibly (3) [PARN18-vanilla test-error divergence] and (4) [what results do you get for line 8, e.g.]. If there is anything you can think of that I can do for (1) or (3), please let me know.

Possible PR
@alexmlamb - re the pull request: would you want me to first test with the latest pytorch/torchvision beforehand (torchvision is now 0.5.0!)? For anyone who wants to run CIFAR10 with torchvision 0.3.0, the change is one line in load_data.py:
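Roughly (this is a sketch of the renamed attribute; the exact call in load_data.py may differ slightly):

```python
import torchvision

train_data = torchvision.datasets.CIFAR10(root='./data', train=True, download=True)

# torchvision >= 0.3 stores the CIFAR10 labels in `targets`; 0.2.1 used `train_labels`.
labels = train_data.targets if hasattr(train_data, 'targets') else train_data.train_labels

# i.e. in load_data.py, call get_sampler(train_data.targets, ...) instead of
# get_sampler(train_data.train_labels, ...)
```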
Other changes may be needed for other datasets, but I don't have the time/GPU cards to test all of these. Also, in torchvision, there is actually a warning for MNIST (but not for CIFAR10) - see:
[This is similar to #5, but with the current code base and more networks.]
I am trying to recreate the Manifold Mixup CIFAR10 results; it seems that Manifold Mixup is a very promising development! I'm using the command lines from the project's README.md. I'm using Windows 10, TitanXP, Python 3.7, PyTorch nightly (1.2, 7/6/2019), torchvision 0.3, and other packages the same or (mostly) slightly newer. My manifold_mixup version is 10/16/2019.
I only had to make one slight change for torchvision 0.3: get_sampler(train_data.targets, ...) instead of get_sampler(train_data.train_labels, ...).
Below, I show the test results from your paper, along with the results that I got. End is the final test error; Best is the best test error during the run. The column "z" is a z-score, based on the mean μ and stdev σ from the arXiv paper and my results. A negative z-score indicates that my results had a lower test error; a positive z-score indicates a higher test error. CLFR = "Command Line From README.md".
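(For reference, the z column is computed roughly as in this small sketch:)

```python
def z_score(my_err, paper_mu, paper_sigma):
    """How far my test error is from the paper's reported mean,
    in units of the paper's reported standard deviation."""
    return (my_err - paper_mu) / paper_sigma

# z < 0 -> my test error is below the paper's mean; z > 0 -> above it
```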
The results are mixed, and I'm not sure why; I thought you might have some thoughts. I'm seeing:
I accidentally tried Manifold Mixup without mixup_hidden for WRN28-10 (i.e., plain mixup with alpha=2.0), and actually got the mean result reported in the paper.
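To make that flag difference concrete, here is a rough sketch (illustrative only, not the actual train-script code): both modes use the same convex combination, but '--train mixup' applies it to the input images, while '--train mixup_hidden' applies it to a hidden activation at a randomly chosen eligible layer.

```python
import torch

def mix(a, b, lam):
    """Convex combination shared by both training modes."""
    return lam * a + (1 - lam) * b

x = torch.randn(4, 3, 32, 32)           # a toy CIFAR-sized batch
perm = torch.randperm(x.size(0))
lam = torch.distributions.Beta(2.0, 2.0).sample()

x_mixed = mix(x, x[perm], lam)           # '--train mixup': mix the raw inputs
# '--train mixup_hidden' would instead apply mix() to a hidden activation h
# at a randomly chosen eligible layer (see the sketch earlier in the thread).
```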
Any ideas? Some questions:
Also, here is a plot of the test error, for each of the scenarios above. (The pink wrn28_10_mixup_alpha=0 is shortened / offset to the left, because it's from a restart.) Notably: