
20NG Training Set Reported Results Question #11

Open
jharrang opened this issue Jan 7, 2022 · 3 comments

Comments


jharrang commented Jan 7, 2022

For the 20NG results reported in your original paper (88.6 ± 0.1), was the model trained on the full public 20NG train set (size 11314), or were the reported results generated with the training code currently in this repo here and here, which appears to exclude the validation set from the data used to train the model? (The latter would use a training set of size 10183.)

Thanks!

@allenhaozhu (Owner)
Yes, we exclude the validation set. This code was released a few days ago because some people emailed me asking for details about the text classification setup. After a quick check of the results, I released the text classification code. If there is any issue, please let me know.

@vencentDebug

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument target in method wrapper_nll_loss_forward)

As a workaround, disable CUDA and force CPU training:

    parser.add_argument('--no-cuda', action='store_true', default=True,
                        help='Disables CUDA training.')
    args.device = 'cpu'

@vencentDebug

Running CPU-only triggers the problem above. When I switch to GPU, a different error occurs at train_feats = torch.spmm(adj, train_feats):

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper__mm)
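Both tracebacks are the same underlying issue: PyTorch refuses to mix tensors that live on different devices in one op, so every tensor touched by torch.spmm (and by the loss) must be moved to the same device before the computation. Below is a minimal sketch of that fix; the tiny sparse adjacency and feature tensors are made-up stand-ins for the repo's data, and the names (adj, train_feats, device) follow the snippets in this thread rather than the actual repo code.

```python
import torch

# Pick one device up front and use it everywhere.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Toy stand-ins for the repo's sparse adjacency and dense feature matrix.
indices = torch.tensor([[0, 1, 2], [1, 2, 0]])
values = torch.ones(3)
adj = torch.sparse_coo_tensor(indices, values, (3, 3))
train_feats = torch.rand(3, 4)
labels = torch.tensor([0, 1, 0])  # targets for the nll_loss error above

# Move *all* participating tensors to the chosen device before any op.
adj = adj.to(device)
train_feats = train_feats.to(device)
labels = labels.to(device)

# Now both operands of spmm live on the same device, so this no longer
# raises the "Expected all tensors to be on the same device" error.
train_feats = torch.spmm(adj, train_feats)
```

The same pattern fixes the CPU-side nll_loss error: whichever device the model output is on, `.to(device)` the target labels before computing the loss.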

3 participants