How to use BERT, BART, RoBERTa, or GPT for translation #1599
You have preprocessed data with source and target sides to run machine translation architectures; however, RoBERTa requires different data preprocessing. You can check the examples/ folder for tutorials on how to preprocess data for RoBERTa.
I want to run translation only, and I am wondering how to use RoBERTa or BART for that. Once the preprocessing is done, would there be changes in the above command for RoBERTa? Also, what would the preprocessing steps be for BART?
I'm interested in this too. Are there tutorials for using RoBERTa (as embeddings?) or BART pre-trained models along with train, test, and eval files to train and evaluate a model for seq2seq translation?
I am running into the same issue trying to use pre-trained XLM-R for translation. The main problem is that RoBERTa and XLM-R are encoder-only architectures, so I think the solution is to use XLM-R or RoBERTa as an encoder for feature extraction, paired with a newly initialized decoder.
According to its paper, BART is built with translation in mind. For RoBERTa, there is only a slim chance of using it for translation, since it is not a seq2seq model.
What about using RoBERTa to generate word embeddings for training seq2seq models?
What was your GPU configuration for BART?
I tried that demo, but step 2 tries to process train.source, val.source, train.target, and val.target. No files with those names exist in the CNN-DailyMail files from step 1. However, the CNN-DailyMail subdirectory finished_files does contain train.bin and val.bin. It looks like the demo is missing a step?
Same problem here. The example of using BART is unclear; it would be much better if we could see some toy examples of using BART with a simple input/output format for seq2seq tasks.
To do the BART preprocessing, see #1391, specifically zhaoguangxiang's comment.
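For reference, a minimal sketch of that preprocessing, following the steps in the fairseq BART summarization example (the cnn_dm/ directory layout and worker count are assumptions): it BPE-encodes plain-text train.source/train.target and val.source/val.target files (one example per line) with the GPT-2 BPE, then binarizes them with the BART dictionary. This also suggests an answer to the missing-files question above: train.source and friends are plain-text files you create from the raw data, not files shipped with the CNN-DailyMail download.

```bash
# Download the GPT-2 BPE files and the BART dictionary.
wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json'
wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe'
wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt'

# BPE-encode each split and side; inputs are plain text, one example per line.
for SPLIT in train val; do
  for LANG in source target; do
    python -m examples.roberta.multiprocessing_bpe_encoder \
      --encoder-json encoder.json \
      --vocab-bpe vocab.bpe \
      --inputs "cnn_dm/$SPLIT.$LANG" \
      --outputs "cnn_dm/$SPLIT.bpe.$LANG" \
      --workers 60 \
      --keep-empty
  done
done

# Binarize with the pre-trained BART dictionary on both sides.
fairseq-preprocess \
  --source-lang source --target-lang target \
  --trainpref cnn_dm/train.bpe --validpref cnn_dm/val.bpe \
  --destdir cnn_dm-bin --workers 60 \
  --srcdict dict.txt --tgtdict dict.txt
```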
This issue has been automatically marked as stale. If this issue is still affecting you, please leave any comment (for example, "bump"), and we'll keep it open. We are sorry that we haven't been able to prioritize it yet. If you have any new additional information, please include it with your comment!
Closing this issue after a prolonged period of inactivity. If this issue is still present in the latest release, please create a new issue with up-to-date information. Thank you!
Couldn't agree more...
I am benchmarking the various architectures mentioned here via the --arch option,
and I am using a wordpiece tokenizer externally before the preprocessing step.
I was able to train the following transformer-based architectures with the command below, and to run inference as well:
transformer, transformer_iwslt_de_en, transformer_wmt_en_de, transformer_vaswani_wmt_en_de_big, transformer_vaswani_wmt_en_fr_big, transformer_wmt_en_de_big, transformer_wmt_en_de_big_t2t
The command I use:
CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train /home/translation_task/mr2en_token_data --arch transformer_wmt_en_de --share-decoder-input-output-embed --optimizer adam --adam-betas '(0.9,0.98)' --clip-norm 0.0 --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 10000 --dropout 0.3 --weight-decay 0.0001 --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --max-tokens 4096 --update-freq 2 --max-source-positions 512 --max-target-positions 512 --skip-invalid-size-inputs-valid-test
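For completeness, inference with a checkpoint trained by the command above would typically go through fairseq-generate; the checkpoint path and decoding settings below are illustrative, not taken from the original post:

```bash
CUDA_VISIBLE_DEVICES=0 fairseq-generate /home/translation_task/mr2en_token_data \
    --path checkpoints/checkpoint_best.pt \
    --batch-size 64 --beam 5 --remove-bpe
```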
Now I am trying to run the other architectures in the same way and am not able to. Can someone suggest how to do the same?
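One possible starting point for the BART case, adapted from the fine-tuning recipe in the fairseq BART examples (the checkpoint path, data directory, and hyperparameters are assumptions to adapt, not settings from this thread):

```bash
BART_PATH=/path/to/bart.large/model.pt   # hypothetical location of the downloaded checkpoint

CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train cnn_dm-bin \
    --restore-file $BART_PATH \
    --reset-optimizer --reset-dataloader --reset-meters \
    --arch bart_large \
    --task translation \
    --source-lang source --target-lang target \
    --truncate-source \
    --layernorm-embedding \
    --share-all-embeddings \
    --share-decoder-input-output-embed \
    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --optimizer adam --adam-betas '(0.9, 0.999)' --adam-eps 1e-08 \
    --clip-norm 0.1 \
    --lr-scheduler polynomial_decay --lr 3e-05 \
    --total-num-update 20000 --warmup-updates 500 \
    --max-tokens 2048 --update-freq 4 \
    --skip-invalid-size-inputs-valid-test \
    --find-unused-parameters
```

The key differences from the transformer command above are --restore-file to load the pre-trained weights, the --reset-* flags so the optimizer state starts fresh, and --arch bart_large so the architecture matches the checkpoint.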
Environment:
- How you installed fairseq (pip, source): pip