
What are the data formats of dataset and vocab folder? #81

Open
shivanraptor opened this issue Aug 12, 2024 · 2 comments

@shivanraptor

The pre-training README mentions that the dataset, vocab, and roberta_zh folders have to be prepared before training.

Is there any example of the files in the dataset and vocab folder?

Also, what do you mean by "Place the checkpoint of Chinese RoBERTa"? I would like to train Chinese BART.

Last, if I wish to replace Jieba tokenizer with my custom tokenizer, how can I do so? Thanks.

@choosewhatulike
Member

Is there any example of the files in the dataset and vocab folder?
Also, what do you mean by "Place the checkpoint of Chinese RoBERTa"? I would like to train Chinese BART.

  • dataset/: Place the .bin and .idx files that are preprocessed from raw text, the same data format used by Megatron.
  • vocab/: Place the vocab files and the model config file. You can download Chinese BART from Hugging Face and copy these files into the folder (see the sketch after this list).
  • roberta_zh/: Place the checkpoint of Chinese RoBERTa, as CPT initializes its encoder from that checkpoint. Only CPT uses this; if you want to train Chinese BART, you can skip it.
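
For the vocab/ folder, here is a minimal sketch of pulling the vocab and config files from the Hugging Face Hub; the checkpoint name fnlp/bart-base-chinese and the target path vocab/ are assumptions, not something fixed by the pre-training scripts:

```python
# Minimal sketch: fetch the Chinese BART vocab and config files from the Hugging Face Hub
# and save them into the vocab/ folder. The checkpoint name "fnlp/bart-base-chinese" and
# the output path are assumptions.
from transformers import AutoConfig, AutoTokenizer

checkpoint = "fnlp/bart-base-chinese"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)  # downloads vocab.txt and tokenizer config
config = AutoConfig.from_pretrained(checkpoint)        # downloads config.json

tokenizer.save_pretrained("vocab/")
config.save_pretrained("vocab/")
```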

Last, if I wish to replace Jieba tokenizer with my custom tokenizer, how can I do so? Thanks.

Jieba is used only for constructing whole-word masking, so it does not affect the tokenizer of the model. If you want to replace it, you can either extend Jieba's dictionary by following this link, which helps Jieba recognize new words in your training data, or use another tokenizer by changing the dataloader in the pre-training codebase.
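
If you only need Jieba to recognize domain-specific words, here is a minimal sketch of extending its dictionary (the dictionary file name and the example word are hypothetical):

```python
# Minimal sketch: extend Jieba's dictionary so whole-word masking treats new words
# as single units. The dictionary path and example word are hypothetical.
import jieba

# Load a user dictionary file (one entry per line: "word [frequency] [POS tag]");
# the path below is hypothetical:
# jieba.load_userdict("user_dict.txt")

# Or add individual words programmatically:
jieba.add_word("自然语言处理")

# The new word is now kept as a single segment
print(list(jieba.cut("我爱自然语言处理")))
```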

@shivanraptor
Author

After preparing the input data, I decided to pre-train in the BART format. While running run_pretrain_bart.sh, it fails with:

[rank0]: IndexError: Caught IndexError in DataLoader worker process 0.
[rank0]: Original Traceback (most recent call last):
[rank0]:   File "/home/jupyter-raptor/.local/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
[rank0]:     data = fetcher.fetch(index)  # type: ignore[possibly-undefined]
[rank0]:   File "/home/jupyter-raptor/.local/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
[rank0]:     data = [self.dataset[idx] for idx in possibly_batched_index]
[rank0]:   File "/home/jupyter-raptor/.local/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
[rank0]:     data = [self.dataset[idx] for idx in possibly_batched_index]
[rank0]:   File "/home/jupyter-raptor/pretrain_tokenizer/megatron/data/blendable_dataset.py", line 83, in __getitem__
[rank0]:     return self.datasets[dataset_idx][sample_idx]
[rank0]:   File "/home/jupyter-raptor/pretrain_tokenizer/megatron/data/bart_dataset.py", line 106, in __getitem__
[rank0]:     return self.build_training_sample(sample, self.max_seq_length, np_rng)
[rank0]:   File "/home/jupyter-raptor/pretrain_tokenizer/megatron/data/bart_dataset.py", line 148, in build_training_sample
[rank0]:     source = self.add_whole_word_mask(source, mask_ratio, replace_length)
[rank0]:   File "/home/jupyter-raptor/pretrain_tokenizer/megatron/data/bart_dataset.py", line 360, in add_whole_word_mask
[rank0]:     source[indices[mask_random]] = torch.randint(
[rank0]: IndexError: The shape of the mask [2] at index 0 does not match the shape of the indexed tensor [1] at index 0

I have no clue how to solve it. Can you help?
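
For context on what the message means: the failing line indexes a tensor with a boolean mask, and PyTorch requires that mask to have the same length as the dimension it indexes. A minimal standalone sketch of that failure mode, with made-up tensors rather than anything taken from the dataset code:

```python
# Minimal sketch of the failure mode in the traceback above: a boolean mask whose
# length does not match the tensor it indexes. The tensors are made up for illustration.
import torch

indices = torch.tensor([5])                 # only one candidate position
mask_random = torch.tensor([True, False])   # but the mask covers two positions

source = torch.tensor([101, 8021, 8024, 102, 0, 103])
try:
    source[indices[mask_random]] = 0        # indices[mask_random] raises the IndexError
except IndexError as err:
    print(err)  # "The shape of the mask [2] at index 0 does not match ... [1] ..."
```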
