Hi,
In the "Merging for Generative Models" step, I see 20 files uploaded in the finetune dataset link (https://huggingface.co/datasets/lu-vae/natural-dataset/tree/main), but I'm not sure which task each file corresponds to. Can you upload the test dataset configuration file?
Thanks!
The finetuning datasets are detailed in Appendix D.2 of the paper. For MMLU and TruthfulQA, which lack official training sets, we used the Dolly-15k dataset for MMLU and the BigBench-sampled dataset for TruthfulQA. For GSM8k and CNN-DailyMail, we used the original training datasets (e.g., here). I forgot to upload the BigBench dataset; I will do so shortly.
The test datasets are contained in the HELM evaluation framework; we have uploaded a subset here, and its sources are configured by this file.
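For reference, the task-to-finetuning-source correspondence described above can be sketched as a simple lookup table. This is only an illustrative summary of the reply, not the actual file layout of the dataset repo; the dict keys and helper function are hypothetical names, not part of the released code.

```python
# Task -> finetuning-data source, as described in the reply above.
# MMLU and TruthfulQA lack official training sets, so substitutes are used.
FINETUNE_SOURCES = {
    "MMLU": "Dolly-15k",               # substitute dataset
    "TruthfulQA": "BigBench-sampled",  # substitute dataset
    "GSM8k": "original training set",
    "CNN-DailyMail": "original training set",
}

def finetune_source(task: str) -> str:
    """Return the finetuning-data source for a task (raises KeyError if unknown)."""
    return FINETUNE_SOURCES[task]
```

A mapping like this could be extended with the actual filenames once the dataset repo's per-task configuration is published.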