If you've made it this far in the course, congratulations -- you now have all the knowledge and tools you need to tackle (almost) any NLP task with 🤗 Transformers and the Hugging Face ecosystem!
We have seen a lot of different data collators along the way, so we made a short video to help you find which one to use for each task.
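As a concrete reminder of how these collators are used, here is a minimal sketch for the masked language modeling case; the checkpoint is just an illustrative choice, and any tokenizer with a mask token would work:

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Illustrative checkpoint; any tokenizer with a mask token works for MLM
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# For masked language modeling, this collator pads each batch and randomly
# selects 15% of the tokens for the model to predict
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

samples = [tokenizer("I love this course!"), tokenizer("Transformers are great.")]
batch = data_collator(samples)
print(batch["input_ids"].shape, batch["labels"].shape)
```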
After completing this lightning tour through the core NLP tasks, you should:
- Know which architectures (encoder, decoder, or encoder-decoder) are best suited for each task
- Understand the difference between pretraining and fine-tuning a language model
- Know how to train Transformer models using either the `Trainer` API and the distributed training features of 🤗 Accelerate, or TensorFlow and Keras, depending on which track you've been following (see the first sketch below)
- Understand the meaning and limitations of metrics like ROUGE and BLEU for text generation tasks (second sketch below)
- Know how to interact with your fine-tuned models, both on the Hub and using the `pipeline()` function from 🤗 Transformers (third sketch below)
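To jog your memory on the training point, here is a minimal fine-tuning sketch with the `Trainer` API, assuming the PyTorch track; the checkpoint, dataset, and tiny training slice are illustrative choices made only to keep the example small and fast:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Illustrative choices: a small checkpoint and a tiny slice of a standard dataset
checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("imdb", split="train[:100]")
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

# With a tokenizer provided, the Trainer pads each batch dynamically by default
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="test-trainer"),
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```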
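For the metrics point, computing BLEU boils down to a few lines; this sketch uses the 🤗 Evaluate library with made-up sentences:

```python
import evaluate

# SacreBLEU expects one list of reference translations per prediction
bleu = evaluate.load("sacrebleu")
predictions = ["the cat sat on the mat"]
references = [["the cat sat on the mat", "a cat was sitting on the mat"]]
results = bleu.compute(predictions=predictions, references=references)
print(results["score"])  # corpus-level BLEU score between 0 and 100
```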
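And for the last point, loading a model from the Hub with `pipeline()` looks like this; the public sentiment checkpoint is just a stand-in for the ID of your own fine-tuned model:

```python
from transformers import pipeline

# Swap in the ID of your own fine-tuned model from the Hub; this public
# sentiment-analysis checkpoint is just an illustrative stand-in
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("I can't wait to fine-tune my next model!"))
```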
Despite all this knowledge, there will come a time when you'll either encounter a difficult bug in your code or have a question about how to solve a particular NLP problem. Fortunately, the Hugging Face community is here to help you! In the final chapter of this part of the course, we'll explore how you can debug your Transformer models and ask for help effectively.