
# Introduction[[introduction]]

In Chapter 3 you got your first taste of the 🤗 Datasets library and saw that there were three main steps when it came to fine-tuning a model:

  1. Load a dataset from the Hugging Face Hub.
  2. Preprocess the data with Dataset.map().
  3. Load and compute metrics.
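To jog your memory, here is a minimal sketch of those three steps, using GLUE's MRPC dataset and the `bert-base-uncased` checkpoint as illustrative choices:

```python
from datasets import load_dataset
from transformers import AutoTokenizer
import evaluate

# 1. Load a dataset from the Hugging Face Hub
raw_datasets = load_dataset("glue", "mrpc")

# 2. Preprocess the data with Dataset.map()
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")


def tokenize_function(example):
    # Tokenize MRPC's two sentence columns as a pair
    return tokenizer(example["sentence1"], example["sentence2"], truncation=True)


tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)

# 3. Load and compute metrics (metric.compute() takes the predictions
# and references produced by your trained model)
metric = evaluate.load("glue", "mrpc")
```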

But this is just scratching the surface of what 🤗 Datasets can do! In this chapter, we will take a deep dive into the library. Along the way, we'll find answers to the following questions:

  • What do you do when your dataset is not on the Hub?
  • How can you slice and dice a dataset? (And what if you really need to use Pandas?)
  • What do you do when your dataset is huge and will melt your laptop's RAM?
  • What the heck are "memory mapping" and Apache Arrow?
  • How can you create your own dataset and push it to the Hub?
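As a small taste of what's ahead, here is a sketch of two of those answers: loading a dataset that isn't on the Hub from a local file, and hopping over to Pandas when you need it. Note that `my_data.csv` is a hypothetical file used purely for illustration:

```python
from datasets import load_dataset

# Load a dataset that isn't on the Hub from a local file
# ("my_data.csv" is a hypothetical path for illustration)
local_datasets = load_dataset("csv", data_files="my_data.csv")

# Slice and dice: shuffle and grab a small sample
# (assuming the file has at least 10 rows)
sample = local_datasets["train"].shuffle(seed=42).select(range(10))

# And if you really need Pandas, convert the sample to a DataFrame
df = sample.to_pandas()
```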

The techniques you learn here will prepare you for the advanced tokenization and fine-tuning tasks in Chapter 6 and Chapter 7 -- so grab a coffee and let's get started!