Part 7: Training a causal... fixes #179

Merged 4 commits on May 17, 2022
13 changes: 10 additions & 3 deletions chapters/en/chapter7/6.mdx
@@ -68,6 +68,11 @@ False True
We can use this to create a function that will stream the dataset and filter the elements we want:

```py
+from collections import defaultdict
+from tqdm import tqdm
+from datasets import Dataset
+
+
def filter_streaming_dataset(dataset, filters):
    filtered_dict = defaultdict(list)
    total = 0
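    # (Editor's sketch, not part of this diff: the hunk collapses the rest of the
    # function. Based on the surrounding chapter, it presumably streams the samples
    # and keeps the matching ones, roughly as below, using the any_keyword_in_string
    # helper defined earlier in the chapter; exact details are assumed.)
    for sample in tqdm(iter(dataset)):
        total += 1
        if any_keyword_in_string(sample["content"], filters):
            for k, v in sample.items():
                filtered_dict[k].append(v)
    return Dataset.from_dict(filtered_dict)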
@@ -106,7 +111,7 @@ Filtering the full dataset can take 2-3h depending on your machine and bandwidth
from datasets import load_dataset, DatasetDict

ds_train = load_dataset("huggingface-course/codeparrot-ds-train", split="train")
-ds_valid = load_dataset("huggingface-course/codeparrot-ds-valid", split="train")
+ds_valid = load_dataset("huggingface-course/codeparrot-ds-valid", split="validation")

raw_datasets = DatasetDict(
    {
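        # (Editor's sketch, not part of this diff: the collapsed lines presumably
        # complete the DatasetDict with the two splits loaded above.)
        "train": ds_train,
        "valid": ds_valid,
    }
)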
@@ -348,7 +353,7 @@ data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False, return_ten
Let's have a look at an example:

```py
-out = data_collator([tokenized_dataset["train"][i] for i in range(5)])
+out = data_collator([tokenized_datasets["train"][i] for i in range(5)])
for key in out:
    print(f"{key} shape: {out[key].shape}")
```
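For reference, `tokenized_datasets` (plural) is the `DatasetDict` built earlier in the chapter by mapping a tokenization function over `raw_datasets`, which is why the corrected line indexes it with `["train"]`. A minimal sketch of that earlier step, with the argument names assumed rather than taken from this diff:

```py
# Sketch (assumed, not shown in this diff): the DatasetDict the corrected line
# indexes into, produced by mapping a `tokenize` function over the raw splits.
tokenized_datasets = raw_datasets.map(
    tokenize, batched=True, remove_columns=raw_datasets["train"].column_names
)
```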
@@ -800,6 +805,8 @@ model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
Now that we have sent our `train_dataloader` to `accelerator.prepare()`, we can use its length to compute the number of training steps. Remember that we should always do this after preparing the dataloader, as that method will change its length. We use a classic linear schedule from the learning rate to 0:

```py
+from transformers import get_scheduler
+
num_train_epochs = 1
num_update_steps_per_epoch = len(train_dataloader)
num_training_steps = num_train_epochs * num_update_steps_per_epoch
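# (Editor's sketch, not part of this diff: the get_scheduler import added above
# is used a little further down the collapsed hunk, roughly as follows; the
# warmup value is an assumption.)
lr_scheduler = get_scheduler(
    name="linear",
    optimizer=optimizer,
    num_warmup_steps=1_000,
    num_training_steps=num_training_steps,
)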
@@ -857,7 +864,7 @@ model.train()
completed_steps = 0
for epoch in range(num_train_epochs):
    for step, batch in tqdm(
-        enumerate(train_dataloader, start=1), total=len(train_dataloader)
+        enumerate(train_dataloader, start=1), total=num_training_steps
    ):
        logits = model(batch["input_ids"]).logits
        loss = keytoken_weighted_loss(batch["input_ids"], logits, keytoken_ids)
2 changes: 1 addition & 1 deletion chapters/en/chapter7/7.mdx
@@ -956,7 +956,7 @@ Note that while the training happens, each time the model is saved (here, every
Once the training is complete, we can finally evaluate our model (and pray we didn't spend all that compute time on nothing). The `predict()` method of the `Trainer` returns a tuple whose first element is the predictions of the model (here a pair with the start and end logits). We send these to our `compute_metrics()` function:

```python
-predictions, _ = trainer.predict(validation_dataset)
+predictions, _, _ = trainer.predict(validation_dataset)
start_logits, end_logits = predictions
compute_metrics(start_logits, end_logits, validation_dataset, raw_datasets["validation"])
```
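The extra underscore in the corrected line reflects the fact that `Trainer.predict()` returns a `PredictionOutput` named tuple with three fields (predictions, label IDs, and metrics), so three values have to be unpacked. A minimal equivalent using attribute access, shown here as a sketch rather than as part of the diff:

```python
# predict() returns a PredictionOutput with predictions, label_ids, and metrics;
# the snippet above discards the last two via the two underscores.
output = trainer.predict(validation_dataset)
start_logits, end_logits = output.predictions
```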