[docs] make empty_cache device-agnostic (#34774)
make device-agnostic
faaany authored Nov 18, 2024
1 parent 36759f3 commit 8568bf1
Showing 1 changed file with 4 additions and 2 deletions.
6 changes: 4 additions & 2 deletions docs/source/en/training.md
@@ -287,9 +287,10 @@ model.fit(tf_dataset)
At this point, you may need to restart your notebook or execute the following code to free some memory:

```diff
+from accelerate.utils.memory import clear_device_cache
 del model
 del trainer
-torch.cuda.empty_cache()
+clear_device_cache()
```
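For context on what the new helper buys you, here is a minimal sketch of a hand-rolled, device-agnostic cache clear. It is only an illustration of the dispatch idea, not `accelerate`'s actual implementation, and it assumes a recent PyTorch where the optional XPU and MPS backends expose `empty_cache`:

```py
# Illustrative sketch only: dispatch a cache clear to whichever backend is present.
# The supported way is accelerate.utils.memory.clear_device_cache(), as in the diff above.
import gc

import torch


def clear_cache_for_available_backend():
    gc.collect()  # release Python-level references before asking the allocator to shrink
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.empty_cache()
    elif torch.backends.mps.is_available():
        torch.mps.empty_cache()
```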

Next, manually postprocess `tokenized_dataset` to prepare it for training.
@@ -364,8 +365,9 @@ Lastly, specify `device` to use a GPU if you have access to one. Otherwise, trai

```diff
 >>> import torch
+>>> from accelerate.test_utils.testing import get_backend

->>> device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
+>>> device, _, _ = get_backend() # automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.)
 >>> model.to(device)
```
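As a quick way to see the new pattern end to end, the snippet below moves a toy tensor to whatever backend is detected. It is a sketch assuming `accelerate` is installed and that the first value returned by `get_backend` is a device string usable with `.to()`:

```py
# Sketch: reuse the detected backend for inputs as well as the model.
import torch
from accelerate.test_utils.testing import get_backend

device, _, _ = get_backend()  # e.g. "cuda", "xpu", "mps", or "cpu"
inputs = torch.randn(2, 3).to(device)  # toy tensor standing in for real batch inputs
print(inputs.device)
```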
