docs: update link huggingface map (huggingface#26077)
pphuc25 authored and EduardoPach committed Nov 18, 2023
1 parent 713f384 commit a96ea45
Showing 15 changed files with 15 additions and 15 deletions.

docs/source/es/tasks/language_modeling.md (2 changes: 1 addition & 1 deletion)

@@ -122,7 +122,7 @@ Así es como puedes crear una función de preprocesamiento para convertir la lis
... return tokenizer([" ".join(x) for x in examples["answers.text"]], truncation=True)
```

-Usa de 🤗 Datasets la función [`map`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map) para aplicar la función de preprocesamiento sobre el dataset en su totalidad. Puedes acelerar la función `map` configurando el argumento `batched=True` para procesar múltiples elementos del dataset a la vez y aumentar la cantidad de procesos con `num_proc`. Elimina las columnas que no necesitas:
+Usa de 🤗 Datasets la función [`map`](https://huggingface.co/docs/datasets/process#map) para aplicar la función de preprocesamiento sobre el dataset en su totalidad. Puedes acelerar la función `map` configurando el argumento `batched=True` para procesar múltiples elementos del dataset a la vez y aumentar la cantidad de procesos con `num_proc`. Elimina las columnas que no necesitas:

```py
>>> tokenized_eli5 = eli5.map(
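
For context, the full preprocessing pipeline this paragraph describes looks roughly like the sketch below, assembled from the surrounding guide rather than from this diff; the `distilgpt2` checkpoint and the `train_asks` split are assumptions based on the guide's other examples:

```py
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")  # assumed checkpoint
eli5 = load_dataset("eli5", split="train_asks[:5000]").flatten()

def preprocess_function(examples):
    # join each list of answer strings into one text, then tokenize it
    return tokenizer([" ".join(x) for x in examples["answers.text"]], truncation=True)

tokenized_eli5 = eli5.map(
    preprocess_function,
    batched=True,                      # process many rows per call
    num_proc=4,                        # parallelize across four processes
    remove_columns=eli5.column_names,  # drop the raw columns you no longer need
)
```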

docs/source/pt/tasks/sequence_classification.md (2 changes: 1 addition & 1 deletion)

@@ -70,7 +70,7 @@ Crie uma função de pré-processamento para tokenizar o campo `text` e truncar
... return tokenizer(examples["text"], truncation=True)
```

-Use a função [`map`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map) do 🤗 Datasets para aplicar a função de pré-processamento em todo o conjunto de dados. Você pode acelerar a função `map` definindo `batched=True` para processar vários elementos do conjunto de dados de uma só vez:
+Use a função [`map`](https://huggingface.co/docs/datasets/process#map) do 🤗 Datasets para aplicar a função de pré-processamento em todo o conjunto de dados. Você pode acelerar a função `map` definindo `batched=True` para processar vários elementos do conjunto de dados de uma só vez:

```py
tokenized_imdb = imdb.map(preprocess_function, batched=True)
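
The reason `batched=True` speeds things up is that the preprocessing function then receives a whole batch at once, so `examples["text"]` is a list of strings and the tokenizer encodes them in a single call. A runnable toy illustration; the tiny dataset and the `distilbert-base-uncased` checkpoint are stand-ins, not the IMDb setup from the guide:

```py
from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")  # stand-in checkpoint

# toy stand-in for the imdb dataset used in the guide
toy = Dataset.from_dict({"text": ["great movie", "terrible plot"], "label": [1, 0]})

def preprocess_function(examples):
    # with batched=True, examples["text"] is a list of strings
    return tokenizer(examples["text"], truncation=True)

tokenized = toy.map(preprocess_function, batched=True)
print(tokenized.column_names)  # ['text', 'label', 'input_ids', 'attention_mask']
```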

docs/source/pt/tasks/token_classification.md (2 changes: 1 addition & 1 deletion)

@@ -128,7 +128,7 @@ Aqui está como você pode criar uma função para realinhar os tokens e rótulo
... return tokenized_inputs
```

-Use a função [`map`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map) do 🤗 Datasets para tokenizar e alinhar os rótulos em todo o conjunto de dados. Você pode acelerar a função `map` configurando `batched=True` para processar vários elementos do conjunto de dados de uma só vez:
+Use a função [`map`](https://huggingface.co/docs/datasets/process#map) do 🤗 Datasets para tokenizar e alinhar os rótulos em todo o conjunto de dados. Você pode acelerar a função `map` configurando `batched=True` para processar vários elementos do conjunto de dados de uma só vez:

```py
>>> tokenized_wnut = wnut.map(tokenize_and_align_labels, batched=True)
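
The `tokenize_and_align_labels` function whose last line appears in this hunk follows the standard token-classification pattern from the Transformers docs: tokenize pre-split words, then use `word_ids()` to realign the per-word labels to subword tokens. A sketch of that pattern; the checkpoint and the `tokens`/`ner_tags` column names are assumptions matching the WNUT setup, and a fast tokenizer is required for `word_ids()`:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")  # assumed checkpoint

def tokenize_and_align_labels(examples):
    # the words are already split, so tell the tokenizer not to re-split them
    tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)

    labels = []
    for i, label in enumerate(examples["ner_tags"]):
        word_ids = tokenized_inputs.word_ids(batch_index=i)  # subword -> word index
        previous_word_idx = None
        label_ids = []
        for word_idx in word_ids:
            if word_idx is None:
                label_ids.append(-100)             # special tokens: ignored by the loss
            elif word_idx != previous_word_idx:
                label_ids.append(label[word_idx])  # first subword keeps the word label
            else:
                label_ids.append(-100)             # later subwords are masked out
            previous_word_idx = word_idx
        labels.append(label_ids)

    tokenized_inputs["labels"] = labels
    return tokenized_inputs
```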

examples/flax/language-modeling/run_bart_dlm_flax.py (2 changes: 1 addition & 1 deletion)

@@ -684,7 +684,7 @@ def group_texts(examples):
# might be slower to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
-# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
+# https://huggingface.co/docs/datasets/process#map
tokenized_datasets = tokenized_datasets.map(
group_texts,
batched=True,
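
Every script touched below carries this same comment and the same `group_texts` helper: it concatenates the tokenized texts and slices them into fixed-size blocks, and passing `num_proc` to `map` is what turns on the multiprocessing the comment mentions. A condensed, runnable sketch of the pattern; the block size, process count, and toy data are illustrative, since the real scripts derive them from the model config and the tokenized corpus:

```py
from datasets import Dataset

block_size = 8  # illustrative; the scripts compute this from the model's max length

def group_texts(examples):
    # concatenate each column (input_ids, attention_mask, ...) across the batch
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    # drop the tail so every chunk is exactly block_size tokens long
    total_length = (total_length // block_size) * block_size
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }

# toy stand-in for the tokenized dataset the scripts build
tokenized_datasets = Dataset.from_dict({"input_ids": [list(range(10)), list(range(10, 23))]})
lm_datasets = tokenized_datasets.map(
    group_texts,
    batched=True,
    num_proc=2,  # the multiprocessing speedup the comment refers to
)
print(lm_datasets["input_ids"])  # chunks of exactly 8 token ids
```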

examples/flax/language-modeling/run_clm_flax.py (2 changes: 1 addition & 1 deletion)

@@ -607,7 +607,7 @@ def group_texts(examples):
# to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
-# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
+# https://huggingface.co/docs/datasets/process#map

lm_datasets = tokenized_datasets.map(
group_texts,

examples/flax/language-modeling/run_mlm_flax.py (2 changes: 1 addition & 1 deletion)

@@ -625,7 +625,7 @@ def group_texts(examples):
# might be slower to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
-# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
+# https://huggingface.co/docs/datasets/process#map
tokenized_datasets = tokenized_datasets.map(
group_texts,
batched=True,

examples/flax/language-modeling/run_t5_mlm_flax.py (2 changes: 1 addition & 1 deletion)

@@ -715,7 +715,7 @@ def group_texts(examples):
# might be slower to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
-# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
+# https://huggingface.co/docs/datasets/process#map
tokenized_datasets = tokenized_datasets.map(
group_texts,
batched=True,

examples/pytorch/language-modeling/run_clm.py (2 changes: 1 addition & 1 deletion)

@@ -533,7 +533,7 @@ def group_texts(examples):
# to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
-# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
+# https://huggingface.co/docs/datasets/process#map

with training_args.main_process_first(desc="grouping texts together"):
if not data_args.streaming:

examples/pytorch/language-modeling/run_clm_no_trainer.py (2 changes: 1 addition & 1 deletion)

@@ -473,7 +473,7 @@ def group_texts(examples):
# to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
-# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
+# https://huggingface.co/docs/datasets/process#map

with accelerator.main_process_first():
lm_datasets = tokenized_datasets.map(

examples/pytorch/language-modeling/run_mlm.py (2 changes: 1 addition & 1 deletion)

@@ -547,7 +547,7 @@ def group_texts(examples):
# might be slower to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
-# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
+# https://huggingface.co/docs/datasets/process#map

with training_args.main_process_first(desc="grouping texts together"):
if not data_args.streaming:

examples/pytorch/language-modeling/run_mlm_no_trainer.py (2 changes: 1 addition & 1 deletion)

@@ -504,7 +504,7 @@ def group_texts(examples):
# might be slower to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
-# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
+# https://huggingface.co/docs/datasets/process#map

with accelerator.main_process_first():
tokenized_datasets = tokenized_datasets.map(

examples/pytorch/language-modeling/run_plm.py (2 changes: 1 addition & 1 deletion)

@@ -478,7 +478,7 @@ def group_texts(examples):
# might be slower to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
-# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
+# https://huggingface.co/docs/datasets/process#map

with training_args.main_process_first(desc="grouping texts together"):
tokenized_datasets = tokenized_datasets.map(

@@ -395,7 +395,7 @@ def group_texts(examples):
# to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
-# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
+# https://huggingface.co/docs/datasets/process#map

lm_datasets = tokenized_datasets.map(
group_texts,

examples/tensorflow/language-modeling/run_clm.py (2 changes: 1 addition & 1 deletion)

@@ -459,7 +459,7 @@ def group_texts(examples):
# to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
-# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
+# https://huggingface.co/docs/datasets/process#map

lm_datasets = tokenized_datasets.map(
group_texts,

examples/tensorflow/language-modeling/run_mlm.py (2 changes: 1 addition & 1 deletion)

@@ -474,7 +474,7 @@ def group_texts(examples):
# might be slower to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
-# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
+# https://huggingface.co/docs/datasets/process#map

tokenized_datasets = tokenized_datasets.map(
group_texts,
