Bump release (#187)
* Chapter 2 Section 1 Bengali Translation (#72) (#168)

* [TH] Chapter 6 Section 1 and 2 (#171)

Co-authored-by: Suteera <[email protected]>

* [FA] CH1 / P1-2 (#142)

* Spanish Chapter 3: sections 1 & 2 (#162)

* fix typos in bpe, wordpiece, unigram (#166)

* [FR] French Review (#186)

* Part 7: Training a causal... fixes (#179)

* typo & error mitigation

* consistency

* Trainer.predict() returns 3 fields

* ran make style

* [TR] Translated Chapter 1.6 🤗 (#185)

* added chapter 1/6 to _toctree.yml

* [TR] Translated Chapter 1.6 🤗

Co-authored-by: Avishek Das <[email protected]>
Co-authored-by: Suteera  Seeha <[email protected]>
Co-authored-by: Suteera <[email protected]>
Co-authored-by: Saeed Choobani <[email protected]>
Co-authored-by: Fermin Ordaz <[email protected]>
Co-authored-by: Kerem Turgutlu <[email protected]>
Co-authored-by: lbourdois <[email protected]>
Co-authored-by: Sebastian Sosa <[email protected]>
Co-authored-by: tanersekmen <[email protected]>
10 people authored May 17, 2022
1 parent 19d3c25 commit 679bdbf
Showing 88 changed files with 11,843 additions and 9,754 deletions.
1 change: 1 addition & 0 deletions README.md
@@ -15,6 +15,7 @@ This repo contains the content that's used to create the **[Hugging Face course]
| [Russian](https://huggingface.co/course/ru/chapter1/1) (WIP) | [`chapters/ru`](https://github.com/huggingface/course/tree/main/chapters/ru) | [@pdumin](https://github.com/pdumin), [@svv73](https://github.com/svv73) |
| [Spanish](https://huggingface.co/course/es/chapter1/1) (WIP) | [`chapters/es`](https://github.com/huggingface/course/tree/main/chapters/es) | [@camartinezbu](https://github.com/camartinezbu), [@munozariasjm](https://github.com/munozariasjm), [@fordaz](https://github.com/fordaz) |
| [Thai](https://huggingface.co/course/th/chapter1/1) (WIP) | [`chapters/th`](https://github.com/huggingface/course/tree/main/chapters/th) | [@peeraponw](https://github.com/peeraponw), [@a-krirk](https://github.com/a-krirk), [@jomariya23156](https://github.com/jomariya23156), [@ckingkan](https://github.com/ckingkan) |
| [Turkish](https://huggingface.co/course/tr/chapter1/1) (WIP) | [`chapters/tr`](https://github.com/huggingface/course/tree/main/chapters/tr) | [@tanersekmen](https://github.com/tanersekmen), [@mertbozkir](https://github.com/mertbozkir), [@ftarlaci](https://github.com/ftarlaci), [@akkasayaz](https://github.com/akkasayaz) |

### Translating the course into your language

5 changes: 5 additions & 0 deletions chapters/bn/_toctree.yml
@@ -6,4 +6,9 @@
- title: 1. Transformer models
  sections:
  - local: chapter1/1
    title: Introduction

- title: 2. Using 🤗 Transformers
  sections:
  - local: chapter2/1
    title: Introduction
20 changes: 20 additions & 0 deletions chapters/bn/chapter2/1.mdx
@@ -0,0 +1,20 @@
# Introduction

As you saw in [Chapter 1](/course/bn/chapter1), Transformer models are usually very large. With millions to tens of billions of parameters, training and deploying these models is a complicated and resource-intensive task. On top of that, new models are released almost every day, and each has its own implementation, so trying them all out is no easy job.

The 🤗 Transformers library was created to solve these problems. Its goal is to provide a single API through which any Transformer model can be loaded, trained, and saved. The library's main features are:

- **Ease of use**: Downloading, loading, and running inference with any state-of-the-art model takes just two lines of code (see the short sketch right after this list).
- **Flexibility**: At their core, all models are simply PyTorch `nn.Module` or TensorFlow `tf.keras.Model` classes, and they can be handled like any other models in their respective machine learning frameworks.

- **Simplicity**: Hardly any abstractions are made across the library. "All in one file" is a core concept: a model's entire forward pass is contained in a single file, so the code itself is easy to understand and modify.
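
A minimal sketch of the two-line claim above (the `sentiment-analysis` pipeline and the example sentence are illustrative choices, not part of the original text):

```python
from transformers import pipeline

# Line 1: download and load a state-of-the-art model behind a simple pipeline.
classifier = pipeline("sentiment-analysis")
# Line 2: run inference.
print(classifier("I've been waiting for a Hugging Face course my whole life."))
```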

This last feature (simplicity) makes 🤗 Transformers quite different from other ML libraries. The models are not built on modules that are shared across files; instead, each model has its own layers. Besides making the models more approachable and understandable, this lets you easily experiment on one model without affecting the others.

This chapter will begin with an end-to-end example where we use a model and a tokenizer together to replicate the `pipeline()` function introduced in [Chapter 1](/course/bn/chapter1). After that, we'll discuss the model API: we'll dive into the model and configuration classes, and show you how to load a model and how it processes numerical inputs to output predictions.

Then we'll look at the tokenizer API, which is the other main component of the `pipeline()` function. Tokenizers mainly take care of the first and last processing steps: converting text into numerical inputs for the neural network, and converting numerical data back into text whenever it is needed. Finally, we'll show you how to send multiple sentences to a model in a batch, and then wrap up the chapter with another high-level look at the `tokenizer()` function.
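
As an illustration of that text-to-numbers round trip, here is a minimal sketch (the checkpoint name and sentence are examples only):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer("Using a Transformer network is simple")
print(encoded["input_ids"])  # numerical inputs for the neural network
print(tokenizer.decode(encoded["input_ids"]))  # back to text
```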

<Tip>
⚠️ In order to benefit from all the features available with the Model Hub and 🤗 Transformers, we recommend <a href="https://huggingface.co/join">creating an account</a>.
</Tip>
2 changes: 1 addition & 1 deletion chapters/en/chapter3/3.mdx
@@ -162,7 +162,7 @@ This time, it will report the validation loss and metrics at the end of each epo

The `Trainer` will work out of the box on multiple GPUs or TPUs and provides lots of options, like mixed-precision training (use `fp16 = True` in your training arguments). We will go over everything it supports in Chapter 10.
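
For illustration, a minimal sketch of enabling mixed precision through the training arguments (the output directory name is arbitrary):

```python
from transformers import TrainingArguments

# fp16=True switches the Trainer to mixed-precision training.
training_args = TrainingArguments("test-trainer", fp16=True)
```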

-This concludes the introduction to fine-tuning using the `Trainer` API. An example of doing this for most common NLP tasks will be given in [Chapter 7](course/chapter7), but for now let's look at how to do the same thing in pure PyTorch.
+This concludes the introduction to fine-tuning using the `Trainer` API. An example of doing this for most common NLP tasks will be given in [Chapter 7](/course/chapter7), but for now let's look at how to do the same thing in pure PyTorch.

<Tip>

2 changes: 1 addition & 1 deletion chapters/en/chapter3/3_tf.mdx
@@ -196,4 +196,4 @@ metric.compute(predictions=class_preds, references=raw_datasets["validation"]["l

The exact results you get may vary, as the random initialization of the model head might change the metrics it achieved. Here, we can see our model has an accuracy of 85.78% on the validation set and an F1 score of 89.97. Those are the two metrics used to evaluate results on the MRPC dataset for the GLUE benchmark. The table in the [BERT paper](https://arxiv.org/pdf/1810.04805.pdf) reported an F1 score of 88.9 for the base model. That was the `uncased` model while we are currently using the `cased` model, which explains the better result.

-This concludes the introduction to fine-tuning using the Keras API. An example of doing this for most common NLP tasks will be given in [Chapter 7](course/chapter7). If you would like to hone your skills on the Keras API, try to fine-tune a model on the GLUE SST-2 dataset, using the data processing you did in section 2.
+This concludes the introduction to fine-tuning using the Keras API. An example of doing this for most common NLP tasks will be given in [Chapter 7](/course/chapter7). If you would like to hone your skills on the Keras API, try to fine-tune a model on the GLUE SST-2 dataset, using the data processing you did in section 2.
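
As a reminder of where those two numbers come from, here is a hedged sketch of computing the MRPC metrics, assuming `class_preds` and `raw_datasets` from the earlier steps are still in scope:

```python
from datasets import load_metric

# GLUE/MRPC is scored with accuracy and F1.
metric = load_metric("glue", "mrpc")
metric.compute(
    predictions=class_preds,
    references=raw_datasets["validation"]["label"],
)
# e.g. {'accuracy': 0.8578, 'f1': 0.8997} -- the values quoted above
```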
2 changes: 1 addition & 1 deletion chapters/en/chapter6/5.mdx
@@ -113,7 +113,7 @@ First we need a corpus, so let's create a simple one with a few sentences:

```python
corpus = [
-    "This is the Hugging Face course.",
+    "This is the Hugging Face Course.",
    "This chapter is about tokenization.",
    "This section shows several tokenizer algorithms.",
    "Hopefully, you will be able to understand how they are trained and generate tokens.",
4 changes: 2 additions & 2 deletions chapters/en/chapter6/6.mdx
@@ -106,7 +106,7 @@ We will use the same corpus as in the BPE example:

```python
corpus = [
-    "This is the Hugging Face course.",
+    "This is the Hugging Face Course.",
    "This chapter is about tokenization.",
    "This section shows several tokenizer algorithms.",
    "Hopefully, you will be able to understand how they are trained and generate tokens.",
@@ -307,7 +307,7 @@ print(vocab)
```python out
['[PAD]', '[UNK]', '[CLS]', '[SEP]', '[MASK]', '##a', '##b', '##c', '##d', '##e', '##f', '##g', '##h', '##i', '##k',
'##l', '##m', '##n', '##o', '##p', '##r', '##s', '##t', '##u', '##v', '##w', '##y', '##z', ',', '.', 'C', 'F', 'H',
-'T', 'a', 'b', 'c', 'g', 'h', 'i', 's', 't', 'u', 'w', 'y', '##fu', 'Fa', 'Fac', '##ct', '##ful', '##full', '##fully',
+'T', 'a', 'b', 'c', 'g', 'h', 'i', 's', 't', 'u', 'w', 'y', 'ab', '##fu', 'Fa', 'Fac', '##ct', '##ful', '##full', '##fully',
'Th', 'ch', '##hm', 'cha', 'chap', 'chapt', '##thm', 'Hu', 'Hug', 'Hugg', 'sh', 'th', 'is', '##thms', '##za', '##zat',
'##ut']
```
2 changes: 1 addition & 1 deletion chapters/en/chapter6/7.mdx
@@ -157,7 +157,7 @@ We will use the same corpus as before as an example:

```python
corpus = [
-    "This is the Hugging Face course.",
+    "This is the Hugging Face Course.",
    "This chapter is about tokenization.",
    "This section shows several tokenizer algorithms.",
    "Hopefully, you will be able to understand how they are trained and generate tokens.",
13 changes: 10 additions & 3 deletions chapters/en/chapter7/6.mdx
@@ -67,6 +67,11 @@ False True
We can use this to create a function that will stream the dataset and filter the elements we want:

```py
from collections import defaultdict
from tqdm import tqdm
from datasets import Dataset


def filter_streaming_dataset(dataset, filters):
    filtered_dict = defaultdict(list)
    total = 0
@@ -105,7 +110,7 @@ Filtering the full dataset can take 2-3h depending on your machine and bandwidth
from datasets import load_dataset, DatasetDict

ds_train = load_dataset("huggingface-course/codeparrot-ds-train", split="train")
-ds_valid = load_dataset("huggingface-course/codeparrot-ds-valid", split="train")
+ds_valid = load_dataset("huggingface-course/codeparrot-ds-valid", split="validation")

raw_datasets = DatasetDict(
    {
@@ -347,7 +352,7 @@ data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False, return_ten
Let's have a look at an example:

```py
-out = data_collator([tokenized_dataset["train"][i] for i in range(5)])
+out = data_collator([tokenized_datasets["train"][i] for i in range(5)])
for key in out:
    print(f"{key} shape: {out[key].shape}")
```
@@ -799,6 +804,8 @@ model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
Now that we have sent our `train_dataloader` to `accelerator.prepare()`, we can use its length to compute the number of training steps. Remember that we should always do this after preparing the dataloader, as that method will change its length. We use a classic linear schedule from the learning rate to 0:

```py
from transformers import get_scheduler

num_train_epochs = 1
num_update_steps_per_epoch = len(train_dataloader)
num_training_steps = num_train_epochs * num_update_steps_per_epoch
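# Illustrative sketch only (not part of the original diff): the linear schedule
# described above, decaying from the initial learning rate to 0. `optimizer` is
# assumed to be the one passed to accelerator.prepare(); the warmup value is an
# arbitrary example.
lr_scheduler = get_scheduler(
    name="linear",
    optimizer=optimizer,
    num_warmup_steps=1_000,
    num_training_steps=num_training_steps,
)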
@@ -856,7 +863,7 @@ model.train()
completed_steps = 0
for epoch in range(num_train_epochs):
    for step, batch in tqdm(
-        enumerate(train_dataloader, start=1), total=len(train_dataloader)
+        enumerate(train_dataloader, start=1), total=num_training_steps
    ):
        logits = model(batch["input_ids"]).logits
        loss = keytoken_weighted_loss(batch["input_ids"], logits, keytoken_ids)
2 changes: 1 addition & 1 deletion chapters/en/chapter7/7.mdx
@@ -955,7 +955,7 @@ Note that while the training happens, each time the model is saved (here, every
Once the training is complete, we can finally evaluate our model (and pray we didn't spend all that compute time on nothing). The `predict()` method of the `Trainer` will return a tuple whose first element is the predictions of the model (here a pair with the start and end logits). We send this to our `compute_metrics()` function:

```python
-predictions, _ = trainer.predict(validation_dataset)
+predictions, _, _ = trainer.predict(validation_dataset)
start_logits, end_logits = predictions
compute_metrics(start_logits, end_logits, validation_dataset, raw_datasets["validation"])
```
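
The three unpacked values reflect the fact that `Trainer.predict()` returns a `PredictionOutput` named tuple; a small sketch of the equivalent, more explicit form:

```python
output = trainer.predict(validation_dataset)
# output exposes three fields: predictions, label_ids, and metrics.
start_logits, end_logits = output.predictions
```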
4 changes: 2 additions & 2 deletions chapters/en/event/1.mdx
@@ -86,7 +86,7 @@ Jakob Uszkoreit is the co-founder of Inceptive. Inceptive designs RNA molecules
<Youtube id="u--UVvH-LIQ"/>
</div>

-Lewis is a machine learning engineer at Hugging Face, focused on developing open-source tools and making them accessible to the wider community. He is also a co-author of an upcoming O’Reilly book on Transformers and you can follow him on Twitter (@_lewtun) for NLP tips and tricks!
+Lewis is a machine learning engineer at Hugging Face, focused on developing open-source tools and making them accessible to the wider community. He is also a co-author of the O’Reilly book [Natural Language Processing with Transformers](https://www.oreilly.com/library/view/natural-language-processing/9781098103231/). You can follow him on Twitter (@_lewtun) for NLP tips and tricks!

**Matthew Carrigan:** *New TensorFlow Features for 🤗 Transformers and 🤗 Datasets*

@@ -162,4 +162,4 @@ Technology enthusiast, maker on my free time. I like challenges and solving prob
<Youtube id="yG6J2Zfo8iw"/>
</div>

Philipp Schmid is a Machine Learning Engineer and Tech Lead at Hugging Face, where he leads the collaboration with the Amazon SageMaker team. He is passionate about democratizing and productionizing cutting-edge NLP models and improving the ease of use for Deep Learning.
7 changes: 7 additions & 0 deletions chapters/es/_toctree.yml
@@ -33,3 +33,10 @@
    title: Tokenizers
  - local: chapter2/5
    title: Handling Multiple Sequences

- title: 3. Fine-tuning a pretrained model
  sections:
  - local: chapter3/1
    title: Introduction
  - local: chapter3/2
    title: Processing the data
21 changes: 21 additions & 0 deletions chapters/es/chapter3/1.mdx
@@ -0,0 +1,21 @@
<FrameworkSwitchCourse {fw} />

# Introduction

In [Chapter 2](/course/chapter2) we explored how to use tokenizers and pretrained models to make predictions. But what if you want to fine-tune a pretrained model on your own dataset? In this chapter, you will learn:

{#if fw === 'pt'}
* How to prepare a large dataset from the Hub.
* How to use the high-level `Trainer` API to fine-tune a model.
* How to use a custom training loop.
* How to take advantage of the 🤗 Accelerate library to easily run that custom training loop on any distributed setup.

{:else}
* How to prepare a large dataset from the Hub.
* How to use Keras to fine-tune a model.
* How to use Keras to get predictions.
* How to use a custom metric.

{/if}

To upload your checkpoints to the Hugging Face Hub, you will need a huggingface.co account: [create an account](https://huggingface.co/join)
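
If useful, a minimal sketch of authenticating once you have an account (either the terminal command `huggingface-cli login` or the Python helper shown below works):

```python
from huggingface_hub import notebook_login

notebook_login()  # prompts for your Hugging Face token in a notebook
```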