🌐 [i18n-KO] Fixed Korean and English quicktour.md (huggingface#24664)
* fix: english/korean quicktour.md

* fix: resolve suggestions

Co-authored-by: Hyeonseo Yun <[email protected]>
Co-authored-by: Sohyun Sim <[email protected]>
Co-authored-by: Kihoon Son <[email protected]>

* fix: follow glossary

* 파인튜닝 -> 미세조정 (replace the transliterated "fine-tuning" with the native Korean term, per the glossary)

---------

Co-authored-by: Hyeonseo Yun <[email protected]>
Co-authored-by: Sohyun Sim <[email protected]>
Co-authored-by: Kihoon Son <[email protected]>
4 people authored and blbadger committed Nov 8, 2023
1 parent 7efa0d8 commit 2653b0a
Showing 2 changed files with 113 additions and 102 deletions.
10 changes: 5 additions & 5 deletions docs/source/en/quicktour.md
@@ -64,7 +64,7 @@ For a complete list of available tasks, check out the [pipeline API reference](.
| Audio classification | assign a label to some audio data | Audio | pipeline(task="audio-classification") |
| Automatic speech recognition | transcribe speech into text | Audio | pipeline(task="automatic-speech-recognition") |
| Visual question answering | answer a question about the image, given an image and a question | Multimodal | pipeline(task="vqa") |
-| Document question answering | answer a question about a document, given an image and a question | Multimodal | pipeline(task="document-question-answering") |
+| Document question answering | answer a question about the document, given a document and a question | Multimodal | pipeline(task="document-question-answering") |
| Image captioning | generate a caption for a given image | Multimodal | pipeline(task="image-to-text") |

Start by creating an instance of [`pipeline`] and specifying a task you want to use it for. In this guide, you'll use the [`pipeline`] for sentiment analysis as an example:
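A minimal sketch of that first step, following the quicktour's sentiment-analysis example (with no model argument, the pipeline downloads a default checkpoint on first use, so the exact checkpoint may vary by library version):

```python
from transformers import pipeline

# Create a sentiment-analysis pipeline; a default checkpoint is
# downloaded and cached the first time this runs.
classifier = pipeline("sentiment-analysis")

# The pipeline returns a list of dicts with a label and a confidence score,
# e.g. [{'label': 'POSITIVE', 'score': ...}].
result = classifier("We are very happy to show you the 🤗 Transformers library.")
print(result)
```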
@@ -289,7 +289,7 @@ See the [task summary](./task_summary) for tasks supported by an [`AutoModel`] c

</Tip>

-Now pass your preprocessed batch of inputs directly to the model by passing the dictionary keys directly to the tensors:
+Now pass your preprocessed batch of inputs directly to the model. You can pass the tensors as-is:

```py
>>> tf_outputs = tf_model(tf_batch)
@@ -410,7 +410,7 @@ All models are a standard [`torch.nn.Module`](https://pytorch.org/docs/stable/nn

Depending on your task, you'll typically pass the following parameters to [`Trainer`]:

-1. A [`PreTrainedModel`] or a [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module):
+1. You'll start with a [`PreTrainedModel`] or a [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module):

```py
>>> from transformers import AutoModelForSequenceClassification
@@ -432,7 +432,7 @@ Depending on your task, you'll typically pass the following parameters to [`Trai
... )
```

-3. A preprocessing class like a tokenizer, image processor, feature extractor, or processor:
+3. Load a preprocessing class like a tokenizer, image processor, feature extractor, or processor:

```py
>>> from transformers import AutoTokenizer
@@ -512,7 +512,7 @@ All models are a standard [`tf.keras.Model`](https://www.tensorflow.org/api_docs
>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```

-2. A preprocessing class like a tokenizer, image processor, feature extractor, or processor:
+2. Load a preprocessing class like a tokenizer, image processor, feature extractor, or processor:

```py
>>> from transformers import AutoTokenizer
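The [`Trainer`] hunks above edit a numbered list of the ingredients [`Trainer`] takes: a model, a [`TrainingArguments`], and a preprocessing class. A minimal sketch of assembling those three pieces; the `distilbert-base-uncased` checkpoint comes from the quicktour itself, while `output_dir` and the epoch count are illustrative placeholders, not values from this commit:

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    TrainingArguments,
)

# 1. Start with a PreTrainedModel (the checkpoint is downloaded on first use).
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

# 2. TrainingArguments holds the training hyperparameters;
#    output_dir is a placeholder path for checkpoints and logs.
training_args = TrainingArguments(output_dir="./results", num_train_epochs=2)

# 3. Load a preprocessing class, here the tokenizer matching the checkpoint.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```

A [`Trainer`] would then be constructed from these plus a dataset and data collator; those steps fall outside the hunks shown in this diff.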
