From b759479c75ae1441eb42b7d1d135eea0c4898b6c Mon Sep 17 00:00:00 2001 From: 1375626371 <40328311+1375626371@users.noreply.github.com> Date: Tue, 12 Apr 2022 00:53:37 +0800 Subject: [PATCH 1/6] Chinese - Chapter 1 finished --- chapters/zh/_toctree.yml | 173 ++++++++++++++++++++++ chapters/zh/chapter1/1.mdx | 52 +++++++ chapters/zh/chapter1/10.mdx | 253 ++++++++++++++++++++++++++++++++ chapters/zh/chapter1/2.mdx | 20 +++ chapters/zh/chapter1/3.mdx | 280 ++++++++++++++++++++++++++++++++++++ chapters/zh/chapter1/4.mdx | 172 ++++++++++++++++++++++ chapters/zh/chapter1/5.mdx | 17 +++ chapters/zh/chapter1/6.mdx | 17 +++ chapters/zh/chapter1/7.mdx | 16 +++ chapters/zh/chapter1/8.mdx | 31 ++++ chapters/zh/chapter1/9.mdx | 11 ++ 11 files changed, 1042 insertions(+) create mode 100644 chapters/zh/_toctree.yml create mode 100644 chapters/zh/chapter1/1.mdx create mode 100644 chapters/zh/chapter1/10.mdx create mode 100644 chapters/zh/chapter1/2.mdx create mode 100644 chapters/zh/chapter1/3.mdx create mode 100644 chapters/zh/chapter1/4.mdx create mode 100644 chapters/zh/chapter1/5.mdx create mode 100644 chapters/zh/chapter1/6.mdx create mode 100644 chapters/zh/chapter1/7.mdx create mode 100644 chapters/zh/chapter1/8.mdx create mode 100644 chapters/zh/chapter1/9.mdx diff --git a/chapters/zh/_toctree.yml b/chapters/zh/_toctree.yml new file mode 100644 index 000000000..298de22ab --- /dev/null +++ b/chapters/zh/_toctree.yml @@ -0,0 +1,173 @@ +- title: 0. 准备 + sections: + - local: chapter0/1 + title: 课程简介 + +- title: 1. Transformer 模型 + sections: + - local: chapter1/1 + title: 章节简介 + - local: chapter1/2 + title: 自然语言处理 + - local: chapter1/3 + title: Transformers能做什么? + - local: chapter1/4 + title: Transformers 是如何工作的? + - local: chapter1/5 + title: 编码器模型 + - local: chapter1/6 + title: 解码器模型 + - local: chapter1/7 + title: 序列到序列模型 + - local: chapter1/8 + title: 偏见和局限性 + - local: chapter1/9 + title: 总结 + - local: chapter1/10 + title: 章末小测验 + quiz: 1 + +- title: 2. Using 🤗 Transformers + sections: + - local: chapter2/1 + title: Introduction + - local: chapter2/2 + title: Behind the pipeline + - local: chapter2/3 + title: Models + - local: chapter2/4 + title: Tokenizers + - local: chapter2/5 + title: Handling multiple sequences + - local: chapter2/6 + title: Putting it all together + - local: chapter2/7 + title: Basic usage completed! + - local: chapter2/8 + title: End-of-chapter quiz + quiz: 2 + +- title: 3. Fine-tuning a pretrained model + sections: + - local: chapter3/1 + title: Introduction + - local: chapter3/2 + title: Processing the data + - local: chapter3/3 + title: Fine-tuning a model with the Trainer API or Keras + local_fw: { pt: chapter3/3, tf: chapter3/3_tf } + - local: chapter3/4 + title: A full training + - local: chapter3/5 + title: Fine-tuning, Check! + - local: chapter3/6 + title: End-of-chapter quiz + quiz: 3 + +- title: 4. Sharing models and tokenizers + sections: + - local: chapter4/1 + title: The Hugging Face Hub + - local: chapter4/2 + title: Using pretrained models + - local: chapter4/3 + title: Sharing pretrained models + - local: chapter4/4 + title: Building a model card + - local: chapter4/5 + title: Part 1 completed! + - local: chapter4/6 + title: End-of-chapter quiz + quiz: 4 + +- title: 5. The 🤗 Datasets library + sections: + - local: chapter5/1 + title: Introduction + - local: chapter5/2 + title: What if my dataset isn't on the Hub? + - local: chapter5/3 + title: Time to slice and dice + - local: chapter5/4 + title: Big data? 🤗 Datasets to the rescue! 
+ - local: chapter5/5 + title: Creating your own dataset + - local: chapter5/6 + title: Semantic search with FAISS + - local: chapter5/7 + title: 🤗 Datasets, check! + - local: chapter5/8 + title: End-of-chapter quiz + quiz: 5 + +- title: 6. The 🤗 Tokenizers library + sections: + - local: chapter6/1 + title: Introduction + - local: chapter6/2 + title: Training a new tokenizer from an old one + - local: chapter6/3 + title: Fast tokenizers' special powers + - local: chapter6/3b + title: Fast tokenizers in the QA pipeline + - local: chapter6/4 + title: Normalization and pre-tokenization + - local: chapter6/5 + title: Byte-Pair Encoding tokenization + - local: chapter6/6 + title: WordPiece tokenization + - local: chapter6/7 + title: Unigram tokenization + - local: chapter6/8 + title: Building a tokenizer, block by block + - local: chapter6/9 + title: Tokenizers, check! + - local: chapter6/10 + title: End-of-chapter quiz + quiz: 6 + +- title: 7. Main NLP tasks + sections: + - local: chapter7/1 + title: Introduction + - local: chapter7/2 + title: Token classification + - local: chapter7/3 + title: Fine-tuning a masked language model + - local: chapter7/4 + title: Translation + - local: chapter7/5 + title: Summarization + - local: chapter7/6 + title: Training a causal language model from scratch + - local: chapter7/7 + title: Question answering + - local: chapter7/8 + title: Mastering NLP + - local: chapter7/9 + title: End-of-chapter quiz + quiz: 7 + +- title: 8. How to ask for help + sections: + - local: chapter8/1 + title: Introduction + - local: chapter8/2 + title: What to do when you get an error + - local: chapter8/3 + title: Asking for help on the forums + - local: chapter8/4 + title: Debugging the training pipeline + local_fw: { pt: chapter8/4, tf: chapter8/4_tf } + - local: chapter8/5 + title: How to write a good issue + - local: chapter8/6 + title: Part 2 completed! + - local: chapter8/7 + title: End-of-chapter quiz + quiz: 8 + +- title: Hugging Face Course Event + sections: + - local: event/1 + title: Part 2 Release Event diff --git a/chapters/zh/chapter1/1.mdx b/chapters/zh/chapter1/1.mdx new file mode 100644 index 000000000..68e6a14c7 --- /dev/null +++ b/chapters/zh/chapter1/1.mdx @@ -0,0 +1,52 @@ +# 简介 + +## 欢迎来到🤗课程 + + + +本课程将使用 Hugging Face 生态系统中的库——🤗 Transformers、🤗 Datasets、🤗 Tokenizers 和 🤗 Accelerate——以及 Hugging Face Hub 教你自然语言处理 (NLP)。它是完全免费的,并且没有广告。 + + +## 有什么是值得期待的? + +以下是课程的简要概述: + +
+[图:Brief overview of the chapters of the course.]
+ +- 第 1 章到第 4 章介绍了 🤗 Transformers 库的主要概念。在本课程的这一部分结束时,您将熟悉 Transformer 模型的工作原理,并将了解如何使用 [Hugging Face Hub](https://huggingface.co/models) 中的模型,在数据集上对其进行微调,并在 Hub 上分享您的结果。 +- 第 5 章到第 8 章在深入研究经典 NLP 任务之前,教授 🤗 Datasets和 🤗 Tokenizers的基础知识。在本部分结束时,您将能够自己解决最常见的 NLP 问题。 +- 第 9 章到第 12 章更加深入,探讨了如何使用 Transformer 模型处理语音处理和计算机视觉中的任务。在此过程中,您将学习如何构建和分享模型,并针对生产环境对其进行优化。在这部分结束时,您将准备好将🤗 Transformers 应用于(几乎)任何机器学习问题! + +这个课程: + +* 需要良好的 Python 知识 +* 最好先学习深度学习入门课程,例如[DeepLearning.AI](https://www.deeplearning.ai/) 提供的 [fast.ai实用深度学习教程](https://course.fast.ai/) +* 不需要事先具备 [PyTorch](https://pytorch.org/) 或 [TensorFlow](https://www.tensorflow.org/) 知识,虽然熟悉其中任何一个都会对huggingface的学习有所帮助 + +完成本课程后,我们建议您查看 [DeepLearning.AI的自然语言处理系列课程](https://www.coursera.org/specializations/natural-language-processing?utm_source=deeplearning-ai&utm_medium=institutions&utm_campaign=20211011-nlp-2-hugging_face-page-nlp-refresh),其中涵盖了广泛的传统 NLP 模型,如朴素贝叶斯和 LSTM,这些模型非常值得了解! + +## 我们是谁? + +关于作者: + +**Matthew Carrigan** 是 Hugging Face 的机器学习工程师。他住在爱尔兰都柏林,之前在 Parse.ly 担任机器学习工程师,在此之前,他在Trinity College Dublin担任博士后研究员。他不相信我们会通过扩展现有架构来实现 AGI,但无论如何都对机器人充满希望。 + +**Lysandre Debut** 是 Hugging Face 的机器学习工程师,从早期的开发阶段就一直致力于 🤗 Transformers 库。他的目标是通过使用非常简单的 API 开发工具,让每个人都可以使用 NLP。 + +**Sylvain Gugger** 是 Hugging Face 的一名研究工程师,也是 🤗Transformers库的核心维护者之一。此前,他是 fast.ai 的一名研究科学家,他与Jeremy Howard 共同编写了[Deep Learning for Coders with fastai and Py Torch](https://learning.oreilly.com/library/view/deep-learning-for/9781492045519/)。他的主要研究重点是通过设计和改进允许模型在有限资源上快速训练的技术,使深度学习更容易普及。 + +**Merve Noyan** 是 Hugging Face 的开发者倡导者,致力于开发工具并围绕它们构建内容,以使每个人的机器学习平民化。 + +**Lucile Saulnier** 是 Hugging Face 的机器学习工程师,负责开发和支持开源工具的使用。她还积极参与了自然语言处理领域的许多研究项目,例如协作训练和 BigScience。 + +**Lewis Tunstall** 是 Hugging Face 的机器学习工程师,专注于开发开源工具并使更广泛的社区可以使用它们。他也是即将出版的一本书[O’Reilly book on Transformers](https://www.oreilly.com/library/view/natural-language-processing/9781098103231/)的作者之一。 + +**Leandro von Werra** 是 Hugging Face 开源团队的机器学习工程师,也是即将出版的一本书[O’Reilly book on Transformers](https://www.oreilly.com/library/view/natural-language-processing/9781098103231/)的作者之一。他拥有多年的行业经验,通过在整个机器学习堆栈中工作,将 NLP 项目投入生产。 + +你准备好了吗?在本章中,您将学习: +* 如何使用 `pipeline()` 函数解决文本生成、分类等NLP任务 +* 关于 Transformer 架构 +* 如何区分编码器、解码器和编码器-解码器架构和用例 diff --git a/chapters/zh/chapter1/10.mdx b/chapters/zh/chapter1/10.mdx new file mode 100644 index 000000000..23f768115 --- /dev/null +++ b/chapters/zh/chapter1/10.mdx @@ -0,0 +1,253 @@ + + +# 章末小测试 + +这一章涵盖了很多内容! 如果有一些不太明白的地方,请不要担心; 下一章将帮助你了解这些模块在底层是如何工作的。 + +让我们来测试一下你在这一章学到了什么! + +### 1. 探索 Hub 并寻找 `roberta-large-mnli` checkpoint。 它可以完成什么类型的任务? + + +roberta-large-mnli 页面回顾一下." + }, + { + text: "文本分类", + explain: "更准确地说,它对两个句子在三个标签(矛盾、无关、相近)之间的逻辑链接进行分类——这项任务也称为自然语言推理.", + correct: true + }, + { + text: "文本生成", + explain: "点击前往roberta-large-mnli 页面回顾一下." + } + ]} +/> + +### 2. 下面的代码将会返回什么结果? + +```py +from transformers import pipeline + +ner = pipeline("ner", grouped_entities=True) +ner("My name is Sylvain and I work at Hugging Face in Brooklyn.") +``` + +sentiment-analysis pipeline将会返回这些." + }, + { + text: "它将返回一个生成的文本来完成这句话。", + explain: "这个选项是不对的 — text-generation pipeline将会返回这些.", + }, + { + text: "它将返回代表人员、组织或位置的单词。", + explain: "此外,使用 grouped_entities=True,它会将属于同一实体的单词组合在一起,例如“Hugging Face”。", + correct: true + } + ]} +/> + +### 3. 在此代码示例中...的地方应该填写什么? 
+ +```py +from transformers import pipeline + +filler = pipeline("fill-mask", model="bert-base-cased") +result = filler("...") +``` + + has been waiting for you.", + explain: "这个选项是不对的。 请查看 bert-base-cased 模型卡片,然后再尝试找找错在哪里。" + }, + { + text: "This [MASK] has been waiting for you.", + explain: "正解! 这个模型的mask的掩码是[MASK].", + correct: true + }, + { + text: "This man has been waiting for you.", + explain: "这个选项是不对的。 这个pipeline的作用是填充经过mask的文字,因此它需要在输入的文本中存在mask的token。" + } + ]} +/> + +### 4. 为什么这段代码会无法运行? + +```py +from transformers import pipeline + +classifier = pipeline("zero-shot-classification") +result = classifier("This is a course about the Transformers library") +``` + +candidate_labels=[...].", + correct: true + }, + { + text: "这个pipeline需要多个句子,而不仅仅是一个。", + explain: "这个选项是不对的。尽管正确使用时,此pipeline可以同时处理多个句子(与所有其他pipeline一样)。" + }, + { + text: "像往常一样,🤗 Transformers库出故障了。", + explain: "对此,我们不予置评!" + }, + { + text: "该pipeline需要更长的输入; 这个句子太短了。", + explain: "这个选项是不对的。 不过请注意,在这个pipeline处理时,太长的文本将被截断。" + } + ]} +/> + +### 5. “迁移学习”是什么意思? + + + +### 6. 语言模型在预训练时通常不需要标签,这样的说法是否正确。 + + +自监督,这意味着标签是根据输入自动创建的(例如:预测下一个单词或填充一些[MARSK]单词)。", + correct: true + }, + { + text: "错误", + explain: "这不是一个正确的答案。" + } + ]} +/> + + +### 7. 选择最能描述“模型(model)”、“架构(architecture)”和“权重(weights)”的句子。 + + + +### 8. 你将使用以下哪种类型的模型来根据输入的提示生成文本? + + + +### 9. 你会使用哪些类型的模型来生成文本的摘要? + + + +### 10. 你会使用哪一种类型的模型来根据特定的标签对文本输入进行分类? + + + +### 11. 模型中观察到的偏见有哪些可能的来源? + + diff --git a/chapters/zh/chapter1/2.mdx b/chapters/zh/chapter1/2.mdx new file mode 100644 index 000000000..1b5ee0ea6 --- /dev/null +++ b/chapters/zh/chapter1/2.mdx @@ -0,0 +1,20 @@ +# 自然语言处理 + +在进入 Transformer 模型之前,让我们快速概述一下自然语言处理是什么以及我们为什么这么重视它。 + +## 什么是自然语言处理? + +NLP 是语言学和机器学习交叉领域,专注于理解与人类语言相关的一切。 NLP 任务的目标不仅是单独理解单个单词,而且是能够理解这些单词的上下文。 + +以下是常见 NLP 任务的列表,每个任务都有一些示例: + +- **对整个句子进行分类**: 获取评论的情绪,检测电子邮件是否为垃圾邮件,确定句子在语法上是否正确或两个句子在逻辑上是否相关 +- **对句子中的每个词进行分类**: 识别句子的语法成分(名词、动词、形容词)或命名实体(人、地点、组织) +- **生成文本内容**: 用自动生成的文本完成提示,用屏蔽词填充文本中的空白 +- **从文本中提取答案**: 给定问题和上下文,根据上下文中提供的信息提取问题的答案 +- **从输入文本生成新句子**: 将文本翻译成另一种语言,总结文本 + +NLP 不仅限于书面文本。它还解决了语音识别和计算机视觉中的复杂挑战,例如生成音频样本的转录或图像描述。 +## 为什么具有挑战性? + +计算机处理信息的方式与人类不同。例如,当我们读到“我饿了”这句话时,我们很容易理解它的意思。同样,给定两个句子,例如“我很饿”和“我很伤心”,我们可以轻松确定它们的相似程度。对于机器学习 (ML) 模型,此类任务更加困难。文本需要以一种使模型能够从中学习的方式进行处理。而且由于语言很复杂,我们需要仔细考虑必须如何进行这种处理。关于如何表示文本已经做了很多研究,我们将在下一章中介绍一些方法。 \ No newline at end of file diff --git a/chapters/zh/chapter1/3.mdx b/chapters/zh/chapter1/3.mdx new file mode 100644 index 000000000..1f067ab6d --- /dev/null +++ b/chapters/zh/chapter1/3.mdx @@ -0,0 +1,280 @@ +# Transformers能做什么? + + + +在本节中,我们将看看 Transformer 模型可以做什么,并使用 🤗 Transformers 库中的第一个工具:pipeline() 函数。 + +## Transformer被应用于各个方面! +Transformer 模型用于解决各种 NLP 任务,就像上一节中提到的那样。以下是一些使用 Hugging Face 和 Transformer 模型的公司和组织,他们也通过分享他们的模型回馈社区: + +![使用 Hugging Face 的公司](https://huggingface.co/course/static/chapter1/companies.PNG) +[🤗 Transformers 库](https://github.com/huggingface/transformers)提供了创建和使用这些共享模型的功能。[模型中心(hub)](https://huggingface.co/models)包含数千个任何人都可以下载和使用的预训练模型。您还可以将自己的模型上传到 Hub! + +```python +⚠️ Hugging Face Hub 不限于 Transformer 模型。任何人都可以分享他们想要的任何类型的模型或数据集!创建一个 Huggingface.co 帐户(https://huggingface.co/join)以使用所有可用功能! 
+``` + +在深入研究 Transformer 模型的底层工作原理之前,让我们先看几个示例,看看它们如何用于解决一些有趣的 NLP 问题。 + +## 使用pipelines + + +(这里有一个视频,但是国内可能打不开,译者注) + + +🤗 Transformers 库中最基本的对象是 **pipeline()** 函数。它将模型与其必要的预处理和后处理步骤连接起来,使我们能够通过直接输入任何文本并获得最终的答案: + +```python +from transformers import pipeline +classifier = pipeline("sentiment-analysis") +classifier("I've been waiting for a HuggingFace course my whole life.") +``` +```python +[{'label': 'POSITIVE', 'score': 0.9598047137260437}] +``` + + +我们也可以多传几句! +```python +classifier( + ["I've been waiting for a HuggingFace course my whole life.", "I hate this so much!"] +) +``` +```python +[{'label': 'POSITIVE', 'score': 0.9598047137260437}, + {'label': 'NEGATIVE', 'score': 0.9994558095932007}] +``` +默认情况下,此pipeline选择一个特定的预训练模型,该模型已针对英语情感分析进行了微调。创建**分类器**对象时,将下载并缓存模型。如果您重新运行该命令,则将使用缓存的模型,无需再次下载模型。 + +将一些文本传递到pipeline时涉及三个主要步骤: + +1. 文本被预处理为模型可以理解的格式。 +2. 预处理的输入被传递给模型。 +3. 模型处理后输出最终人类可以理解的结果。 + +目前[可用的一些pipeline](https://huggingface.co/transformers/main_classes/pipelines.html)是: + +* **特征提取**(获取文本的向量表示) +* **填充空缺** +* **ner**(命名实体识别) +* **问答** +* **情感分析** +* **文本摘要** +* **文本生成** +* **翻译** +* **零样本分类** + +让我们来看看其中的一些吧! + +## 零样本分类 +我们将首先处理一项非常具挑战性的任务,我们需要对尚未标记的文本进行分类。这是实际项目中的常见场景,因为注释文本通常很耗时并且需要领域专业知识。对于这项任务**zero-shot-classification**pipeline非常强大:它允许您直接指定用于分类的标签,因此您不必依赖预训练模型的标签。下面的模型展示了如何使用这两个标签将句子分类为正面或负面——但也可以使用您喜欢的任何其他标签集对文本进行分类。 + +```python +from transformers import pipeline + +classifier = pipeline("zero-shot-classification") +classifier( + "This is a course about the Transformers library", + candidate_labels=["education", "politics", "business"], +) +``` +```python +{'sequence': 'This is a course about the Transformers library', + 'labels': ['education', 'business', 'politics'], + 'scores': [0.8445963859558105, 0.111976258456707, 0.043427448719739914]} +``` + +此pipeline称为zero-shot,因为您不需要对数据上的模型进行微调即可使用它。它可以直接返回您想要的任何标签列表的概率分数! +``` +✏️快来试试吧!使用您自己的序列和标签,看看模型的行为。 +``` + +## 文本生成 +现在让我们看看如何使用pipeline来生成一些文本。这里的主要使用方法是您提供一个提示,模型将通过生成剩余的文本来自动完成整段话。这类似于许多手机上的预测文本功能。文本生成涉及随机性,因此如果您没有得到相同的如下所示的结果,这是正常的。 + +```python +from transformers import pipeline + +generator = pipeline("text-generation") +generator("In this course, we will teach you how to") +``` +```python +[{'generated_text': 'In this course, we will teach you how to understand and use ' + 'data flow and data interchange when handling user data. 
We ' + 'will be working with one or more of the most commonly used ' + 'data flows — data flows of various types, as seen by the ' + 'HTTP'}] +``` +您可以使用参数 **num_return_sequences** 控制生成多少个不同的序列,并使用参数 **max_length** 控制输出文本的总长度。 + +``` +✏️快来试试吧!使用 num_return_sequences 和 max_length 参数生成两个句子,每个句子 15 个单词。 +``` + +## 在pipeline中使用 Hub 中的其他模型 +前面的示例使用了默认模型,但您也可以从 Hub 中选择特定模型以在特定任务的pipeline中使用 - 例如,文本生成。转到[模型中心(hub)](https://huggingface.co/models)并单击左侧的相应标签将会只显示该任务支持的模型。[例如这样](https://huggingface.co/models?pipeline_tag=text-generation)。 + +让我们试试 [**distilgpt2**](https://huggingface.co/distilgpt2) 模型吧!以下是如何在与以前相同的pipeline中加载它: + +```python +from transformers import pipeline + +generator = pipeline("text-generation", model="distilgpt2") +generator( + "In this course, we will teach you how to", + max_length=30, + num_return_sequences=2, +) +``` +```python +[{'generated_text': 'In this course, we will teach you how to manipulate the world and ' + 'move your mental and physical capabilities to your advantage.'}, + {'generated_text': 'In this course, we will teach you how to become an expert and ' + 'practice realtime, and with a hands on experience on both real ' + 'time and real'}] +``` +您可以通过单击语言标签来筛选搜索结果,然后选择另一种文本生成模型的模型。模型中心(hub)甚至包含支持多种语言的多语言模型。 + +通过单击选择模型后,您会看到有一个小组件,可让您直接在线试用。通过这种方式,您可以在下载之前快速测试模型的功能。 +``` +✏️快来试试吧!使用标签筛选查找另一种语言的文本生成模型。使用小组件测试并在pipeline中使用它! +``` + +## 推理 API +所有模型都可以使用 Inference API 直接通过浏览器进行测试,该 API 可在 [Hugging Face 网站](https://huggingface.co/)上找到。通过输入自定义文本并观察模型的输出,您可以直接在此页面上使用模型。 + +小组件形式的推理 API 也可作为付费产品使用,如果您的工作流程需要它,它会派上用场。有关更多详细信息,请参阅[定价页面](https://huggingface.co/pricing)。 + +## Mask filling +您将尝试的下一个pipeline是 **fill-mask**。此任务的想法是填充给定文本中的空白: +```python +from transformers import pipeline + +unmasker = pipeline("fill-mask") +unmasker("This course will teach you all about models.", top_k=2) +``` +```python +[{'sequence': 'This course will teach you all about mathematical models.', + 'score': 0.19619831442832947, + 'token': 30412, + 'token_str': ' mathematical'}, + {'sequence': 'This course will teach you all about computational models.', + 'score': 0.04052725434303284, + 'token': 38163, + 'token_str': ' computational'}] +``` +**top_k** 参数控制要显示的结果有多少种。请注意,这里模型填充了特殊的< **mask** >词,它通常被称为掩码标记。其他掩码填充模型可能有不同的掩码标记,因此在探索其他模型时要验证正确的掩码字是什么。检查它的一种方法是查看小组件中使用的掩码。 + +``` +✏️快来试试吧!在 Hub 上搜索基于 bert 的模型并在推理 API 小组件中找到它的掩码。这个模型对上面pipeline示例中的句子预测了什么? +``` + +## 命名实体识别 +命名实体识别 (NER) 是一项任务,其中模型必须找到输入文本的哪些部分对应于诸如人员、位置或组织之类的实体。让我们看一个例子: +```python +from transformers import pipeline + +ner = pipeline("ner", grouped_entities=True) +ner("My name is Sylvain and I work at Hugging Face in Brooklyn.") +``` +```python +[{'entity_group': 'PER', 'score': 0.99816, 'word': 'Sylvain', 'start': 11, 'end': 18}, + {'entity_group': 'ORG', 'score': 0.97960, 'word': 'Hugging Face', 'start': 33, 'end': 45}, + {'entity_group': 'LOC', 'score': 0.99321, 'word': 'Brooklyn', 'start': 49, 'end': 57} +] +``` +在这里,模型正确地识别出 Sylvain 是一个人 (PER),Hugging Face 是一个组织 (ORG),而布鲁克林是一个位置 (LOC)。 + +我们在pipeline创建函数中传递选项 **grouped_entities=True** 以告诉pipeline将对应于同一实体的句子部分重新组合在一起:这里模型正确地将“Hugging”和“Face”分组为一个组织,即使名称由多个词组成。事实上,正如我们即将在下一章看到的,预处理甚至会将一些单词分成更小的部分。例如,**Sylvain** 分割为了四部分:**S、##yl、##va** 和 **##in**。在后处理步骤中,pipeline成功地重新组合了这些部分。 + +``` +✏️快来试试吧!在模型中心(hub)搜索能够用英语进行词性标注(通常缩写为 POS)的模型。这个模型对上面例子中的句子预测了什么? 
+``` + +## 问答系统 +问答pipeline使用来自给定上下文的信息回答问题: +```python +from transformers import pipeline + +question_answerer = pipeline("question-answering") +question_answerer( + question="Where do I work?", + context="My name is Sylvain and I work at Hugging Face in Brooklyn", +) + +``` +```python +{'score': 0.6385916471481323, 'start': 33, 'end': 45, 'answer': 'Hugging Face'} +klyn", +) + +``` +请注意,此pipeline通过从提供的上下文中提取信息来工作;它不会凭空生成答案。 + +## 文本摘要 +文本摘要是将文本缩减为较短文本的任务,同时保留文本中的主要(重要)信息。下面是一个例子: + +```python +from transformers import pipeline + +summarizer = pipeline("summarization") +summarizer( + """ + America has changed dramatically during recent years. Not only has the number of + graduates in traditional engineering disciplines such as mechanical, civil, + electrical, chemical, and aeronautical engineering declined, but in most of + the premier American universities engineering curricula now concentrate on + and encourage largely the study of engineering science. As a result, there + are declining offerings in engineering subjects dealing with infrastructure, + the environment, and related issues, and greater concentration on high + technology subjects, largely supporting increasingly complex scientific + developments. While the latter is important, it should not be at the expense + of more traditional engineering. + + Rapidly developing economies such as China and India, as well as other + industrial countries in Europe and Asia, continue to encourage and advance + the teaching of engineering. Both China and India, respectively, graduate + six and eight times as many traditional engineers as does the United States. + Other industrial countries at minimum maintain their output, while America + suffers an increasingly serious decline in the number of engineering graduates + and a lack of well-educated engineers. +""" +) +``` +```python +[{'summary_text': ' America has changed dramatically during recent years . The ' + 'number of engineering graduates in the U.S. has declined in ' + 'traditional engineering disciplines such as mechanical, civil ' + ', electrical, chemical, and aeronautical engineering . Rapidly ' + 'developing economies such as China and India, as well as other ' + 'industrial countries in Europe and Asia, continue to encourage ' + 'and advance engineering .'}] +``` +与文本生成一样,您指定结果的 **max_length** 或 **min_length**。 + +## 翻译 +对于翻译,如果您在任务名称中提供语言对(例如“**translation_en_to_fr**”),则可以使用默认模型,但最简单的方法是在[模型中心(hub)](https://huggingface.co/models)选择要使用的模型。在这里,我们将尝试从法语翻译成英语: + +```python +from transformers import pipeline + +translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en") +translator("Ce cours est produit par Hugging Face.") +``` +```python +[{'translation_text': 'This course is produced by Hugging Face.'}] + +``` + +与文本生成和摘要一样,您可以指定结果的 **max_length** 或 **min_length**。 + +``` +✏️快来试试吧!搜索其他语言的翻译模型,尝试将前一句翻译成几种不同的语言。 +``` + +到目前为止显示的pipeline主要用于演示目的。它们是为特定任务而编程的,不能对他们进行自定义的修改。在下一章中,您将了解 **pipeline()** 函数内部的内容以及如何进行自定义的修改。 \ No newline at end of file diff --git a/chapters/zh/chapter1/4.mdx b/chapters/zh/chapter1/4.mdx new file mode 100644 index 000000000..45641e71e --- /dev/null +++ b/chapters/zh/chapter1/4.mdx @@ -0,0 +1,172 @@ +# Transformers 是如何工作的? + +在本节中,我们将深入了解 Transformer 模型的架构。 + +## 一点Transformers的发展历史 + +以下是 Transformer 模型(简短)历史中的一些关键节点: + +
+[图:A brief chronology of Transformers models.]
+ +[Transformer 架构](https://arxiv.org/abs/1706.03762) 于 2017 年 6 月推出。原本研究的重点是翻译任务。随后推出了几个有影响力的模型,包括 + +- **2018 年 6 月**: [GPT](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf), 第一个预训练的 Transformer 模型,用于各种 NLP 任务并获得极好的结果 + +- **2018 年 10 月**: [BERT](https://arxiv.org/abs/1810.04805), 另一个大型预训练模型,该模型旨在生成更好的句子摘要(下一章将详细介绍!) + +- **2019 年 2 月**: [GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf), GPT 的改进(并且更大)版本,由于道德问题没有立即公开发布 + +- **2019 年 10 月**: [DistilBERT](https://arxiv.org/abs/1910.01108), BERT 的提炼版本,速度提高 60%,内存减轻 40%,但仍保留 BERT 97% 的性能 + +- **2019 年 10 月**: [BART](https://arxiv.org/abs/1910.13461) 和 [T5](https://arxiv.org/abs/1910.10683), 两个使用与原始 Transformer 模型相同架构的大型预训练模型(第一个这样做) + +- **2020 年 5 月**, [GPT-3](https://arxiv.org/abs/2005.14165), GPT-2 的更大版本,无需微调即可在各种任务上表现良好(称为零样本学习) + +这个列表并不全面,只是为了突出一些不同类型的 Transformer 模型。大体上,它们可以分为三类: + +- GPT-like (也被称作自回归Transformer模型) +- BERT-like (也被称作自动编码Transformer模型) +- BART/T5-like (也被称作序列到序列的 Transformer模型) + +稍后我们将更深入地探讨这些分类。 + +## Transformers是语言模型 + +上面提到的所有 Transformer 模型(GPT、BERT、BART、T5 等)都被训练为语言模型。这意味着他们已经以无监督学习的方式接受了大量原始文本的训练。无监督学习是一种训练类型,其中目标是根据模型的输入自动计算的。这意味着不需要人工来标记数据! + +这种类型的模型可以对其训练过的语言进行统计理解,但对于特定的实际任务并不是很有用。因此,一般的预训练模型会经历一个称为*迁移学习*的过程。在此过程中,模型在给定任务上以监督方式(即使用人工注释标签)进行微调。 + +任务的一个例子是阅读 *n* 个单词的句子,预测下一个单词。这被称为因果语言建模,因为输出取决于过去和现在的输入。 + +
+[图:Example of causal language modeling in which the next word from a sentence is predicted.]
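下面用第 3 节介绍过的 `text-generation` pipeline 补充一个极简的示意(非课程原文),直观感受"根据前文预测下一个词"这一训练目标。这里假设使用 `gpt2` 这类自回归检查点,检查点名称仅作举例,输出带有随机性,仅供参考。

```python
from transformers import pipeline

# 因果语言模型只能看到左侧(已经出现)的词,并据此预测下一个词
generator = pipeline("text-generation", model="gpt2")  # gpt2 仅作示例
generator("The students opened their", max_length=10)
# 模型会在提示词之后逐词补全,例如补出 "books"、"eyes" 等(每次运行的结果可能不同)
```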
+ +另一个例子是*遮罩语言建模*,该模型预测句子中的遮住的词。 + +
+[图:Example of masked language modeling in which a masked word from a sentence is predicted.]
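同样可以用 `fill-mask` pipeline 做一个简单的示意:模型根据两侧的上下文还原被遮住的词。以下代码假设使用 `bert-base-uncased`(它的掩码标记是 `[MASK]`),仅作演示:

```python
from transformers import pipeline

# 遮罩语言模型的预训练目标:根据上下文预测被 [MASK] 遮住的词
unmasker = pipeline("fill-mask", model="bert-base-uncased")  # 检查点仅作示例
unmasker("Paris is the [MASK] of France.", top_k=2)
# 预期 "capital" 等候选词会以较高的分数排在前面
```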
+ +## Transformer是大模型 + +除了一些特例(如 DistilBERT)外,实现更好性能的一般策略是增加模型的大小以及预训练的数据量。 + +
+[图:Number of parameters of recent Transformers models]
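如果想自己感受一下模型的规模,可以加载一个检查点并统计其参数量。下面是一个粗略的示意(以 `distilbert-base-uncased` 为例,数值为近似值):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("distilbert-base-uncased")
num_params = sum(p.numel() for p in model.parameters())  # 逐个张量累加参数个数
print(f"约 {num_params / 1e6:.0f}M 个参数")
# DistilBERT 约为 66M 参数,远小于 GPT-3 等参数量上千亿的模型
```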
+ +不幸的是,训练模型,尤其是大型模型,需要大量的数据,时间和计算资源。它甚至会对环境产生影响,如下图所示。 + +
+[图:The carbon footprint of a large language model.]
+ + + +Transformers是由一个团队领导的(非常大的)模型项目,该团队试图减少预训练对环境的影响,通过运行大量试验以获得最佳超参数。 + +想象一下,如果每次一个研究团队、一个学生组织或一家公司想要训练一个模型,都从头开始训练的。这将导致巨大的、不必要的浪费! + +这就是为什么共享语言模型至关重要:共享经过训练的权重,当遇见新的需求时在预训练的权重之上进行微调,可以降低训练模型训练的算力和时间消耗,降低全球的总体计算成本和碳排放。 + + +## 迁移学习 + + + +*预训练是*训练模型前的一个操作:随机初始化权重,在没有任何先验知识的情况下开始训练。 + +
+[图:The pretraining of a language model is costly in both time and money.]
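在 🤗 Transformers 中,这两种起点可以用一小段代码对比出来:`from_config` 只取架构、权重随机初始化(对应"从头预训练"的起点),而 `from_pretrained` 会加载已经训练好的权重。以下示意假设使用 `bert-base-uncased`,检查点仅作举例:

```python
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("bert-base-uncased")  # 只读取架构配置
model_scratch = AutoModel.from_config(config)  # 权重随机初始化,需要自己完成(昂贵的)预训练
model_pretrained = AutoModel.from_pretrained("bert-base-uncased")  # 直接复用预训练好的权重
```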
+ +这种预训练通常是在非常大量的数据上进行的。因此,它需要大量的数据,而且训练可能需要几周的时间。 + +另一方面,*微调*是在模型经过预训练后完成的训练。要执行微调,首先需要获取一个经过预训练的语言模型,然后使用特定于任务的数据集执行额外的训练。等等,为什么不直接为最后的任务而训练呢?有几个原因: + +* 预训练模型已经在与微调数据集有一些相似之处的数据集上进行了训练。因此,微调过程能够利用模型在预训练期间获得的知识(例如,对于NLP问题,预训练模型将对您在任务中使用的语言有某种统计规律上的理解)。 +* 由于预训练模型已经在大量数据上进行了训练,因此微调需要更少的数据来获得不错的结果。 +* 出于同样的原因,获得好结果所需的时间和资源要少得多 + +例如,可以利用英语的预训练过的模型,然后在arXiv语料库上对其进行微调,从而形成一个基于科学/研究的模型。微调只需要有限的数据量:预训练模型获得的知识可以“迁移”到目标任务上,因此被称为*迁移学习*。 + +
+[图:The fine-tuning of a language model is cheaper than pretraining in both time and money.]
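落到代码上,迁移学习通常就是"加载预训练权重,换上一个针对新任务的输出头,再用少量标注数据继续训练"。下面是一个示意(假设做二分类,检查点仅作举例),完整的微调流程会在第 3 章展开:

```python
from transformers import AutoModelForSequenceClassification

# 模型主体的权重来自预训练,分类头是新初始化的,需要在自己的数据上微调
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
# 加载时的警告信息会提示哪些权重是新初始化的——这正是微调需要学习的部分
```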
+ +因此,微调模型具有较低的时间、数据、财务和环境成本。迭代不同的微调方案也更快、更容易,因为与完整的预训练相比,训练的约束更少。 + +这个过程也会比从头开始的训练(除非你有很多数据)取得更好的效果,这就是为什么你应该总是尝试利用一个预训练的模型--一个尽可能接近你手头的任务的模型--并对其进行微调。 + +## 一般的体系结构 +在这一部分,我们将介绍Transformer模型的一般架构。如果你不理解其中的一些概念,不要担心;下文将详细介绍每个组件。 + + + +## 介绍 + +该模型主要由两个块组成: + +* **Encoder (左侧)**: 编码器接收输入并构建其表示(其特征)。这意味着对模型进行了优化,以从输入中获得理解。 +* **Decoder (右侧)**: 解码器使用编码器的表示(特征)以及其他输入来生成目标序列。这意味着该模型已针对生成输出进行了优化。 + +
+[图:Architecture of a Transformer model]
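可以载入一个序列到序列模型,直观地看到这两个组成部分。下面的示意假设使用 `t5-small`(属于后文介绍的编码器-解码器模型),仅用于查看模型结构:

```python
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
print(type(model.encoder).__name__)  # 编码器子模块,负责"理解"输入
print(type(model.decoder).__name__)  # 解码器子模块,负责生成输出
```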
+ +这些部件中的每一个都可以独立使用,具体取决于任务: + +* **Encoder-only models**: 适用于需要理解输入的任务,如句子分类和命名实体识别。 +* **Decoder-only models**: 适用于生成任务,如文本生成。 +* **Encoder-decoder models** 或者 **sequence-to-sequence models**: 适用于需要根据输入进行生成的任务,如翻译或摘要。 + +在后面的部分中,我们将单独地深入研究这些体系结构。 + +## 注意力层 + +Transformer模型的一个关键特性是*注意力层*。事实上,介绍Transformer架构的文章的标题是[“注意力就是你所需要的”](https://arxiv.org/abs/1706.03762)! 我们将在课程的后面更加深入地探索注意力层;现在,您需要知道的是,这一层将告诉模型在处理每个单词的表示时,要特别重视您传递给它的句子中的某些单词(并且或多或少地忽略其他单词)。 + +把它放在语境中,考虑将文本从英语翻译成法语的任务。在输入“You like this course”的情况下,翻译模型还需要注意相邻的单词“You”,以获得单词“like”的正确翻译,因为在法语中,动词“like”的变化取决于主题。然而,句子的其余部分对于该词的翻译没有用处。同样,在翻译“this”时,模型也需要注意“course”一词,因为“this”的翻译不同,取决于相关名词是单数还是复数。同样,句子中的其他单词对于“this”的翻译也不重要。对于更复杂的句子(以及更复杂的语法规则),模型需要特别注意可能出现在句子中更远位置的单词,以便正确地翻译每个单词。 + +同样的概念也适用于与自然语言相关的任何任务:一个词本身有一个含义,但这个含义受语境的影响很大,语境可以是研究该词之前或之后的任何其他词(或多个词)。 + +现在您已经了解了注意力层的含义,让我们更仔细地了解Transformer架构。 + +## 原始的结构 + +Transformer架构最初是为翻译而设计的。在训练期间,编码器接收特定语言的输入(句子),而解码器需要输出对应语言的翻译。在编码器中,注意力层可以使用一个句子中的所有单词(正如我们刚才看到的,给定单词的翻译可以取决于它在句子中的其他单词)。然而,解码器是按顺序工作的,并且只能注意它已经翻译过的句子中的单词。例如,当我们预测了翻译目标的前三个单词时,我们将它们提供给解码器,然后解码器使用编码器的所有输入来尝试预测第四个单词。 + +为了在训练过程中加快速度(当模型可以访问目标句子时),解码器会被输入整个目标,但不允许获取到要翻译的单词(如果它在尝试预测位置2的单词时可以访问位置2的单词,解码器就会偷懒,直接输出那个单词,从而无法学习到正确的语言关系!)。例如,当试图预测第4个单词时,注意力层只能获取位置1到3的单词。 + +最初的Transformer架构如下所示,编码器位于左侧,解码器位于右侧: + +
+[图:Architecture of the original Transformer model]
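上文提到,解码器在预测第 4 个词时只能关注位置 1 到 3;在实现上,这通常对应一个下三角形的注意力遮罩。下面用 PyTorch 画一个 4 个位置的最小示意(1 表示允许关注,0 表示被遮住),仅作说明:

```python
import torch

seq_len = 4
causal_mask = torch.tril(torch.ones(seq_len, seq_len))  # 下三角矩阵
print(causal_mask)
# tensor([[1., 0., 0., 0.],
#         [1., 1., 0., 0.],
#         [1., 1., 1., 0.],
#         [1., 1., 1., 1.]])
# 第 i 行对应第 i 个位置:它只能关注位置 1..i,看不到后面的词
```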
+ +注意,解码器块中的第一个注意力层关联到解码器的所有(过去的)输入,但是第二注意力层使用编码器的输出。因此,它可以访问整个输入句子,以最好地预测当前单词。这是非常有用的,因为不同的语言可以有语法规则将单词按不同的顺序排列,或者句子后面提供的一些上下文可能有助于确定给定单词的最佳翻译。 + +也可以在编码器/解码器中使用*注意力遮罩层*,以防止模型注意某些特殊单词。例如,在批处理句子时,填充特殊词使所有句子的长度一致。 + +## 架构与参数 + +在本课程中,当我们深入探讨Transformers模型时,您将看到 +架构、参数和模型 +。 这些术语的含义略有不同: + +* **架构**: 这是模型的骨架 -- 每个层的定义以及模型中发生的每个操作。 +* **Checkpoints**: 这些是将在给架构中结构中加载的权重。 +* **模型**: 这是一个笼统的术语,没有“架构”或“参数”那么精确:它可以指两者。为了避免歧义,本课程使用将使用架构和参数。 + +例如,BERT是一个架构,而 `bert-base-cased`, 这是谷歌团队为BERT的第一个版本训练的一组权重参数,是一个参数。我们可以说“BERT模型”和"`bert-base-cased`模型." diff --git a/chapters/zh/chapter1/5.mdx b/chapters/zh/chapter1/5.mdx new file mode 100644 index 000000000..7aa765ec2 --- /dev/null +++ b/chapters/zh/chapter1/5.mdx @@ -0,0 +1,17 @@ +# “编码器”模型 + + + +“编码器”模型指仅使用编码器的Transformer模型。在每个阶段,注意力层都可以获取初始句子中的所有单词。这些模型通常具有“双向”注意力,被称为自编码模型。 + +这些模型的预训练通常围绕着以某种方式破坏给定的句子(例如:通过随机遮盖其中的单词),并让模型寻找或重建给定的句子。 + +“编码器”模型最适合于需要理解完整句子的任务,例如:句子分类、命名实体识别(以及更普遍的单词分类)和阅读理解后回答问题。 + +该系列模型的典型代表有: + +- [ALBERT](https://huggingface.co/transformers/model_doc/albert.html) +- [BERT](https://huggingface.co/transformers/model_doc/bert.html) +- [DistilBERT](https://huggingface.co/transformers/model_doc/distilbert.html) +- [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html) +- [RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html) diff --git a/chapters/zh/chapter1/6.mdx b/chapters/zh/chapter1/6.mdx new file mode 100644 index 000000000..2de4c44a6 --- /dev/null +++ b/chapters/zh/chapter1/6.mdx @@ -0,0 +1,17 @@ +# “解码器”模型 + + + +“解码器”模型通常指仅使用解码器的Transformer模型。在每个阶段,对于给定的单词,注意力层只能获取到句子中位于将要预测单词前面的单词。这些模型通常被称为自回归模型。 + +“解码器”模型的预训练通常围绕预测句子中的下一个单词进行。 + +这些模型最适合于涉及文本生成的任务。 + +该系列模型的典型代表有: + + +- [CTRL](https://huggingface.co/transformers/model_doc/ctrl.html) +- [GPT](https://huggingface.co/transformers/model_doc/gpt.html) +- [GPT-2](https://huggingface.co/transformers/model_doc/gpt2.html) +- [Transformer XL](https://huggingface.co/transformers/model_doc/transformerxl.html) diff --git a/chapters/zh/chapter1/7.mdx b/chapters/zh/chapter1/7.mdx new file mode 100644 index 000000000..99dc00eea --- /dev/null +++ b/chapters/zh/chapter1/7.mdx @@ -0,0 +1,16 @@ +# 序列到序列模型 + + + +编码器-解码器模型(也称为序列到序列模型)同时使用Transformer架构的编码器和解码器两个部分。在每个阶段,编码器的注意力层可以访问初始句子中的所有单词,而解码器的注意力层只能访问位于输入中将要预测单词前面的单词。 + +这些模型的预训练可以使用训练编码器或解码器模型的方式来完成,但通常涉及更复杂的内容。例如,[T5](https://huggingface.co/t5-base)通过将文本的随机跨度(可以包含多个单词)替换为单个特殊单词来进行预训练,然后目标是预测该掩码单词替换的文本。 + +序列到序列模型最适合于围绕根据给定输入生成新句子的任务,如摘要、翻译或生成性问答。 + +该系列模型的典型代表有: + +- [BART](https://huggingface.co/transformers/model_doc/bart.html) +- [mBART](https://huggingface.co/transformers/model_doc/mbart.html) +- [Marian](https://huggingface.co/transformers/model_doc/marian.html) +- [T5](https://huggingface.co/transformers/model_doc/t5.html) diff --git a/chapters/zh/chapter1/8.mdx b/chapters/zh/chapter1/8.mdx new file mode 100644 index 000000000..707731892 --- /dev/null +++ b/chapters/zh/chapter1/8.mdx @@ -0,0 +1,31 @@ +# Bias and limitations + + + +如果您打算在正式的项目中使用经过预训练或经过微调的模型。请注意:虽然这些模型是很强大,但它们也有局限性。其中最大的一个问题是,为了对大量数据进行预训练,研究人员通常会搜集所有他们能找到的内容,中间可能夹带一些意识形态或者价值观的刻板印象。 + +为了快速解释清楚这个问题,让我们回到一个使用BERT模型的pipeline的例子: + +```python +from transformers import pipeline + +unmasker = pipeline("fill-mask", model="bert-base-uncased") +result = unmasker("This man works as a [MASK].") +print([r["token_str"] for r in result]) + +result = unmasker("This woman works as a [MASK].") +print([r["token_str"] for r in result]) +``` + +```python out +['lawyer', 'carpenter', 'doctor', 'waiter', 'mechanic'] +['nurse', 'waitress', 'teacher', 'maid', 
'prostitute'] +``` +当要求模型填写这两句话中缺少的单词时,模型给出的答案中,只有一个与性别无关(服务员/女服务员)。其他职业通常与某一特定性别相关,妓女最终进入了模型中与“女人”和“工作”相关的前五位。尽管BERT是使用经过筛选和清洗后,明显中立的数据集上建立的的Transformer模型,而不是通过从互联网上搜集数据(它是在[Wikipedia 英文](https://huggingface.co/datasets/wikipedia)和[BookCorpus](https://huggingface.co/datasets/bookcorpus)数据集)。 + +因此,当您使用这些工具时,您需要记住,使用的原始模型的时候,很容易生成性别歧视、种族主义或恐同内容。这种固有偏见不会随着微调模型而使消失。 \ No newline at end of file diff --git a/chapters/zh/chapter1/9.mdx b/chapters/zh/chapter1/9.mdx new file mode 100644 index 000000000..16c5ab6ad --- /dev/null +++ b/chapters/zh/chapter1/9.mdx @@ -0,0 +1,11 @@ +# 总结 + +在本章中,您了解了如何使用来自🤗Transformers的函数pipeline()处理不同的NLP任务。您还了解了如何在模型中心(hub)中搜索和使用模型,以及如何使用推理API直接在浏览器中测试模型。 + +我们讨论了Transformer模型如何在应用层上工作,并讨论了迁移学习和微调的重要性。您可以使用完整的体系结构,也可以仅使用编码器或解码器,具体取决于您要解决的任务类型。下表总结了这一点: + +| 模型 | 示例 | 任务| +| ---- | ---- |----| +| 编码器 | ALBERT, BERT, DistilBERT, ELECTRA, RoBERTa |句子分类、命名实体识别、从文本中提取答案| +| 解码器 | CTRL, GPT, GPT-2, Transformer XL |文本生成| +| 编码器-解码器 | BART, T5, Marian, mBART |文本摘要、翻译、生成问题的回答| \ No newline at end of file From 8ec6fd3680456717dab3a75115c49507c89adfd1 Mon Sep 17 00:00:00 2001 From: 1375626371 <40328311+1375626371@users.noreply.github.com> Date: Tue, 12 Apr 2022 21:26:13 +0800 Subject: [PATCH 2/6] Add zh to the languages field Add zh to the languages field in the build_documentation.yml and build_pr_documentation.yml files --- .github/workflows/build_documentation.yml | 2 +- .github/workflows/build_pr_documentation.yml | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/.github/workflows/build_documentation.yml b/.github/workflows/build_documentation.yml index a03061dc8..8e0f0b7e9 100644 --- a/.github/workflows/build_documentation.yml +++ b/.github/workflows/build_documentation.yml @@ -14,6 +14,6 @@ jobs: package: course path_to_docs: course/chapters/ additional_args: --not_python_module - languages: ar bn en es fa fr he ko pt ru th tr + languages: ar bn en es fa fr he ko pt ru th tr zh secrets: token: ${{ secrets.HUGGINGFACE_PUSH }} \ No newline at end of file diff --git a/.github/workflows/build_pr_documentation.yml b/.github/workflows/build_pr_documentation.yml index 84af57f75..ff5039db9 100644 --- a/.github/workflows/build_pr_documentation.yml +++ b/.github/workflows/build_pr_documentation.yml @@ -16,5 +16,5 @@ jobs: package: course path_to_docs: course/chapters/ additional_args: --not_python_module - languages: ar bn en es fa fr he ko pt ru th tr + languages: ar bn en es fa fr he ko pt ru th tr zh hub_base_path: https://moon-ci-docs.huggingface.co/course From b931926f84a95f24cc21d81efec548f9d5e7957d Mon Sep 17 00:00:00 2001 From: 1375626371 <40328311+1375626371@users.noreply.github.com> Date: Tue, 12 Apr 2022 21:32:03 +0800 Subject: [PATCH 3/6] Remove untranslated chapters in _toctree.yml Remove all these sections that haven't been translated yet Remove Chapter 0 from the table of contents since it hasn't been translated yet --- chapters/zh/_toctree.yml | 152 +-------------------------------------- 1 file changed, 1 insertion(+), 151 deletions(-) diff --git a/chapters/zh/_toctree.yml b/chapters/zh/_toctree.yml index 298de22ab..453fc5e99 100644 --- a/chapters/zh/_toctree.yml +++ b/chapters/zh/_toctree.yml @@ -1,8 +1,3 @@ -- title: 0. 准备 - sections: - - local: chapter0/1 - title: 课程简介 - - title: 1. Transformer 模型 sections: - local: chapter1/1 @@ -25,149 +20,4 @@ title: 总结 - local: chapter1/10 title: 章末小测验 - quiz: 1 - -- title: 2. 
Using 🤗 Transformers - sections: - - local: chapter2/1 - title: Introduction - - local: chapter2/2 - title: Behind the pipeline - - local: chapter2/3 - title: Models - - local: chapter2/4 - title: Tokenizers - - local: chapter2/5 - title: Handling multiple sequences - - local: chapter2/6 - title: Putting it all together - - local: chapter2/7 - title: Basic usage completed! - - local: chapter2/8 - title: End-of-chapter quiz - quiz: 2 - -- title: 3. Fine-tuning a pretrained model - sections: - - local: chapter3/1 - title: Introduction - - local: chapter3/2 - title: Processing the data - - local: chapter3/3 - title: Fine-tuning a model with the Trainer API or Keras - local_fw: { pt: chapter3/3, tf: chapter3/3_tf } - - local: chapter3/4 - title: A full training - - local: chapter3/5 - title: Fine-tuning, Check! - - local: chapter3/6 - title: End-of-chapter quiz - quiz: 3 - -- title: 4. Sharing models and tokenizers - sections: - - local: chapter4/1 - title: The Hugging Face Hub - - local: chapter4/2 - title: Using pretrained models - - local: chapter4/3 - title: Sharing pretrained models - - local: chapter4/4 - title: Building a model card - - local: chapter4/5 - title: Part 1 completed! - - local: chapter4/6 - title: End-of-chapter quiz - quiz: 4 - -- title: 5. The 🤗 Datasets library - sections: - - local: chapter5/1 - title: Introduction - - local: chapter5/2 - title: What if my dataset isn't on the Hub? - - local: chapter5/3 - title: Time to slice and dice - - local: chapter5/4 - title: Big data? 🤗 Datasets to the rescue! - - local: chapter5/5 - title: Creating your own dataset - - local: chapter5/6 - title: Semantic search with FAISS - - local: chapter5/7 - title: 🤗 Datasets, check! - - local: chapter5/8 - title: End-of-chapter quiz - quiz: 5 - -- title: 6. The 🤗 Tokenizers library - sections: - - local: chapter6/1 - title: Introduction - - local: chapter6/2 - title: Training a new tokenizer from an old one - - local: chapter6/3 - title: Fast tokenizers' special powers - - local: chapter6/3b - title: Fast tokenizers in the QA pipeline - - local: chapter6/4 - title: Normalization and pre-tokenization - - local: chapter6/5 - title: Byte-Pair Encoding tokenization - - local: chapter6/6 - title: WordPiece tokenization - - local: chapter6/7 - title: Unigram tokenization - - local: chapter6/8 - title: Building a tokenizer, block by block - - local: chapter6/9 - title: Tokenizers, check! - - local: chapter6/10 - title: End-of-chapter quiz - quiz: 6 - -- title: 7. Main NLP tasks - sections: - - local: chapter7/1 - title: Introduction - - local: chapter7/2 - title: Token classification - - local: chapter7/3 - title: Fine-tuning a masked language model - - local: chapter7/4 - title: Translation - - local: chapter7/5 - title: Summarization - - local: chapter7/6 - title: Training a causal language model from scratch - - local: chapter7/7 - title: Question answering - - local: chapter7/8 - title: Mastering NLP - - local: chapter7/9 - title: End-of-chapter quiz - quiz: 7 - -- title: 8. How to ask for help - sections: - - local: chapter8/1 - title: Introduction - - local: chapter8/2 - title: What to do when you get an error - - local: chapter8/3 - title: Asking for help on the forums - - local: chapter8/4 - title: Debugging the training pipeline - local_fw: { pt: chapter8/4, tf: chapter8/4_tf } - - local: chapter8/5 - title: How to write a good issue - - local: chapter8/6 - title: Part 2 completed! 
- - local: chapter8/7 - title: End-of-chapter quiz - quiz: 8 - -- title: Hugging Face Course Event - sections: - - local: event/1 - title: Part 2 Release Event + quiz: 1 \ No newline at end of file From 0778f36a829715813a7ce45f552cc5ab844715c3 Mon Sep 17 00:00:00 2001 From: 1375626371 <40328311+1375626371@users.noreply.github.com> Date: Wed, 13 Apr 2022 01:17:40 +0800 Subject: [PATCH 4/6] Fixed an error in the translation format Fixed an error in the translation format of Chapter 1, Section 3 --- chapters/zh/chapter1/3.mdx | 607 ++++++++++++++++++++----------------- 1 file changed, 327 insertions(+), 280 deletions(-) diff --git a/chapters/zh/chapter1/3.mdx b/chapters/zh/chapter1/3.mdx index 1f067ab6d..cd6aee466 100644 --- a/chapters/zh/chapter1/3.mdx +++ b/chapters/zh/chapter1/3.mdx @@ -1,280 +1,327 @@ -# Transformers能做什么? - - - -在本节中,我们将看看 Transformer 模型可以做什么,并使用 🤗 Transformers 库中的第一个工具:pipeline() 函数。 - -## Transformer被应用于各个方面! -Transformer 模型用于解决各种 NLP 任务,就像上一节中提到的那样。以下是一些使用 Hugging Face 和 Transformer 模型的公司和组织,他们也通过分享他们的模型回馈社区: - -![使用 Hugging Face 的公司](https://huggingface.co/course/static/chapter1/companies.PNG) -[🤗 Transformers 库](https://github.com/huggingface/transformers)提供了创建和使用这些共享模型的功能。[模型中心(hub)](https://huggingface.co/models)包含数千个任何人都可以下载和使用的预训练模型。您还可以将自己的模型上传到 Hub! - -```python -⚠️ Hugging Face Hub 不限于 Transformer 模型。任何人都可以分享他们想要的任何类型的模型或数据集!创建一个 Huggingface.co 帐户(https://huggingface.co/join)以使用所有可用功能! -``` - -在深入研究 Transformer 模型的底层工作原理之前,让我们先看几个示例,看看它们如何用于解决一些有趣的 NLP 问题。 - -## 使用pipelines - - -(这里有一个视频,但是国内可能打不开,译者注) - - -🤗 Transformers 库中最基本的对象是 **pipeline()** 函数。它将模型与其必要的预处理和后处理步骤连接起来,使我们能够通过直接输入任何文本并获得最终的答案: - -```python -from transformers import pipeline -classifier = pipeline("sentiment-analysis") -classifier("I've been waiting for a HuggingFace course my whole life.") -``` -```python -[{'label': 'POSITIVE', 'score': 0.9598047137260437}] -``` - - -我们也可以多传几句! -```python -classifier( - ["I've been waiting for a HuggingFace course my whole life.", "I hate this so much!"] -) -``` -```python -[{'label': 'POSITIVE', 'score': 0.9598047137260437}, - {'label': 'NEGATIVE', 'score': 0.9994558095932007}] -``` -默认情况下,此pipeline选择一个特定的预训练模型,该模型已针对英语情感分析进行了微调。创建**分类器**对象时,将下载并缓存模型。如果您重新运行该命令,则将使用缓存的模型,无需再次下载模型。 - -将一些文本传递到pipeline时涉及三个主要步骤: - -1. 文本被预处理为模型可以理解的格式。 -2. 预处理的输入被传递给模型。 -3. 模型处理后输出最终人类可以理解的结果。 - -目前[可用的一些pipeline](https://huggingface.co/transformers/main_classes/pipelines.html)是: - -* **特征提取**(获取文本的向量表示) -* **填充空缺** -* **ner**(命名实体识别) -* **问答** -* **情感分析** -* **文本摘要** -* **文本生成** -* **翻译** -* **零样本分类** - -让我们来看看其中的一些吧! - -## 零样本分类 -我们将首先处理一项非常具挑战性的任务,我们需要对尚未标记的文本进行分类。这是实际项目中的常见场景,因为注释文本通常很耗时并且需要领域专业知识。对于这项任务**zero-shot-classification**pipeline非常强大:它允许您直接指定用于分类的标签,因此您不必依赖预训练模型的标签。下面的模型展示了如何使用这两个标签将句子分类为正面或负面——但也可以使用您喜欢的任何其他标签集对文本进行分类。 - -```python -from transformers import pipeline - -classifier = pipeline("zero-shot-classification") -classifier( - "This is a course about the Transformers library", - candidate_labels=["education", "politics", "business"], -) -``` -```python -{'sequence': 'This is a course about the Transformers library', - 'labels': ['education', 'business', 'politics'], - 'scores': [0.8445963859558105, 0.111976258456707, 0.043427448719739914]} -``` - -此pipeline称为zero-shot,因为您不需要对数据上的模型进行微调即可使用它。它可以直接返回您想要的任何标签列表的概率分数! 
-``` -✏️快来试试吧!使用您自己的序列和标签,看看模型的行为。 -``` - -## 文本生成 -现在让我们看看如何使用pipeline来生成一些文本。这里的主要使用方法是您提供一个提示,模型将通过生成剩余的文本来自动完成整段话。这类似于许多手机上的预测文本功能。文本生成涉及随机性,因此如果您没有得到相同的如下所示的结果,这是正常的。 - -```python -from transformers import pipeline - -generator = pipeline("text-generation") -generator("In this course, we will teach you how to") -``` -```python -[{'generated_text': 'In this course, we will teach you how to understand and use ' - 'data flow and data interchange when handling user data. We ' - 'will be working with one or more of the most commonly used ' - 'data flows — data flows of various types, as seen by the ' - 'HTTP'}] -``` -您可以使用参数 **num_return_sequences** 控制生成多少个不同的序列,并使用参数 **max_length** 控制输出文本的总长度。 - -``` -✏️快来试试吧!使用 num_return_sequences 和 max_length 参数生成两个句子,每个句子 15 个单词。 -``` - -## 在pipeline中使用 Hub 中的其他模型 -前面的示例使用了默认模型,但您也可以从 Hub 中选择特定模型以在特定任务的pipeline中使用 - 例如,文本生成。转到[模型中心(hub)](https://huggingface.co/models)并单击左侧的相应标签将会只显示该任务支持的模型。[例如这样](https://huggingface.co/models?pipeline_tag=text-generation)。 - -让我们试试 [**distilgpt2**](https://huggingface.co/distilgpt2) 模型吧!以下是如何在与以前相同的pipeline中加载它: - -```python -from transformers import pipeline - -generator = pipeline("text-generation", model="distilgpt2") -generator( - "In this course, we will teach you how to", - max_length=30, - num_return_sequences=2, -) -``` -```python -[{'generated_text': 'In this course, we will teach you how to manipulate the world and ' - 'move your mental and physical capabilities to your advantage.'}, - {'generated_text': 'In this course, we will teach you how to become an expert and ' - 'practice realtime, and with a hands on experience on both real ' - 'time and real'}] -``` -您可以通过单击语言标签来筛选搜索结果,然后选择另一种文本生成模型的模型。模型中心(hub)甚至包含支持多种语言的多语言模型。 - -通过单击选择模型后,您会看到有一个小组件,可让您直接在线试用。通过这种方式,您可以在下载之前快速测试模型的功能。 -``` -✏️快来试试吧!使用标签筛选查找另一种语言的文本生成模型。使用小组件测试并在pipeline中使用它! -``` - -## 推理 API -所有模型都可以使用 Inference API 直接通过浏览器进行测试,该 API 可在 [Hugging Face 网站](https://huggingface.co/)上找到。通过输入自定义文本并观察模型的输出,您可以直接在此页面上使用模型。 - -小组件形式的推理 API 也可作为付费产品使用,如果您的工作流程需要它,它会派上用场。有关更多详细信息,请参阅[定价页面](https://huggingface.co/pricing)。 - -## Mask filling -您将尝试的下一个pipeline是 **fill-mask**。此任务的想法是填充给定文本中的空白: -```python -from transformers import pipeline - -unmasker = pipeline("fill-mask") -unmasker("This course will teach you all about models.", top_k=2) -``` -```python -[{'sequence': 'This course will teach you all about mathematical models.', - 'score': 0.19619831442832947, - 'token': 30412, - 'token_str': ' mathematical'}, - {'sequence': 'This course will teach you all about computational models.', - 'score': 0.04052725434303284, - 'token': 38163, - 'token_str': ' computational'}] -``` -**top_k** 参数控制要显示的结果有多少种。请注意,这里模型填充了特殊的< **mask** >词,它通常被称为掩码标记。其他掩码填充模型可能有不同的掩码标记,因此在探索其他模型时要验证正确的掩码字是什么。检查它的一种方法是查看小组件中使用的掩码。 - -``` -✏️快来试试吧!在 Hub 上搜索基于 bert 的模型并在推理 API 小组件中找到它的掩码。这个模型对上面pipeline示例中的句子预测了什么? 
-``` - -## 命名实体识别 -命名实体识别 (NER) 是一项任务,其中模型必须找到输入文本的哪些部分对应于诸如人员、位置或组织之类的实体。让我们看一个例子: -```python -from transformers import pipeline - -ner = pipeline("ner", grouped_entities=True) -ner("My name is Sylvain and I work at Hugging Face in Brooklyn.") -``` -```python -[{'entity_group': 'PER', 'score': 0.99816, 'word': 'Sylvain', 'start': 11, 'end': 18}, - {'entity_group': 'ORG', 'score': 0.97960, 'word': 'Hugging Face', 'start': 33, 'end': 45}, - {'entity_group': 'LOC', 'score': 0.99321, 'word': 'Brooklyn', 'start': 49, 'end': 57} -] -``` -在这里,模型正确地识别出 Sylvain 是一个人 (PER),Hugging Face 是一个组织 (ORG),而布鲁克林是一个位置 (LOC)。 - -我们在pipeline创建函数中传递选项 **grouped_entities=True** 以告诉pipeline将对应于同一实体的句子部分重新组合在一起:这里模型正确地将“Hugging”和“Face”分组为一个组织,即使名称由多个词组成。事实上,正如我们即将在下一章看到的,预处理甚至会将一些单词分成更小的部分。例如,**Sylvain** 分割为了四部分:**S、##yl、##va** 和 **##in**。在后处理步骤中,pipeline成功地重新组合了这些部分。 - -``` -✏️快来试试吧!在模型中心(hub)搜索能够用英语进行词性标注(通常缩写为 POS)的模型。这个模型对上面例子中的句子预测了什么? -``` - -## 问答系统 -问答pipeline使用来自给定上下文的信息回答问题: -```python -from transformers import pipeline - -question_answerer = pipeline("question-answering") -question_answerer( - question="Where do I work?", - context="My name is Sylvain and I work at Hugging Face in Brooklyn", -) - -``` -```python -{'score': 0.6385916471481323, 'start': 33, 'end': 45, 'answer': 'Hugging Face'} -klyn", -) - -``` -请注意,此pipeline通过从提供的上下文中提取信息来工作;它不会凭空生成答案。 - -## 文本摘要 -文本摘要是将文本缩减为较短文本的任务,同时保留文本中的主要(重要)信息。下面是一个例子: - -```python -from transformers import pipeline - -summarizer = pipeline("summarization") -summarizer( - """ - America has changed dramatically during recent years. Not only has the number of - graduates in traditional engineering disciplines such as mechanical, civil, - electrical, chemical, and aeronautical engineering declined, but in most of - the premier American universities engineering curricula now concentrate on - and encourage largely the study of engineering science. As a result, there - are declining offerings in engineering subjects dealing with infrastructure, - the environment, and related issues, and greater concentration on high - technology subjects, largely supporting increasingly complex scientific - developments. While the latter is important, it should not be at the expense - of more traditional engineering. - - Rapidly developing economies such as China and India, as well as other - industrial countries in Europe and Asia, continue to encourage and advance - the teaching of engineering. Both China and India, respectively, graduate - six and eight times as many traditional engineers as does the United States. - Other industrial countries at minimum maintain their output, while America - suffers an increasingly serious decline in the number of engineering graduates - and a lack of well-educated engineers. -""" -) -``` -```python -[{'summary_text': ' America has changed dramatically during recent years . The ' - 'number of engineering graduates in the U.S. has declined in ' - 'traditional engineering disciplines such as mechanical, civil ' - ', electrical, chemical, and aeronautical engineering . 
Rapidly ' - 'developing economies such as China and India, as well as other ' - 'industrial countries in Europe and Asia, continue to encourage ' - 'and advance engineering .'}] -``` -与文本生成一样,您指定结果的 **max_length** 或 **min_length**。 - -## 翻译 -对于翻译,如果您在任务名称中提供语言对(例如“**translation_en_to_fr**”),则可以使用默认模型,但最简单的方法是在[模型中心(hub)](https://huggingface.co/models)选择要使用的模型。在这里,我们将尝试从法语翻译成英语: - -```python -from transformers import pipeline - -translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en") -translator("Ce cours est produit par Hugging Face.") -``` -```python -[{'translation_text': 'This course is produced by Hugging Face.'}] - -``` - -与文本生成和摘要一样,您可以指定结果的 **max_length** 或 **min_length**。 - -``` -✏️快来试试吧!搜索其他语言的翻译模型,尝试将前一句翻译成几种不同的语言。 -``` - -到目前为止显示的pipeline主要用于演示目的。它们是为特定任务而编程的,不能对他们进行自定义的修改。在下一章中,您将了解 **pipeline()** 函数内部的内容以及如何进行自定义的修改。 \ No newline at end of file +# Transformers, what can they do? + + + +In this section, we will look at what Transformer models can do and use our first tool from the 🤗 Transformers library: the `pipeline()` function. + + +👀 See that Open in Colab button on the top right? Click on it to open a Google Colab notebook with all the code samples of this section. This button will be present in any section containing code examples. + +If you want to run the examples locally, we recommend taking a look at the setup. + + +## Transformers are everywhere! + +Transformer models are used to solve all kinds of NLP tasks, like the ones mentioned in the previous section. Here are some of the companies and organizations using Hugging Face and Transformer models, who also contribute back to the community by sharing their models: + +Companies using Hugging Face + +The [🤗 Transformers library](https://github.com/huggingface/transformers) provides the functionality to create and use those shared models. The [Model Hub](https://huggingface.co/models) contains thousands of pretrained models that anyone can download and use. You can also upload your own models to the Hub! + + +⚠️ The Hugging Face Hub is not limited to Transformer models. Anyone can share any kind of models or datasets they want! Create a huggingface.co account to benefit from all available features! + + +Before diving into how Transformer models work under the hood, let's look at a few examples of how they can be used to solve some interesting NLP problems. + +## Working with pipelines + + + +The most basic object in the 🤗 Transformers library is the `pipeline()` function. It connects a model with its necessary preprocessing and postprocessing steps, allowing us to directly input any text and get an intelligible answer: + +```python +from transformers import pipeline + +classifier = pipeline("sentiment-analysis") +classifier("I've been waiting for a HuggingFace course my whole life.") +``` + +```python out +[{'label': 'POSITIVE', 'score': 0.9598047137260437}] +``` + +We can even pass several sentences! + +```python +classifier( + ["I've been waiting for a HuggingFace course my whole life.", "I hate this so much!"] +) +``` + +```python out +[{'label': 'POSITIVE', 'score': 0.9598047137260437}, + {'label': 'NEGATIVE', 'score': 0.9994558095932007}] +``` + +By default, this pipeline selects a particular pretrained model that has been fine-tuned for sentiment analysis in English. The model is downloaded and cached when you create the `classifier` object. If you rerun the command, the cached model will be used instead and there is no need to download the model again. 
+ +There are three main steps involved when you pass some text to a pipeline: + +1. The text is preprocessed into a format the model can understand. +2. The preprocessed inputs are passed to the model. +3. The predictions of the model are post-processed, so you can make sense of them. + + +Some of the currently [available pipelines](https://huggingface.co/transformers/main_classes/pipelines.html) are: + +- `feature-extraction` (get the vector representation of a text) +- `fill-mask` +- `ner` (named entity recognition) +- `question-answering` +- `sentiment-analysis` +- `summarization` +- `text-generation` +- `translation` +- `zero-shot-classification` + +Let's have a look at a few of these! + +## Zero-shot classification + +We'll start by tackling a more challenging task where we need to classify texts that haven't been labelled. This is a common scenario in real-world projects because annotating text is usually time-consuming and requires domain expertise. For this use case, the `zero-shot-classification` pipeline is very powerful: it allows you to specify which labels to use for the classification, so you don't have to rely on the labels of the pretrained model. You've already seen how the model can classify a sentence as positive or negative using those two labels — but it can also classify the text using any other set of labels you like. + +```python +from transformers import pipeline + +classifier = pipeline("zero-shot-classification") +classifier( + "This is a course about the Transformers library", + candidate_labels=["education", "politics", "business"], +) +``` + +```python out +{'sequence': 'This is a course about the Transformers library', + 'labels': ['education', 'business', 'politics'], + 'scores': [0.8445963859558105, 0.111976258456707, 0.043427448719739914]} +``` + +This pipeline is called _zero-shot_ because you don't need to fine-tune the model on your data to use it. It can directly return probability scores for any list of labels you want! + + + +✏️ **Try it out!** Play around with your own sequences and labels and see how the model behaves. + + + + +## Text generation + +Now let's see how to use a pipeline to generate some text. The main idea here is that you provide a prompt and the model will auto-complete it by generating the remaining text. This is similar to the predictive text feature that is found on many phones. Text generation involves randomness, so it's normal if you don't get the same results as shown below. + +```python +from transformers import pipeline + +generator = pipeline("text-generation") +generator("In this course, we will teach you how to") +``` + +```python out +[{'generated_text': 'In this course, we will teach you how to understand and use ' + 'data flow and data interchange when handling user data. We ' + 'will be working with one or more of the most commonly used ' + 'data flows — data flows of various types, as seen by the ' + 'HTTP'}] +``` + +You can control how many different sequences are generated with the argument `num_return_sequences` and the total length of the output text with the argument `max_length`. + + + +✏️ **Try it out!** Use the `num_return_sequences` and `max_length` arguments to generate two sentences of 15 words each. + + + + +## Using any model from the Hub in a pipeline + +The previous examples used the default model for the task at hand, but you can also choose a particular model from the Hub to use in a pipeline for a specific task — say, text generation. 
Go to the [Model Hub](https://huggingface.co/models) and click on the corresponding tag on the left to display only the supported models for that task. You should get to a page like [this one](https://huggingface.co/models?pipeline_tag=text-generation). + +Let's try the [`distilgpt2`](https://huggingface.co/distilgpt2) model! Here's how to load it in the same pipeline as before: + +```python +from transformers import pipeline + +generator = pipeline("text-generation", model="distilgpt2") +generator( + "In this course, we will teach you how to", max_length=30, num_return_sequences=2, +) +``` + +```python out +[{'generated_text': 'In this course, we will teach you how to manipulate the world and ' + 'move your mental and physical capabilities to your advantage.'}, + {'generated_text': 'In this course, we will teach you how to become an expert and ' + 'practice realtime, and with a hands on experience on both real ' + 'time and real'}] +``` + +You can refine your search for a model by clicking on the language tags, and pick a model that will generate text in another language. The Model Hub even contains checkpoints for multilingual models that support several languages. + +Once you select a model by clicking on it, you'll see that there is a widget enabling you to try it directly online. This way you can quickly test the model's capabilities before downloading it. + + + +✏️ **Try it out!** Use the filters to find a text generation model for another language. Feel free to play with the widget and use it in a pipeline! + + + +### The Inference API + +All the models can be tested directly through your browser using the Inference API, which is available on the Hugging Face [website](https://huggingface.co/). You can play with the model directly on this page by inputting custom text and watching the model process the input data. + +The Inference API that powers the widget is also available as a paid product, which comes in handy if you need it for your workflows. See the [pricing page](https://huggingface.co/pricing) for more details. + +## Mask filling + +The next pipeline you'll try is `fill-mask`. The idea of this task is to fill in the blanks in a given text: + +```python +from transformers import pipeline + +unmasker = pipeline("fill-mask") +unmasker("This course will teach you all about models.", top_k=2) +``` + +```python out +[{'sequence': 'This course will teach you all about mathematical models.', + 'score': 0.19619831442832947, + 'token': 30412, + 'token_str': ' mathematical'}, + {'sequence': 'This course will teach you all about computational models.', + 'score': 0.04052725434303284, + 'token': 38163, + 'token_str': ' computational'}] +``` + +The `top_k` argument controls how many possibilities you want to be displayed. Note that here the model fills in the special `` word, which is often referred to as a *mask token*. Other mask-filling models might have different mask tokens, so it's always good to verify the proper mask word when exploring other models. One way to check it is by looking at the mask word used in the widget. + + + +✏️ **Try it out!** Search for the `bert-base-cased` model on the Hub and identify its mask word in the Inference API widget. What does this model predict for the sentence in our `pipeline` example above? + + + +## Named entity recognition + +Named entity recognition (NER) is a task where the model has to find which parts of the input text correspond to entities such as persons, locations, or organizations. 
Let's look at an example: + +```python +from transformers import pipeline + +ner = pipeline("ner", grouped_entities=True) +ner("My name is Sylvain and I work at Hugging Face in Brooklyn.") +``` + +```python out +[{'entity_group': 'PER', 'score': 0.99816, 'word': 'Sylvain', 'start': 11, 'end': 18}, + {'entity_group': 'ORG', 'score': 0.97960, 'word': 'Hugging Face', 'start': 33, 'end': 45}, + {'entity_group': 'LOC', 'score': 0.99321, 'word': 'Brooklyn', 'start': 49, 'end': 57} +] +``` + +Here the model correctly identified that Sylvain is a person (PER), Hugging Face an organization (ORG), and Brooklyn a location (LOC). + +We pass the option `grouped_entities=True` in the pipeline creation function to tell the pipeline to regroup together the parts of the sentence that correspond to the same entity: here the model correctly grouped "Hugging" and "Face" as a single organization, even though the name consists of multiple words. In fact, as we will see in the next chapter, the preprocessing even splits some words into smaller parts. For instance, `Sylvain` is split into four pieces: `S`, `##yl`, `##va`, and `##in`. In the post-processing step, the pipeline successfully regrouped those pieces. + + + +✏️ **Try it out!** Search the Model Hub for a model able to do part-of-speech tagging (usually abbreviated as POS) in English. What does this model predict for the sentence in the example above? + + + +## Question answering + +The `question-answering` pipeline answers questions using information from a given context: + +```python +from transformers import pipeline + +question_answerer = pipeline("question-answering") +question_answerer( + question="Where do I work?", + context="My name is Sylvain and I work at Hugging Face in Brooklyn", +) +``` + +```python out +{'score': 0.6385916471481323, 'start': 33, 'end': 45, 'answer': 'Hugging Face'} +``` + +Note that this pipeline works by extracting information from the provided context; it does not generate the answer. + +## Summarization + +Summarization is the task of reducing a text into a shorter text while keeping all (or most) of the important aspects referenced in the text. Here's an example: + +```python +from transformers import pipeline + +summarizer = pipeline("summarization") +summarizer( + """ + America has changed dramatically during recent years. Not only has the number of + graduates in traditional engineering disciplines such as mechanical, civil, + electrical, chemical, and aeronautical engineering declined, but in most of + the premier American universities engineering curricula now concentrate on + and encourage largely the study of engineering science. As a result, there + are declining offerings in engineering subjects dealing with infrastructure, + the environment, and related issues, and greater concentration on high + technology subjects, largely supporting increasingly complex scientific + developments. While the latter is important, it should not be at the expense + of more traditional engineering. + + Rapidly developing economies such as China and India, as well as other + industrial countries in Europe and Asia, continue to encourage and advance + the teaching of engineering. Both China and India, respectively, graduate + six and eight times as many traditional engineers as does the United States. + Other industrial countries at minimum maintain their output, while America + suffers an increasingly serious decline in the number of engineering graduates + and a lack of well-educated engineers. 
+""" +) +``` + +```python out +[{'summary_text': ' America has changed dramatically during recent years . The ' + 'number of engineering graduates in the U.S. has declined in ' + 'traditional engineering disciplines such as mechanical, civil ' + ', electrical, chemical, and aeronautical engineering . Rapidly ' + 'developing economies such as China and India, as well as other ' + 'industrial countries in Europe and Asia, continue to encourage ' + 'and advance engineering .'}] +``` + +Like with text generation, you can specify a `max_length` or a `min_length` for the result. + + +## Translation + +For translation, you can use a default model if you provide a language pair in the task name (such as `"translation_en_to_fr"`), but the easiest way is to pick the model you want to use on the [Model Hub](https://huggingface.co/models). Here we'll try translating from French to English: + +```python +from transformers import pipeline + +translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en") +translator("Ce cours est produit par Hugging Face.") +``` + +```python out +[{'translation_text': 'This course is produced by Hugging Face.'}] +``` + +Like with text generation and summarization, you can specify a `max_length` or a `min_length` for the result. + + + +✏️ **Try it out!** Search for translation models in other languages and try to translate the previous sentence into a few different languages. + + + +The pipelines shown so far are mostly for demonstrative purposes. They were programmed for specific tasks and cannot perform variations of them. In the next chapter, you'll learn what's inside a `pipeline()` function and how to customize its behavior. From e7100c604db051cfcfcd0c27c141fffda9ad4b9e Mon Sep 17 00:00:00 2001 From: 1375626371 <40328311+1375626371@users.noreply.github.com> Date: Wed, 13 Apr 2022 01:40:00 +0800 Subject: [PATCH 5/6] Added a small part of the missing content --- chapters/zh/chapter1/3.mdx | 188 ++++++++++++++----------------------- 1 file changed, 73 insertions(+), 115 deletions(-) diff --git a/chapters/zh/chapter1/3.mdx b/chapters/zh/chapter1/3.mdx index cd6aee466..1e7e91108 100644 --- a/chapters/zh/chapter1/3.mdx +++ b/chapters/zh/chapter1/3.mdx @@ -1,4 +1,4 @@ -# Transformers, what can they do? +# Transformers能做什么? -In this section, we will look at what Transformer models can do and use our first tool from the 🤗 Transformers library: the `pipeline()` function. - +在本节中,我们将看看 Transformer 模型可以做什么,并使用 🤗 Transformers 库中的第一个工具:pipeline() 函数。 -👀 See that Open in Colab button on the top right? Click on it to open a Google Colab notebook with all the code samples of this section. This button will be present in any section containing code examples. +👀 看到那个右上角的 在Colab中打开的按钮了吗? 单击它就可以打开一个包含本节所有代码示例的 Google Colab 笔记本。 每一个有实例代码的小节都会有它。 -If you want to run the examples locally, we recommend taking a look at the setup. +如果您想在本地运行示例,我们建议您查看准备. -## Transformers are everywhere! - -Transformer models are used to solve all kinds of NLP tasks, like the ones mentioned in the previous section. Here are some of the companies and organizations using Hugging Face and Transformer models, who also contribute back to the community by sharing their models: +## Transformer被应用于各个方面! +Transformer 模型用于解决各种 NLP 任务,就像上一节中提到的那样。以下是一些使用 Hugging Face 和 Transformer 模型的公司和组织,他们也通过分享他们的模型回馈社区: -Companies using Hugging Face - -The [🤗 Transformers library](https://github.com/huggingface/transformers) provides the functionality to create and use those shared models. 
The [Model Hub](https://huggingface.co/models) contains thousands of pretrained models that anyone can download and use. You can also upload your own models to the Hub! +![使用 Hugging Face 的公司](https://huggingface.co/course/static/chapter1/companies.PNG) +[🤗 Transformers 库](https://github.com/huggingface/transformers)提供了创建和使用这些共享模型的功能。[模型中心(hub)](https://huggingface.co/models)包含数千个任何人都可以下载和使用的预训练模型。您还可以将自己的模型上传到 Hub! -⚠️ The Hugging Face Hub is not limited to Transformer models. Anyone can share any kind of models or datasets they want! Create a huggingface.co account to benefit from all available features! +⚠️ Hugging Face Hub 不限于 Transformer 模型。任何人都可以分享他们想要的任何类型的模型或数据集!创建一个 Huggingface.co 帐户(https://huggingface.co/join)以使用所有可用功能! -Before diving into how Transformer models work under the hood, let's look at a few examples of how they can be used to solve some interesting NLP problems. +在深入研究 Transformer 模型的底层工作原理之前,让我们先看几个示例,看看它们如何用于解决一些有趣的 NLP 问题。 + +## 使用pipelines -## Working with pipelines + +(这里有一个视频,但是国内可能打不开,译者注) - -The most basic object in the 🤗 Transformers library is the `pipeline()` function. It connects a model with its necessary preprocessing and postprocessing steps, allowing us to directly input any text and get an intelligible answer: +🤗 Transformers 库中最基本的对象是 **pipeline()** 函数。它将模型与其必要的预处理和后处理步骤连接起来,使我们能够通过直接输入任何文本并获得最终的答案: ```python from transformers import pipeline @@ -41,50 +40,45 @@ from transformers import pipeline classifier = pipeline("sentiment-analysis") classifier("I've been waiting for a HuggingFace course my whole life.") ``` - ```python out [{'label': 'POSITIVE', 'score': 0.9598047137260437}] ``` -We can even pass several sentences! +我们也可以多传几句! ```python classifier( ["I've been waiting for a HuggingFace course my whole life.", "I hate this so much!"] ) -``` - +``` ```python out [{'label': 'POSITIVE', 'score': 0.9598047137260437}, {'label': 'NEGATIVE', 'score': 0.9994558095932007}] ``` +默认情况下,此pipeline选择一个特定的预训练模型,该模型已针对英语情感分析进行了微调。创建**分类器**对象时,将下载并缓存模型。如果您重新运行该命令,则将使用缓存的模型,无需再次下载模型。 -By default, this pipeline selects a particular pretrained model that has been fine-tuned for sentiment analysis in English. The model is downloaded and cached when you create the `classifier` object. If you rerun the command, the cached model will be used instead and there is no need to download the model again. +将一些文本传递到pipeline时涉及三个主要步骤: -There are three main steps involved when you pass some text to a pipeline: +1. 文本被预处理为模型可以理解的格式。 +2. 预处理的输入被传递给模型。 +3. 模型处理后输出最终人类可以理解的结果。 -1. The text is preprocessed into a format the model can understand. -2. The preprocessed inputs are passed to the model. -3. The predictions of the model are post-processed, so you can make sense of them. +目前[可用的一些pipeline](https://huggingface.co/transformers/main_classes/pipelines.html)是: +* **特征提取**(获取文本的向量表示) +* **填充空缺** +* **ner**(命名实体识别) +* **问答** +* **情感分析** +* **文本摘要** +* **文本生成** +* **翻译** +* **零样本分类** -Some of the currently [available pipelines](https://huggingface.co/transformers/main_classes/pipelines.html) are: +让我们来看看其中的一些吧! -- `feature-extraction` (get the vector representation of a text) -- `fill-mask` -- `ner` (named entity recognition) -- `question-answering` -- `sentiment-analysis` -- `summarization` -- `text-generation` -- `translation` -- `zero-shot-classification` - -Let's have a look at a few of these! - -## Zero-shot classification - -We'll start by tackling a more challenging task where we need to classify texts that haven't been labelled. 
This is a common scenario in real-world projects because annotating text is usually time-consuming and requires domain expertise. For this use case, the `zero-shot-classification` pipeline is very powerful: it allows you to specify which labels to use for the classification, so you don't have to rely on the labels of the pretrained model. You've already seen how the model can classify a sentence as positive or negative using those two labels — but it can also classify the text using any other set of labels you like. +## 零样本分类 +我们将首先处理一项非常具挑战性的任务,我们需要对尚未标记的文本进行分类。这是实际项目中的常见场景,因为注释文本通常很耗时并且需要领域专业知识。对于这项任务**zero-shot-classification**pipeline非常强大:它允许您直接指定用于分类的标签,因此您不必依赖预训练模型的标签。下面的模型展示了如何使用这两个标签将句子分类为正面或负面——但也可以使用您喜欢的任何其他标签集对文本进行分类。 ```python from transformers import pipeline @@ -95,25 +89,19 @@ classifier( candidate_labels=["education", "politics", "business"], ) ``` - ```python out {'sequence': 'This is a course about the Transformers library', 'labels': ['education', 'business', 'politics'], 'scores': [0.8445963859558105, 0.111976258456707, 0.043427448719739914]} ``` -This pipeline is called _zero-shot_ because you don't need to fine-tune the model on your data to use it. It can directly return probability scores for any list of labels you want! - +此pipeline称为zero-shot,因为您不需要对数据上的模型进行微调即可使用它。它可以直接返回您想要的任何标签列表的概率分数! - -✏️ **Try it out!** Play around with your own sequences and labels and see how the model behaves. - +✏️**快来试试吧!**使用您自己的序列和标签,看看模型的行为。 - -## Text generation - -Now let's see how to use a pipeline to generate some text. The main idea here is that you provide a prompt and the model will auto-complete it by generating the remaining text. This is similar to the predictive text feature that is found on many phones. Text generation involves randomness, so it's normal if you don't get the same results as shown below. +## 文本生成 +现在让我们看看如何使用pipeline来生成一些文本。这里的主要使用方法是您提供一个提示,模型将通过生成剩余的文本来自动完成整段话。这类似于许多手机上的预测文本功能。文本生成涉及随机性,因此如果您没有得到相同的如下所示的结果,这是正常的。 ```python from transformers import pipeline @@ -121,7 +109,6 @@ from transformers import pipeline generator = pipeline("text-generation") generator("In this course, we will teach you how to") ``` - ```python out [{'generated_text': 'In this course, we will teach you how to understand and use ' 'data flow and data interchange when handling user data. We ' @@ -129,21 +116,16 @@ generator("In this course, we will teach you how to") 'data flows — data flows of various types, as seen by the ' 'HTTP'}] ``` - -You can control how many different sequences are generated with the argument `num_return_sequences` and the total length of the output text with the argument `max_length`. +您可以使用参数 **num_return_sequences** 控制生成多少个不同的序列,并使用参数 **max_length** 控制输出文本的总长度。 - -✏️ **Try it out!** Use the `num_return_sequences` and `max_length` arguments to generate two sentences of 15 words each. - +✏️**快来试试吧!**使用 num_return_sequences 和 max_length 参数生成两个句子,每个句子 15 个单词。 +## 在pipeline中使用 Hub 中的其他模型 +前面的示例使用了默认模型,但您也可以从 Hub 中选择特定模型以在特定任务的pipeline中使用 - 例如,文本生成。转到[模型中心(hub)](https://huggingface.co/models)并单击左侧的相应标签将会只显示该任务支持的模型。[例如这样](https://huggingface.co/models?pipeline_tag=text-generation)。 -## Using any model from the Hub in a pipeline - -The previous examples used the default model for the task at hand, but you can also choose a particular model from the Hub to use in a pipeline for a specific task — say, text generation. 
Go to the [Model Hub](https://huggingface.co/models) and click on the corresponding tag on the left to display only the supported models for that task. You should get to a page like [this one](https://huggingface.co/models?pipeline_tag=text-generation). - -Let's try the [`distilgpt2`](https://huggingface.co/distilgpt2) model! Here's how to load it in the same pipeline as before: +让我们试试 [**distilgpt2**](https://huggingface.co/distilgpt2) 模型吧!以下是如何在与以前相同的pipeline中加载它: ```python from transformers import pipeline @@ -153,7 +135,6 @@ generator( "In this course, we will teach you how to", max_length=30, num_return_sequences=2, ) ``` - ```python out [{'generated_text': 'In this course, we will teach you how to manipulate the world and ' 'move your mental and physical capabilities to your advantage.'}, @@ -161,34 +142,26 @@ generator( 'practice realtime, and with a hands on experience on both real ' 'time and real'}] ``` +您可以通过单击语言标签来筛选搜索结果,然后选择另一种文本生成模型的模型。模型中心(hub)甚至包含支持多种语言的多语言模型。 -You can refine your search for a model by clicking on the language tags, and pick a model that will generate text in another language. The Model Hub even contains checkpoints for multilingual models that support several languages. - -Once you select a model by clicking on it, you'll see that there is a widget enabling you to try it directly online. This way you can quickly test the model's capabilities before downloading it. - +通过单击选择模型后,您会看到有一个小组件,可让您直接在线试用。通过这种方式,您可以在下载之前快速测试模型的功能。 - -✏️ **Try it out!** Use the filters to find a text generation model for another language. Feel free to play with the widget and use it in a pipeline! - +✏️**快来试试吧!**使用标签筛选查找另一种语言的文本生成模型。使用小组件测试并在pipeline中使用它! -### The Inference API +## 推理 API +所有模型都可以使用 Inference API 直接通过浏览器进行测试,该 API 可在 [Hugging Face 网站](https://huggingface.co/)上找到。通过输入自定义文本并观察模型的输出,您可以直接在此页面上使用模型。 -All the models can be tested directly through your browser using the Inference API, which is available on the Hugging Face [website](https://huggingface.co/). You can play with the model directly on this page by inputting custom text and watching the model process the input data. - -The Inference API that powers the widget is also available as a paid product, which comes in handy if you need it for your workflows. See the [pricing page](https://huggingface.co/pricing) for more details. +小组件形式的推理 API 也可作为付费产品使用,如果您的工作流程需要它,它会派上用场。有关更多详细信息,请参阅[定价页面](https://huggingface.co/pricing)。 ## Mask filling - -The next pipeline you'll try is `fill-mask`. The idea of this task is to fill in the blanks in a given text: - +您将尝试的下一个pipeline是 **fill-mask**。此任务的想法是填充给定文本中的空白: ```python from transformers import pipeline unmasker = pipeline("fill-mask") unmasker("This course will teach you all about models.", top_k=2) ``` - ```python out [{'sequence': 'This course will teach you all about mathematical models.', 'score': 0.19619831442832947, @@ -199,47 +172,36 @@ unmasker("This course will teach you all about models.", top_k=2) 'token': 38163, 'token_str': ' computational'}] ``` - -The `top_k` argument controls how many possibilities you want to be displayed. Note that here the model fills in the special `` word, which is often referred to as a *mask token*. Other mask-filling models might have different mask tokens, so it's always good to verify the proper mask word when exploring other models. One way to check it is by looking at the mask word used in the widget. 
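One way to check the mask word without opening the widget is to ask the pipeline's tokenizer for it directly. The snippet below is a minimal, illustrative sketch — it assumes the pipeline object exposes the tokenizer it was loaded with through its `tokenizer` attribute, which current 🤗 Transformers releases do:

```python
from transformers import pipeline

# Load a different mask-filling checkpoint and ask its tokenizer which mask word it expects
unmasker = pipeline("fill-mask", model="bert-base-cased")
print(unmasker.tokenizer.mask_token)  # '[MASK]' for BERT-style checkpoints

# Build the input with the right mask token before calling the pipeline
unmasker(
    f"This course will teach you all about {unmasker.tokenizer.mask_token} models.",
    top_k=2,
)
```

The default checkpoint used earlier in this section is a RoBERTa-style model, so its mask token is `<mask>` rather than `[MASK]`.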
**top_k** 参数控制要显示的结果有多少种。请注意,这里模型填充了特殊的< **mask** >词,它通常被称为掩码标记。其他掩码填充模型可能有不同的掩码标记,因此在探索其他模型时要验证正确的掩码字是什么。检查它的一种方法是查看小组件中使用的掩码。

✏️**快来试试吧!**在 Hub 上搜索 `bert-base-cased` 模型并在推理 API 小组件中找到它的掩码。这个模型对上面pipeline示例中的句子预测了什么?

## 命名实体识别
命名实体识别 (NER) 是一项任务,其中模型必须找到输入文本的哪些部分对应于诸如人员、位置或组织之类的实体。让我们看一个例子:
```python
from transformers import pipeline

ner = pipeline("ner", grouped_entities=True)
ner("My name is Sylvain and I work at Hugging Face in Brooklyn.")
```
```python out
[{'entity_group': 'PER', 'score': 0.99816, 'word': 'Sylvain', 'start': 11, 'end': 18}, 
 {'entity_group': 'ORG', 'score': 0.97960, 'word': 'Hugging Face', 'start': 33, 'end': 45}, 
 {'entity_group': 'LOC', 'score': 0.99321, 'word': 'Brooklyn', 'start': 49, 'end': 57}
]
```
在这里,模型正确地识别出 Sylvain 是一个人 (PER),Hugging Face 是一个组织 (ORG),而布鲁克林是一个位置 (LOC)。

我们在pipeline创建函数中传递选项 **grouped_entities=True** 以告诉pipeline将对应于同一实体的句子部分重新组合在一起:这里模型正确地将"Hugging"和"Face"分组为一个组织,即使名称由多个词组成。事实上,正如我们即将在下一章看到的,预处理甚至会将一些单词分成更小的部分。例如,**Sylvain** 被分割为四个部分:**S、##yl、##va** 和 **##in**。在后处理步骤中,pipeline成功地重新组合了这些部分。

✏️**快来试试吧!**在模型中心(hub)搜索能够用英语进行词性标注(通常缩写为 POS)的模型。这个模型对上面例子中的句子预测了什么?

## 问答系统
问答pipeline使用来自给定上下文的信息回答问题:
```python
from transformers import pipeline

question_answerer = pipeline("question-answering")
question_answerer(
    question="Where do I work?",
    context="My name is Sylvain and I work at Hugging Face in Brooklyn",
)
```
```python out
{'score': 0.6385916471481323, 'start': 33, 'end': 45, 'answer': 'Hugging Face'}
```
请注意,此pipeline通过从提供的上下文中提取信息来工作;它不会凭空生成答案。

## 文本摘要
文本摘要是将文本缩减为较短文本的任务,同时保留文本中的主要(重要)信息。下面是一个例子:

```python
from transformers import pipeline

summarizer = pipeline("summarization")
summarizer(
    """
    America has changed dramatically during recent years. Not only has the number of 
    graduates in traditional engineering disciplines such as mechanical, civil, 
    electrical, chemical, and aeronautical engineering declined, but in most of 
    the premier American universities engineering curricula now concentrate on 
    and encourage largely the study of engineering science. As a result, there 
    are declining offerings in engineering subjects dealing with infrastructure, 
    the environment, and related issues, and greater concentration on high 
    technology subjects, largely supporting increasingly complex scientific 
    developments. While the latter is important, it should not be at the expense 
    of more traditional engineering.

    Rapidly developing economies such as China and India, as well as other 
    industrial countries in Europe and Asia, continue to encourage and advance 
    the teaching of engineering. Both China and India, respectively, graduate 
    six and eight times as many traditional engineers as does the United States. 
    Other industrial countries at minimum maintain their output, while America 
    suffers an increasingly serious decline in the number of engineering graduates 
    and a lack of well-educated engineers.
"""
)
```
```python out
[{'summary_text': ' America has changed dramatically during recent years . The '
                  'number of engineering graduates in the U.S. 
has declined in ' @@ -297,13 +258,10 @@ summarizer( 'industrial countries in Europe and Asia, continue to encourage ' 'and advance engineering .'}] ``` +与文本生成一样,您指定结果的 **max_length** 或 **min_length**。 -Like with text generation, you can specify a `max_length` or a `min_length` for the result. - - -## Translation - -For translation, you can use a default model if you provide a language pair in the task name (such as `"translation_en_to_fr"`), but the easiest way is to pick the model you want to use on the [Model Hub](https://huggingface.co/models). Here we'll try translating from French to English: +## 翻译 +对于翻译,如果您在任务名称中提供语言对(例如“**translation_en_to_fr**”),则可以使用默认模型,但最简单的方法是在[模型中心(hub)](https://huggingface.co/models)选择要使用的模型。在这里,我们将尝试从法语翻译成英语: ```python from transformers import pipeline @@ -311,17 +269,17 @@ from transformers import pipeline translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en") translator("Ce cours est produit par Hugging Face.") ``` - ```python out [{'translation_text': 'This course is produced by Hugging Face.'}] + ``` -Like with text generation and summarization, you can specify a `max_length` or a `min_length` for the result. +与文本生成和摘要一样,您可以指定结果的 **max_length** 或 **min_length**。 -✏️ **Try it out!** Search for translation models in other languages and try to translate the previous sentence into a few different languages. +✏️**快来试试吧!**搜索其他语言的翻译模型,尝试将前一句翻译成几种不同的语言。 -The pipelines shown so far are mostly for demonstrative purposes. They were programmed for specific tasks and cannot perform variations of them. In the next chapter, you'll learn what's inside a `pipeline()` function and how to customize its behavior. +到目前为止显示的pipeline主要用于演示目的。它们是为特定任务而编程的,不能对他们进行自定义的修改。在下一章中,您将了解 **pipeline()** 函数内部的内容以及如何进行自定义的修改。 \ No newline at end of file From 0990c773d3e0f5f1ff434a5af1d06078072d4ff9 Mon Sep 17 00:00:00 2001 From: Lewis Tunstall Date: Wed, 13 Apr 2022 11:14:29 +0200 Subject: [PATCH 6/6] Fix style --- chapters/zh/chapter1/3.mdx | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/chapters/zh/chapter1/3.mdx b/chapters/zh/chapter1/3.mdx index 1e7e91108..076263ba4 100644 --- a/chapters/zh/chapter1/3.mdx +++ b/chapters/zh/chapter1/3.mdx @@ -132,7 +132,9 @@ from transformers import pipeline generator = pipeline("text-generation", model="distilgpt2") generator( - "In this course, we will teach you how to", max_length=30, num_return_sequences=2, + "In this course, we will teach you how to", + max_length=30, + num_return_sequences=2, ) ``` ```python out