modify intro #6

Merged 1 commit on May 17, 2022
11 changes: 6 additions & 5 deletions README.md
@@ -6,16 +6,17 @@
--------------------------------------------------------------------------------


- FlagAI aims to help researchers and developers to freely train and test large-scale models for NLP tasks.
+ FlagAI (Fast LArge-scale General AI models) is a fast, easy-to-use, and extensible toolkit for large-scale models. Our goal is to support training, fine-tuning, and deployment of large-scale models on various downstream tasks with multi-modality. Currently, we focus on NLP models and tasks. In the near future, we will support other modalities.

<br><br>

- * Now it supports GLM, Bert, RoBerta, GPT2, T5 and models from Huggingface Transformers.
+ * Now it supports GLM, BERT, RoBERTa, GPT2, T5, and models from Huggingface Transformers.

- * It provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community on our model hub.
+ * It provides APIs to quickly download and use those pre-trained models on a given text, fine-tune them on your own datasets, and then share them with the community on our model hub.

- * These models can be applied on Text, for tasks like text classification, information extraction, question answering, summarization, text generation, especially in Chinese.
+ * These models can be applied to (Chinese/English) text for tasks like text classification, information extraction, question answering, summarization, and text generation, with a particular focus on Chinese.

- * FlagAI is backed by the three most popular data/model parallel libraries — PyTorch/Deepspeed/Megatron-LM — with a seamless integration between them. Your can paralle your training/testing process with less than ten lines of code.
+ * FlagAI is backed by the three most popular data/model parallel libraries — PyTorch/DeepSpeed/Megatron-LM — with seamless integration between them. Users can parallelize their training/testing processes with less than ten lines of code.


The code is partially based on [Transformers](https://github.com/huggingface/transformers) and [DeepSpeedExamples](https://github.com/microsoft/DeepSpeedExamples).
4 changes: 2 additions & 2 deletions README_zh.md
@@ -5,12 +5,12 @@

--------------------------------------------------------------------------------

- FlagAI aims to help researchers and developers freely train and test large-scale models for NLP tasks.
+ FlagAI is a fast, easy-to-use, and extensible toolkit for large models. Our goal is to support training, fine-tuning, and deployment of large-scale models on various downstream tasks with multi-modality. Currently, we focus on NLP models and tasks. In the near future, we will support other modalities.
<br><br>

* Now it supports GLM, BERT, RoBERTa, GPT2, T5 models, and models from Huggingface Transformers.

- * It provides APIs to quickly download and use those pre-trained models on a given text, fine-tune them on your own datasets, and then share them with the community on our model hub.
+ * It provides APIs to quickly download and use those pre-trained models on a given (Chinese/English) text, fine-tune them on your own datasets, and then share them with the community on our model hub.

* These models can be applied to text for tasks like text classification, information extraction, question answering, summarization, and text generation, especially in Chinese.
