
GPTeacher

A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer.

New: the Roleplay V2 (Supplemental) dataset has been added to the /roleplay/ directory:
- Still 100% GPT-4 generated
- 2.5x larger than the original roleplay dataset
- Much more diverse
- Includes simulated conversations/chat histories in a large portion of examples

They were all made mostly by adapting the Alpaca prompt (the Toolformer dataset somewhat more than the rest). Many versions of the prompts were used, and since only the final prompt was saved, it probably won't be posted, as it would only confuse end users. See here for an example prompt: https://github.com/tatsu-lab/stanford_alpaca/blob/main/generate_instruction.py

The General-Instruct dataset used many of the same seed prompts as Alpaca, but also included specific examples of things we didn't see much of with Alpaca, such as chain-of-thought reasoning, logic puzzles, wordplay, and (light) role playing; the model was also asked to include its reasoning and thought steps in example responses where appropriate, among other things. The General-Instruct dataset is about 20,000 examples with just deduplication.

Still cleaning the Code-Instruct dataset; it will be up when it's cleaned.
Update: the Code-Instruct dataset has been uploaded! ~5,350 code task instructions in varying programming languages.

The Roleplay-Instruct dataset consists of tasks that take on the roles of characters, fictional and non-fictional, from various settings, with various personalities, etc.

Each dataset (except Roleplay) is split into five separate sets based on similarity-scored cleaning: a simple-dedupe-only set, plus sets cleaned at similarity thresholds ranging from <60% to <90%, as sketched below.
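For reference, here is a minimal sketch of what similarity-threshold cleaning could look like. The actual cleaning script isn't published in this repo, so the use of difflib.SequenceMatcher and the example records below are assumptions, not the real method:

```python
# Hedged sketch of similarity-threshold cleaning; the repo's actual
# cleaning method is unpublished, so SequenceMatcher is an assumption.
from difflib import SequenceMatcher

def clean_by_similarity(examples, threshold=0.9):
    """Keep an example only if it is < `threshold` similar to every
    example already kept (O(n^2); fine at this dataset scale)."""
    kept = []
    for ex in examples:
        text = ex["instruction"] + " " + ex.get("input", "")
        if all(
            SequenceMatcher(
                None, text, k["instruction"] + " " + k.get("input", "")
            ).ratio() < threshold
            for k in kept
        ):
            kept.append(ex)
    return kept

examples = [
    {"instruction": "Name three primary colors.", "input": "", "output": "Red, blue, yellow."},
    {"instruction": "Name the three primary colors.", "input": "", "output": "Red, blue, yellow."},
    {"instruction": "Explain photosynthesis.", "input": "", "output": "Plants convert light into energy."},
]
# The near-duplicate instruction should be filtered out at this threshold.
print(len(clean_by_similarity(examples, threshold=0.9)))
```

A lower threshold keeps only the most distinct examples, which is why the <60% sets are the smallest and the dedupe-only sets the largest.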

They are all made to be compliant with Alpaca's dataset format, i.e. each example has instruction, input, and output fields, which should make it easy to reuse the same fine-tuning script and process as Alpaca.
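Because every record follows the Alpaca schema, loading a split needs only the standard library. The filename below is illustrative; substitute whichever cleaned split you downloaded:

```python
import json

# Each split is a JSON list of Alpaca-format records:
# {"instruction": ..., "input": ..., "output": ...}
with open("gpt4-instruct-dedupe-only-dataset.json") as f:  # illustrative name
    records = json.load(f)

example = records[0]
prompt = example["instruction"]
if example["input"]:              # "input" is often an empty string
    prompt += "\n\n" + example["input"]
print(prompt)
print("->", example["output"])
```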

Documentation on the Toolformer section is coming soon. We generated a dataset for using a set of predefined tools, including search, python, terminal/shell, wikipedia, wolfram, and others.
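Pending that documentation, here is a hedged sketch of how a model trained on such data might have its tool calls dispatched to the predefined tools named above. The call syntax, handler names, and stub outputs are all hypothetical, not the dataset's actual format:

```python
import re
import subprocess

# Hypothetical tool handlers; the dataset's real call syntax and tool
# implementations are not documented yet, so everything here is an assumption.
TOOLS = {
    "python":    lambda arg: str(eval(arg)),  # demo only; never eval untrusted text
    "terminal":  lambda arg: subprocess.run(
        arg, shell=True, capture_output=True, text=True
    ).stdout.strip(),
    "search":    lambda arg: f"<search results for {arg!r}>",     # stub
    "wikipedia": lambda arg: f"<wikipedia summary for {arg!r}>",  # stub
    "wolfram":   lambda arg: f"<wolfram answer for {arg!r}>",     # stub
}

def dispatch(model_output: str) -> str:
    """Replace hypothetical tool(args) markers in model output with the
    tool's result, e.g. 'Two plus two is python(2+2).' -> '... is 4.'"""
    def run(match: re.Match) -> str:
        name, arg = match.group(1), match.group(2)
        if name not in TOOLS:
            return match.group(0)  # not a known tool call; leave untouched
        return TOOLS[name](arg)
    return re.sub(r"(\w+)\(([^)]*)\)", run, model_output)

print(dispatch("Two plus two is python(2+2)."))  # -> Two plus two is 4.
```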
