yaodongC/awesome-totally-open-chatgpt
Awesome Totally Open ChatGPT

ChatGPT is GPT-3.5 finetuned with RLHF (Reinforcement Learning with Human Feedback) for human instruction and chat.
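As a rough illustration of the RLHF idea, here is a minimal, self-contained policy-gradient sketch. All numbers and the stand-in "reward model" below are invented for illustration; real systems train a reward model from human preference data and optimize a large language model with PPO.

```python
import math

responses = ["rude reply", "off-topic reply", "helpful reply"]
reward = [0.1, 0.3, 1.0]   # stand-in for a learned reward model's scores
logits = [0.0, 0.0, 0.0]   # toy "policy": one preference score per response

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Policy-gradient ascent on expected reward:
#   d E[r] / d logit_i = p_i * (r_i - E[r])
lr = 0.5
for _ in range(200):
    p = softmax(logits)
    baseline = sum(pi * ri for pi, ri in zip(p, reward))
    logits = [l + lr * pi * (ri - baseline)
              for l, pi, ri in zip(logits, p, reward)]

p = softmax(logits)
print(responses[p.index(max(p))])  # the policy now favors the high-reward reply
```

The same loop, scaled up to a transformer policy, a learned reward model, and a KL penalty against the pretrained model, is the essence of the RLHF stage that several projects below implement.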

Alternatives are projects featuring different instruction-finetuned language models for chat. Projects are not counted if they:

  • are alternative frontend projects that simply call OpenAI's APIs, or
  • use language models that are not finetuned for human instruction or chat.

Tags:

  • Bare: source code only; no data, no model weights, no chat system
  • Standard: data and model weights available; bare chat via API
  • Full: data and model weights available, plus a polished chat system including TUI and GUI
  • Complicated: semi open source, not really open source, based on a closed model, etc.

Other relevant lists:

Table of Contents

  1. The template
  2. The list

The template

Append the new project at the end of the file:

## [{owner}/{project-name}](https://github.com/link/to/project)

Description goes here

Tags: Bare/Standard/Full/Complicated

The list

Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT, but with PaLM.

Tags: Bare

OpenChatKit provides a powerful, open-source base to create both specialized and general purpose chatbots for various applications.


Tags: Full

A gradio web UI for running Large Language Models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion.

Tags: Full

This is a browser-based front-end for AI-assisted writing with multiple local & remote AI models. It offers the standard array of tools, including Memory, Author’s Note, World Info, Save & Load, adjustable AI settings, formatting options, and the ability to import existing AI Dungeon adventures. You can also turn on Adventure mode and play the game like AI Dungeon Unleashed.

Tags: Full

OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.


Tags: Full

This is the repo for the Stanford Alpaca project, which aims to build and share an instruction-following LLaMA model.

Tags: Complicated

Other LLaMA-derived projects:

  • pointnetwork/point-alpaca Released weights recreated from Stanford Alpaca, an experiment in fine-tuning LLaMA on a synthetic instruction dataset.
  • tloen/alpaca-lora Code for reproducing the Stanford Alpaca results using low-rank adaptation (LoRA).
  • ggerganov/llama.cpp Port of LLaMA inference to C/C++ running on CPUs; supports Alpaca, GPT4All, etc.
  • setzer22/llama-rs Rust port of the llama.cpp project.
  • juncongmoo/chatllama Open source implementation for LLaMA-based ChatGPT runnable in a single GPU.
  • Lightning-AI/lit-llama Implementation of the LLaMA language model based on nanoGPT.
  • nomic-ai/gpt4all Demo, data and code to train an assistant-style large language model with ~800k GPT-3.5-Turbo Generations based on LLaMA.
  • hpcaitech/ColossalAI#ColossalChat An open-source solution for cloning ChatGPT with a complete RLHF pipeline.
  • lm-sys/FastChat An open platform for training, serving, and evaluating large language model based chatbots.
  • nsarrazin/serge A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy to use API.
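Of the techniques above, low-rank adaptation (LoRA, as used by tloen/alpaca-lora) is easy to appreciate with a little arithmetic: instead of updating a full d×d weight matrix W, it trains two thin matrices A (r×d) and B (d×r) and adds B·A to the frozen W. A sketch of the parameter savings, where the width 4096 (a typical LLaMA-7B projection size) and rank 8 (a common LoRA default) are assumptions for illustration:

```python
d = 4096   # width of one attention projection (assumed, typical for LLaMA-7B)
r = 8      # LoRA rank (assumed, a common default)

full_params = d * d          # parameters updated by full fine-tuning of W
lora_params = r * d + d * r  # parameters in A (r x d) plus B (d x r)

print(full_params, lora_params, full_params // lora_params)
# 16777216 65536 256  -> ~256x fewer trainable parameters per matrix
```

This is why LoRA fine-tuning of a 7B model fits on a single consumer GPU: only the small A and B matrices need gradients and optimizer state.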

ChatRWKV is like ChatGPT, but powered by the RWKV (100% RNN) language model, and open source.

Tags: Full

ChatGLM-6B is an open bilingual language model based on General Language Model (GLM) framework, with 6.2 billion parameters. With the quantization technique, users can deploy locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level).


Tags: Full
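The memory figure quoted above follows from simple arithmetic. A back-of-the-envelope sketch (weights only; activations, KV cache, and runtime overhead come on top, which is why roughly 6 GB rather than 3 GB is needed at INT4):

```python
params = 6.2e9  # ChatGLM-6B parameter count

for name, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    gb = params * bits / 8 / 1e9  # bits -> bytes -> gigabytes
    print(f"{name} weights: {gb:.1f} GB")
# FP16 weights: 12.4 GB
# INT8 weights: 6.2 GB
# INT4 weights: 3.1 GB
```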

This repository provides an overview of all components used for the creation of BLOOMZ & mT0 and xP3 introduced in the paper Crosslingual Generalization through Multitask Finetuning.


Tags: Standard

A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF), supporting online RL for models up to 20B parameters and offline RL for larger ones. Basically what you would use to finetune GPT into ChatGPT.

Tags: Bare

Script to fine-tune the GPT-J 6B model on the Alpaca dataset. Insightful if you want to fine-tune LLMs.


Tags: Standard

The goal of this project is to promote the development of an open-source community for Chinese large-scale conversational models. In addition to the original Stanford Alpaca recipe, this project optimizes for Chinese performance. The model is finetuned only on data generated via ChatGPT (no other data). This repo contains:

  • 175 Chinese seed tasks used for generating the data,
  • code for generating the data,
  • 0.5M generated examples used for fine-tuning the model,
  • a model finetuned from BLOOMZ-7B1-mt on the data generated by this project.


Tags: Standard

A minimal example of aligning language models with RLHF, similar to ChatGPT.


Tags: Standard

Seven open-source GPT-3-style models with parameter counts ranging from 111 million to 13 billion, trained using the Chinchilla formula. Model weights have been released under a permissive license (specifically, Apache 2.0).


Tags: Standard
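The Chinchilla formula referenced above boils down, as a rule of thumb, to roughly 20 training tokens per model parameter. A quick sketch for the smallest and largest models in this family:

```python
def chinchilla_tokens(params, tokens_per_param=20):
    """Compute-optimal training tokens per the ~20 tokens/param rule of thumb."""
    return params * tokens_per_param

for label, p in [("111M", 111e6), ("13B", 13e9)]:
    print(f"{label} params -> {chinchilla_tokens(p) / 1e9:.0f}B training tokens")
# 111M params -> 2B training tokens
# 13B params -> 260B training tokens
```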

Atmospheric adventure chat for the Pygmalion AI language model by default, with support for other backends and models such as KoboldAI, ChatGPT, and GPT-4.

Tags: Full

About

A list of totally open alternatives to ChatGPT
