AITuberKit

Notice: From version v2.0.0, this project adopts a custom license. If you plan to use it for commercial purposes, please check the Usage Agreement section.


Overview

This repository mainly provides the following two features:

  1. Conversation with AI character
  2. AITuber streaming

I've written a detailed usage guide in the article below:

You are AITuber Developer from Today | Nike-chan

Development Environment

This project is developed in the following environment:

  • Node.js: ^20.0.0
  • npm: 10.8.1

Common Preparations

  1. Clone the repository to your local machine.
git clone https://github.com/tegnike/aituber-kit.git
  2. Open the folder.
cd aituber-kit
  3. Install packages.
npm install
  4. Start the application in development mode.
npm run dev
  5. Open the URL http://localhost:3000

Conversation with AI Character

  • This is a feature to converse with an AI character.
  • It is an extended feature of pixiv/ChatVRM, which is the basis of this repository.
  • You can get started easily as long as you have an API key for one of the supported LLMs.
  • Recent conversation history is retained as memory.
  • It is multimodal, capable of recognizing images from the camera or uploaded images to generate responses.

Usage

  1. Enter your API key for various LLMs in the settings screen.
    • OpenAI
    • Anthropic
    • Google Gemini
    • Azure OpenAI
    • Groq
    • Cohere
    • Mistral AI
    • Perplexity
    • Fireworks
    • Local LLM
    • Dify (Chatbot or Agent)
  2. Edit the character's setting prompt if necessary.
  3. Load a VRM file and background file if needed.
  4. Select a speech synthesis engine and configure voice settings if necessary.
    • VOICEVOX: You can select a speaker from multiple options. The VOICEVOX app needs to be running beforehand.
    • Koeiromap: You can finely adjust the voice. An API key is required.
    • Google TTS: Languages other than Japanese can also be selected. Credential information is required.
    • Style-Bert-VITS2: A local API server needs to be running.
    • AivisSpeech: The AivisSpeech app needs to be running beforehand.
    • GSVI TTS: A local API server needs to be running.
    • ElevenLabs: Supports a wide variety of languages. An API key is required.
    • OpenAI: API key is required.
    • Azure OpenAI: API key is required.
  5. Start conversing with the character from the input form. Microphone input is also possible.

AITuber Streaming

  • The app can retrieve comments from a YouTube live stream and have the character respond to them.
  • A YouTube API key is required.
  • Comments starting with '#' are not read.

Usage

  1. Turn on YouTube mode in the settings screen.
  2. Enter your YouTube API key and YouTube Live ID.
  3. Configure other settings the same way as "Conversation with AI Character".
  4. Start streaming on YouTube and confirm that the character reacts to comments.
  5. Turn on conversation continuity mode so the character can keep speaking even when there are no comments.

Other Features

External Linkage Mode

  • You can send requests to a server app via WebSocket and receive responses.
  • You need to prepare the server app separately.

Usage

  1. Start the server app and expose the ws://127.0.0.1:8000/ws WebSocket endpoint.
  2. Turn on External Linkage Mode in the settings screen.
  3. Configure other settings the same way as "Conversation with AI Character".
  4. Send requests from the input form and confirm that responses are returned from the server app.
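
For reference, below is a minimal sketch of such a server app written in TypeScript with the ws npm package. The message format AITuberKit actually exchanges over the socket is not described in this README, so the JSON field used here (text) is a placeholder; adapt it to the format documented in the repository.

// Minimal external-linkage server sketch (assumption: Node.js with the "ws"
// package installed; the request/response fields are placeholders, not the
// schema AITuberKit actually uses).
import { WebSocketServer } from 'ws'

const wss = new WebSocketServer({ port: 8000, path: '/ws' })

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    // Parse the incoming request and build a reply for the character to speak.
    const request = JSON.parse(data.toString())
    const reply = { text: `Received: ${request.text}` } // placeholder fields
    socket.send(JSON.stringify(reply))
  })
})

console.log('External linkage server listening on ws://127.0.0.1:8000/ws')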

Related

Slide Mode

  • This is a mode where the AI character automatically presents slides.
  • You need to prepare slides and script files in advance.

Usage

  1. Proceed to the point where you can interact with the AI character.
  2. Place the slide folder and script file in the designated folder.
  3. Turn on Slide Mode in the settings screen.
  4. Press the Start Slide button to begin the presentation.

Related

Realtime API Mode

  • This is a mode where you can interact with the character with low latency using OpenAI's Realtime API.
  • You can also define functions for the AI to execute.

Usage

  1. Select OpenAI or Azure OpenAI as the AI service.
  2. Turn on Realtime API mode.
  3. Use the microphone to talk to the character.

Function Execution

  • Define new functions in src/components/realtimeAPITools.tsx and src/components/realtimeAPITools.json.
  • Refer to the existing get_current_weather function as an example.
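
As a rough illustration, a new function might be wired up as sketched below. This is an assumption-laden sketch: get_local_time is a made-up example name, and the exact shapes expected by realtimeAPITools.tsx and realtimeAPITools.json may differ from what is shown, so treat the existing get_current_weather entries in the repository as the authoritative reference.

// Hypothetical schema entry (realtimeAPITools.json stores this as JSON; it is
// shown here as a TypeScript object and assumes an OpenAI function-calling
// style schema, modeled on the existing get_current_weather example).
const getLocalTimeSchema = {
  name: 'get_local_time',
  description: 'Returns the current local time for a given IANA time zone.',
  parameters: {
    type: 'object',
    properties: {
      timezone: { type: 'string', description: 'e.g. Asia/Tokyo' },
    },
    required: ['timezone'],
  },
}

// Hypothetical implementation to add in realtimeAPITools.tsx.
function get_local_time(timezone: string): string {
  return new Date().toLocaleString('en-US', { timeZone: timezone })
}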

TIPS

VRM Model and Background Fixing Method

  • Replace the VRM model data at public/AvatarSample_B.vrm, keeping the file name unchanged.
  • Replace the background image at public/bg-c.png, keeping the file name unchanged.

Setting Environment Variables

  • Some configuration values can be read from the .env file.
  • Values entered in the settings screen take precedence over those in the .env file.
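
For example, an .env entry might look like the following. The variable names here are placeholders for illustration only, not the keys the project actually reads; check the .env.example file in the repository (if present) or the project documentation for the real variable names.

# Placeholder example only -- the actual variable names are defined by the
# project and may differ from these.
NEXT_PUBLIC_OPENAI_API_KEY=your-api-key
NEXT_PUBLIC_YOUTUBE_API_KEY=your-api-key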

Microphone Input Methods (2 Patterns)

  1. Hold down the Alt (or Option) key to record, and release it to send.
  2. Click the microphone button once to start recording, and click it again to send.

Other

  • Settings and conversation history can be reset in the settings screen.
  • Various settings are stored in the browser's local storage.
  • Elements enclosed in code blocks are not read by TTS.

Related Articles

Seeking Sponsors

We are seeking sponsors to continue our development efforts.
Your support will greatly contribute to the development and improvement of the AITuber Kit.

GitHub Sponsor

Our Supporters (in order of support)

morioki3 hodachi-axcxept coderabbitai ai-bootcamp-tokyo wmoto-ai JunzoKamahara darkgaldragon usagi917 ochisamu mo0013 tsubouchi bunkaich seiki-aliveland rossy8417 gijigae takm-reason haoling FoundD-oka terisuke konpeita

Plus multiple private sponsors

Usage Agreement

License

From version v2.0.0, this project adopts a custom license.

  • Non-Commercial Use

    • Available for personal use, educational purposes, and other non-profit purposes.
  • Commercial License

    • A separate commercial license is required for commercial use.
    • For details, please check About License.

Others

Tips for Contributors

How to Add a New Language

To add a new language to the project, follow these steps:

  1. Add Language File:

    • Create a new language directory in the locales directory and create a translation.json file inside it.
    • Example: locales/fr/translation.json (for French)
  2. Add Translations:

    • Add translations to the translation.json file, referring to existing language files.
  3. Update Language Settings:

    • Open the src/lib/i18n.js file and add the new language to the resources object.
    resources: {
      ...,
      fr: {  // New language code
        translation: require("../../locales/fr/translation.json"),
      },
    },
  4. Add Language Selection Option:

    • Add a new language option to the appropriate part of the UI (e.g., language selection dropdown in the settings screen) so users can select the language.
    <select>
      ...,
      <option value="FR">French - Français</option>
    </select>
  5. Test:

    • Test if the application displays correctly in the new language.

This will add support for the new language to the project.

Adding Voice Language Code

  • You also need to add support for the voice language code.
  • Add the new language code to the getVoiceLanguageCode function in the Introduction component.
// Maps the selected UI language to the locale code used for speech synthesis.
const getVoiceLanguageCode = (selectLanguage: string) => {
  switch (selectLanguage) {
    case 'JP':
      return 'ja-JP';
    case 'EN':
      return 'en-US';
    case 'ZH':
    case 'zh-TW':
      return 'zh-TW';
    case 'KO':
      return 'ko-KR';
    case 'FR': // newly added language
      return 'fr-FR';
    default:
      return 'ja-JP';
  }
}

Adding README

  • Add a new language README (README_fr.md), logo usage terms (logo_licence_fr.md), and VRM model usage terms (vrm_licence_fr.md) to the docs directory.