Notice: From version v2.0.0, this project adopts a custom license. If you plan to use it for commercial purposes, please check the Usage Agreement section.
This repository mainly provides the following two features:
- Conversation with AI character
- AITuber streaming
I've written a detailed usage guide in the article below:
This project is developed in the following environment:
- Node.js: ^20.0.0
- npm: 10.8.1
1. Clone the repository to your local machine.

   ```bash
   git clone https://github.com/tegnike/aituber-kit.git
   ```

2. Open the folder.

   ```bash
   cd aituber-kit
   ```

3. Install packages.

   ```bash
   npm install
   ```

4. Start the application in development mode.

   ```bash
   npm run dev
   ```

5. Open http://localhost:3000 in your browser.
- This is a feature to converse with an AI character.
- It is an extended feature of pixiv/ChatVRM, which is the basis of this repository.
- It can be started easily as long as you have an API key for one of the supported LLMs.
- Recent conversation messages are retained as memory.
- It is multimodal, capable of recognizing images from the camera or uploaded images to generate responses.
- Enter your API key for various LLMs in the settings screen.
- OpenAI
- Anthropic
- Google Gemini
- Azure OpenAI
- Groq
- Cohere
- Mistral AI
- Perplexity
- Fireworks
- Local LLM
- Dify (Chatbot or Agent)
- Edit the character's setting prompt if necessary.
- Load a VRM file and background file if needed.
- Select a speech synthesis engine and configure voice settings if necessary.
- VOICEVOX: You can select a speaker from multiple options. The VOICEVOX app needs to be running beforehand.
- Koeiromap: You can finely adjust the voice. An API key is required.
- Google TTS: Languages other than Japanese can also be selected. Credential information is required.
- Style-Bert-VITS2: A local API server needs to be running.
- AivisSpeech: The AivisSpeech app needs to be running beforehand.
- GSVI TTS: A local API server needs to be running.
- ElevenLabs: A wide range of languages can be selected. An API key is required.
- OpenAI: API key is required.
- Azure OpenAI: API key is required.
- Start conversing with the character from the input form. Microphone input is also possible.
- It is possible to retrieve YouTube streaming comments and have the character speak.
- A YouTube API key is required.
- Comments starting with '#' are not read.
- Turn on YouTube mode in the settings screen.
- Enter your YouTube API key and YouTube Live ID.
- Configure other settings the same way as "Conversation with AI Character".
- Start streaming on YouTube and confirm that the character reacts to comments.
- Turn on conversation continuity mode so the character can keep talking even when there are no comments.
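The '#' filtering rule above can be sketched as a small helper (the function name is hypothetical, not code from this repository):

```typescript
// Hypothetical helper illustrating the rule that comments starting
// with '#' are skipped and not read aloud by the character.
export function shouldReadComment(comment: string): boolean {
  // Trim leading whitespace so "  #mod note" is also skipped.
  return !comment.trimStart().startsWith('#')
}
```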
- You can send requests to the server app via WebSocket and get responses.
- A separate server app needs to be prepared.
- Start the server app and open the `ws://127.0.0.1:8000/ws` endpoint.
- Turn on External Linkage Mode in the settings screen.
- Configure other settings the same way as "Conversation with AI Character".
- Send requests from the input form and confirm that responses are returned from the server app.
- You can try it immediately with this server app repository: tegnike/aituber-server
- For detailed settings, please read "Let's develop with a beautiful girl!! [Open Interpreter]".
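As a minimal sketch of the client side of this flow: the exact message schema is defined by your server app (see tegnike/aituber-server for a working reference), so the field names below are illustrative assumptions only.

```typescript
// Hypothetical request/response shapes for the external linkage mode.
// The real schema is whatever your WebSocket server app expects.
type ServerResponse = { text: string; role: string; emotion?: string }

export function buildRequest(userText: string): string {
  // Serialize the user's input for the server app.
  return JSON.stringify({ content: userText })
}

export function parseResponse(raw: string): ServerResponse {
  // Deserialize the server app's reply before the character speaks it.
  return JSON.parse(raw) as ServerResponse
}

// Usage against the endpoint from the steps above:
// const ws = new WebSocket('ws://127.0.0.1:8000/ws')
// ws.onopen = () => ws.send(buildRequest('Hello!'))
// ws.onmessage = (ev) => console.log(parseResponse(String(ev.data)).text)
```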
- This is a mode where the AI character automatically presents slides.
- You need to prepare slides and script files in advance.
- Proceed to the point where you can interact with the AI character.
- Place the slide folder and script file in the designated folder.
- Turn on Slide Mode in the settings screen.
- Press the Start Slide button to begin the presentation.
- For detailed settings, please read "AI Does Slide Presentations Now!!!!".
- This is a mode where you can interact with the character with low latency using OpenAI's Realtime API.
- Function execution can be defined.
- Select OpenAI or Azure OpenAI as the AI service.
- Turn on Realtime API mode.
- Use the microphone to talk to the character.
- Define new functions in src/components/realtimeAPITools.tsx and src/components/realtimeAPITools.json.
- Refer to the existing get_current_weather function as an example.
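Following the `get_current_weather` pattern, a new function might look like the hypothetical example below (the name, schema, and handler are illustrative assumptions, not code from the repository):

```typescript
// Hypothetical example of adding a Realtime API function.
// The schema entry would go in src/components/realtimeAPITools.json:
//
// {
//   "name": "get_current_time",
//   "description": "Returns the current time as an ISO 8601 string.",
//   "parameters": { "type": "object", "properties": {}, "required": [] }
// }
//
// ...and the matching handler would go in src/components/realtimeAPITools.tsx:
export function get_current_time(): string {
  return new Date().toISOString()
}
```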
- Change the VRM model data at `public/AvatarSample_B.vrm`. Do not change the file name.
- Change the background image at `public/bg-c.png`. Do not change the file name.
- Some configuration values can be read from the `.env` file.
- Values entered in the settings screen take precedence over those in the `.env` file.
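A minimal `.env` sketch is shown below. The variable names here are illustrative assumptions only; check the repository's `.env.example` (if provided) for the actual keys.

```env
# Illustrative only -- confirm the real variable names in the repository
OPENAI_API_KEY="sk-..."
YOUTUBE_API_KEY="..."
```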
- Hold the Alt (or Option) key to record, then release to send.
- Click the microphone button once to start recording, then click again to send.
- Settings and conversation history can be reset in the settings screen.
- Various settings are stored in the browser's local storage.
- Elements enclosed in code blocks are not read by TTS.
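The "code blocks are not read by TTS" rule can be sketched as follows (a hypothetical helper, not the repository's actual implementation):

```typescript
// Hypothetical sketch: strip fenced ``` blocks from a message
// before sending the remaining prose to speech synthesis.
export function stripCodeBlocks(text: string): string {
  return text.replace(/```[\s\S]*?```/g, '').trim()
}
```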
- You are AITuber Developer from Today | Nike-chan
- Let's develop with a beautiful girl!! [Open Interpreter]
- AI Does Slide Presentations Now!!!!
- Added Multimodal Features to AITuberKit, So Let's Have a Drink at Home with AI Character
- AITuberKit × Dify for Super Easy Chatbot Building
- Publishing Dify on the Internet with Xserver
- Try the Advanced Voice Mode Called Realtime API
We are seeking sponsors to continue our development efforts.
Your support will greatly contribute to the development and improvement of the AITuber Kit.
Plus multiple private sponsors
From version v2.0.0, this project adopts a custom license.

- **Non-Commercial Use**
  - Available for personal use, educational purposes, and other non-commercial purposes.
- **Commercial License**
  - A separate commercial license is required for commercial use.
  - For details, please check About License.
To add a new language to the project, follow these steps:
1. **Add Language File**:
   - Create a new language directory in the `locales` directory and create a `translation.json` file inside it.
   - Example: `locales/fr/translation.json` (for French)
2. **Add Translations**:
   - Add translations to the `translation.json` file, referring to existing language files.
3. **Update Language Settings**:
   - Open the `src/lib/i18n.js` file and add the new language to the `resources` object.

   ```javascript
   resources: {
     ...,
     fr: { // New language code
       translation: require("../../locales/fr/translation.json"),
     },
   },
   ```
4. **Add Language Selection Option**:
   - Add a new language option to the appropriate part of the UI (e.g., the language selection dropdown in the settings screen) so users can select it.

   ```jsx
   <select>
     ...,
     <option value="FR">French - Français</option>
   </select>
   ```
5. **Test**:
   - Verify that the application displays correctly in the new language.
This will add support for the new language to the project.
- You also need to add support for the voice language code.
- Add the new language code to the `getVoiceLanguageCode` function in the `Introduction` component.
```typescript
const getVoiceLanguageCode = (selectLanguage: string) => {
  switch (selectLanguage) {
    case 'JP':
      return 'ja-JP';
    case 'EN':
      return 'en-US';
    case 'ZH':
      return 'zh-TW';
    case 'zh-TW':
      return 'zh-TW';
    case 'KO':
      return 'ko-KR';
    case 'FR':
      return 'fr-FR';
    default:
      return 'ja-JP';
  }
}
```
- Add a new language README (`README_fr.md`), logo usage terms (`logo_licence_fr.md`), and VRM model usage terms (`vrm_licence_fr.md`) to the `docs` directory.