DeepGram
The DeepGram module can be used to recognize the user's spoken input (automatic speech recognition). To set it up:
- Make sure conda is installed on your system.
- Create an API key at https://deepgram.com/ and replace the API key in Common/Data/DeepASR/DeepGram (see the sketch after this list for how the key is used).
- Add the DeepGram module to your configuration.
- Enable the Deep speech module.
- Choose the port and language (the defaults can stay the same).
- Push the Listen button before talking.
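
For reference, below is a minimal sketch of how a Deepgram API key is typically used to obtain a transcript. It is an illustration based on Deepgram's public REST API, not the module's internal code; the audio file name and the language value are placeholders.

```python
# Minimal sketch: transcribe a local WAV file with the Deepgram REST API.
# Assumptions: the `requests` package is installed and DEEPGRAM_API_KEY holds
# the key created at https://deepgram.com/ (the same key placed in
# Common/Data/DeepASR/DeepGram). The file name and language are placeholders.
import os
import requests

DEEPGRAM_API_KEY = os.environ["DEEPGRAM_API_KEY"]

def transcribe_wav(path: str, language: str = "en") -> str:
    """Send a WAV file to Deepgram and return the transcript of the first channel."""
    with open(path, "rb") as audio:
        response = requests.post(
            "https://api.deepgram.com/v1/listen",
            params={"language": language},
            headers={
                "Authorization": f"Token {DEEPGRAM_API_KEY}",
                "Content-Type": "audio/wav",
            },
            data=audio,
        )
    response.raise_for_status()
    result = response.json()
    return result["results"]["channels"][0]["alternatives"][0]["transcript"]

if __name__ == "__main__":
    print(transcribe_wav("utterance.wav"))
```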
The DeepGram module can be connected to an LLM module and a Feedback module to create a demo configuration; a demo of this integration is described on the LLM - DeepASR integration page.