BehaviorRealizer
The Behavior Realizer module is one of the four main modules of the Greta system, as illustrated in the image below. It receives BML messages from the Behavior Planner in real time via the ActiveMQ messaging middleware (Snyder et al., 2011). After processing the multimodal behaviors specified in these BML messages, it returns a set of keyframes to the ActiveMQ network.
The Greta system follows the SAIBA framework. Hence, behavior scripts generated by the Behavior Planner and received by the Behavior Realizer are encoded with the syntax defined by the standard Behavior Markup Language (BML, 2011). Our BML Resolver submodule handles the semantic descriptions in BML: it interprets the behaviors specified in each BML request it receives. Three issues had to be addressed in our implementation of the BML Resolver:
- Initializing behavior timing;
- Composition of BML messages;
- Rebuilding the behavior surface form.
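As an illustration, a minimal BML block of the kind the Behavior Realizer consumes might look like the following. The element and attribute names follow the BML 1.0 standard; the `id` values, lexemes, and text are purely illustrative:

```xml
<bml id="bml1" composition="MERGE">
  <!-- Speech to be synthesized by the external TTS system -->
  <speech id="s1">
    <text>Hello, nice to meet you.</text>
  </speech>
  <!-- A gesture time-aligned to the speech via sync-point references -->
  <gesture id="g1" lexeme="GREET" start="s1:start" end="s1:end"/>
  <!-- A head nod starting 0.5 s into the block -->
  <head id="h1" lexeme="NOD" start="0.5"/>
</bml>
```

The sync-point references (e.g. `s1:start`) are what the BML Resolver turns into concrete timings when initializing behavior timing.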
If a new BML request arrives before the realization of a previous request has been completed, a composition mechanism is needed to resolve them. The BML standard defines three composition modes (selected via the composition attribute):
- Merging composition: the behaviors specified in the new BML block and in the previous blocks are computed and realized together as one request. In case of conflict, the new behaviors cannot modify behaviors defined in a previous request.
- Appending composition: the new BML block waits for the prior blocks to end before starting. All signals of the current block must finish and return to their rest states before any signal of the new block starts.
- Replacing composition: the behaviors specified in prior BML blocks are forced to stop when the new block arrives, and the behaviors defined in the new block are then realized as usual.
The replacing and merging compositions have not yet been integrated, as they require modeling the interruption and combination of gestures that are already being executed.
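The timing rule behind the three composition modes can be sketched as below. This is a hypothetical illustration, not the actual Greta scheduler API: the class, enum, and method names are assumptions, and real composition also has to handle the per-signal conflicts and interruptions described above.

```java
// Illustrative sketch of the BML composition timing rules (not Greta's API).
public class BmlScheduler {
    public enum Composition { MERGE, APPEND, REPLACE }

    /**
     * Earliest start time (seconds) for a newly arrived BML block, given the
     * end time of the pending block's last signal and the current time.
     */
    public static double startTime(Composition mode, double pendingBlockEnd, double now) {
        switch (mode) {
            case APPEND:
                // Wait until every signal of the prior block has finished.
                return Math.max(now, pendingBlockEnd);
            case REPLACE:
                // Prior signals are stopped; the new block starts immediately.
                return now;
            case MERGE:
            default:
                // Realized together with the pending block.
                return now;
        }
    }
}
```

With a pending block ending at t = 4.2 s and a new block arriving at t = 3.0 s, appending starts it at 4.2 s while replacing and merging start it at 3.0 s.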
The input of the module is specified in BML. It contains the text to be spoken and/or a set of nonverbal signals to be displayed. Facial expressions, gaze, gestures and torso movements are described symbolically in repository files of predefined behaviors. The Behavior Realizer also resolves any conflicts between signals scheduled on the same modality at the same time.

The agent's speech, which is also part of the BML input, is synthesized by an external TTS system. The TTS system provides the list of phonemes and their respective durations; this information is used to compute the lip movements.

When the Behavior Realizer receives no input, the agent does not remain still: it generates idle movements. A piece of animation is periodically computed and sent to the Player, which avoids an unnatural freezing of the agent.

For some modalities, such as the head or gaze, the Behavior Realizer also manages signals with infinite duration, i.e. signals with an a priori unknown ending time. In this way the agent can be made to keep its head turned until a BML command arising from a new communicative intention is generated by the Intent Planner module.
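The same-modality conflict check mentioned above can be illustrated with the following sketch. It is a hypothetical simplification, not Greta's actual classes: here the first-scheduled signal simply wins on each modality, whereas the real module may apply richer priority rules.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of resolving conflicts between signals scheduled
// on the same modality at the same time (names are not Greta's actual API).
public class ModalityFilter {
    public record Signal(String modality, double start, double end) {}

    /** Keeps, per modality, only signals that do not overlap an already accepted one. */
    public static List<Signal> resolve(List<Signal> scheduled) {
        List<Signal> accepted = new ArrayList<>();
        for (Signal s : scheduled) {
            boolean conflict = accepted.stream().anyMatch(a ->
                    a.modality().equals(s.modality())
                    && s.start() < a.end() && a.start() < s.end());
            if (!conflict) {
                accepted.add(s); // first-scheduled signal wins on this modality
            }
        }
        return accepted;
    }
}
```

For example, two overlapping head signals would be reduced to one, while a gesture signal on a different modality passes through untouched.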
Advanced
- Generating New Facial expressions
- Generating New Gestures
- Generating new Hand configurations
- Torso Editor Interface
- Creating an Instance for Interaction
- Create a new virtual character
- Creating a Greta Module in Java
- Modular Application
- Basic Configuration
- Signal
- Feedbacks
- From text to FML
- Expressivity Parameters
- Text-to-speech, TTS
- AUs from external sources
- Large language model (LLM)
- Automatic speech recognition (ASR)
- Extensions
- Integration examples