Code for the Interactive and Adaptive Virtual Agent (IAVA) system, presented at IVA 2023.
Full Paper: "IAVA: Interactive and Adaptive Virtual Agent" [IVA 2023].
Demo Paper: "Conducting Cognitive Behavioral Therapy with an Adaptive Virtual Agent" [IVA 2023].
IAVA is an interactive virtual agent system that generates real-time adaptive behaviors in response to its human interlocutor.
It covers two aspects:
- generating real-time adaptive behavior,
- managing natural dialogue.
The agent adapts to its interlocutor both linguistically (choosing its next conversational move via a dialogue manager that includes a BERT-based (LLM-based) automatic thought classifier) and nonverbally (displaying reciprocally adaptive facial gestures via the ASAP model).
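As an illustration of the linguistic side, below is a minimal sketch of how a BERT-based classifier might drive the choice of the next conversational move. The checkpoint name, labels, and move names are hypothetical placeholders, not the trained IAVA model:

```python
# Illustrative sketch: a BERT-based text classifier labels the user's
# utterance, and the dialogue manager maps the label to the next move.
# "bert-base-uncased" is a placeholder checkpoint, not the IAVA model.
from transformers import pipeline

classifier = pipeline("text-classification", model="bert-base-uncased")

def next_move(utterance: str) -> str:
    label = classifier(utterance)[0]["label"]
    # Hypothetical mapping: probe a detected negative automatic thought,
    # otherwise continue the dialogue.
    return "explore_thought" if label == "LABEL_1" else "continue_dialogue"

print(next_move("I always fail at everything I try."))
```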
Cognitive Behavioral Therapy (CBT) is chosen as our proof of concept: the agent acts as a therapist helping human users detect their negative automatic thoughts.
The IAVA system simulates a dyadic interaction between a human user and an adaptive CBT agent. The system loop, consisting of user multimodal signal perception, agent behavior generation, signal communication, and visualization, runs at a frame rate of 25 fps.
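For illustration, here is a minimal sketch of such a fixed-rate loop (40 ms per frame); the perceive/generate/send callables are placeholders for the actual IAVA components:

```python
# Minimal sketch of a fixed-rate perception/generation loop at 25 fps.
import time

FRAME_PERIOD = 1.0 / 25.0  # 40 ms per frame

def run_loop(perceive, generate, send):
    while True:
        start = time.perf_counter()
        user_signals = perceive()                 # user multimodal signal perception
        agent_behavior = generate(user_signals)   # agent behavior generation
        send(agent_behavior)                      # signal communication to Greta
        # Sleep for the remainder of the frame to hold the 25 fps budget.
        elapsed = time.perf_counter() - start
        time.sleep(max(0.0, FRAME_PERIOD - elapsed))
```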
A database of 60 CBT human-agent interactions (the CBT HAI DB) has been collected using the IAVA system.
If you are interested in the CBT HAI DB, please contact the authors of the IAVA system.
IAVA consists of two main modules:
- Perception and Generation (PerceptNGen): computes the user multimodal signal perception and the agent behavior generation (via the ASAP model) in real time, and sends the agent's nonverbal behavior to the Greta platform for display,
- IAVA for Greta (IAVA4Greta): contains the IAVA elements for real-time signal communication and visualization, which need to be integrated into the Greta platform (Greta Main GITHUB).
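As a sketch of the perception/communication side, the snippet below subscribes to an OpenFace ZeroMQ stream (see the launch instructions below). The endpoint address and message format are assumptions, not the exact IAVA protocol:

```python
# Sketch: receive per-frame OpenFace features over ZeroMQ.
# The endpoint is hypothetical; check your OpenFace streaming settings.
import zmq

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://127.0.0.1:5000")       # hypothetical OpenFace endpoint
socket.setsockopt_string(zmq.SUBSCRIBE, "")  # receive all messages

while True:
    frame = socket.recv_string()  # e.g. one line of facial features per frame
    # ...parse the features and feed them to the behavior-generation model...
```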
Each module is detailed in the sub-readme file within its directory.
Please follow the integration instructions in each sub-readme file.
To use the system, please follow these steps:

- Launch OpenFace streaming with ZeroMQ and select the "Record" options.
- Launch the Greta platform and open one of the following configuration files:
  - "Greta-IAVA-Cereproc.xml": using the IAVA pop-up window, launch the socket connection (with the "Enable" button) after entering your IP address (please leave the port number as it is).
  - "Greta - ASR IAVA.xml": the operation with ASR is done in two steps:
    - Open the ASR window. Using the Google Chrome browser (in an incognito window), launch the ASR system at: https://127.0.0.1:8087
    - Using the IAVA pop-up window, launch the socket connection (with the "Enable" button) after entering your IP address (please leave the port number as it is), as for "Greta-IAVA-Cereproc.xml".
- Launch the real-time IAVA system: first configure the socket connection by entering your IP addresses (please leave the port number as it is) in config.ini, then run the system with the main script:

  python realtime_PerceptNGen.py
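For reference, here is a hypothetical sketch of how config.ini could be read; the section and key names are illustrative, not necessarily those used by IAVA:

```python
# Sketch: reading the socket connection settings from config.ini.
# Section and key names below are hypothetical examples.
from configparser import ConfigParser

config = ConfigParser()
config.read("config.ini")

ip = config.get("socket", "ip_address")   # your machine's IP address
port = config.getint("socket", "port")    # left at its default value
print(f"Connecting to Greta at {ip}:{port}")
```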
After closing the system program, the following output CSV files will be available (detailed in the realtimeASAP sub-readme):
- audio signals of the user,
- OpenFace and openSMILE features of the user and the agent.
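For post-session analysis, a minimal sketch of loading such CSV files with pandas; the file names are hypothetical (see the realtimeASAP sub-readme for the actual ones):

```python
# Sketch: inspect the recorded feature files after a session.
# File names are hypothetical placeholders.
import pandas as pd

user_features = pd.read_csv("user_openface_features.csv")
agent_features = pd.read_csv("agent_openface_features.csv")
print(user_features.head())
```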
The original code for the ASAP model can be found here: ASAP GITHUB