
Gaze behavior in a group



Eye contact and mutual gaze between two or more participants in a conversation are important cues in social interaction. Gazing at someone is a non-verbal signal that helps regulate social attention and influence the receiver.

Humans are highly sensitive to their social partners' gaze signals and adapt their behaviour accordingly. Without any eye contact, people do not feel fully immersed in a conversation and have no clue about how interested the other person is.

We created a simple model where the agent knows the user's head position and can gaze at him/her, with mutual-gaze and gaze-away durations that can vary with personality. For example, a shy person might look less at the other person's eyes, and therefore have a shorter mutual gaze time and a more frequent gaze-away behaviour.

In the current model implemented in Greta, the durations of mutual gaze and gaze away are set by the platform user.

One problem is to understand whether the user is gazing at the agent or not. Without an eye tracker this can be difficult. One option would have been to use only the rotation angle of the user's head and set a threshold deciding whether the user gazes at the agent or not. The way it is actually done is different: the user's head can be rotated and shifted away from the central position while still gazing at the agent, so using only the rotation loses information. To get a better estimate of the user's gaze direction, both the head position and the head rotation are used. The head position is used to compute the head angle the user should have to look at the agent's face. The angle computed from the head position is then compared with the actual head rotation extracted by the camera: if the two angles are similar, the user is looking at the agent, and vice versa.

drawing

Looking at the picture, using the x and z positions of the user's head we can calculate α as α = arcsin(x / √(x² + z²)). The angle found this way is then compared to the one already computed by the external software.
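
As an illustration, here is a minimal Java sketch of this comparison. The class name and the 10-degree tolerance are not taken from the Greta code base; they are assumptions chosen for the example.

```java
// Minimal sketch (hypothetical helper): decide whether the user is gazing at the agent
// by comparing the head yaw needed to face the agent with the yaw reported by the tracker.
public final class UserGazeCheck {

    /** Maximum difference (radians) between the two angles to still count as gazing at the agent (assumed value). */
    private static final double TOLERANCE = Math.toRadians(10.0);

    /**
     * @param x       lateral offset of the user's head with respect to the agent
     * @param z       distance of the user's head from the agent along the depth axis
     * @param headYaw head rotation (yaw, radians) extracted by the external tracker
     * @return true if the user is considered to be looking at the agent
     */
    public static boolean isGazingAtAgent(double x, double z, double headYaw) {
        // alpha = arcsin(x / sqrt(x^2 + z^2)): the yaw the head should have to face the agent
        double alpha = Math.asin(x / Math.sqrt(x * x + z * z));
        return Math.abs(alpha - headYaw) <= TOLERANCE;
    }
}
```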

The model is designed to handle two main situations:

  • One agent and the user
  • Several agents (or several agents and the user)

Note: the camera and Greta coordinate reference systems are different, as shown in the figure below:

drawing

In order to make the virtual agent gaze at the external user, the basic configuration should also include the following modules:

  1. AgentGazeUser
  2. User
  3. Feedbacks
  4. SSITranslator
  5. ActiveMQBroker

drawing

The ActiveMQBroker needs to be added to create an ActiveMQ connection that the SSITranslator module can use to receive XML messages (example XML message: https://github.com/gretaproject/greta/blob/master/bin/Examples/SSI/SSI-Sample-Input.xml) carrying the user information from an external software. Any software that can use ActiveMQ and send the user's head position, head rotation and voice intensity can communicate with the Greta platform via the SSITranslator module.
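
For illustration, below is a minimal sketch of an external sender publishing one SSI-style XML message to ActiveMQ through the standard JMS API. The broker URL and the topic name are assumptions and must match whatever the SSITranslator module is configured to listen on in your setup.

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

// Minimal sketch of an external sender: publishes one SSI-style XML message to ActiveMQ.
public class SsiMessageSender {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed broker URL
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("SSI"); // assumed topic name
        MessageProducer producer = session.createProducer(topic);

        // XML payload following the sample linked above (head position/rotation, voice intensity)
        String xml = new String(java.nio.file.Files.readAllBytes(
                java.nio.file.Paths.get("SSI-Sample-Input.xml")));
        TextMessage message = session.createTextMessage(xml);
        producer.send(message);

        connection.close();
    }
}
```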

The User module receives the information about the user and applies some transformations. Through the User module interface, the position of the camera with respect to the actual position of the agent's face can be set. Imagine the webcam sits on top of a large screen while the agent's face is in the centre of the screen: this difference can make the agent look in the wrong direction, because the position and rotation of the user's head are computed with respect to the camera.
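
A minimal sketch of this compensation, with assumed field and method names (the actual User module code may differ), could look like this:

```java
// Minimal sketch: the tracker reports the head position in the camera frame, so the
// offset between the camera and the agent's face must be removed before computing the gaze target.
public class CameraOffset {
    // Offset of the camera with respect to the agent's face, in Greta coordinates
    // (e.g. the webcam sits 0.25 m above the centre of the screen).
    private final double offsetX, offsetY, offsetZ;

    public CameraOffset(double offsetX, double offsetY, double offsetZ) {
        this.offsetX = offsetX;
        this.offsetY = offsetY;
        this.offsetZ = offsetZ;
    }

    /** Converts a head position measured by the camera into a position relative to the agent's face. */
    public double[] toAgentFrame(double camX, double camY, double camZ) {
        return new double[] { camX + offsetX, camY + offsetY, camZ + offsetZ };
    }
}
```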

The User module interface, shown in the figure below, allows this difference between the camera position and the agent's face position to be compensated.

drawing

Once the User module is added to the configuration, a node is created in the TreeNode environment so that the user's position and orientation are always accessible. These are updated every time the SSITranslator receives information from the external software and forwards it to the User module.

The AgentGazeUser module receives the information from the User module and checks, for each frame, whether the user is looking at the agent (gaze status = 0) or not (gaze status = 1). The gaze status of the last 6 frames is stored in a vector and its mean is computed to classify the user state as gazing at the agent or looking away.
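
A minimal sketch of this 6-frame smoothing (class and method names are hypothetical, not the actual AgentGazeUser code):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch: the gaze status of the last 6 frames is stored and its mean decides
// whether the user is classified as gazing at the agent (0) or looking away (1).
public class GazeStatusSmoother {
    private static final int WINDOW = 6;
    private final Deque<Integer> lastStatuses = new ArrayDeque<>();

    /**
     * @param frameStatus 0 if the user looks at the agent in this frame, 1 otherwise
     * @return the smoothed status once 6 frames are available, or the raw status before that
     */
    public int update(int frameStatus) {
        lastStatuses.addLast(frameStatus);
        if (lastStatuses.size() < WINDOW) {
            return frameStatus;
        }
        double mean = lastStatuses.stream().mapToInt(Integer::intValue).average().orElse(frameStatus);
        lastStatuses.clear(); // start a new 6-frame window
        return mean < 0.5 ? 0 : 1; // the majority of the frames decides
    }
}
```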

If the user is talking, the agent looks at the user until the speech ends; otherwise, the model of Y. Zhang et al. (2017) is followed to decide whether to look at the user or look away.

The main idea of the model is to check the gaze status of both agent and user and, according to the combination, send the appropriate GazeSignal to the agent:

drawing

drawing

When the configuration is opened, the starting condition is that both the user and the agent are looking away; the loop then keeps checking how much time has passed and how the agent's gaze status should change, based on the following durations (a minimal sketch of this loop is given after the list):

  • T mutual gaze: how long the agent can look at the user
  • T lookaway: how long the agent looks away from the user
  • T both_mutualgaze: how long before breaking the mutual gaze
  • T both_lookaway: after how long the agent tries to start the eye contact
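
The sketch below shows one plausible reading of how these four durations drive the loop; it is not the actual Greta implementation, and the default values are placeholders (in the real configuration they are set through the AgentGazeUser interface).

```java
// Minimal sketch of the timing loop: which duration applies depends on the current
// combination of agent and user gaze status, and on how long that combination has lasted.
public class GazeDecision {
    // Durations in milliseconds; placeholder values.
    long tMutualGaze = 3000;     // how long the agent can look at the user
    long tLookAway = 2000;       // how long the agent looks away from the user
    long tBothMutualGaze = 4000; // how long before breaking a mutual gaze
    long tBothLookAway = 1500;   // after how long the agent tries to start eye contact

    boolean agentLooksAtUser = false; // start condition: both looking away
    long stateStart = System.currentTimeMillis();

    /** Called every frame with the smoothed user gaze status (true = user looks at the agent). */
    public void update(boolean userLooksAtAgent) {
        long elapsed = System.currentTimeMillis() - stateStart;
        if (agentLooksAtUser && userLooksAtAgent) {
            if (elapsed > tBothMutualGaze) lookAway();   // break the mutual gaze
        } else if (agentLooksAtUser) {
            if (elapsed > tMutualGaze) lookAway();       // stop gazing at an unresponsive user
        } else if (userLooksAtAgent) {
            if (elapsed > tLookAway) lookAtUser();       // respond to the user's gaze
        } else {
            if (elapsed > tBothLookAway) lookAtUser();   // try to (re)start the eye contact
        }
    }

    private void lookAtUser() { agentLooksAtUser = true;  stateStart = System.currentTimeMillis(); /* send GazeSignal targeting the user */ }
    private void lookAway()   { agentLooksAtUser = false; stateStart = System.currentTimeMillis(); /* send GazeSignal with a random offset direction */ }
}
```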

If the agent has to look at the user, a GazeSignal with the user as target is sent to the behaviour realizer. If the agent has to look away, the look-away direction is chosen randomly from a list of offset gaze directions, with an offset angle of 15 degrees.
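
A minimal sketch of this random choice (the set of direction names is an assumption made for the example):

```java
import java.util.List;
import java.util.Random;

// Minimal sketch: pick a random entry from a list of offset gaze directions,
// to be combined with a fixed offset angle of 15 degrees in the GazeSignal.
public class LookAwayPicker {
    enum GazeDirection { UP, DOWN, LEFT, RIGHT, UPLEFT, UPRIGHT, DOWNLEFT, DOWNRIGHT } // assumed set

    static final double OFFSET_ANGLE_DEGREES = 15.0;
    private static final List<GazeDirection> DIRECTIONS = List.of(GazeDirection.values());
    private final Random random = new Random();

    /** Picks the direction for the next look-away GazeSignal; the offset angle stays at 15 degrees. */
    public GazeDirection pickDirection() {
        return DIRECTIONS.get(random.nextInt(DIRECTIONS.size()));
    }
}
```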

These durations can be changed in the AgentGazeUser module interface (figure below).

drawing

The interface also shows, in real time, how the agent and user gaze statuses change, as well as the intensity of the user's voice. A threshold box lets the platform user choose the intensity above which the user is considered to be talking.

The Feedbacks module gives information about the agent's actions: when it starts to speak and when it stops. It can also give information about the start and end of each signal. In this configuration it is mainly used to know when the agent is speaking.

As shown in the figure below, to have several agents and the user looking at each other we need to add an AgentGazeUser and a Feedbacks module for each character, and the ConversationalGroup module is needed to connect the user and the agents as part of a group.

drawing

This module takes the information about each agent and the user and, checking the mutual gaze and gaze-away durations of each participant, decides the GazeSignal to be performed by the agents.

If one of the participants in the conversation is speaking, all the agents look at the speaker until the end of the speech and then randomly look at someone else or away.
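
A minimal sketch of this rule, with hypothetical types and method names (not the actual ConversationalGroup code):

```java
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;

// Minimal sketch: while somebody is speaking every agent gazes at the speaker;
// when no one speaks, each agent randomly picks another participant to look at, or looks away.
public class GroupGazeRule {
    private final Random random = new Random();

    /** Returns the id of the participant to gaze at, or null to look away. */
    public String chooseTarget(String agentId, String speakerId, List<String> participantIds) {
        if (speakerId != null && !speakerId.equals(agentId)) {
            return speakerId; // everybody looks at the current speaker
        }
        // No speech (or this agent is the speaker): random other participant, or look away.
        List<String> others = participantIds.stream()
                .filter(id -> !id.equals(agentId))
                .collect(Collectors.toList());
        int choice = random.nextInt(others.size() + 1);
        return choice == others.size() ? null : others.get(choice);
    }
}
```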

When the speaker starts, it first looks away from the agents to signal that it does not want to be interrupted, and then, during the speech, it starts looking at the other participants.

TO DO

  • Improve the code that determines whether the user is gazing at the agent or not (it would be better to have the eye orientation)
  • Replace the random choice of the gaze target. The agent should look at the other participants of the group taking into account the role of each one in the conversation, dominance, affiliation or other variables
