Shore
Shore is a solution for detecting and analyzing multiple faces. The software, written in C++, provides the following features:
- Position of the head, nose, mouth and eyes
- Information whether the eyes or the mouth are open or closed
- Gender classification
- Age estimation in years
- Recognition of facial expressions in % (“Happy”, “Surprised”, “Angry” and “Sad”)
- Detection of facial profiles
- Short-term memory for recognizing faces that reappear in the image
The purpose of this module is to gather the information generated by Shore so that it can be used by Greta (e.g. for mimicry).
The gathered information is:
```java
public double FaceId;             // required
public double PositionLeft;       // required
public double PositionTop;        // required
public double PositionRight;      // required
public double PositionBottom;     // required
public double Happy;              // required
public double Sad;                // required
public double Angry;              // required
public double Surprised;          // required
public double Female;             // required
public double Male;               // required
public double MouthOpen;          // required
public double LeftEyeClosed;      // required
public double RightEyeClosed;     // required
public double Age;                // required
public double AgeDeviation;       // required
public double LeftEyeX;           // required
public double LeftEyeY;           // required
public double RightEyeX;          // required
public double RightEyeY;          // required
public double LeftMouthCornerX;   // required
public double LeftMouthCornerY;   // required
public double RightMouthCornerX;  // required
public double RightMouthCornerY;  // required
public double NosetipX;           // required
public double NosetipY;           // required
```
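For illustration, here is a minimal sketch of how these fields could be exploited on the Greta side. The helper below is hypothetical (it is not part of the module), and the 50% threshold as well as the emotion labels are assumptions; it simply picks the dominant expression among the four recognized by Shore:

```java
// Hypothetical helper: picks the dominant expression among the four
// recognized by Shore, given the percentages received for one face.
// The 50% threshold and the returned labels are assumptions.
public static String dominantExpression(double happy, double sad,
                                        double angry, double surprised) {
    String best = "neutral";
    double bestScore = 50.0; // below this threshold, no expression is reported
    if (happy > bestScore)     { best = "joy";      bestScore = happy; }
    if (sad > bestScore)       { best = "sadness";  bestScore = sad; }
    if (angry > bestScore)     { best = "anger";    bestScore = angry; }
    if (surprised > bestScore) { best = "surprise"; bestScore = surprised; }
    return best;
}
```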
It is possible to simulate a mimicry process by connecting the ShoreReceiver module (Thrift project) to the Behavior Realizer. The ShoreReceiver module then sends signals according to the facial expression attributed to the user:
Thus, if the user seems to express joy, Greta will express joy too. The same applies to each of the four expressions recognized by Shore (joy, anger, surprise, sadness).
```java
private void sendFacialExpression(String emotion) {
    // Only send a new signal when the detected emotion changes
    if (!lastEmotion.equals(emotion)) {
        List<Signal> Fsignals = new ArrayList<Signal>();
        FaceSignal faceExp = new FaceSignal("faceexp");
        faceExp.setIntensity(1);
        faceExp.getTimeMarker("start").setValue(0);
        faceExp.getTimeMarker("end").setValue(8);
        faceExp.setReference(emotion);
        Fsignals.add(faceExp);
        lastEmotion = emotion;
        // Forward the signal to every registered SignalPerformer (e.g. the Behavior Realizer)
        for (SignalPerformer perf : signalPerformers) {
            perf.performSignals(Fsignals, "ShoreReceiver" + emotion, default_bml_mode);
        }
    }
}
```
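As a rough usage sketch, the receiver could forward the dominant expression to sendFacialExpression each time a Shore message arrives. This uses the hypothetical dominantExpression helper sketched above; the method name and field handling are assumptions, not part of the actual ShoreReceiver code:

```java
// Hypothetical glue code: forward the dominant expression of one Shore
// message to sendFacialExpression (names and threshold are assumptions).
private void onShoreMessage(double happy, double sad,
                            double angry, double surprised) {
    String emotion = dominantExpression(happy, sad, angry, surprised);
    if (!emotion.equals("neutral")) {
        sendFacialExpression(emotion);
    }
}
```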
At the moment, since there is no interpolation between two consecutive facial expressions, the mimicry is not very smooth. We would also need to send "infinite" signals, i.e. signals without an "end" TimeMarker.
Another needed improvement is to integrate the work of Tadas Baltrušaitis in order to gather more precise information from the user's face.