
Shore

What the hell is Shore?

Shore is a solution that detects multiple faces and analyses them. The software, written in C++, provides the following features:

  • Position of the head, nose, mouth and eyes
  • Information whether the eyes or the mouth are open or closed
  • Gender classification
  • Age estimation in years
  • Recognition of facial expressions in % (“Happy”, “Surprised”, “Angry” and “Sad”)
  • Detection of facial profiles
  • Short term memory for recognition of faces, which appear again in the image
You can find an example of the camera output below:

The ShoreReceiver Java Class

The purpose of this module is to gather the information generated by Shore so that it can be used by Greta (e.g. for mimicry).

The gathered fields are:

public double FaceId; // required
public double PositionLeft; // required
public double PositionTop; // required
public double PositionRight; // required
public double PositionBottom; // required
public double Happy; // required
public double Sad; // required
public double Angry; // required
public double Surprised; // required
public double Female; // required
public double Male; // required
public double MouthOpen; // required
public double LeftEyeClosed; // required
public double RightEyeClosed; // required
public double Age; // required
public double AgeDeviation; // required
public double LeftEyeX; // required
public double LeftEyeY; // required
public double RightEyeX; // required
public double RightEyeY; // required
public double LeftMouthCornerX; // required
public double LeftMouthCornerY; // required
public double RightMouthCornerX; // required
public double RightMouthCornerY; // required
public double NosetipX; // required
public double NosetipY; // required
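
As an illustration, a consumer of this data could look like the following minimal sketch. The ShoreFrame holder class and the dominantExpression helper are hypothetical names introduced here for clarity; they are not part of the actual module, and the expression labels are assumed to match the references expected downstream.

public class ShoreFrame {
	// Hypothetical holder for one frame of Shore data; the fields mirror the list above.
	public double FaceId;
	public double Happy, Sad, Angry, Surprised;
	public double MouthOpen, LeftEyeClosed, RightEyeClosed;
	public double Age, AgeDeviation;

	// Returns the label of the strongest expression, or null if none exceeds the threshold.
	// Shore reports expressions in percent, so the threshold is a percentage too.
	public String dominantExpression(double threshold) {
		double[] scores = {Happy, Sad, Angry, Surprised};
		String[] labels = {"joy", "sadness", "anger", "surprise"};
		String best = null;
		double bestScore = threshold;
		for (int i = 0; i < scores.length; i++) {
			if (scores[i] > bestScore) {
				bestScore = scores[i];
				best = labels[i];
			}
		}
		return best;
	}
}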

Mimicry

It's possible to simulate a mimicry process by connecting the ShoreReceiver module (Thrift project) to the Behavior Realizer. The ShoreReceiver module then sends signals according to the facial expression attributed to the user:

Thus, if the user seems to express joy, Greta will express joy too. It works in the same way for the four expressions registered by Shore (joy, anger, surprise, sadness).

private void sendFacialExpression(String emotion) {
	// Only send a new signal when the detected emotion changes.
	if (!lastEmotion.equals(emotion)) {
		List<Signal> Fsignals = new ArrayList<Signal>();
		// Build a face signal referencing the detected emotion, at full intensity,
		// starting immediately and ending after 8 seconds.
		FaceSignal faceExp = new FaceSignal("faceexp");
		faceExp.setIntensity(1);
		faceExp.getTimeMarker("start").setValue(0);
		faceExp.getTimeMarker("end").setValue(8);
		faceExp.setReference(emotion);
		Fsignals.add(faceExp);
		lastEmotion = emotion;
		// Forward the signal to every registered SignalPerformer (e.g. the Behavior Realizer).
		for (SignalPerformer perf : signalPerformers) {
			perf.performSignals(Fsignals, "ShoreReceiver" + emotion, default_bml_mode);
		}
	}
}
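
For instance, assuming the hypothetical ShoreFrame sketch above, each incoming frame could be classified and forwarded as follows; the onFrame callback name and the 50% threshold are illustrative assumptions, not part of the module.

private void onFrame(ShoreFrame frame) {
	// Mirror only the dominant expression; 50% is an arbitrary illustrative threshold.
	String emotion = frame.dominantExpression(50.0);
	if (emotion != null) {
		sendFacialExpression(emotion);
	}
}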

Future work

Currently, since there is no interpolation between two facial expressions, the mimicry process is not very convincing. We also need to send "infinite" signals, with no "end" TimeMarker.

Another needed improvement is to integrate the work of Tadas Baltrisaitis in order to gather more precise information from the user's face.
