This is an emotion recognition tool for an RGB-D camera (Kinect). It was my graduate project in 2015. Its main purpose is to record facial features with the MS Kinect during a special experiment. It comes with a recording and playback module which tracks changes of facial parameters along with the experiment state. At the moment, most of the data analysis is not implemented in the tool itself, but you can check the approach by looking at the 'matlab' folder. But first, check out the slides.
*(Screenshots: color view and depth view with tracked points overlay.)*
- `doc` - folder containing slides, screenshots and some sample data.
- `Emotions` - the main C# WPF project of the tool.
- `Emotions.KinectTools` - library for recording Kinect data; it also contains some useful classes for face tracking (see the sketch after this list).
- `External` - external libraries (.dll) required for proper audio/video encoding.
- `Microsoft.Kinect.Toolkit.FaceTracking` - a fork of the Microsoft project from the Kinect SDK with some changes in the code. In order to run the project you will need to reference this version of `Microsoft.Kinect.Toolkit.FaceTracking`, not the one from the SDK.
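For orientation, here is roughly how the six Animation Units come out of the Kinect v1 face-tracking API. `AuReader` and its signature are just for this sketch; the actual recording code lives in `Emotions.KinectTools`:

```csharp
using Microsoft.Kinect;
using Microsoft.Kinect.Toolkit.FaceTracking;

public static class AuReader
{
    // Returns the six AU coefficients for one frame, or null if the face
    // was not tracked. The tracker is created once per sensor:
    // var tracker = new FaceTracker(sensor);
    public static float[] ReadActionUnits(
        FaceTracker tracker,
        ColorImageFormat colorFormat, byte[] colorPixels,
        DepthImageFormat depthFormat, short[] depthPixels,
        Skeleton skeleton)
    {
        FaceTrackFrame frame = tracker.Track(
            colorFormat, colorPixels, depthFormat, depthPixels, skeleton);

        if (!frame.TrackSuccessful)
            return null; // face lost in this frame

        var au = frame.GetAnimationUnitCoefficients();
        // Kinect v1 exposes exactly six Animation Units.
        return new[]
        {
            au[AnimationUnit.LipRaiser],
            au[AnimationUnit.JawLower],
            au[AnimationUnit.LipStretcher],
            au[AnimationUnit.BrowLower],
            au[AnimationUnit.LipCornerDepressor],
            au[AnimationUnit.BrowRaiser],
        };
    }
}
```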
This experiment was designed to detect a correlation between human stress level and facial expression. So I developed a sort of game in which the participant has to perform tasks like clicking on objects of one specific type, e.g. blue circles. The experiment lasts a fixed time (3 minutes), and the conditions were the same for everyone (all participants saw the same circles at the same time).
Tasks appear faster over time, dividing the game into three modes (according to the model proposed by Arthur Siegel and James Wolf; a pacing sketch follows the list):
- Easy mode, in which the participant performs 100% of the tasks.
- Concentrated mode, in which the participant can make mistakes but compensates by concentrating harder on the tasks.
- Hard mode, in which the participant cannot complete tasks before new ones appear, which increases the stress level.
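The exact pacing curve is not described here, but the idea can be sketched as a spawn interval that shrinks over the fixed 3-minute run (the constants below are made up for illustration, not taken from the game code):

```csharp
using System;

public static class TaskPacing
{
    // Seconds between new tasks at a given moment of the 3-minute run.
    public static double SpawnInterval(TimeSpan elapsed)
    {
        const double total = 180.0; // fixed experiment length: 3 minutes
        const double start = 3.0;   // hypothetical relaxed pace at the beginning
        const double end = 0.5;     // hypothetical pace nobody can keep up with
        double t = Math.Min(elapsed.TotalSeconds / total, 1.0);
        return start + (end - start) * t;
    }
}
```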
Along with the game performance, changes of the facial parameters (Action Units) were recorded on the same time axis, so results like this were obtained:
*(Figures: experiment results and recorded facial features.)*
From each record we extract two transition points (a detection sketch follows the list):
- the transition from the relaxed (normal) state to the concentrated state, marked by the first series of failures;
- the transition from the concentrated state to the stressed state, marked by the second series of failures.
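One simple way to locate these points, sketched below with made-up names and an assumed failure-run threshold, is to scan the per-task outcomes for runs of consecutive failures:

```csharp
using System.Collections.Generic;

public static class TransitionFinder
{
    // outcomes[i] = true if task i was completed in time, false otherwise.
    // Returns (via out) the indices where the first and second series of
    // `minRun` consecutive failures begin, or -1 if never observed.
    public static void FindTransitions(IReadOnlyList<bool> outcomes, int minRun,
                                       out int firstSeries, out int secondSeries)
    {
        firstSeries = -1;
        secondSeries = -1;
        int run = 0;
        for (int i = 0; i < outcomes.Count; i++)
        {
            run = outcomes[i] ? 0 : run + 1;
            if (run != minRun) continue; // triggers once per failure series
            int start = i - minRun + 1;
            if (firstSeries < 0) firstSeries = start;
            else { secondSeries = start; break; }
        }
    }
}
```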
Combining it all together, we get a set of training data for each person, where each object is a vector of Action Unit values x = (au1, au2, au3, au4, au5, au6) and each label is a stress state class: relaxed, concentrated or stressed.
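In code, one labelled sample could be pictured like this (illustrative types, not necessarily what the tool stores internally; the exported data is consumed by the scripts in the 'matlab' folder):

```csharp
public enum StressState { Relaxed, Concentrated, Stressed }

public class TrainingSample
{
    public double TimeSeconds;   // position on the shared experiment timeline
    public float[] ActionUnits;  // x = (au1, au2, au3, au4, au5, au6)
    public StressState Label;    // assigned from the two transition points
}
```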
But before training a model, we verified that facial expressions really can be separated by stress level. For each person we got results like this: each dot (after factor analysis) represents a facial expression, and its color corresponds to the stress level label (relaxed, concentrated or stressed).
You can clearly see that the states can be classified fairly easily. I have already tried classification with neural networks and got good results.
For more information, check out the slides and take a look at the MATLAB implementation of the classification.
First of all, you need a Microsoft Kinect v1 and its SDK.
The tool is built in C# (WPF), so, unfortunately, it only runs under Windows. The project was created in Visual Studio 2013, but it can easily be upgraded to VS2015.