1. Getting Started
FaceCept3D is a modular technology aimed at the analysis and recognition of 3D faces. In principle, it is a general-purpose 3D processing framework and can be applied to arbitrary point clouds; this, however, requires manual configuration of the pipeline.
From a high-level perspective, it consists of three major parts:
- Recognition components, which include filtering, registration (alignment) using Iterative Closest Point algorithms, feature extraction, machine learning methods and others
- Pipeline components, which decouple the recognition methods from the technical details of a sensor and from a possibly complicated processing pipeline
- User interface components, which make it easy to view and annotate point clouds and to display results
FaceCept3D builds on several open source libraries, such as OpenCV, PCL, Boost and a couple of others. We thank their contributors for their work.
The following image shows the pipeline of the TemplateCreator example. Let's go through each component.
To decouple the technical details of a sensor from the recognition components, FaceCept3D uses grabbers and processors. The GrabberBase class defines the interface of a grabber. There are several grabbers in FaceCept3D. KinectSDKGrabber works on Windows and uses the MS Kinect SDK to get the RGB-D stream from the sensor. OpenNIGrabber uses the OpenNI drivers to do the same. If you plan to use a sensor other than Kinect, you will need to subclass GrabberBase yourself.
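Because all grabbers share the GrabberBase interface, switching sensors does not affect the rest of the pipeline. A minimal sketch of selecting a backend at compile time might look like this (assuming both grabbers are default-constructible and enabled in your build; the required headers depend on your project setup):

#ifdef _WIN32
    KinectSDKGrabber grabber;   // RGB-D stream via the MS Kinect SDK (Windows)
#else
    OpenNIGrabber grabber;      // RGB-D stream via the OpenNI drivers
#endif
    // Processors are added and the grabber is started in exactly the same way
    // for either backend (see the minimal pipeline example below).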
In addition to receiving frames from the sensor, every grabber has a set of processors that are called in a predefined order. Every processor is a descendant of the IProcessor interface, where the necessary methods are declared. Every concrete processor must override the Process method, where the processing is performed:
virtual void Process(IDataStorage::Ptr dataStorage) = 0;
IDataStorage is a key-value map from which a processor takes its input data and in which it stores its processing results. For example, DepthPreprocessingProcessor extracts the data it needs using the following line of code:
RawFrames::Ptr frames = storage->GetAndCastNotNull<RawFrames>("RawFrames");
The processing result is stored in the following way:
storage->Set("OriginalCloud", cloudObject);
This way, processors do not know anything about each other or about the rest of the processing pipeline; they only see the common data storage. Therefore the order in which processors are called matters. Grabbers, in turn, know nothing about the recognition components; their task is to maintain the common storage and call the processors in a predefined order. Processors use the recognition components to perform their tasks.
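To illustrate this contract, here is a sketch of a hypothetical processor (FrameLoggerProcessor is not part of FaceCept3D) that reads the raw frames and republishes them under a new key. It assumes that Process is the only method that must be implemented; IProcessor may declare additional methods not shown here:

class FrameLoggerProcessor : public IProcessor
{
public:
    virtual void Process(IDataStorage::Ptr storage)
    {
        // Input: fetch the raw frames published by an upstream component.
        RawFrames::Ptr frames = storage->GetAndCastNotNull<RawFrames>("RawFrames");

        // ... inspect or log the frames here ...

        // Output: publish a result under a new key for downstream processors.
        storage->Set("LoggedFrames", frames);
    }
};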
FaceCept3D components are easily combined with each other. The following is an example of a minimal pipeline:
int main(int argc, char *argv[])
{
    KinectSDKGrabber grabber;
    grabber.AddProcessor(IProcessor::Ptr(new KinectSDKConverterProcessor("Cloud")));
    grabber.AddProcessor(IProcessor::Ptr(new ShowCloudProcessor("Cloud")));
    grabber.Start();
}
KinectSDKConverterProcessor takes the RawFrames and converts them into a point cloud, putting the result into the storage under the "Cloud" key. ShowCloudProcessor takes the cloud using the same key and shows it on the screen. The last line starts the pipeline.
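Additional processors are added in the order they should run. For example, the hypothetical FrameLoggerProcessor sketched above could be placed before the converter, since it only needs the raw frames (assuming the grabber publishes them under the "RawFrames" key before the processors run), while anything that consumes the cloud must come after KinectSDKConverterProcessor:

    KinectSDKGrabber grabber;
    grabber.AddProcessor(IProcessor::Ptr(new FrameLoggerProcessor()));                // reads "RawFrames"
    grabber.AddProcessor(IProcessor::Ptr(new KinectSDKConverterProcessor("Cloud")));  // writes "Cloud"
    grabber.AddProcessor(IProcessor::Ptr(new ShowCloudProcessor("Cloud")));           // reads "Cloud"
    grabber.Start();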