This project reproduces MTCNN from the paper "Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks".
- You need CUDA-compatible GPUs to train the model.
- You should first download WIDER FACE and CelebA: WIDER FACE for face detection and CelebA for landmark detection (this is what the original paper uses, but I found some labels in CelebA were wrong, so I use this dataset for landmark detection instead).
- Tensorflow 1.2.1
- TF-Slim
- Python 3.5
- Ubuntu 16.04
- Cuda 8.0
- Download the WIDER Face training part only from the Official Website, unzip it to replace `WIDER_train`, and put it into the `prepare_data` folder.
- Run `prepare_data/gen_12net_data.py` to generate training data (face detection part) for PNet.
- Run `gen_imglist_pnet.py` to merge the positive, negative, and part data.
- Run `gen_PNet_tfrecords.py` to generate the tfrecord for PNet.
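The data-generation step above labels random crops as pos/part/neg by their IoU with the ground-truth boxes. A minimal sketch of that logic (the 0.65/0.4/0.3 thresholds follow the original MTCNN paper; the helper name `label_crop` is hypothetical, and `gen_12net_data.py` may differ in details):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def label_crop(crop, gt_boxes):
    """Label a random crop by its best IoU against all ground-truth boxes."""
    best = max(iou(crop, gt) for gt in gt_boxes)
    if best >= 0.65:
        return 1    # pos
    elif best >= 0.4:
        return -1   # part
    elif best < 0.3:
        return 0    # neg
    return None     # ambiguous crops are discarded
```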
- After training PNet, run `gen_hard_example_R.py` to generate training data (face detection part) for RNet.
- Run `gen_RNet_pos_tfrecords.py` to generate the pos tfrecord for RNet.
- Run `gen_RNet_part_tfrecords.py` to generate the part tfrecord for RNet.
- Run `gen_RNet_neg_tfrecords.py` to generate the neg tfrecord for RNet.
- In total there are 3 tfrecords for RNet training.
- After training RNet, run `gen_hard_example_O.py` to generate training data (face detection part) for ONet.
- Run `gen_ONet_pos_tfrecords.py` to generate the pos tfrecord for ONet.
- Run `gen_ONet_part_tfrecords.py` to generate the part tfrecord for ONet.
- Run `gen_ONet_neg_tfrecords.py` to generate the neg tfrecord for ONet.
- In total there are 3 tfrecords for ONet training.
- Run `train_models/train_PNet.py` to train PNet.
- Run `train_models/train_RNet.py` to train RNet.
- Run `train_models/train_ONet.py` to train ONet.
- Two versions of the model were trained; the first version has no landmark branch.
- When training PNet, I merge three parts of data (pos, part, neg) into one tfrecord, since their count ratio is roughly 1:1:3. When training RNet, I generate 3 separate tfrecords, since the counts are not balanced; during training I read 16 samples each from the pos and part tfrecords and 32 samples from the neg tfrecord to construct a mini-batch. When training ONet, I generate 4 separate tfrecords for the same reason; during training I read 16 samples each from the pos, part, and landmark tfrecords and 32 samples from the neg tfrecord to construct a mini-batch.
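The fixed-ratio mini-batch described above can be sketched without TensorFlow as follows. This is an illustrative sketch, not the repo's actual input pipeline: each list stands in for one tfrecord reader, and `make_minibatch` is a hypothetical helper name.

```python
import random

def make_minibatch(pos, part, neg, landmark=None):
    """Draw 16 pos + 16 part (+ 16 landmark, if given) + 32 neg samples,
    mimicking the per-tfrecord read ratio used for RNet/ONet training."""
    batch = random.sample(pos, 16) + random.sample(part, 16)
    if landmark is not None:
        batch += random.sample(landmark, 16)
    batch += random.sample(neg, 32)
    random.shuffle(batch)  # mix sample types within the mini-batch
    return batch
```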
- It is important for PNet and RNet to keep a high recall ratio. Using a well-trained PNet to generate training data for RNet, I get 140k+ pos samples; using a well-trained RNet to generate training data for ONet, I get 190k+ pos samples.
- Since MTCNN is a multi-task network, we should pay attention to the format of the training data. Each line of the training list is:

  `[path to image] [cls_label] [bbox_label] [landmark_label]`

  - For a pos sample: cls_label=1, bbox_label is calculated, landmark_label=[0,0,0,0,0,0,0,0,0,0].
  - For a part sample: cls_label=-1, bbox_label is calculated, landmark_label=[0,0,0,0,0,0,0,0,0,0].
  - For a landmark sample: cls_label=-2, bbox_label=[0,0,0,0], landmark_label is calculated.
  - For a neg sample: cls_label=0, bbox_label=[0,0,0,0], landmark_label=[0,0,0,0,0,0,0,0,0,0].
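Composing one training-list line per sample type can be sketched like this (the helper name `make_line` is hypothetical; the repo writes these lines inside its `gen_*` scripts):

```python
def make_line(path, sample_type, bbox=None, landmark=None):
    """Compose one training-list line: [path] [cls_label] [bbox] [landmark].
    Unused fields are zero-filled, matching the label conventions above."""
    cls_label = {"pos": 1, "part": -1, "landmark": -2, "neg": 0}[sample_type]
    bbox = bbox if bbox is not None else [0, 0, 0, 0]            # neg/landmark: zeros
    landmark = landmark if landmark is not None else [0.0] * 10  # non-landmark: zeros
    fields = [path, str(cls_label)] + [str(v) for v in bbox + landmark]
    return " ".join(fields)
```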
- Since the training data for landmarks is limited, I apply transformations, random rotation, and random flipping for data augmentation (the landmark detection results are still not that good).
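One subtlety in the flip augmentation is that a horizontal flip must both mirror the landmark x-coordinates and swap the left/right points. A minimal sketch, assuming 5 landmarks normalized to [0, 1] in the order left eye, right eye, nose, left mouth corner, right mouth corner (the repo's augmentation code may order or normalize them differently):

```python
def flip_landmarks(landmarks):
    """Mirror 5 normalized (x, y) landmarks for a horizontal image flip."""
    flipped = [(1.0 - x, y) for (x, y) in landmarks]
    # After mirroring, left/right pairs must be swapped to keep their meaning.
    flipped[0], flipped[1] = flipped[1], flipped[0]  # eyes
    flipped[3], flipped[4] = flipped[4], flipped[3]  # mouth corners
    return flipped
```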
MIT LICENSE
- Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, Yu Qiao, "Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks," IEEE Signal Processing Letters
- MTCNN-MXNET
- MTCNN-CAFFE
- deep-landmark