First, prepare the dataset as described here.
Then download a starting-point model here and place it at ${EgoNet_DIR}/resources.
The training phase consists of two stages, described below.
For training on other datasets, you need to prepare the training images and camera parameters accordingly.
Modify the configuration:
cd ${EgoNet_DIR}/configs && vim KITTI_train_lifting.yml
Edit dataset:root to point to your KITTI directory.
(Optional) Edit dirs:output to where you want to save the output model.
(Optional) You can evaluate during training by setting eval_during to True.
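Taken together, the relevant entries in KITTI_train_lifting.yml might look like the sketch below. The key names (dataset:root, dirs:output, eval_during) come from the steps above; the example paths, values, and exact nesting are assumptions, so check them against the shipped configuration file:

```yaml
dataset:
  root: /path/to/KITTI      # your local KITTI directory
dirs:
  output: /path/to/output   # (optional) where the trained model is saved
eval_during: True           # (optional) run evaluation during training
```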
Finally, run
cd tools
python train_lifting.py --cfg "../configs/KITTI_train_lifting.yml"
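As a rough illustration of how the --cfg flag above is typically consumed, here is a minimal argparse sketch. Only the flag name --cfg is taken from the command above; the parser structure and help text are assumptions, not the repository's actual code:

```python
import argparse

def parse_args(argv=None):
    # Hypothetical CLI wiring for a training script; only --cfg is
    # taken from the documented command, the rest is illustrative.
    parser = argparse.ArgumentParser(description="Train the lifting network")
    parser.add_argument("--cfg", required=True,
                        help="path to a YAML configuration file")
    return parser.parse_args(argv)

# Simulate the documented invocation.
args = parse_args(["--cfg", "../configs/KITTI_train_lifting.yml"])
print(args.cfg)
```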
Modify the configuration:
cd ${EgoNet_DIR}/configs && vim KITTI_train_IGRs.yml
Edit dataset:root to point to your KITTI directory.
Edit gpu_id according to your local machine and set batch_size based on your available GPU memory.
(Optional) Edit dirs:output to where you want to save the output model.
(Optional) You can evaluate during training by setting eval_during to True.
(Optional) Edit ss to enable self-supervised representation learning. You need to prepare unlabeled ApolloScape images and download the record here.
(Optional) Edit training_settings:debug to disable saving intermediate training results.
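The relevant entries in KITTI_train_IGRs.yml might look like the sketch below. The key names (dataset:root, gpu_id, batch_size, dirs:output, eval_during, ss, training_settings:debug) come from the steps above; the example values and exact nesting are assumptions, so verify them against the shipped configuration file:

```yaml
dataset:
  root: /path/to/KITTI      # your local KITTI directory
gpu_id: 0                   # a GPU available on your machine
batch_size: 16              # scale to your GPU memory
dirs:
  output: /path/to/output   # (optional) where the trained model is saved
eval_during: True           # (optional) run evaluation during training
# ss: ...                   # (optional) self-supervised settings; needs
                            # unlabeled ApolloScape images and the record
training_settings:
  debug: False              # (optional) disable saving intermediate results
```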
Finally, run
cd tools
python train_IGRs.py --cfg "../configs/KITTI_train_IGRs.yml"