
First, you need to prepare the dataset as described here.

Then download a starting-point model here and place it at ${EgoNet_DIR}/resources.

The training phase consists of two stages, described below.

For training on other datasets, you need to prepare the training images and camera parameters accordingly.
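For orientation, KITTI supplies camera parameters as plain-text calibration files, where the 3x4 projection matrix of the left color camera sits on the line starting with `P2:`. Below is a minimal Python sketch of reading that matrix; the file path is a placeholder, and this illustrates the expected format rather than EgoNet's own data loader.

```python
import numpy as np

def read_p2(calib_path):
    """Return the 3x4 projection matrix (P2) of KITTI's left color camera."""
    with open(calib_path) as f:
        for line in f:
            if line.startswith("P2:"):
                values = [float(v) for v in line.split()[1:]]
                return np.array(values).reshape(3, 4)
    raise ValueError(f"no P2 entry in {calib_path}")

# Placeholder path; point this at a real KITTI calibration file.
P2 = read_p2("/path/to/KITTI/training/calib/000000.txt")
```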

### Stage 1: train a lifter (L.pth)

You need to modify the configuration:

```bash
cd ${EgoNet_DIR}/configs && vim KITTI_train_lifting.yml
```

- Edit dataset:root to your KITTI directory (see the sketch after this list).
- (Optional) Edit dirs:output to where you want to save the output model.
- (Optional) You can evaluate during training by setting eval_during to True.
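For reference, the edited entries could look roughly like the following; the exact nesting and surrounding keys depend on the actual file, and the paths are placeholders.

```yaml
dataset:
  root: /path/to/KITTI      # your local KITTI directory
dirs:
  output: /path/to/output   # optional: where the trained model is saved
eval_during: True           # optional: evaluate during training
```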

Finally, run:

```bash
cd tools
python train_lifting.py --cfg "../configs/KITTI_train_lifting.yml"
```
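The trained lifter weights (L.pth) should then be written under the directory configured in dirs:output.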

### Stage 2: train the remaining part (HC.pth)

You need to modify the configuration:

```bash
cd ${EgoNet_DIR}/configs && vim KITTI_train_IGRs.yml
```

- Edit dataset:root to your KITTI directory (see the sketch after this list).
- Edit gpu_id according to your local machine, and set batch_size based on how much GPU memory you have.
- (Optional) Edit dirs:output to where you want to save the output model.
- (Optional) You can evaluate during training by setting eval_during to True.
- (Optional) Edit ss to enable self-supervised representation learning. You need to prepare unlabeled ApolloScape images and download the record here.
- (Optional) Edit training_settings:debug to disable saving intermediate training results.
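As in stage 1, the edited entries might look roughly like this; the nesting and values are illustrative, and the ss block is omitted because its layout is not specified here.

```yaml
dataset:
  root: /path/to/KITTI
gpu_id: 0                   # adjust for your machine
batch_size: 16              # illustrative; scale to your GPU memory
dirs:
  output: /path/to/output
eval_during: True
training_settings:
  debug: False              # disable saving intermediate training results
```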

Finally, run:

```bash
cd tools
python train_IGRs.py --cfg "../configs/KITTI_train_IGRs.yml"
```
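The resulting checkpoint (HC.pth) should likewise end up under the configured dirs:output directory.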