Code for "ScaleFlow++: Robust and Accurate Estimation of 3D Motion from Monocular Camera" (https://arxiv.org/abs/2407.09797).
A preliminary version of this work is available at https://dl.acm.org/doi/abs/10.1145/3503161.3547979.
PS: If the ACM page is not convenient to read, the paper can also be found by name on Google Scholar.
The code has been tested with PyTorch 2.0.1 and CUDA 11.8.
conda create -n cscv python=3.9
conda activate cscv
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
You also need to install the following packages via pip:
pip install matplotlib==3.5
pip install opencv-python
pip install tqdm
pip install pypng
pip install scipy
pip install einops
pip install tensorboard
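To verify that the cu118 build of PyTorch was installed and that the GPU is visible, a quick check like the following should suffice:

```python
import torch

# Quick environment sanity check after installation.
print(torch.__version__)          # expected: 2.0.1+cu118
print(torch.version.cuda)         # expected: 11.8
print(torch.cuda.is_available())  # expected: True if a CUDA GPU is visible
```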
To run Demo_ScaleFlowpp.py, first download the weights (Demo_Scaleflowpp.pth) from https://drive.google.com/drive/folders/129lbJWkcMwxispcRVXOvUGF12GuHbhX3?usp=drive_link and place them in the checkpoints directory.
Then specify the image paths and the output path in Demo_ScaleFlowpp.py (lines 92-94):
path1 = '/home/lh/CSCV/00026.jpg'
path2 = '/home/lh/CSCV/00027.jpg'
outpath = '/home/lh/CSCV/output'
CUDA_VISIBLE_DEVICES=0 python Demo_ScaleFlowpp.py --model=/home/lh/CSCV/checkpoints/Demo_Scaleflowpp.pth --mixed_precision --start=0
To run Demo.py, first download the weights (Demo.pth) from https://drive.google.com/drive/folders/129lbJWkcMwxispcRVXOvUGF12GuHbhX3?usp=drive_link and place them in the checkpoints directory.
Then specify the image paths and the output path in Demo.py (lines 92-94):
path1 = '/home/lh/CSCV/00026.jpg'
path2 = '/home/lh/CSCV/00027.jpg'
outpath = '/home/lh/CSCV/output'
CUDA_VISIBLE_DEVICES=0 python Demo.py --model=/home/lh/CSCV/checkpoints/Demo.pth --mixed_precision --start=0
Demo result videos:
soapbox: soapbox.mp4
motorbike: motorbike.mp4
motocross-jump: motocross-jump.mp4
car-shadow: car-shadow.mp4
breakdance-flare: breakdance-flare.mp4
Dog: 00007.mp4
To evaluate/train CSCV, you will need to download the required datasets.
We recommend manually specifying the dataset path in dataset_exp_orin.py. For example, line 477 reads
def __init__(self, aug_params=None, split='kitti_test', root='/new_data/datasets/KITTI/', get_depth=0):
where '/new_data/datasets/KITTI/' is the directory containing your KITTI dataset.
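If you prefer not to edit the default in place, here is a minimal sketch of passing your own root at construction time. It assumes dataset_exp_orin.py is the module imported as datasets in dc_flow_eval.py and that the KITTI loader class is named KITTI, as line 543 of that script suggests:

```python
# Minimal sketch, not taken from the repository: the module and class names are
# assumptions based on dc_flow_eval.py (datasets.KITTI, line 543).
import dataset_exp_orin as datasets

kitti_root = '/new_data/datasets/KITTI/'  # replace with your local KITTI location
val_set = datasets.KITTI(aug_params=None, split='kitti_test',
                         root=kitti_root, get_depth=0)
print(len(val_set))  # assuming the loader implements __len__ like a standard PyTorch dataset
```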
Alternatively, you can create symbolic links in the datasets folder that point to wherever the datasets were downloaded (see the sketch after the directory tree below):
├── datasets
├── KITTI
├── testing
├── training
├── devkit
├── FlyingThings3D
├── frames_cleanpass
├── frames_finalpass
├── optical_flow
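A minimal sketch of creating those links with os.symlink; the source paths below are placeholders for wherever you actually downloaded the data:

```python
import os

# Placeholder download locations; replace with the real paths on your machine.
downloads = {
    'KITTI': '/new_data/datasets/KITTI',
    'FlyingThings3D': '/new_data/datasets/FlyingThings3D',
}

os.makedirs('datasets', exist_ok=True)
for name, src in downloads.items():
    dst = os.path.join('datasets', name)
    if not os.path.exists(dst):
        os.symlink(src, dst)  # e.g. datasets/KITTI -> /new_data/datasets/KITTI
```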
Download the weights and place them in the checkpoints directory ../CSCV/checkpoints/.
CUDA_VISIBLE_DEVICES=0,1 python train_scaleflowpp.py --name ScaleFlowpp --stage kitti --validation kitti --gpus 0 1 --num_steps 60000 --batch_size 6 --lr 0.000125 --image_size 320 896 --wdecay 0.0001 --gamma=0.85
To reproduce the results of Table 3 in the paper (https://arxiv.org/abs/2407.09797):
CUDA_VISIBLE_DEVICES=0 python dc_flow_eval.py --model=../CSCV/checkpoints/ResScale_KITTI160FT.pth --modelused='scaleflowpp'
If you want to submit test results to KITTI and reproduce the results of Table 4 in the paper (https://arxiv.org/abs/2407.09797), you need to specify the locations of the corresponding folders in the code:
in dc_flow_eval.py line 543: test_dataset = datasets.KITTI(split='test', aug_params=None,root='/home/lh/all_datasets/kitti/testing')
and in lines 560, 563, and 564:
output_filename = os.path.join('/home/lh/CSCV_occ/submit_pre909/flow/', frame_id)
cv2.imwrite('%s/%s' % ('/home/lh/CSCV_occ/submit_pre909/disp_0', frame_id), disp1)
cv2.imwrite('%s/%s' % ('/home/lh/CSCV_occ/submit_pre909/disp_1', frame_id), disp2)
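Note that cv2.imwrite silently returns False when the target directory does not exist, so it may help to create the output folders beforehand. A minimal sketch, using the default paths shown above (adjust submit_root if you changed them):

```python
import os

# Output root used in dc_flow_eval.py (lines 560-564); change it if you edited those paths.
submit_root = '/home/lh/CSCV_occ/submit_pre909'
for sub in ('flow', 'disp_0', 'disp_1'):
    os.makedirs(os.path.join(submit_root, sub), exist_ok=True)
```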
You also need to download the disp_ganet_testing folder from https://drive.google.com/drive/folders/129lbJWkcMwxispcRVXOvUGF12GuHbhX3?usp=drive_link and place it in the KITTI testing path (e.g. /home/lh/all_datasets/kitti/testing).
CUDA_VISIBLE_DEVICES=0 python dc_flow_eval.py --model=../CSCV/checkpoints/ResScale_kittift200.pth --modelused='scaleflowpp' --ifsubmit=True
Train Scaleflow on KITTI:
CUDA_VISIBLE_DEVICES=0 python train.py --name raft-cscv --stage kitti --validation kitti --gpus 0 --num_steps 60000 --batch_size 2 --lr 0.000125 --image_size 320 960 --wdecay 0.0001 --gamma=0.85
Test Scaleflow on KITTI (this is slightly different from the original Scaleflow, as it uses a hybrid training method):
CUDA_VISIBLE_DEVICES=0 python dc_flow_eval.py --model=../CSCV/checkpoints/cscv_kitti_42.08.pth --modelused='scaleflow'