
BIN (Blurry Video Frame Interpolation)

Project | Paper

Wang Shen, Wenbo Bao, Guangtao Zhai, Li Chen, Xiongkuo Min, and Zhiyong Gao

IEEE Conference on Computer Vision and Pattern Recognition, Seattle, CVPR 2020

Table of Contents

  1. Introduction
  2. Citation
  3. Requirements and Dependencies
  4. Installation
  5. Demo using Pre-trained Models
  6. Testing Pre-trained Models
  7. Downloading Results
  8. Training New Models

Introduction

We propose a Blurry video frame INterpolation (BIN) method that reduces motion blur and up-converts the frame rate simultaneously. We provide videos here.

Furthermore, in the journal version (accepted by TIP), we also extend our model to joint frame interpolation and deblurring with compression artifacts, and to joint frame interpolation and super-resolution. We provide videos here.

Citation

If you find the code and datasets useful in your research, please cite:

Frame interpolation for blurry video:

    @inproceedings{BIN,
        author    = {Shen, Wang and Bao, Wenbo and Zhai, Guangtao and Chen, Li and Min, Xiongkuo and Gao, Zhiyong},
        title     = {Blurry Video Frame Interpolation},
        booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
        year      = {2020}
    }

Frame interpolation and enhancement:

    @article{BIN-TIP,
        author  = {Shen, Wang and Bao, Wenbo and Zhai, Guangtao and Chen, Li and Min, Xiongkuo and Gao, Zhiyong},
        title   = {Video Frame Interpolation and Enhancement via Pyramid Recurrent Framework},
        journal = {IEEE Transactions on Image Processing},
        year    = {2020}
    }

Frame interpolation for normal video:

    @inproceedings{DAIN,
        author    = {Bao, Wenbo and Lai, Wei-Sheng and Ma, Chao and Zhang, Xiaoyun and Gao, Zhiyong and Yang, Ming-Hsuan},
        title     = {Depth-Aware Video Frame Interpolation},
        booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
        year      = {2019}
    }

Frame interpolation MEMC architecture:

    @article{MEMC-Net,
        author  = {Bao, Wenbo and Lai, Wei-Sheng and Zhang, Xiaoyun and Gao, Zhiyong and Yang, Ming-Hsuan},
        title   = {MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Interpolation and Enhancement},
        journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
        doi     = {10.1109/TPAMI.2019.2941941},
        year    = {2018}
    }

Requirements and Dependencies

  • Ubuntu (we test with Ubuntu 16.04.5 LTS)
  • Python (we test with Python 3.6.8 in Anaconda3 4.1.1)
  • CUDA & cuDNN (we test with CUDA 10.0 and cuDNN 7.4)
  • PyTorch >= 1.0.0 (we test with PyTorch 1.3.0)
  • FFmpeg (we test with the static build ffmpeg-git-20190701-amd64-static)
  • GCC (compiling the PyTorch extension files (.c/.cu) requires gcc 4.9.1 and nvcc 10.0)
  • NVIDIA GPU (we use an RTX 2080 Ti, compute capability 7.5)
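
To check that a machine matches these versions before building the extensions, a quick sanity check from Python (a minimal sketch; the versions above are the ones we tested, not strict requirements):

    import torch

    print(torch.__version__)               # tested with 1.3.0
    print(torch.version.cuda)              # tested with 10.0
    print(torch.backends.cudnn.version())  # tested with 7.4
    if torch.cuda.is_available():
        # (7, 5) corresponds to an RTX 2080 Ti
        print(torch.cuda.get_device_capability(0))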

Installation

Download repository:

$ git clone https://github.com/laomao0/BIN.git

Make the Adobe240 blur dataset

If you only want to download the test set directly, refer to step 5 below.

  1. Download the Adobe240 original videos.

  2. De-compress the downloaded videos into a folder: Adobe_240fps_dataset/Adobe_240fps_original_high_fps_videos

The structure of the folder is as follows (a frame-extraction sketch follows the listing):

Adobe_240fps_original_high_fps_videos   -- 720p_240fps_1.mov
                                        -- 720p_240fps_2.mov
                                        -- 720p_240fps_3.mov
                                        -- ...
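
The dataset scripts need the individual frames of each video; a minimal sketch of the extraction step using FFmpeg from Python (the output layout and PNG naming pattern are assumptions for illustration; the repository scripts may differ):

    # Hypothetical helper: extract the frames of each .mov as numbered PNGs.
    import glob
    import os
    import subprocess

    for video in sorted(glob.glob('Adobe_240fps_original_high_fps_videos/*.mov')):
        out_dir = os.path.splitext(os.path.basename(video))[0]
        os.makedirs(out_dir, exist_ok=True)
        # %05d.png yields 00001.png, 00002.png, ... as used later in this README.
        subprocess.run(
            ['ffmpeg', '-i', video, os.path.join(out_dir, '%05d.png')],
            check=True)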
  3. Make the Adobe240 blur dataset by averaging N frames.

We average 11 consecutive frames to synthesize one blurry image.

For example, the frame indices of a 240-fps video are 0 1 2 3 4 5 6 7 8 9 10 11 12 ... We average frames 0-11 to synthesize blur frame 0 and frames 8-19 to synthesize blur frame 1, i.e. the averaging window slides by 8 source frames, so the synthesized blur video runs at 30-fps (240 / 8). A sketch of this operation follows the commands below.

$ cd data_scripts/adobe240fps
$ ./create_dataset_blur_N_frames_average_Adobe.sh

If you do not want to create the training set, set --enable_train to 0.
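
A minimal sketch of the averaging itself, assuming frames were already extracted as above (the window, stride, and paths follow the example; the repository script is the authoritative version):

    # Hypothetical sketch: synthesize 30-fps blurry frames from 240-fps frames.
    import glob

    import cv2
    import numpy as np

    frames = sorted(glob.glob('720p_240fps_1/*.png'))  # extracted 240-fps frames
    window, stride = 11, 8  # average 11 frames, slide by 8 (240 / 8 = 30 fps)

    for k, start in enumerate(range(0, len(frames) - window + 1, stride)):
        stack = np.stack([cv2.imread(p).astype(np.float32)
                          for p in frames[start:start + window]])
        blur = stack.mean(axis=0).round().astype(np.uint8)
        cv2.imwrite('blur_%05d.png' % k, blur)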

  4. Check the dataset

The script in step 3 creates the dataset at the path specified by --dataset_folder. It contains seven folders: full_sharp, test, test_blur, test_list, train, train_blur, and train_list.

full_sharp: contains all de-compressed frames; not used in this project.
test/train: contain the sharp test/train frames at 240-fps.
test_blur/train_blur: contain the blurry test/train frames at 30-fps.
test_list/train_list: contain the im_list files used by the dataloader (a loading sketch follows the structure listing below).

test/train structure:
                        folder_1 -- 00001.png 00002.png ....
                        folder_2 -- 00001.png 00002.png ....
test_blur/train_blur structure:
                        folder_1 -- 00017.png 00025.png ....
                        folder_2 -- 00017.png 00025.png ....
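
For orientation, a minimal PyTorch Dataset driven by such a list might look as follows; the im_list format assumed here (one relative frame path per line) is purely an illustration, so adapt it to the actual files:

    # Hypothetical sketch of a Dataset reading an im_list file.
    import os

    import cv2
    import torch
    from torch.utils.data import Dataset

    class ImListDataset(Dataset):
        def __init__(self, list_path, root):
            with open(list_path) as f:
                # Assumed format: one relative image path per line.
                self.paths = [os.path.join(root, line.strip())
                              for line in f if line.strip()]

        def __len__(self):
            return len(self.paths)

        def __getitem__(self, idx):
            img = cv2.imread(self.paths[idx])[:, :, ::-1]  # BGR -> RGB
            return torch.from_numpy(img.copy()).permute(2, 0, 1).float() / 255.0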
  5. For those who only want the Adobe240 blur test set with ground-truth frames, we provide download links. The Adobe240 blur training set is too large to host, so we suggest generating it from the high-fps videos.

     Adobe_240fps_dataset/test_blur: link
     Adobe_240fps_dataset/test: link
     Adobe_240fps_dataset/test_list: link

Demo using Pre-trained Models

  1. Download the pretrained model trained on the Adobe240 blur training set:

     $ cd model_weights
     $ # download the model for the Adobe240 dataset (see the link below)
    

    download link

  2. Download the demo videos:

     $ cd demo_vidoes
     $ mkdir demo_blur
     $ # download the data at the link below, then put it into the demo_blur folder
    

    download link

  3. Run the script

     $ cd ..
     $ mkdir demo_results
     $ bash demo.sh
    

Testing Pre-trained Models (Performance Evaluation)

  1. Download the pretrained model trained on the Adobe240 blur training set:

     $ cd model_weights
     $ # download the model for the Adobe240 dataset (see the link below)
    

    download link

  2. Run the script

     $ bash test.sh
    
  3. Check the results

The logging files and images are saved under --output_path/60fps_test_results/adobe_stage4.

test_summary.log records the PSNR and SSIM of each video folder.

We get the following performance:

    Frame Interpolation PSNR/SSIM : 33.31/0.9372
    Frame Deblurring    PSNR/SSIM : 33.33/0.9319
  4. Besides, we also provide our results on the Adobe240 blur test set here:

The downloaded zip file includes:

    a. image folders containing the results:
        720p_240fps_1 -- 00021.png 00025.png ...
        GOPR9635      -- 00021.png 00025.png ...
        ...           -- 00021.png 00025.png ...

    b. test.log: records each image's evaluation performance

    c. test_summary.log: records each folder's average performance
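
To spot-check these numbers against your own outputs, the per-image metrics can be computed along these lines (a minimal sketch with scikit-image; the file paths are placeholders):

    # Hypothetical sketch: per-image PSNR/SSIM as reported in test.log.
    import cv2
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    pred = cv2.imread('720p_240fps_1/00021.png')                          # model output
    gt = cv2.imread('Adobe_240fps_dataset/test/720p_240fps_1/00021.png')  # ground truth

    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    # channel_axis needs scikit-image >= 0.19; older versions use multichannel=True.
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
    print('PSNR %.2f dB / SSIM %.4f' % (psnr, ssim))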

Training New Models

If you want to train the model on our data:

$ bash train.sh (to be added)

For the joint frame interpolation and super-resolution task

In our extended work (the TIP version), we extend our model to joint frame interpolation and super-resolution.

We provide instructions for evaluating our model on the Vimeo90K dataset.

  1. Download the Vimeo_septuplet dataset:

     $ cd Vimeo90k_SR
     $ mkdir vimeo_septuplet
     $ # download the data at the link below
    

    download link [82G]

  2. Create the dataset using the MATLAB bicubic function

We generate the data with MATLAB 2015b installed on Ubuntu.

    $ cd data_scripts
    $ cd vimeo90k_sr
    $ matlab -nodesktop -nosplash -r generate_LR_Vimeo90K
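
If MATLAB is not available, a rough Python substitute is sketched below; note that OpenCV's bicubic interpolation does not exactly match MATLAB's imresize (which applies antialiasing when downscaling), and the 4x factor here is an assumption, so check generate_LR_Vimeo90K.m for the actual settings:

    # Hypothetical sketch: bicubic downscaling of a Vimeo90K frame.
    # Caveat: cv2 bicubic != MATLAB imresize bicubic (no antialiasing here).
    import cv2

    img = cv2.imread('vimeo_septuplet/sequences/00001/0001/im1.png')  # placeholder
    h, w = img.shape[:2]
    scale = 4  # assumed SR factor; see generate_LR_Vimeo90K.m
    lr = cv2.resize(img, (w // scale, h // scale), interpolation=cv2.INTER_CUBIC)
    cv2.imwrite('im1_LR.png', lr)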
  3. Download the model trained on the Vimeo90k-septuplet set.

     To be added

  4. Run the script.

     To be added
    

Contact

Wang Shen; Wenbo Bao
