AAAI 2019 paper "Frame and Feature-Context Video Super-Resolution" [1]
Paper
FFCVSR-motion is an improved version of FFCVSR that adds motion prediction, feature alignment, and gate selection. The paper describing the new version was submitted to TPAMI in 2020 and is under review.
We release the FFCVSR and FFCVSR-motion inference models, along with FFCVSR-motion training code for the REDS dataset.
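For orientation, here is a minimal TensorFlow 1.x sketch of how the three additions fit together: a small network predicts a flow field, the previous frame's features are warped (aligned) with it, and a learned gate blends the aligned and current features. The layer shapes, names, and the inverse_warp callable are illustrative assumptions, not the released implementation.

```python
import tensorflow as tf

def motion_gated_fusion(prev_feat, cur_feat, inverse_warp, channels=64):
    """Sketch of FFCVSR-motion style fusion; prev_feat/cur_feat are [B, H, W, channels]."""
    # Motion prediction: estimate a 2-channel flow field from both feature maps.
    x = tf.concat([prev_feat, cur_feat], axis=-1)
    x = tf.layers.conv2d(x, channels, 3, padding="same", activation=tf.nn.relu)
    flow = tf.layers.conv2d(x, 2, 3, padding="same")

    # Feature alignment: warp the previous features toward the current frame.
    aligned_prev = inverse_warp(prev_feat, flow)

    # Gate selection: a learned gate in [0, 1] blends aligned and current features.
    gate = tf.layers.conv2d(tf.concat([aligned_prev, cur_feat], axis=-1),
                            channels, 3, padding="same", activation=tf.sigmoid)
    return gate * aligned_prev + (1.0 - gate) * cur_feat
```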
Our testing environment is:
- TensorFlow == 1.9
- Python 3.6
- NVIDIA GTX 1080Ti
- Download the pretrained checkpoints from WeiYun: https://share.weiyun.com/sEHySs5d
- Test the models on the VID4 dataset:
# test the FFCVSR model
python test_VID4_FFCVSR.py
# test the FFCVSR-motion model
python test_VID4_FFCVSR_motion.py
# compile the CUDA version of inverse_warp to speed up the FFCVSR-motion model (Linux only)
cd custom_op
make
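The CUDA op accelerates the inverse warping used for feature alignment. Conceptually, inverse warping is bilinear sampling of an image or feature map at flow-displaced coordinates, roughly as in the plain-TensorFlow sketch below; the (dx, dy) pixel-displacement flow convention is an assumption, and the compiled op in custom_op is the fast path.

```python
import tensorflow as tf

def inverse_warp_tf(image, flow):
    """Sample `image` at coordinates displaced by `flow`, with bilinear interpolation.

    image: [B, H, W, C] float32; flow: [B, H, W, 2] float32 with (dx, dy) in pixels.
    """
    shape = tf.shape(image)
    b, h, w = shape[0], shape[1], shape[2]

    # Base pixel grid plus flow gives the source sampling coordinates.
    grid_x, grid_y = tf.meshgrid(tf.range(w), tf.range(h))
    grid = tf.cast(tf.stack([grid_x, grid_y], axis=-1), tf.float32)   # [H, W, 2]
    coords = grid[None] + flow                                        # [B, H, W, 2]
    x, y = coords[..., 0], coords[..., 1]

    x0, y0 = tf.floor(x), tf.floor(y)
    x1, y1 = x0 + 1.0, y0 + 1.0

    # Clip sample indices to the image border.
    clip = lambda v, m: tf.clip_by_value(v, 0.0, tf.cast(m - 1, tf.float32))
    x0c, x1c, y0c, y1c = clip(x0, w), clip(x1, w), clip(y0, h), clip(y1, h)

    # Gather the four neighbours for every (batch, y, x) sampling point.
    batch_idx = tf.tile(tf.reshape(tf.range(b), [b, 1, 1]), [1, h, w])
    gather = lambda yy, xx: tf.gather_nd(
        image, tf.stack([batch_idx, tf.cast(yy, tf.int32), tf.cast(xx, tf.int32)], -1))
    top_left, top_right = gather(y0c, x0c), gather(y0c, x1c)
    bot_left, bot_right = gather(y1c, x0c), gather(y1c, x1c)

    # Bilinear weights from the fractional part of the coordinates.
    w_tl = tf.expand_dims((x1 - x) * (y1 - y), -1)
    w_tr = tf.expand_dims((x - x0) * (y1 - y), -1)
    w_bl = tf.expand_dims((x1 - x) * (y - y0), -1)
    w_br = tf.expand_dims((x - x0) * (y - y0), -1)
    return w_tl * top_left + w_tr * top_right + w_bl * bot_left + w_br * bot_right
```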
- Download the REDS dataset (sharp type): https://seungjunnah.github.io/Datasets/reds
- Put the REDS dataset in datasets/REDS
- Generate TFRecords for REDS (a rough sketch of the idea follows below):
python tfrecords/gen_REDS_tfrecords.py
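What the generation script does, roughly: pack each clip's encoded frames into one serialized example. The feature keys and directory layout below are assumptions; tfrecords/gen_REDS_tfrecords.py is the authoritative version.

```python
import glob
import os
import tensorflow as tf

def write_clip(writer, frame_paths):
    # Store the raw encoded PNG bytes of every frame of one clip in one example.
    frames = [open(p, "rb").read() for p in frame_paths]
    example = tf.train.Example(features=tf.train.Features(feature={
        "frames": tf.train.Feature(bytes_list=tf.train.BytesList(value=frames)),
        "num_frames": tf.train.Feature(
            int64_list=tf.train.Int64List(value=[len(frames)])),
    }))
    writer.write(example.SerializeToString())

with tf.python_io.TFRecordWriter("datasets/REDS/train_sharp.tfrecords") as writer:
    for clip_dir in sorted(glob.glob("datasets/REDS/train_sharp/*")):
        write_clip(writer, sorted(glob.glob(os.path.join(clip_dir, "*.png"))))
```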
- Train the FFCVSR-motion model:
python train_REDS_FFCVSR_motion.py
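A matching sketch of how a training script could read those records back with tf.data, using the same assumed feature keys (the real input pipeline in train_REDS_FFCVSR_motion.py may differ):

```python
import tensorflow as tf

def parse_clip(serialized):
    # Decode one serialized clip back into a float32 frame stack.
    features = tf.parse_single_example(serialized, {
        "frames": tf.VarLenFeature(tf.string),
        "num_frames": tf.FixedLenFeature([], tf.int64),
    })
    frames = tf.sparse_tensor_to_dense(features["frames"], default_value="")
    decode = lambda s: tf.image.convert_image_dtype(
        tf.image.decode_png(s, channels=3), tf.float32)
    return tf.map_fn(decode, frames, dtype=tf.float32)   # [num_frames, H, W, 3]

dataset = (tf.data.TFRecordDataset("datasets/REDS/train_sharp.tfrecords")
           .map(parse_clip, num_parallel_calls=4)
           .shuffle(64)
           .repeat())
clip = dataset.make_one_shot_iterator().get_next()
```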
Method | Training Dataset | PSNR | SSIM | Inference Time
---|---|---|---|---
FFCVSR | Internet Videos | 26.97 | 0.815 | 28.4 ms
FFCVSR-motion | REDS sharp | 27.15 | 0.821 | 38.6 ms
[1] @inproceedings{ffcvsr,
      author    = {Bo Yan and Chuming Lin and Weimin Tan},
      title     = {Frame and Feature-Context Video Super-Resolution},
      booktitle = {AAAI},
      year      = {2019}
    }