This is an unofficial re-implementation of the paper Image Inpainting for Irregular Holes Using Partial Convolutions [Liu+, arXiv 2018].
The official implementation is here.
A C++ deployment of this algorithm, built with LibTorch, OpenCV and Qt, has also been uploaded. Check it here, and stars are welcome.
Python 3.7.7+
PyTorch 1.7.0+
pip install -r requirements.txt
The work in this re-implementation includes:

- Partial Convolution Layer (a minimal sketch of the layer follows this list)
- New mask datasets. After training, it was found that the area of the mask influences the inpainting quality, so this re-implementation uses three mask datasets with different area proportions, and three corresponding sets of weights were trained:
  - checkpoint_mask_lightest_16.8.pth
  - checkpoint_mask_light_23.55.pth
  - checkpoint_mask_35.5.pth

  | name | area ratio | with holes |
  | ------------- | ------ | ---------- |
  | mask_lightest | 16.8%  | √          |
  | mask_light    | 23.55% | √          |
  | mask          | 35.5%  | ×          |

- PyTorch weights converted to LibTorch weights
- LibTorch inference implementation in C++17 (this work will be published as a desktop application)
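For reference, here is a minimal sketch of how a partial convolution layer can be implemented in PyTorch: the convolution only sees valid pixels, the output is renormalized by the number of valid pixels in each sliding window, and the mask is updated for the next layer. The class name `PartialConv2d` and its exact normalization details are illustrative and may differ from the code in this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Conv2d):
    """Minimal partial convolution sketch (not the exact layer used in this repo).

    Expects a mask with the same shape as the input x (broadcast a single-channel
    mask across channels if needed); mask value 1 = valid pixel, 0 = hole.
    """
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Fixed all-ones kernel used only to count valid pixels per window.
        self.register_buffer(
            "weight_mask",
            torch.ones(self.out_channels, self.in_channels, *self.kernel_size),
        )
        self.slide_winsize = self.in_channels * self.kernel_size[0] * self.kernel_size[1]

    def forward(self, x, mask):
        with torch.no_grad():
            # Number of valid input pixels covered by each sliding window.
            valid_count = F.conv2d(mask, self.weight_mask, bias=None,
                                   stride=self.stride, padding=self.padding,
                                   dilation=self.dilation)
            updated_mask = torch.clamp(valid_count, 0, 1)
            # Renormalization factor; zero where the window saw no valid pixel.
            ratio = self.slide_winsize / (valid_count + 1e-8)
            ratio = ratio * updated_mask

        out = super().forward(x * mask)            # convolve over valid pixels only
        if self.bias is not None:
            bias = self.bias.view(1, -1, 1, 1)
            out = (out - bias) * ratio + bias      # rescale, but keep the bias unscaled
        else:
            out = out * ratio
        out = out * updated_mask
        return out, updated_mask
```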
On Windows 10, download the pretrained weights (extraction code: jw2x) and clone the repository.
git clone https://github.com/NiceRingNode/PartialConvolution.git
Then change the working directory in cmd,
cd /d PartialConvolution
and run test.py with one of the following commands,
python test.py
or
python test.py --batch_size 8 --pretrained_root "./weights/checkpoint_mask_lightest_16.8.pth" --dataset "mask_lightest"
python test.py --batch_size 8 --pretrained_root "./weights/checkpoint_mask_light_23.55.pth" --dataset "mask_light"
python test.py --batch_size 8 --pretrained_root "./weights/checkpoint_mask_35.5.pth" --dataset "mask"
The inpainting result is saved as result.png in the output folder.
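If you prefer to run inference from Python directly instead of through test.py, the flow is roughly the sketch below. The model class name `PConvUNet`, its `(masked image, mask)` forward signature, and the checkpoint layout are assumptions for illustration; check the actual definitions in this repository.

```python
import torch
from PIL import Image
from torchvision import transforms

# NOTE: `PConvUNet` and the checkpoint key 'model' are assumptions for illustration;
# use the real model class and checkpoint format defined in this repository.
from model import PConvUNet  # hypothetical import path

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

net = PConvUNet().to(device)
ckpt = torch.load("./weights/checkpoint_mask_lightest_16.8.pth", map_location=device)
net.load_state_dict(ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt)
net.eval()

to_tensor = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
img = to_tensor(Image.open("example.jpg").convert("RGB")).unsqueeze(0).to(device)
mask = to_tensor(Image.open("example_mask.png").convert("RGB")).unsqueeze(0).to(device)

with torch.no_grad():
    pred, _ = net(img * mask, mask)            # masked image + mask in, prediction out
    comp = mask * img + (1 - mask) * pred      # composite: keep valid pixels, fill holes
```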
Download the Places2 dataset and put it in the data folder; the directory structure is as follows (the example here uses places365_standard, which can be replaced with any other Places2 dataset):
├─data
│ ├─mask
│ ├─mask_light
│ ├─mask_lightest
│ └─places365_standard
│ ├─train
│ └─val
├─output
├─weights
Then generate the mask datasets; the number of masks defaults to 8000.
python generate_mask.py
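Irregular masks of this kind are typically produced by drawing random strokes on a blank canvas, e.g. with OpenCV. The snippet below is a minimal sketch of that idea (mask value 1 = valid pixel, 0 = hole) and is not necessarily how generate_mask.py works.

```python
import numpy as np
import cv2

def random_irregular_mask(h=256, w=256, strokes=12, max_thickness=18, seed=None):
    """Return a float mask of shape (h, w): 1 = valid pixel, 0 = hole."""
    rng = np.random.default_rng(seed)
    mask = np.ones((h, w), dtype=np.float32)
    for _ in range(strokes):
        # Draw a random thick line as a hole region.
        x1, x2 = rng.integers(0, w, size=2)
        y1, y2 = rng.integers(0, h, size=2)
        thickness = int(rng.integers(4, max_thickness))
        cv2.line(mask, (int(x1), int(y1)), (int(x2), int(y2)), 0.0, thickness)
    return mask

if __name__ == "__main__":
    m = random_irregular_mask(seed=0)
    print("hole area ratio: %.2f%%" % (100.0 * (m == 0).mean()))
    cv2.imwrite("mask_example.png", (m * 255).astype(np.uint8))
```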
Change to the working directory in cmd, then run train.py:
python train.py
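As background, the training objective in the original paper combines per-pixel L1 losses on the hole and valid regions (the hole term weighted 6×) with perceptual, style, and total-variation terms. The sketch below illustrates only the per-pixel and total-variation parts; the perceptual/style terms need a VGG16 feature extractor and are omitted here, and the function names and exact weighting in this repository may differ.

```python
import torch
import torch.nn.functional as F

def pixel_losses(pred, target, mask, w_valid=1.0, w_hole=6.0):
    """L1 losses on valid (mask == 1) and hole (mask == 0) regions,
    with the hole term weighted more heavily, as in the paper."""
    loss_valid = F.l1_loss(mask * pred, mask * target)
    loss_hole = F.l1_loss((1 - mask) * pred, (1 - mask) * target)
    return w_valid * loss_valid + w_hole * loss_hole

def total_variation(comp):
    """Total-variation smoothness penalty on the composited output."""
    tv_h = torch.mean(torch.abs(comp[:, :, 1:, :] - comp[:, :, :-1, :]))
    tv_w = torch.mean(torch.abs(comp[:, :, :, 1:] - comp[:, :, :, :-1]))
    return tv_h + tv_w
```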
Experiments show that the inpainting results are better when there are some small holes inside the masked region.
The following shows the training results using the three kinds of masks with different area proportions. From top to bottom:
masked image, original image, predicted image, composited image, mask.
- mask: shadow area 35.5%, batch_size 8, 175,000 iterations
- mask_light: shadow area 23.55%, batch_size 8, 295,000 iterations (based on the weights pretrained on the 35.5% mask)
- mask_lightest: shadow area 16.8%, batch_size 8, 295,000 iterations (based on the weights pretrained on the 35.5% mask)
At present, only the Python-based re-implementation of the paper is provided here. The LibTorch-based re-implementation is complete and is being deployed on PC as desktop software written in C++; deploying the model on Android will be attempted next.
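For the LibTorch deployment, the usual way to convert PyTorch weights into a form that C++ can load is TorchScript tracing; a minimal sketch is shown below, again assuming a hypothetical `PConvUNet` class with a `(masked image, mask)` forward signature. The resulting .pt file can then be loaded from C++ with `torch::jit::load`.

```python
import torch
from model import PConvUNet  # hypothetical import; use the actual model class in this repo

device = torch.device("cpu")
net = PConvUNet().to(device).eval()
ckpt = torch.load("./weights/checkpoint_mask_lightest_16.8.pth", map_location=device)
net.load_state_dict(ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt)

# Trace with dummy inputs matching the (masked image, mask) signature, then save
# a TorchScript file that torch::jit::load can read from C++.
dummy_img = torch.randn(1, 3, 256, 256)
dummy_mask = torch.ones(1, 3, 256, 256)
traced = torch.jit.trace(net, (dummy_img * dummy_mask, dummy_mask))
traced.save("pconv_traced.pt")
```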