- Installation
- Experiments Settings and Quick Start
- Superpixel Segmentation Demo
- [Download] Trained Models and Benchmark Databases
- Evaluation Metrics
- Motivation
- [Definition] Local Modeling and Non-local Modeling
- [Definition] Global Distortions and Local Distortions
- [Download] Paper and Presentations
- Structure of the Code
- Citation
- Contact
- Acknowledgement
Framework: PyTorch, OpenCV, PIL, scikit-image, scikit-learn, Numba JIT, Matplotlib, etc.
Note: The overall framework is based on PyTorch. A specific pip install -r requirements.txt is not provided because there are many dependencies; I suggest installing the corresponding packages as they are required to run the code.
Experiments Settings: Check this file
✔️ Split the reference images into 60% training, 20% validation, and 20% testing.
✔️ 10 random splits of the reference indices are generated by setting random.seed(random_seed), with the seed (args.exp_id) ranging from 1 to 10; a minimal sketch of the split is given after this list.
✔️ The median SRCC and PLCC on the testing set are reported.
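The following is a minimal sketch of the splitting procedure described above, assuming the reference indices are shuffled with random.seed(args.exp_id); the function name and rounding are illustrative, and main.py remains authoritative:

```python
import random

def split_reference_indices(num_refs: int, exp_id: int):
    """Shuffle the reference-image indices with seed exp_id (1-10) and
    split them 60% / 20% / 20% into train / validation / test sets."""
    random.seed(exp_id)                      # seed taken from args.exp_id
    index = list(range(num_refs))
    random.shuffle(index)
    n_train = int(0.6 * num_refs)
    n_val = int(0.2 * num_refs)
    return (index[:n_train],                 # 60% training
            index[n_train:n_train + n_val],  # 20% validation
            index[n_train + n_val:])         # 20% testing

# Example: TID2013 has 25 reference images.
train_idx, val_idx, test_idx = split_reference_indices(25, exp_id=1)
```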
Quick Start:
python main.py --database_path '/home/jsy/BIQA/' --database TID2013 --batch_size 4 --num_workers 8 --gpu 0
(1) Other hyper-parameters can also be modified via --parameter XXX, e.g., --epochs 200 and --lr 1e-5.
(2) All hyper-parameters can be found in the argument parser in main.py; a sketch of such a parser is given below.
(3) Please change the database path '/home/jsy/BIQA/' to your own path.
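For orientation, here is a hedged sketch of such an argument parser; the flag names follow the Quick Start command above, while the defaults are assumptions rather than the exact values in main.py:

```python
import argparse

# Sketch of the command-line interface (defaults are illustrative).
parser = argparse.ArgumentParser(description='NLNet training (sketch)')
parser.add_argument('--database_path', type=str, default='/home/jsy/BIQA/')
parser.add_argument('--database', type=str, default='TID2013')
parser.add_argument('--batch_size', type=int, default=4)
parser.add_argument('--num_workers', type=int, default=8)
parser.add_argument('--gpu', type=int, default=0)
parser.add_argument('--epochs', type=int, default=200)
parser.add_argument('--lr', type=float, default=1e-5)
parser.add_argument('--exp_id', type=int, default=1)   # random split index, 1-10
args = parser.parse_args()
```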
Experiments Settings: Check this file
✔️ One database is used as the training set, and the other databases are the testing sets.
✔️ The performance of the model at the last epoch (100 epochs in this work) is reported.
Quick Start (Folder: Cross Database Evaluations):
python cross_main.py --database_path '/home/jsy/BIQA/' --train_database TID2013 --test_database CSIQ --num_workers 8 --gpu 0
Quick Start (Folder: Individual Distortion Evaluation):
python TID2013-Single-Distortion.py
(1) Please change the trained model path and the database path to your own paths.
(2) The index of each distortion type can be found in the original papers: TID2013 and KADID.
Quick Start:
python real_testing.py --model_file 'save_model/TID2013-32-4-1.pth' --im_path 'test_images/cr7.jpg' --database TID2013
Please comment out these lines if you don't want to resize the original image; a small illustration of this resize step is shown below.
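This is only an illustration of the resize, assuming a torchvision-style preprocessing (real_testing.py defines the actual transforms; the target size here is an arbitrary example):

```python
from PIL import Image
from torchvision import transforms

image = Image.open('test_images/cr7.jpg').convert('RGB')
image = image.resize((512, 384))    # comment out this line to keep the original resolution
tensor = transforms.ToTensor()(image).unsqueeze(0)
print(tensor.shape)                 # torch.Size([1, 3, 384, 512])
```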
Quick Start (Folder: Superpixel Segmentation):
python superpixel.py
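If you only want a quick look at what superpixel segmentation produces, the following scikit-image SLIC snippet is a rough stand-in (parameters are illustrative; superpixel/slic.py is the implementation used in this work):

```python
from skimage import io, segmentation
import matplotlib.pyplot as plt

image = io.imread('test_images/cr7.jpg')      # sample image shipped with the repository
segments = segmentation.slic(image, n_segments=200, compactness=10, start_label=1)
plt.imshow(segmentation.mark_boundaries(image, segments))
plt.axis('off')
plt.show()
```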
All trained models and benchmark databases are available on 🤗 Hugging Face.
✔️ Trained Models (Intra-Database Experiments): Download here
✔️ Trained Models (Cross-Database Evaluations): Download here
✔️ LIVE, CSIQ, TID2013, and KADID-10k Databases: Download here
(1) Pearson Linear Correlation Coefficient (PLCC): measures the prediction accuracy
(2) Spearman Rank-order Correlation Coefficient (SRCC): measures the prediction monotonicity
✔️ A short note on the IQA evaluation metrics can be downloaded here.
✔️ In the code (evaluation_criteria function), PLCC, SRCC, Kendall Rank-order Correlation Coefficient (KRCC), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Outlier Ratio (OR) are all calculated. In this work, I only compare the PLCC and SRCC among different IQA algorithms.
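A hedged sketch of such a metric computation is given below (the repository's evaluation_criteria function is authoritative; OR is omitted here because it additionally requires the subjective score deviations):

```python
import numpy as np
from scipy import stats

def evaluation_criteria_sketch(pred, mos):
    """Compute PLCC, SRCC, KRCC, RMSE, and MAE between predictions and MOS."""
    pred, mos = np.asarray(pred, dtype=float), np.asarray(mos, dtype=float)
    plcc = stats.pearsonr(pred, mos)[0]       # prediction accuracy
    srcc = stats.spearmanr(pred, mos)[0]      # prediction monotonicity
    krcc = stats.kendalltau(pred, mos)[0]
    rmse = float(np.sqrt(np.mean((pred - mos) ** 2)))
    mae = float(np.mean(np.abs(pred - mos)))
    return plcc, srcc, krcc, rmse, mae
```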
Local Content: The HVS is adaptive to local content.
Long-range Dependency and Relational Modeling: The HVS perceives image quality through long-range dependencies constructed among different regions.
Local Modeling: Local modeling methods encode spatially proximate neighborhoods.
Non-local Modeling: Non-local modeling establishes the spatial integration of information through long- and short-range communications with different spatial weighting functions.
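The toy PyTorch snippet below contrasts the two ideas: a convolution only mixes a small spatial neighborhood, whereas an attention-style non-local operation lets every position exchange information with every other position (illustrative only, not the NLNet architecture):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 32, 32)                       # (batch, channels, H, W)

# Local modeling: each output pixel only sees its 3x3 neighborhood.
local_out = nn.Conv2d(16, 16, kernel_size=3, padding=1)(x)

# Non-local modeling: every position attends to all other positions.
feats = x.flatten(2).transpose(1, 2)                 # (1, H*W, C)
weights = torch.softmax(feats @ feats.transpose(1, 2) / 16 ** 0.5, dim=-1)
non_local_out = weights @ feats                      # long- and short-range mixing
```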
Global Distortions: globally and uniformly distributed distortions with non-local recurrences over the image.
Local Distortions: non-uniformly distributed distortions confined to a local region.
✔️ LIVE Database:
Global Distortions: JPEG, JP2K, WN, and GB
Local Distortions: FF
✔️ CSIQ Database:
Global Distortions: JPEG, JP2K, WN, GB, PN, and CC
Local Distortions: There is no local distortion in CSIQ Database.
✔️ TID2013 Database:
Global Distortions: Additive Gaussian noise, Lossy compression of noisy images, Additive noise in color components, Comfort noise, Contrast change, Change of color saturation, Spatially correlated noise, High frequency noise, Impulse noise, Quantization noise, Gaussian blur, Image denoising, JPEG compression, JPEG 2000 compression, Multiplicative Gaussian noise, Image color quantization with dither, Sparse sampling and reconstruction, Chromatic aberrations, Masked noise, and Mean shift (intensity shift)
Local Distortions: JPEG transmission errors, JPEG 2000 transmission errors, Non-eccentricity pattern noise, and Local block-wise distortions of different intensity
✔️ KADID-10k Database:
Global Distortions: blurs (lens blur, motion blur, and GB), color distortions (color diffusion, color shift, color saturation 1, color saturation 2, and color quantization), compression (JPEG and JP2K), noise (impulse noise, denoise, WN, white noise in color component, and multiplicative noise), brightness change (brighten, darken, and mean shift), spatial distortions (jitter, pixelate, and quantization), and sharpness and contrast (high sharpen and contrast change)
Local Distortions: Color block and Non-eccentricity patch
(1) Thesis can be downloaded here.
(2) Original Paper can be downloaded here.
(3) Detailed Slides Presentation can be downloaded here.
(4) Detailed Slides Presentation with Animations can be downloaded here.
(5) Simple Slides Presentation can be downloaded here.
(6) Poster Presentation can be downloaded here.
(i) Image Preprocessing: The input image is pre-processed. Check this file.
(ii) Graph Neural Network – Non-Local Modeling Method: A two-stage GNN approach is presented for non-local feature extraction and long-range dependency construction among different regions. The first stage aggregates local features inside superpixels. The following stage learns the non-local features and long-range dependencies among the graph nodes, and then integrates short- and long-range information based on an attention mechanism. The means and standard deviations of the non-local features are obtained from the graph feature signals. Check this file.
(iii) Pre-trained VGGNet-16 – Local Modeling Method: Local feature means and standard deviations are derived from the pre-trained VGGNet-16, considering the hierarchical degradation process of the HVS. Check this file.
(iv) Feature Mean & Std Fusion and Quality Prediction: The means and standard deviations of the local and non-local features are fused to deliver a robust and comprehensive representation for quality assessment. Check this file. Besides, the distortion type identification loss is used as an auxiliary objective during training.
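As a rough sketch of step (iv), the snippet below concatenates the means and standard deviations of the local and non-local features and regresses a quality score; the dimensions and layer sizes are assumptions, not the repository's exact network (see network.py):

```python
import torch
import torch.nn as nn

class FusionHeadSketch(nn.Module):
    """Fuse local (VGG) and non-local (graph) feature statistics for quality regression."""
    def __init__(self, local_dim=512, non_local_dim=256, hidden=512):
        super().__init__()
        fused_dim = 2 * (local_dim + non_local_dim)   # means + stds of both branches
        self.regressor = nn.Sequential(
            nn.Linear(fused_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),                     # predicted quality score
        )

    def forward(self, local_mean, local_std, non_local_mean, non_local_std):
        fused = torch.cat([local_mean, local_std, non_local_mean, non_local_std], dim=1)
        return self.regressor(fused)

head = FusionHeadSketch()
score = head(torch.randn(2, 512), torch.randn(2, 512),
             torch.randn(2, 256), torch.randn(2, 256))   # (2, 1)
```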
At the root of the project, you will see:
├── main.py
├── model
│   ├── layers.py
│   ├── network.py
│   └── solver.py
├── superpixel
│   └── slic.py
├── lib
│   ├── image_process.py
│   ├── make_index.py
│   └── utils.py
├── data_process
│   ├── get_data.py
│   └── load_data.py
├── benchmark
│   ├── CSIQ_datainfo.m
│   ├── CSIQfullinfo.mat
│   ├── KADID-10K.mat
│   ├── LIVEfullinfo.mat
│   ├── TID2013fullinfo.mat
│   ├── database.py
│   └── datainfo_maker.m
├── save_model
│   └── README.md
├── test_images
│   └── cr7.jpg
└── real_testing.py
If you find our work useful in your research, please consider citing it in your publications. The BibTeX entries are provided below.
@inproceedings{Jia2022NLNet,
title = {No-reference Image Quality Assessment via Non-local Dependency Modeling},
author = {Jia, Shuyue and Chen, Baoliang and Li, Dingquan and Wang, Shiqi},
booktitle = {2022 IEEE 24th International Workshop on Multimedia Signal Processing (MMSP)},
year = {2022},
month = {Sept.},
pages = {01-06},
doi = {10.1109/MMSP55362.2022.9950035}
}
@article{Jia2022NLNetThesis,
title = {No-reference Image Quality Assessment via Non-local Modeling},
author = {Jia, Shuyue},
journal = {CityU Scholars},
year = {2023},
month = {May},
publisher = {City University of Hong Kong},
url = {https://scholars.cityu.edu.hk/en/theses/noreference-image-quality-assessment-via-nonlocal-modeling(2d1e72fb-2405-43df-aac9-4838b6da1875).html}
}
If you have any questions, please drop me an email at [email protected].
The authors would like to thank Dr. Xuhao Jiang, Dr. Diqi Chen, and Dr. Jupo Ma for helpful discussions and invaluable inspiration. Special appreciation goes to Dr. Dingquan Li, as this code is built upon his (Wa)DIQaM-FR/NR re-implementation.