
Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction (CVPR 2022)


Yuanhao Cai, Jing Lin, Xiaowan Hu, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, and Luc Van Gool

The first two authors contributed equally to this work.

Code and models are coming soon.

News

  • 2022.04.02 : Our follow-up work MST++ won the NTIRE 2022 Spectral Reconstruction Challenge. 🏆
  • 2022.03.02 : Our paper has been accepted by CVPR 2022; code and models are coming soon. 🚀
Reconstruction previews: Scene 2, Scene 3, Scene 4, Scene 7

Abstract: Hyperspectral image (HSI) reconstruction aims to recover the 3D spatial-spectral signal from a 2D measurement in the coded aperture snapshot spectral imaging (CASSI) system. The HSI representations are highly similar and correlated across the spectral dimension. Modeling the inter-spectra interactions is beneficial for HSI reconstruction. However, existing CNN-based methods show limitations in capturing spectral-wise similarity and long-range dependencies. Besides, the HSI information is modulated by a coded aperture (physical mask) in CASSI. Nonetheless, current algorithms have not fully explored the guidance effect of the mask for HSI restoration. In this paper, we propose a novel framework, Mask-guided Spectral-wise Transformer (MST), for HSI reconstruction. Specifically, we present a Spectral-wise Multi-head Self-Attention (S-MSA) that treats each spectral feature as a token and calculates self-attention along the spectral dimension. In addition, we customize a Mask-guided Mechanism (MM) that directs S-MSA to pay attention to spatial regions with high-fidelity spectral representations. Extensive experiments show that our MST significantly outperforms state-of-the-art (SOTA) methods on simulation and real HSI datasets while requiring dramatically lower computational and memory costs.
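
For readers who want a concrete picture of S-MSA, the sketch below is a minimal, unofficial PyTorch illustration of the core idea, not the authors' released code: each of the C spectral feature maps is treated as one token, so attention is computed along the spectral dimension instead of over the H*W spatial positions. The class name SpectralWiseMSA, the per-head learnable temperature, and the layer sizes are illustrative assumptions, and the Mask-guided Mechanism that re-weights the value features with the coded-aperture mask is omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralWiseMSA(nn.Module):
    # Unofficial sketch: self-attention along the spectral (channel) dimension.
    # Each of the C spectral feature maps is one token, so the attention map is
    # (C/heads) x (C/heads) per head instead of (H*W) x (H*W).
    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        # Learnable per-head temperature (an assumption here: tokens are whole
        # feature maps, so a fixed 1/sqrt(d) scale is not obviously appropriate).
        self.temperature = nn.Parameter(torch.ones(heads, 1, 1))
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (B, H, W, C) feature map
        b, h, w, c = x.shape
        x = x.reshape(b, h * w, c)                           # (B, N, C), N = H*W
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        # Split heads and move channels into the token position: (B, heads, C/heads, N)
        q, k, v = (t.reshape(b, h * w, self.heads, -1).permute(0, 2, 3, 1)
                   for t in (q, k, v))
        q = F.normalize(q, dim=-1)                           # normalize each spectral token
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature  # (B, heads, C/heads, C/heads)
        attn = attn.softmax(dim=-1)
        out = attn @ v                                       # (B, heads, C/heads, N)
        out = out.permute(0, 3, 1, 2).reshape(b, h * w, c)   # back to (B, N, C)
        out = self.proj(out)
        return out.reshape(b, h, w, c)

As a rough usage example, SpectralWiseMSA(dim=28, heads=4) applied to a (B, H, W, 28) feature tensor returns a tensor of the same shape; the attention cost scales with the number of spectral channels rather than with H*W, which reflects why spectral-wise attention stays cheap for high-resolution inputs.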


Illustration of MST

Pipeline of MST

Quantitative Results

Main Results of MST

Qualitative Results

Simulation HSI Reconstruction


Real HSI Reconstruction


Citation

If this repo helps you, please consider citing:

@inproceedings{mst,
	title={Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction},
	author={Yuanhao Cai and Jing Lin and Xiaowan Hu and Haoqian Wang and Xin Yuan and Yulun Zhang and Radu Timofte and Luc Van Gool},
	booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
	year={2022}
}
