Title |
SaReGAN: a salient regional generative adversarial network for visible and infrared image fusion |
ID_Doc |
41762 |
Authors |
Gao, ML; Zhou, YN; Zhai, WZ; Zeng, S; Li, QL |
Year |
2023 |
Published |
|
DOI |
10.1007/s11042-023-14393-2 |
Abstract |
Multispectral image fusion plays a crucial role in smart city environment safety. In visible and infrared image fusion, object vanishment after fusion is a key problem that restricts fusion performance. To address this problem, a novel salient regional generative adversarial network (SaReGAN) is presented for visible (VIS) and infrared image fusion. The SaReGAN consists of three parts. In the first part, the salient regions of the infrared image are extracted by a visual saliency map, and the information of these regions is preserved. In the second part, the VIS image, the infrared image, and the salient information are merged thoroughly in the generator to produce a pre-fused image. In the third part, the discriminator attempts to distinguish the pre-fused image from the VIS image, so that the generator learns details from the VIS image through the adversarial mechanism. Experimental results verify that SaReGAN outperforms other state-of-the-art methods in both quantitative and qualitative evaluations.
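The abstract's first stage (extracting salient regions of the infrared image via a visual saliency map and preserving them in the fused result) can be illustrated with a minimal, non-GAN sketch. The saliency function and the threshold below are assumptions for illustration only; the paper does not specify its saliency detector here, and the actual method fuses via a trained generator, not a hard mask.

```python
import numpy as np

def visual_saliency_map(ir: np.ndarray) -> np.ndarray:
    """Toy visual saliency: each pixel's distance from the image's mean
    intensity, normalised to [0, 1]. A stand-in for the (unspecified)
    saliency detector used in SaReGAN."""
    s = np.abs(ir.astype(np.float64) - ir.mean())
    m = s.max()
    return s / m if m > 0 else s

def salient_region_fusion(vis: np.ndarray, ir: np.ndarray,
                          threshold: float = 0.5) -> np.ndarray:
    """Naive saliency-masked fusion: keep infrared pixels inside salient
    regions and visible pixels elsewhere (hypothetical threshold)."""
    mask = (visual_saliency_map(ir) >= threshold).astype(np.float64)
    return mask * ir + (1.0 - mask) * vis

# Synthetic example: a uniform visible image and an infrared image with
# one bright "target" block that should survive fusion.
vis = np.full((4, 4), 100.0)
ir = np.zeros((4, 4))
ir[1:3, 1:3] = 255.0
fused = salient_region_fusion(vis, ir)
```

Here the bright infrared target (the salient region) is carried into the fused image, while the background retains the visible image's values, which is the "object vanishment" failure mode the paper aims to avoid.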
Author Keywords |
Smart city; Image fusion; Visible and infrared image; Generative adversarial network; Salient region |
Index Keywords |
Document Type |
Other |
Open Access |
Source |
Science Citation Index Expanded (SCI-EXPANDED) |
EID |
WOS:000920621000001 |
WoS Category |
Computer Science, Information Systems; Computer Science, Software Engineering; Computer Science, Theory & Methods; Engineering, Electrical & Electronic |
Research Area |
Computer Science; Engineering |
PDF |
|