Journal of Systems Engineering and Electronics ›› 2020, Vol. 31 ›› Issue (3): 447-459. doi: 10.23919/JSEE.2020.000027
• Electronics Technology •

A multi-source image fusion algorithm based on gradient regularized convolution sparse representation

Jian WANG1,2,*, Chunxia QIN1, Xiufei ZHANG1, Ke YANG1, Ping REN1
Received: 2019-10-12
Online: 2020-06-30
Published: 2020-06-30
Contact: Jian WANG
E-mail: jianwang@nwpu.edu.cn; chunxia_qin@163.com; 921391314@qq.com; xgdms_yk@mail.nwpu.edu.cn; 1403147639@mail.nwpu.edu.cn
About the author: WANG Jian was born in 1972. He received his Ph.D. degree in signal and information processing from Northwestern Polytechnical University in 2005. He is now an assistant professor at the School of Electronics and Information Engineering, Northwestern Polytechnical University, Xi'an, China. His current research interests include UAV intelligent processing technology, UAV ground observation video signal processing technology, and multi-source information intelligent processing technology.
Jian WANG, Chunxia QIN, Xiufei ZHANG, Ke YANG, Ping REN. A multi-source image fusion algorithm based on gradient regularized convolution sparse representation[J]. Journal of Systems Engineering and Electronics, 2020, 31(3): 447-459.
Table 1
Average objective evaluation indicators of multi-focus images when the sliding window $r$ is fixed and $\mu$ and $\lambda$ change

| $\mu$ | $\lambda$ | MI | | | PSNR/dB |
|---|---|---|---|---|---|
| 0.1 | 0.1 | 6.753 2 | 0.615 9 | 0.814 6 | 29.326 7 |
| 0.01 | 0.01 | 7.984 9 | 0.726 0 | 0.905 6 | 30.062 4 |
| 0.001 | 0.001 | 8.857 3 | 0.747 5 | 0.966 9 | 32.061 4 |
| 0.000 1 | 0.000 1 | 8.854 4 | 0.747 2 | 0.966 9 | 31.432 5 |
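For reference, the MI column is the summed mutual information between the fused image and each source, and PSNR is reported in dB. Below is a minimal sketch of these two indicators in Python/NumPy; the 256-bin joint histogram and the averaging of PSNR over the two sources are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def mutual_information(x, y, bins=256):
    """I(X; Y) in bits, estimated from a joint grayscale histogram."""
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])))

def fusion_mi(src_a, src_b, fused):
    """MI indicator: information the fused image shares with both sources."""
    return mutual_information(src_a, fused) + mutual_information(src_b, fused)

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB (assumed averaged over sources)."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))
```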
Table 2
Average objective evaluation indicators of multi-focus images when the sliding window $r$ changes and $\mu$ and $\lambda$ are fixed

| $r$ | MI | | | PSNR/dB |
|---|---|---|---|---|
| 9 | 8.476 9 | 0.742 7 | 0.937 5 | 30.321 5 |
| 11 | 8.613 9 | 0.746 2 | 0.942 0 | 30.345 6 |
| 13 | 8.721 5 | 0.747 1 | 0.952 8 | 30.354 6 |
| 15 | 8.766 2 | 0.747 3 | 0.955 8 | 31.253 7 |
| 17 | 8.755 2 | 0.747 1 | 0.957 1 | 31.341 6 |
| 19 | 8.799 8 | 0.747 0 | 0.963 0 | 32.032 4 |
| 21 | 8.834 8 | 0.747 2 | 0.965 9 | 31.036 8 |
| 23 | 8.857 3 | 0.747 5 | 0.966 9 | 32.061 4 |
| 25 | 8.830 7 | 0.746 5 | 0.966 4 | 31.072 5 |
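Table 2 amounts to a one-dimensional sweep of the window size with the regularization weights held at their Table 1 optimum ($\mu = \lambda = 0.001$). A hedged sketch of such a sweep is shown below, reusing the metric helpers sketched after Table 1; fuse_csr() is a hypothetical stand-in for the fusion routine under test, not an identifier from the paper.

```python
# Hypothetical sweep over the sliding-window size r (mu, lambda fixed at 1e-3),
# mirroring Table 2; fuse_csr() is a stand-in for the fusion method under test.
results = {}
for r in range(9, 26, 2):
    fused = fuse_csr(src_a, src_b, r=r, mu=1e-3, lam=1e-3)
    results[r] = {
        "MI": fusion_mi(src_a, src_b, fused),
        "PSNR": 0.5 * (psnr(src_a, fused) + psnr(src_b, fused)),
    }
best_r = max(results, key=lambda r: results[r]["MI"])  # r = 23 in Table 2
```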
Table 3
Objective evaluation indicators of multi-modal medical and infrared-visible images when the sliding window $r$ changes and $\mu$ and $\lambda$ are fixed

| $r$ | MI | | | PSNR/dB |
|---|---|---|---|---|
| 1 | 2.878 3 | 0.732 8 | 0.741 4 | 36.573 9 |
| 3 | 2.901 2 | 0.738 3 | 0.741 0 | 37.660 9 |
| 5 | 2.879 8 | 0.738 1 | 0.738 2 | 35.643 7 |
| 7 | 2.868 0 | 0.726 7 | 0.732 4 | 36.273 2 |
| 9 | 2.841 2 | 0.705 0 | 0.725 0 | 36.641 7 |
| 11 | 2.799 0 | 0.687 3 | 0.719 8 | 35.637 8 |
Table 4
Objective indicators of different methods for multi-focus images

| Source | Index | NSCT | DTCWT | GFF | IM | SR | CVT-SR | NSCT-PCNN | Proposed |
|---|---|---|---|---|---|---|---|---|---|
| Multi-focus image (a) | MI | 7.327 4 | 7.208 7 | 8.052 1 | 8.211 3 | 7.553 5 | 7.920 5 | 7.438 4 | 8.820 2 |
| | | 0.730 3 | 0.720 5 | 0.742 6 | 0.747 6 | 0.736 7 | 0.697 0 | 0.679 2 | 0.795 1 |
| | | 0.925 8 | 0.929 0 | 0.944 6 | 0.910 6 | 0.927 5 | 0.931 6 | 0.923 4 | 0.965 9 |
| | PSNR/dB | 31.942 4 | 31.891 9 | 32.192 1 | 32.047 0 | 32.224 1 | 32.400 1 | 32.555 7 | 32.663 4 |
| Multi-focus image (b) | MI | 7.525 8 | 7.472 3 | 8.297 1 | 8.414 4 | 7.531 3 | 7.736 1 | 7.945 4 | 8.799 7 |
| | | 0.733 1 | 0.730 0 | 0.756 1 | 0.752 2 | 0.744 2 | 0.705 7 | 0.690 1 | 0.750 2 |
| | | 0.945 1 | 0.949 9 | 0.960 6 | 0.907 4 | 0.942 5 | 0.929 6 | 0.937 7 | 0.964 1 |
| | PSNR/dB | 29.621 5 | 29.640 5 | 29.631 8 | 29.612 3 | 29.526 0 | 29.647 8 | 29.841 5 | 29.672 1 |
| Multi-focus image (c) | MI | 6.161 2 | 6.159 1 | 7.670 9 | 7.031 1 | 6.523 9 | 5.913 3 | 5.225 0 | 8.476 4 |
| | | 0.782 1 | 0.781 9 | 0.796 3 | 0.782 9 | 0.791 8 | 0.771 2 | 0.749 7 | 0.827 1 |
| | | 0.958 4 | 0.957 9 | 0.961 7 | 0.955 4 | 0.962 2 | 0.954 8 | 0.942 0 | 0.980 5 |
| | PSNR/dB | 26.516 6 | 26.484 7 | 26.512 2 | 26.497 1 | 26.611 9 | 26.329 1 | 28.056 9 | 28.218 6 |
| Multi-focus image (d) | MI | 8.512 5 | 8.534 3 | 8.678 4 | 7.947 5 | 8.317 6 | 7.882 3 | 7.647 2 | 8.823 9 |
| | | 0.741 1 | 0.740 4 | 0.745 9 | 0.728 8 | 0.743 5 | 0.723 0 | 0.709 8 | 0.731 4 |
| | | 0.974 0 | 0.978 2 | 0.984 0 | 0.962 7 | 0.975 3 | 0.960 7 | 0.917 7 | 0.979 8 |
| | PSNR/dB | 26.340 3 | 26.355 5 | 26.621 5 | 26.343 5 | 26.489 5 | 26.481 2 | 26.623 0 | 26.630 3 |
| Running time | | 23.854 1 | 23.070 5 | 12.507 1 | 21.357 5 | 39.695 4 | 41.642 95 | 39.267 5 | 32.178 9 |
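In CSR-based fusion (Liu et al. [11]), the sources are coded over a shared dictionary and their coefficient maps are merged by a choose-max rule on window-averaged l1 activity; the sliding window $r$ of Tables 1-3 plays exactly this averaging role. A minimal sketch of that rule follows, assuming coefficient maps stored with the filter index on the last axis; the box-filter averaging is one common choice, not necessarily the paper's exact scheme.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_coefficients(coef_a, coef_b, r=23):
    """Choose-max fusion of convolutional coefficient maps.

    Activity = l1 norm across the filter axis, box-averaged over an
    r x r sliding window; at each pixel the source with the larger
    averaged activity contributes its coefficients.
    """
    act_a = uniform_filter(np.abs(coef_a).sum(axis=-1), size=r)
    act_b = uniform_filter(np.abs(coef_b).sum(axis=-1), size=r)
    mask = (act_a >= act_b)[..., None]    # broadcast over the filter axis
    return np.where(mask, coef_a, coef_b)
```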
Table 5
Objective indicators of different methods for multi-modal medical images

| Source | Index | NSCT | DTCWT | GFF | IM | SR | CVT-SR | NSCT-PCNN | Proposed |
|---|---|---|---|---|---|---|---|---|---|
| Multi-modal medical image (e) | MI | 3.594 2 | 3.187 0 | 4.280 7 | 3.710 2 | 3.933 4 | 3.753 0 | 3.301 1 | 4.386 1 |
| | | 0.613 4 | 0.531 9 | 0.655 7 | 0.643 6 | 0.590 8 | 0.617 7 | 0.497 8 | 0.664 1 |
| | | 0.850 8 | 0.743 3 | 0.885 8 | 0.830 5 | 0.873 3 | 0.800 3 | 0.595 8 | 0.929 8 |
| | PSNR/dB | 29.676 0 | 30.065 4 | 30.003 1 | 30.676 4 | 30.849 9 | 30.467 1 | 30.951 8 | 30.830 9 |
| Multi-modal medical image (f) | MI | 3.356 3 | 3.190 0 | 3.600 0 | 3.516 3 | 3.788 6 | 3.610 6 | 4.633 4 | 3.932 8 |
| | | 0.614 4 | 0.546 1 | 0.623 0 | 0.590 4 | 0.606 2 | 0.614 9 | 0.618 7 | 0.645 1 |
| | | 0.794 8 | 0.759 2 | 0.832 5 | 0.821 8 | 0.825 4 | 0.860 2 | 0.898 6 | 0.882 7 |
| | PSNR/dB | 29.127 9 | 29.189 7 | 29.385 4 | 29.004 9 | 29.794 4 | 29.481 0 | 29.405 1 | 29.803 9 |
| Multi-modal medical image (g) | MI | 3.300 0 | 3.184 3 | 3.463 5 | 3.437 7 | 3.497 8 | 3.501 6 | 3.211 1 | 3.959 4 |
| | | 0.555 4 | 0.501 1 | 0.590 5 | 0.601 4 | 0.556 1 | 0.591 1 | 0.459 2 | 0.593 6 |
| | | 0.836 0 | 0.810 4 | 0.870 9 | 0.890 6 | 0.861 2 | 0.831 0 | 0.762 2 | 0.895 0 |
| | PSNR/dB | 26.833 7 | 26.926 9 | 26.596 7 | 27.413 7 | 27.923 2 | 26.904 1 | 27.840 5 | 27.937 0 |
| Multi-modal medical image (h) | MI | 2.227 7 | 1.876 7 | 3.431 3 | 2.448 8 | 2.740 1 | 1.592 8 | 1.282 4 | 2.868 0 |
| | | 0.706 2 | 0.584 3 | 0.778 9 | 0.555 0 | 0.701 4 | 0.544 4 | 0.576 4 | 0.726 7 |
| | | 0.686 7 | 0.613 2 | 0.896 4 | 0.621 9 | 0.709 9 | 0.526 5 | 0.541 9 | 0.732 4 |
| | PSNR/dB | 36.928 9 | 36.988 7 | 37.302 5 | 32.098 6 | 37.320 3 | 37.302 5 | 37.812 7 | 37.660 8 |
| Running time | | 28.949 9 | 29.322 8 | 12.367 41 | 24.354 3 | 36.029 3 | 38.429 1 | 36.726 8 | 31.921 6 |
Table 6
Objective indicators of infrared and visible images with different methods

| Source | Index | NSCT | DTCWT | GFF | IM | SR | CVT-SR | NSCT-PCNN | Proposed |
|---|---|---|---|---|---|---|---|---|---|
| Visible-infrared image (i) | MI | 4.720 6 | 4.645 9 | 5.028 3 | 4.121 6 | 5.092 0 | 5.088 2 | 3.902 1 | 5.391 4 |
| | | 0.723 9 | 0.716 7 | 0.749 0 | 0.701 9 | 0.717 1 | 0.722 0 | 0.618 3 | 0.735 3 |
| | | 0.862 7 | 0.855 1 | 0.914 2 | 0.872 6 | 0.835 1 | 0.798 4 | 0.795 9 | 0.920 6 |
| | PSNR/dB | 31.546 9 | 31.569 8 | 31.417 2 | 31.071 1 | 31.694 7 | 31.694 5 | 32.571 8 | 33.014 2 |
| Visible-infrared image (j) | MI | 3.596 5 | 3.467 4 | 4.470 2 | 3.279 5 | 3.944 1 | 3.904 3 | 3.124 3 | 3.907 9 |
| | | 0.705 0 | 0.675 9 | 0.722 8 | 0.623 0 | 0.701 5 | 0.681 1 | 0.605 2 | 0.759 2 |
| | | 0.878 7 | 0.865 3 | 0.943 3 | 0.841 7 | 0.886 0 | 0.906 0 | 0.847 0 | 0.909 2 |
| | PSNR/dB | 30.342 1 | 30.436 4 | 30.501 9 | 30.209 5 | 31.035 9 | 31.359 8 | 31.488 2 | 31.603 5 |
| Visible-infrared image (k) | MI | 2.553 8 | 2.373 2 | 2.711 0 | 2.612 2 | 2.610 8 | 2.627 6 | 2.136 5 | 2.757 9 |
| | | 0.643 8 | 0.593 2 | 0.664 8 | 0.591 4 | 0.618 5 | 0.640 5 | 0.565 8 | 0.688 1 |
| | | 0.857 7 | 0.821 0 | 0.931 1 | 0.932 7 | 0.849 8 | 0.861 3 | 0.868 8 | 0.983 0 |
| | PSNR/dB | 29.849 8 | 29.904 6 | 30.109 7 | 30.207 8 | 30.366 8 | 30.422 7 | 30.492 9 | 30.326 3 |
| Visible-infrared image (l) | MI | 2.852 6 | 2.772 8 | 2.834 4 | 2.893 4 | 2.879 0 | 2.704 4 | 2.605 7 | 2.981 1 |
| | | 0.740 2 | 0.730 7 | 0.759 9 | 0.705 9 | 0.732 2 | 0.711 3 | 0.627 0 | 0.749 5 |
| | | 0.919 2 | 0.913 1 | 0.909 4 | 0.887 6 | 0.923 6 | 0.880 6 | 0.767 5 | 0.914 1 |
| | PSNR/dB | 26.738 9 | 26.747 7 | 26.800 2 | 26.537 5 | 26.958 9 | 26.800 2 | 27.840 7 | 26.963 0 |
| Running time | | 32.942 9 | 25.325 8 | 12.547 7 | 23.862 4 | 32.188 4 | 34.146 8 | 35.534 4 | 34.128 9 |
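The "Proposed" column rests on gradient-regularized convolutional sparse coding (Wohlberg [30]). A hedged sketch of the coding step is shown below, assuming the SPORCO library's ConvBPDNGradReg solver (sporco.admm.cbpdn); the dictionary D and image s are random stand-ins, and lmbda/mu follow the Table 1 optimum. In the full pipeline, each source's lowpass component would typically be removed first, the coefficient maps of the two sources merged with a rule like the one sketched after Table 4, and the fused image rebuilt with the shared dictionary.

```python
import numpy as np
from sporco.admm import cbpdn

# Stand-ins: a learned 8x8x32 filter bank and one (highpass) source image.
D = np.random.randn(8, 8, 32)
s = np.random.randn(256, 256)

# Gradient-regularized convolutional BPDN: l1 penalty weighted by lmbda,
# gradient penalty on the coefficient maps weighted by mu (Wohlberg [30]).
opt = cbpdn.ConvBPDNGradReg.Options({'Verbose': False, 'MaxMainIter': 200})
solver = cbpdn.ConvBPDNGradReg(D, s, lmbda=1e-3, mu=1e-3, opt=opt)
X = solver.solve()                      # coefficient maps, one per filter
recon = solver.reconstruct().squeeze()  # D * X, for checking fidelity
```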
References

[1] MA J, MA Y, LI C, et al. Infrared and visible image fusion methods and applications: a survey. Information Fusion, 2019, 45: 153-178. doi: 10.1016/j.inffus.2018.02.004
[2] WANG J, YANG K, REN P, et al. Multi-source image fusion algorithm based on fast weighted guided filter. Journal of Systems Engineering and Electronics, 2019, 30(5): 831-840. doi: 10.21629/JSEE.2019.05.02
[3] YANG D, HU S, LIU S Q, et al. Multi-focus image fusion based on block matching in 3D transform domain. Journal of Systems Engineering and Electronics, 2018, 29(2): 415-428. doi: 10.21629/JSEE.2018.02.21
[4] MA J, YU W, LIANG P W, et al. FusionGAN: a generative adversarial network for infrared and visible image fusion. Information Fusion, 2019, 48(8): 11-26.
[5] BLANC P, WALD L, RANCHIN T, et al. Importance and effect of coregistration quality in an example of pixel to pixel fusion process. Proc. of the 2nd International Conference on Fusion of Earth Data, 1998: 67-74.
[6] ZHANG Z, BLUM R S. A hybrid image registration technique for a digital camera image fusion application. Information Fusion, 2001, 2(2): 135-149. doi: 10.1016/S1566-2535(01)00020-3
[7] LI S, KANG X, FANG L, et al. Pixel-level image fusion: a survey of the state of the art. Information Fusion, 2017, 33(1): 100-112.
[8] JIN X, JIANG Q, YAO S. A survey of infrared and visual image fusion methods. Infrared Physics & Technology, 2017, 85(9): 478-501.
[9] ZHANG Q, LIU Y, BLUM R S, et al. Sparse representation based multi-sensor image fusion for multi-focus and multi-modal images. Information Fusion, 2018, 40(3): 57-75.
[10] ABAVISANI M, PATEL V M. Deep sparse representation-based classification. IEEE Signal Processing Letters, 2019, 26(6): 948-952. doi: 10.1109/LSP.2019.2913022
[11] LIU Y, CHEN X, WARD R K, et al. Image fusion with convolutional sparse representation. IEEE Signal Processing Letters, 2016, 23(12): 1882-1886. doi: 10.1109/LSP.2016.2618776
[12] YANG B, LI S. Multifocus image fusion and restoration with sparse representation. IEEE Trans. on Instrumentation and Measurement, 2010, 59(4): 884-892. doi: 10.1109/TIM.2009.2026612
[13] LI S, YIN H, FANG L. Group-sparse representation with dictionary learning for medical image denoising and fusion. IEEE Trans. on Biomedical Engineering, 2012, 59(12): 3450-3459. doi: 10.1109/TBME.2012.2217493
[14] CHEN C, LI Y, LIU W, et al. Image fusion with local spectral consistency and dynamic gradient sparsity. Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2014: 2760-2765.
[15] YANG B, LI S. Pixel-level image fusion with simultaneous orthogonal matching pursuit. Information Fusion, 2012, 13(1): 10-19. doi: 10.1016/j.inffus.2010.04.001
[16] ZHANG Q H, FU Y, LI H F, et al. Dictionary learning method for joint sparse representation-based image fusion. Optical Engineering, 2013, 52(5): 057006. doi: 10.1117/1.OE.52.5.057006
[17] YIN H, LI S, FANG L, et al. Simultaneous image fusion and super-resolution using sparse representation. Information Fusion, 2013, 14(3): 229-240. doi: 10.1016/j.inffus.2012.01.008
[18] LI S T, YIN H T, FANG L Y, et al. Remote sensing image fusion via sparse representations over learned dictionaries. IEEE Trans. on Geoscience and Remote Sensing, 2013, 51(9): 4779-4789. doi: 10.1109/TGRS.2012.2230332
[19] NEJATI M, SAMAVI S. Multi-focus image fusion using dictionary-based sparse representation. Information Fusion, 2015, 25(1): 72-84.
[20] KIM M, HAN D K, KO H, et al. Joint patch clustering-based dictionary learning for multi-modal image fusion. Information Fusion, 2016, 27(1): 198-214.
[21] WANG W, JIAO L, YANG S. Fusion of multispectral and panchromatic images via sparse representation and local autoregressive model. Information Fusion, 2014, 20(1): 73-87.
[22] ZHANG Q, LEVINE M D. Robust multi-focus image fusion using multi-task sparse representation and spatial context. IEEE Trans. on Image Processing, 2016, 25(5): 2045-2058. doi: 10.1109/TIP.2016.2524212
[23] AISHWARYA N, ABIRAMI S, AMUTHA R. Multifocus image fusion using discrete wavelet transform and sparse representation. Proc. of the IEEE International Conference on Wireless Communications, Signal Processing and Networking, 2016: 2377-2382.
[24] RONG C, JIA Y, YANG Y, et al. Fusion of infrared and visible images through a hybrid image decomposition and sparse representation. Proc. of the International Conference on Intelligent Human-Machine Systems and Cybernetics, 2018: 15-26.
[25] RONG C, JIA Y, YANG Y, et al. Fusion of infrared and visible images based on multi-scale edge-preserving decomposition and sparse representation. Proc. of the International Congress on Image and Signal Processing, 2018: 1-9.
[26] GAI D, SHEN X, CHENG H, et al. Medical image fusion via PCNN based on edge preservation and improved sparse representation in NSST domain. IEEE Access, 2019, 7: 85413-85429. doi: 10.1109/ACCESS.2019.2925424
[27] WOHLBERG B. Efficient algorithms for convolutional sparse representations. IEEE Trans. on Image Processing, 2016, 25(1): 301-315.
[28] WOHLBERG B. Convolutional sparse representations as an image model for impulse noise restoration. Proc. of the Image, Video, and Multidimensional Signal Processing Workshop, 2016: 1-5.
[29] LI S, KANG X, HU J W, et al. Image fusion with guided filtering. IEEE Trans. on Image Processing, 2013, 22(7): 2864-2875. doi: 10.1109/TIP.2013.2244222
[30] WOHLBERG B. Convolutional sparse representations with gradient penalties. Proc. of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2017: 15-20.
[31] ZHANG Q, GUO B L. Multifocus image fusion using the nonsubsampled contourlet transform. Signal Processing, 2009, 89(7): 1334-1346. doi: 10.1016/j.sigpro.2009.01.012
[32] YANG W, CHAI Q, WANG L M, et al. Multi-focus image fusion method based on dual-tree complex wavelet transform. Computer Engineering & Applications, 2007, 43(28): 12-14.
[33] HUANG H Y, YANG H. MR image reconstruction via guided filter. Medical & Biological Engineering & Computing, 2018, 56(4): 635-648.
[34] YANG Y, QUE Y, HUANG S, et al. Multiple visual features measurement with gradient domain guided filtering for multi-sensor image fusion. IEEE Trans. on Instrumentation and Measurement, 2017, 66(4): 691-703. doi: 10.1109/TIM.2017.2658098
[35] LI S, KANG X, HU J, et al. Image matting for fusion of multi-focus images in dynamic scenes. Information Fusion, 2013, 14(2): 147-162. doi: 10.1016/j.inffus.2011.07.001
[36] LIU Y, WANG Z. Simultaneous image fusion and denoising with adaptive sparse representation. IET Image Processing, 2014, 9(5): 347-357.
[37] LIU Y, LIU S, WANG Z F, et al. A general framework for image fusion based on multi-scale transform and sparse representation. Information Fusion, 2015, 24(7): 147-164.
[38] NENCINI F, GARZELLI A, BARONTI S, et al. Remote sensing image fusion using the curvelet transform. Information Fusion, 2007, 8(2): 143-156. doi: 10.1016/j.inffus.2006.02.001
[39] QU X, YAN J, XIAO H Z, et al. Image fusion method based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain. Acta Automatica Sinica, 2008, 34(12): 1508-1514. doi: 10.1016/S1874-1029(08)60174-3