NI S P, WANG S J, LI H F, et al. Image super-resolution reconstruction of multi-scale residual dense attention network[J]. Journal of Henan Polytechnic University (Natural Science), 2024, 43(1): 140-148.
doi: 10.16186/j.cnki.1673-9787.2021110080
Funding: National Natural Science Foundation of China (61872126)
Received: 2021/11/19
Revised: 2022/06/06
Published: 2024/01/25
Image super-resolution reconstruction of multi-scale residual dense attention network
NI Shuiping, WANG Shijie, LI Huifang, LI Pengkun
School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo 454000, Henan, China
Abstract: Objective Using a single-scale convolutional network to extract low-resolution (LR) image features causes a large number of high-frequency image features to be lost. In order to obtain more high-frequency features and reconstruct clearer super-resolution images, Methods a single-image super-resolution reconstruction algorithm based on a multi-scale residual dense attention network was proposed. First, a convolutional network was used to extract shallow features from the low-resolution image, and these shallow features served as the input to each level of the subsequent network. Second, the multi-scale residual dense attention block at each level processed the image features from the preceding level and extracted high-frequency image features from them; the multi-scale residual dense structure was good at extracting richer image features, and an attention mechanism was incorporated so that high-frequency region features received more attention. Then, the image features of different depths extracted at the individual levels of the network were combined by global feature fusion. Finally, the fused features were up-sampled to output the reconstructed super-resolution image. Results When the upscaling factor was set to 4, the network was tested on the SET5, SET14, BSDS100, URBAN100 and MANGA109 datasets, and the peak signal-to-noise ratios were 31.97, 28.58, 27.57, 25.85 and 29.79 dB, respectively. The basic feature-extraction module of the network was replaced in turn by the multi-scale residual dense attention block, a residual block and a dense block, and with peak signal-to-noise ratio as the criterion for module performance, the multi-scale residual dense attention block performed best. Conclusion By combining the multi-scale residual dense structure, the network obtained richer high- and low-frequency image information; the incorporated attention mechanism effectively extracted the high-frequency information in the network, and super-resolution images with clearer textures could be reconstructed.
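The article itself does not include code. As a rough illustration of the pipeline the abstract describes (shallow convolutional feature extraction, stacked multi-scale residual dense attention blocks, global feature fusion, and up-sampling), a minimal PyTorch sketch is given below. All class names (ChannelAttention, MSRDAB, MSRDANet), the 3x3/5x5 parallel branches, the squeeze-and-excitation style attention, the PixelShuffle up-sampling and every hyper-parameter are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed form)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Re-weight channels so that high-frequency features receive more attention.
        return x * self.fc(self.pool(x))


class MSRDAB(nn.Module):
    """Hypothetical multi-scale residual dense attention block: parallel
    3x3 / 5x5 convolutions, dense concatenation, channel attention, and
    a local residual connection."""

    def __init__(self, channels: int = 64, growth: int = 32, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.layers.append(nn.ModuleDict({
                "conv3": nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                "conv5": nn.Conv2d(in_ch, growth, kernel_size=5, padding=2),
            }))
            in_ch += 2 * growth  # dense connections grow the channel count
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)  # local feature fusion
        self.attention = ChannelAttention(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = x
        for layer in self.layers:
            multi_scale = torch.cat([layer["conv3"](feats), layer["conv5"](feats)], dim=1)
            feats = torch.cat([feats, self.act(multi_scale)], dim=1)  # dense connection
        return x + self.attention(self.fuse(feats))  # local residual learning


class MSRDANet(nn.Module):
    """Overall pipeline sketched from the abstract: shallow feature
    extraction, stacked blocks, global feature fusion, sub-pixel up-sampling."""

    def __init__(self, channels: int = 64, num_blocks: int = 8, scale: int = 4):
        super().__init__()
        self.shallow = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.blocks = nn.ModuleList(MSRDAB(channels) for _ in range(num_blocks))
        self.global_fusion = nn.Conv2d(channels * num_blocks, channels, kernel_size=1)
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1),
        )

    def forward(self, lr):
        shallow = self.shallow(lr)          # shallow features from the LR image
        feats, level_outputs = shallow, []
        for block in self.blocks:
            feats = block(feats)            # each level refines the previous level's features
            level_outputs.append(feats)
        fused = self.global_fusion(torch.cat(level_outputs, dim=1)) + shallow  # global feature fusion
        return self.upsample(fused)         # up-sample to the super-resolution output


if __name__ == "__main__":
    sr = MSRDANet()(torch.randn(1, 3, 48, 48))
    print(sr.shape)  # torch.Size([1, 3, 192, 192]) for an upscaling factor of 4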
Key words: multi-scale residual; dense attention network; super-resolution reconstruction; attention mechanism; high-frequency region
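For reference, the PSNR values quoted in the Results are computed from the mean squared error between the reconstructed image and its ground truth. The following minimal NumPy sketch implements the standard formula; cropping a border of pixels (often equal to the scale factor) is a common super-resolution evaluation convention assumed here, not necessarily the paper's exact protocol.

```python
import numpy as np


def psnr(sr: np.ndarray, hr: np.ndarray, max_val: float = 255.0, border: int = 4) -> float:
    """Peak signal-to-noise ratio (dB) between a super-resolved image and
    its ground truth. Cropping `border` pixels around the edges is an
    assumed convention, common in SR benchmarks."""
    if border > 0:
        sr = sr[border:-border, border:-border]
        hr = hr[border:-border, border:-border]
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```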