WEI M J, GE Y H, YANG X, et al. Small-object detection in remote sensing images using multi-scale feature fusion [J]. Journal of Henan Polytechnic University (Natural Science), 2025, 44(4): 40-47.

DOI: 10.16186/j.cnki.1673-9787.2024070040

Funding: National Key Research and Development Program of China (2017YFE0135700); Science and Technology Research Project of Higher Education Institutions of Hebei Province (ZD2022102)

Received: 2024/07/09

Revised: 2024/09/10

Published: 2025/06/19

Small-object detection in remote sensing images using multi-scale feature fusion

Wei Mingjun1,2, Ge Yihui1, Yang Xuan1, Liu Yazhi1,2, Li Hui1

1. School of Artificial Intelligence, North China University of Science and Technology, Tangshan 063210, Hebei, China; 2. Hebei Key Laboratory of Industrial Intelligent Perception, Tangshan 063210, Hebei, China

Abstract: Objectives Small objects in remote sensing images offer few discriminative features and are highly susceptible to interference from complex backgrounds, leading to frequent false and missed detections. To improve the detection accuracy of small objects in remote sensing images, a multi-scale feature fusion network is proposed.  Methods First, a sparse attention-guided feature fusion module is designed for the medium-scale feature maps to perform multi-scale feature fusion, enhancing the network's sensitivity to small objects and suppressing background interference. Second, to effectively integrate the contextual information contained in features of different scales and improve small-object localization, a multi-step dilated feature fusion module is built around dilated convolution layers: multiple parallel convolutions with different dilation rates fuse multi-scale feature information across levels, making better use of the semantic information carried by deep features.  Results Quantitative experiments on the NWPU VHR-10, RSOD, and HRSID datasets, including comparisons with current state-of-the-art algorithms, show that the proposed network achieves high detection accuracy on small objects while maintaining or slightly improving accuracy on medium- and large-scale objects, with mAP@50 values of 93.7%, 96.92%, and 92.5% on NWPU VHR-10, RSOD, and HRSID, respectively.  Conclusions The two proposed multi-scale feature fusion strategies, based on attention guidance and dilated convolution, effectively improve the detection accuracy of small objects in remote sensing images.
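For readers who want a concrete picture of the multi-step dilated feature fusion described in the Methods above, the sketch below shows one plausible way such a block could be written in PyTorch. It is not the authors' implementation; the class name, channel counts, and dilation rates (1, 2, 4) are assumptions made purely for illustration.

```python
# Minimal sketch (assumed PyTorch), not the authors' released code: parallel 3x3
# convolutions with different dilation rates approximate the "multi-step dilated
# feature fusion" idea, and a 1x1 convolution merges the branches.
import torch
import torch.nn as nn


class MultiDilationFusion(nn.Module):
    """Fuse context at several receptive-field sizes via parallel dilated convolutions."""

    def __init__(self, in_channels: int, out_channels: int, rates=(1, 2, 4)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding = rate keeps the spatial size unchanged.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        # 1x1 convolution merges the concatenated branch outputs back to out_channels.
        self.merge = nn.Conv2d(out_channels * len(rates), out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Run every branch on the same input and fuse along the channel dimension.
        return self.merge(torch.cat([branch(x) for branch in self.branches], dim=1))


if __name__ == "__main__":
    feat = torch.randn(1, 256, 40, 40)           # a hypothetical mid-scale feature map
    fused = MultiDilationFusion(256, 256)(feat)
    print(fused.shape)                           # torch.Size([1, 256, 40, 40])
```

Parallel branches with increasing dilation rates enlarge the receptive field without downsampling, which is why dilated convolutions are a common way to add the surrounding context needed to localize small objects.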

Key words: small-object detection; multi-scale; remote sensing image; deep learning; feature fusion