DLNet: A Dual-Level Network with Self- and Cross-Attention for High-Resolution Remote Sensing Segmentation

Weijun Meng, Lianlei Shan, Sugang Ma, Dan Liu, Bin Hu

Research output: Contribution to journal › Article › peer-review

11 Scopus citations

Abstract

With advancements in remote sensing technologies, high-resolution imagery has become increasingly accessible, supporting applications in urban planning, environmental monitoring, and precision agriculture. However, semantic segmentation of such imagery remains challenging due to complex spatial structures, fine-grained details, and land cover variations. Existing methods often struggle with ineffective feature representation, suboptimal fusion of global and local information, and high computational costs, limiting segmentation accuracy and efficiency. To address these challenges, we propose the dual-level network (DLNet), an enhanced framework incorporating self-attention and cross-attention mechanisms for improved multi-scale feature extraction and fusion. The self-attention module captures long-range dependencies to enhance contextual understanding, while the cross-attention module facilitates bidirectional interaction between global and local features, improving spatial coherence and segmentation quality. Additionally, DLNet optimizes computational efficiency by balancing feature refinement and memory consumption, making it suitable for large-scale remote sensing applications. Extensive experiments on benchmark datasets, including DeepGlobe and Inria Aerial, demonstrate that DLNet achieves state-of-the-art segmentation accuracy while maintaining computational efficiency. On the DeepGlobe dataset, DLNet achieves a (Formula presented.) mean intersection over union (mIoU), outperforming existing models such as GLNet ((Formula presented.)) and EHSNet ((Formula presented.)), while requiring lower memory (1443 MB) and maintaining a competitive inference speed of 518.3 ms per image. On the Inria Aerial dataset, DLNet attains an mIoU of (Formula presented.), surpassing GLNet ((Formula presented.)) while reducing computational cost and achieving an inference speed of 119.4 ms per image. These results highlight DLNet's effectiveness in achieving precise and efficient segmentation in high-resolution remote sensing imagery.
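The abstract describes self-attention within each feature level and bidirectional cross-attention between global (downsampled) and local (high-resolution patch) features. The sketch below illustrates that general idea only; the module names, dimensions, and wiring are illustrative assumptions and do not reproduce the paper's actual DLNet architecture.

```python
# Minimal sketch of a dual-level self-/cross-attention fusion.
# All names and dimensions are assumptions, not the published DLNet design.
import torch
import torch.nn as nn


class DualLevelFusion(nn.Module):
    """Self-attention on each branch, then bidirectional cross-attention
    between global (downsampled) and local (cropped) feature tokens."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.self_attn_global = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.self_attn_local = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_local_to_global = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_global_to_local = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, global_tokens: torch.Tensor, local_tokens: torch.Tensor):
        # Inputs: (batch, num_tokens, dim) flattened feature maps.
        g, _ = self.self_attn_global(global_tokens, global_tokens, global_tokens)
        l, _ = self.self_attn_local(local_tokens, local_tokens, local_tokens)
        # Bidirectional interaction: local queries attend to global context and vice versa.
        l2g, _ = self.cross_local_to_global(l, g, g)
        g2l, _ = self.cross_global_to_local(g, l, l)
        return self.norm(g + g2l), self.norm(l + l2g)


if __name__ == "__main__":
    fusion = DualLevelFusion()
    g = torch.randn(1, 64 * 64, 256)   # downsampled global feature tokens
    l = torch.randn(1, 32 * 32, 256)   # high-resolution local patch tokens
    g_out, l_out = fusion(g, l)
    print(g_out.shape, l_out.shape)
```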

Original language: English
Article number: 1119
Journal: Remote Sensing
Volume: 17
Issue number: 7
DOIs
State: Published - Apr 2025

Keywords

  • cross-attention
  • high-resolution imagery
  • remote sensing
  • self-attention
  • semantic segmentation
