Edge-Guided Feature Fusion Network for RGB-T Salient Object Detection
2024

Sample size: 5000 | Publication | Evidence: high

Author Information

Author(s): Chen Yuanlin, Sun Zengbao, Yan Cheng, Zhao Ming

Primary Institution: Shanghai Maritime University

Hypothesis

The proposed Edge-Guided Feature Fusion Network (EGFF-Net) will improve RGB-T salient object detection by effectively integrating cross-modal information and enhancing edge features.

Conclusion

The EGFF-Net outperforms existing methods in RGB-T salient object detection by effectively integrating cross-modal information and refining object boundaries.

Supporting Evidence

  • The proposed method achieved superior performance on benchmark datasets compared to state-of-the-art methods.
  • EGFF-Net effectively suppresses background noise and enhances salient object boundaries.
  • The model demonstrated robustness in various challenging scenarios, including cluttered backgrounds and low-light conditions.

Takeaway

This study created a new way to find important parts of images using both regular and thermal pictures, making it better at spotting things even in messy backgrounds.

Methodology

The study used a dual-input, end-to-end network with three stages: cross-modal feature extraction, edge-guided feature fusion, and saliency map prediction.
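The three-stage pipeline can be sketched as plain Python. This is a toy illustration of the data flow only, not the authors' network: every function name and the simple gradient-based edge cue are assumptions standing in for learned CNN components.

```python
# Hypothetical sketch of the EGFF-Net data flow: two modality streams,
# an edge-guided fusion step, and a saliency map. The arithmetic below
# is illustrative; the actual model uses learned convolutional features.

def extract_features(image):
    # Stand-in for a backbone: normalize pixel values to [0, 1].
    peak = max(max(row) for row in image) or 1
    return [[v / peak for v in row] for row in image]

def edge_map(features):
    # Toy horizontal-gradient "edge" cue; the real network learns edge features.
    return [
        [abs(row[i + 1] - row[i]) for i in range(len(row) - 1)] + [0.0]
        for row in features
    ]

def edge_guided_fusion(rgb_feat, thermal_feat):
    # Average the two modalities, then re-weight by the edge cue so that
    # boundary regions are emphasized in the fused representation.
    fused = [
        [(r + t) / 2 for r, t in zip(rr, tr)]
        for rr, tr in zip(rgb_feat, thermal_feat)
    ]
    edges = edge_map(fused)
    return [
        [f * (1 + e) for f, e in zip(fr, er)]
        for fr, er in zip(fused, edges)
    ]

def predict_saliency(fused, threshold=0.5):
    # Binarize the fused response into a saliency map.
    return [[1 if v > threshold else 0 for v in row] for row in fused]

# Tiny example: a bright object on the right edge of both modalities.
rgb = [[10, 10, 200], [10, 10, 200]]
thermal = [[0, 0, 255], [0, 0, 255]]
saliency = predict_saliency(
    edge_guided_fusion(extract_features(rgb), extract_features(thermal))
)
# saliency marks the bright right-hand column: [[0, 0, 1], [0, 0, 1]]
```

The key design point mirrored here is that fusion happens before prediction, with the edge cue modulating the fused features so object boundaries survive into the final map.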

Limitations

The model's computational complexity may hinder real-time applications, and its effectiveness may decrease in scenarios with ambiguous edges or extreme occlusion.

Digital Object Identifier (DOI)

10.3389/fnbot.2024.1489658
