Fusion of Visible and Infrared Aerial Images from Uncalibrated Sensors Using Wavelet Decomposition and Deep Learning
2024

Type: publication · Reading time: 10 minutes · Evidence: moderate

Author Information

Author(s): Chandrakanth Vipparla, Timothy Krock, Koundinya Nouduri, Joshua Fraser, Hadi AliAkbarpour, Vasit Sagan, Jing-Ru C. Cheng, Kannappan Palaniappan

Primary Institution: University of Missouri, Columbia, MO, USA

Hypothesis

Can a novel end-to-end pipeline effectively register and fuse visible and infrared aerial images captured by uncalibrated sensors?

Conclusion

The proposed DeepFusion pipeline successfully registers and fuses visible and infrared images, improving scene understanding in challenging conditions.

Supporting Evidence

  • DeepFusion improves image registration and fusion performance compared to classical methods.
  • The proposed wavelet spectral decomposition method effectively extracts relevant features for image matching.
  • Keypoint-based analysis shows that DeepFusion retains more original information compared to existing methods.
  • Experiments demonstrate the effectiveness of the pipeline across various datasets and conditions.

Takeaway

This study created a new way to combine pictures taken in visible light and infrared, helping us see better in tough conditions like bad weather.

Methodology

The study developed an end-to-end pipeline called DeepFusion that uses wavelet spectral decomposition to extract features for registering the visible and infrared images, followed by a deep neural network that fuses the aligned images.
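The paper's specific wavelet spectral decomposition is not reproduced in this summary. As a rough illustration of the general technique, the sketch below performs a one-level 2D Haar wavelet decomposition (unnormalized average/difference variant), splitting a grayscale image into an approximation sub-band (LL) and three detail sub-bands; in registration pipelines, such sub-bands are a common source of matching features. The function name and implementation are illustrative, not the authors' code.

```python
def haar_dwt2(image):
    """One-level 2D Haar wavelet decomposition (illustrative sketch).

    `image` is a list of equal-length rows with even dimensions.
    Returns four sub-bands: LL (approximation) plus three detail
    sub-bands from the row/column high-pass combinations.
    """
    def haar_1d(seq):
        # Pairwise averages (low-pass) and differences (high-pass).
        lo = [(seq[i] + seq[i + 1]) / 2 for i in range(0, len(seq), 2)]
        hi = [(seq[i] - seq[i + 1]) / 2 for i in range(0, len(seq), 2)]
        return lo, hi

    def cols(mat):
        # Transpose a list-of-rows matrix into a list of columns.
        return list(zip(*mat))

    def transform_cols(mat):
        # Apply the 1D Haar step down each column of `mat`.
        lo_cols, hi_cols = zip(*(haar_1d(c) for c in cols(mat)))
        return cols(lo_cols), cols(hi_cols)

    # Transform along rows first, then along columns of each half.
    lo_rows, hi_rows = zip(*(haar_1d(r) for r in image))
    ll, lh = transform_cols(lo_rows)   # low-low, low-high
    hl, hh = transform_cols(hi_rows)   # high-low, high-high
    return ll, lh, hl, hh
```

Applying `haar_dwt2` recursively to the LL sub-band yields a multi-level decomposition, which is the usual way wavelet pipelines separate coarse structure from fine detail.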

Limitations

The method relies on the quality of input images and may not generalize well to all scenarios due to variations in sensor characteristics.

Digital Object Identifier (DOI)

10.3390/s24248217