DreamOn: a data augmentation strategy to narrow the robustness gap between expert radiologists and deep learning classifiers
2024

DreamOn: A New Method to Improve Deep Learning in Medical Imaging

Sample size: 780 · Evidence level: moderate

Author Information

Author(s): Luc Lerch, Lukas S. Huber, Amith Kamath, Alexander Pöllinger, Aurélie Pahud de Mortanges, Verena C. Obmann, Florian Dammann, Walter Senn, Mauricio Reyes

Primary Institution: University of Bern, Bern, Switzerland

Hypothesis

Does the DreamOn data augmentation strategy enhance the robustness of deep learning models in medical imaging compared to traditional methods?

Conclusion

The DreamOn data augmentation method significantly improves the robustness of deep learning models against noise in medical imaging, although human radiologists still outperform these models in high-noise conditions.

Supporting Evidence

  • DreamOn significantly improved model robustness in high-noise conditions compared to other augmentation methods.
  • Radiologists outperformed all models in high-noise settings, indicating a gap in robustness.
  • DreamOn maintained performance significantly above chance across nearly all noise levels.
  • Combining DreamOn with standard data augmentation did not improve robustness and may have hindered performance.

Takeaway

The study shows that a new way of creating training images, inspired by dreams, helps computers better understand medical images, especially when they are noisy.

Methodology

The study evaluated several data augmentation strategies on a ResNet-18 model trained on breast ultrasound images, comparing model performance against four human radiologists across increasing levels of image noise.
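The robustness evaluation described above can be sketched as a simple loop: corrupt the test images at increasing noise levels and record classification accuracy at each level. The sketch below is illustrative only, not the authors' code; the `add_gaussian_noise` and `robustness_curve` helpers and the toy threshold classifier are hypothetical stand-ins (the study used a trained ResNet-18 on ultrasound images).

```python
import numpy as np

def add_gaussian_noise(image, sigma, rng):
    """Corrupt a [0, 1]-scaled image with additive Gaussian noise of std `sigma`."""
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

def robustness_curve(predict, images, labels, sigmas, seed=0):
    """Accuracy of `predict` (batch -> class ids) at each noise level in `sigmas`."""
    rng = np.random.default_rng(seed)
    accuracies = []
    for sigma in sigmas:
        noisy = np.stack([add_gaussian_noise(img, sigma, rng) for img in images])
        preds = predict(noisy)
        accuracies.append(float(np.mean(preds == labels)))
    return accuracies

# Toy stand-in classifier: thresholds mean intensity (not a real ResNet-18).
demo_predict = lambda batch: (batch.mean(axis=(1, 2)) > 0.5).astype(int)

images = np.concatenate([
    np.full((20, 8, 8), 0.2),  # class 0: dark images
    np.full((20, 8, 8), 0.8),  # class 1: bright images
])
labels = np.array([0] * 20 + [1] * 20)
curve = robustness_curve(demo_predict, images, labels, sigmas=[0.0, 0.5, 2.0])
```

A real evaluation would substitute the trained model's prediction function for `demo_predict` and plot `curve` against the noise levels, which is the kind of comparison the study makes between augmentation strategies and radiologists.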

Limitations

The study only used one deep learning architecture and one medical dataset, which may limit the generalizability of the findings.

Participant Demographics

The study involved 4 trained radiologists, 2 female and 2 male, with a median experience of 18 years.

Statistical Information

P-Value

0.551

Statistical Significance

p<0.05

Digital Object Identifier (DOI)

10.3389/fradi.2024.1420545
