Detecting Road Markings Using Inverse Perspective Mapping
Author Information
Author(s): Eric Hsueh-Chan Lu, Yi-Chun Hsieh
Primary Institution: National Cheng Kung University
Hypothesis
Can Inverse Perspective Mapping improve the detection of road markings for autonomous vehicles?
Conclusion
The study found that using Inverse Perspective Mapping significantly improved the accuracy of road marking detection.
Supporting Evidence
- The model's mean Average Precision (mAP) improved from 60.04% to 78.66% after applying Inverse Perspective Mapping.
- Data augmentation increased the dataset from 2,785 to 13,925 images (a fivefold expansion).
- The study utilized a mixed dataset for training, which included images from both virtual and real-world sources.
- Testing on the Taiwan road scene dataset showed a significant improvement in detection accuracy.
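The reported growth from 2,785 to 13,925 images is exactly a fivefold expansion, consistent with generating four augmented variants per original image. A minimal sketch of such a pipeline, assuming illustrative transforms (horizontal flip, brightness shifts, noise) that are not taken from the paper:

```python
import numpy as np

def augment(image):
    """Return four augmented variants of one image.
    The specific transforms here are assumptions for illustration."""
    rng = np.random.default_rng(0)
    return [
        image[:, ::-1],                                          # horizontal flip
        np.clip(image * 1.2, 0, 255),                            # brighter
        np.clip(image * 0.8, 0, 255),                            # darker
        np.clip(image + rng.normal(0, 5, image.shape), 0, 255),  # additive noise
    ]

originals = 2785
# Each original is kept alongside its 4 variants.
variants_per_image = 1 + len(augment(np.zeros((4, 4))))
total = originals * variants_per_image  # 13925, matching the reported size
```

Any set of four label-preserving transforms would yield the same count; the point is only that the reported numbers imply one original plus four variants per image.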
Takeaway
This study shows that transforming road images into a bird's-eye view helps vehicles detect road markings more reliably, especially those far away.
Methodology
The study used a combination of virtual and open datasets, data augmentation, and Inverse Perspective Mapping to train a Mask R-CNN model for road marking detection.
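Inverse Perspective Mapping warps the camera view into a bird's-eye view with a 3x3 homography, so lane markings that converge toward the vanishing point become parallel and uniformly scaled. A minimal NumPy sketch using the direct linear transform; the four point correspondences are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 homography H mapping src points to dst points
    (4 correspondences, direct linear transform)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The null space of A (via SVD) gives H up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def warp_point(H, p):
    """Apply H to a 2-D point using homogeneous coordinates."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return x / w, y / w

# Illustrative correspondences: a road trapezoid in the camera image
# mapped to a rectangle in the bird's-eye view (values are assumptions).
src = [(250, 400), (390, 400), (620, 600), (20, 600)]
dst = [(100, 0), (300, 0), (300, 400), (100, 400)]
H = homography(src, dst)
```

In practice the same H is applied to every pixel (e.g. with an image-warping routine) before the warped image is fed to the detector; warping the whole image rather than individual points is what gives distant markings a consistent scale.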
Potential Biases
Results may be biased by the reliance on specific datasets and by the manual labeling process.
Limitations
The model's performance may degrade under different camera perspectives and for small objects at a distance.
Statistical Information
Statistical Significance
p<0.05