XAI GNSS—A Comprehensive Study on Signal Quality Assessment of GNSS Disruptions Using Explainable AI Technique
2024

Assessing GNSS Signal Quality with Explainable AI

Sample size: 32,676 · Reading time: 10 minutes · Evidence: high

Author Information

Author(s): Elango Arul, Landry Rene Jr.

Primary Institution: Vignan’s Foundation for Science, Technology and Research

Hypothesis

Can explainable AI techniques improve the classification of GNSS signal disruptions caused by jamming and spoofing?

Conclusion

The study found that using explainable AI models significantly enhances the classification accuracy of GNSS signal disruptions compared to traditional methods.

Supporting Evidence

  • Applying SHAP and LIME explainability techniques improved classification accuracy in signal prediction.
  • Statistical analysis indicated that frequency domain features provided more reliable information for disruption classification.
  • Machine learning models combined with explainable AI techniques enhanced understanding of signal behavior.
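To make the SHAP idea above concrete, here is a minimal sketch of exact Shapley-value attribution for a toy linear "disruption score" over three signal features. The feature names, weights, and values are illustrative assumptions, not taken from the paper; real SHAP libraries approximate this coalition enumeration for larger models.

```python
from itertools import combinations
from math import factorial

# Toy linear "model" scoring disruption from three hypothetical signal
# features (names and weights are illustrative, not from the study).
WEIGHTS = {"cn0": 0.8, "agc": -0.5, "doppler": 0.3}

def model(x):
    return sum(WEIGHTS[f] * x[f] for f in WEIGHTS)

def shapley_values(x, baseline):
    """Exact Shapley attributions by enumerating every feature coalition.
    Features outside a coalition are held at their baseline value."""
    feats = list(WEIGHTS)
    n = len(feats)
    phi = {}
    for i in feats:
        others = [f for f in feats if f != i]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = {f: x[f] if (f in subset or f == i) else baseline[f]
                          for f in feats}
                without_i = {f: x[f] if f in subset else baseline[f]
                             for f in feats}
                total += weight * (model(with_i) - model(without_i))
        phi[i] = total
    return phi

x = {"cn0": 45.0, "agc": 2.0, "doppler": 1.5}       # observed window
base = {"cn0": 40.0, "agc": 1.0, "doppler": 0.0}    # nominal baseline
phi = shapley_values(x, base)
# Attributions sum to model(x) - model(base), the efficiency property
# that makes SHAP explanations additive per prediction.
```

For a linear model each attribution collapses to w_i * (x_i - baseline_i), which is a useful sanity check when validating an explainer.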

Takeaway

In plain terms: the study shows how explainable AI programs can help us better understand and diagnose GPS signal problems, such as jamming and spoofing, when they occur.

Methodology

The study recorded GNSS signals under various disruptions and analyzed time and frequency domain features using machine learning and explainable AI techniques.
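As a minimal sketch of the frequency-domain side of this analysis (the signals, rates, and thresholds here are illustrative assumptions, not the paper's data), a naive DFT can expose a narrowband jamming tone as an extra spectral line that is absent from a clean window:

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive DFT magnitude spectrum; fine for short analysis windows."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def dominant_bin(samples):
    """Index of the strongest non-DC spectral bin."""
    mags = dft_magnitudes(samples)
    return max(range(1, len(mags)), key=lambda k: mags[k])

fs = 128                              # sample rate in Hz (illustrative)
t = [i / fs for i in range(fs)]       # one-second window
clean = [math.sin(2 * math.pi * 5 * x) for x in t]        # 5 Hz "carrier"
jammed = [c + 0.8 * math.sin(2 * math.pi * 30 * x)        # 30 Hz "jammer"
          for c, x in zip(clean, t)]

clean_mags = dft_magnitudes(clean)
jam_mags = dft_magnitudes(jammed)
# With a 1 s window at 128 Hz, bin k corresponds to k Hz, so the jammed
# window shows a second strong line at bin 30 that the clean window lacks.
```

Features derived this way (dominant bin, secondary-peak power, spectral flatness) are the kind of frequency-domain inputs a classifier with SHAP/LIME explanations can then rank for reliability.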

Potential Biases

Potential bias in model predictions due to reliance on specific features that may not generalize across all signal types.

Limitations

The study primarily used simulated data, which may not fully represent real-world GNSS signal conditions.

Statistical Information

P-Value

p<0.05

Statistical Significance

Results reported as statistically significant at p < 0.05

Digital Object Identifier (DOI)

10.3390/s24248039
