A New Model for Detecting Insider Threats Using AI
Author Information
Author(s): Kotb Hazem M., Gaber Tarek, AlJanah Salem, Zawbaa Hossam M., Alkhathami Mohammed
Primary Institution: Imam Mohammad Ibn Saud Islamic University
Hypothesis
Can a novel deep feature synthesis-based model effectively detect malicious insiders and AI-generated threats?
Conclusion
The DS-IID model achieved 97% accuracy and an AUC of 0.99 in identifying malicious users.
Supporting Evidence
- The DS-IID model achieved 97% accuracy in detecting malicious insiders.
- It effectively distinguishes between real and AI-generated user profiles.
- The model uses deep feature synthesis to automate user profile generation.
- High performance was demonstrated on the CERT insider threat dataset.
- The model addresses challenges posed by generative AI in cybersecurity.
Takeaway
This study developed a system that distinguishes genuine users from malicious insiders and AI-generated fake profiles within an organization's systems.
Methodology
The study used deep feature synthesis to automatically build per-user profiles from raw event data, then classified each profile as benign or malicious with a binary deep learning model.
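The pipeline described above, aggregating raw event logs into fixed-length per-user feature profiles before binary classification, can be sketched roughly as follows. The event fields and the specific aggregations (counts, off-hours ratio) are illustrative assumptions for this sketch, not the paper's exact feature set:

```python
from collections import defaultdict

# Illustrative event log: (user, event_type, hour_of_day) tuples.
# Field names and feature choices are assumptions, not the paper's schema.
events = [
    ("alice", "logon", 9), ("alice", "email", 10), ("alice", "logoff", 17),
    ("bob", "logon", 2), ("bob", "file_copy", 3), ("bob", "logon", 23),
]

def synthesize_profile(user_events):
    """Aggregate one user's raw events into a fixed-length feature profile,
    in the spirit of deep feature synthesis (counts, ratios per entity)."""
    total = len(user_events)
    off_hours = sum(1 for _, hour in user_events if hour < 6 or hour > 20)
    by_type = defaultdict(int)
    for etype, _ in user_events:
        by_type[etype] += 1
    return {
        "event_count": total,
        "off_hours_ratio": off_hours / total,
        "file_copy_count": by_type["file_copy"],
    }

# Group events by user, then synthesize one profile vector per user.
per_user = defaultdict(list)
for user, etype, hour in events:
    per_user[user].append((etype, hour))

profiles = {user: synthesize_profile(evts) for user, evts in per_user.items()}
```

Each resulting profile would then be fed to the binary classifier as one training example, with the user's benign/malicious label as the target.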
Potential Biases
Potential bias due to reliance on synthetic data for training and evaluation.
Limitations
The model was primarily evaluated on synthetic datasets, which may not fully represent real-world scenarios.
Participant Demographics
The dataset included 1000 employees, with 930 normal users and 70 involved in malicious activities.
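With only 70 of 1,000 users labeled malicious (7%), the classes are heavily imbalanced. A common remedy, assumed here since this summary does not state the paper's exact handling, is inverse-frequency class weighting in the training loss:

```python
# Class counts from the dataset described above.
counts = {"normal": 930, "malicious": 70}
total = sum(counts.values())
n_classes = len(counts)

# Standard inverse-frequency weights: total / (n_classes * count).
# The rare malicious class receives a proportionally larger weight,
# so misclassifying an insider costs more during training.
weights = {label: total / (n_classes * c) for label, c in counts.items()}
```

Here the malicious class ends up weighted roughly 13x more heavily than the normal class, counteracting the 930:70 skew.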