Document Type
Article
Keywords
Explainable AI (XAI), machine learning, botnet IDS, IoT, random forest
Abstract
The proliferation of IoT networks has necessitated botnet intrusion detection systems that go beyond conventional security measures. This research addresses the critical challenge of "black box" machine learning models in network security by integrating explainable AI (XAI) with hybrid learning approaches. We developed and evaluated three hybrid ML classifiers (XGBoost, decision tree, and random forest) on the UNSW-NB15 dataset to distinguish benign from malicious network traffic. The performance metrics demonstrated that our classifier-comparison methodology effectively enhances botnet detection within organizational network streams. By applying XAI techniques through the Scikit-Learn, LIME, ELI5, and SHAP libraries, we transformed opaque ML models into interpretable systems with clear decision rationales. The results confirm that XAI integration is both feasible and beneficial, giving network security professionals transparent insight into threat detection decisions while maintaining high detection performance. This research bridges the gap between advanced ML detection capabilities and the interpretability required for practical security deployments.
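The workflow the abstract describes, training a tree-ensemble classifier on network-traffic features and then extracting an interpretability signal, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: synthetic data stands in for the UNSW-NB15 features, and scikit-learn's built-in impurity-based feature importances serve as the simplest interpretability output (the SHAP, LIME, and ELI5 libraries named in the abstract layer per-prediction explanations on top of such a model).

```python
# Sketch: random-forest traffic classifier plus a basic global
# interpretability signal. Synthetic data is a stand-in for
# UNSW-NB15; feature indices here are illustrative, not the
# dataset's real schema.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Binary task: 0 = benign traffic, 1 = botnet traffic (synthetic).
X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=6, random_state=42)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

acc = accuracy_score(y_test, clf.predict(X_test))
print(f"accuracy: {acc:.3f}")

# Impurity-based importances give a model-level view of which
# features drive the benign/malicious decision; SHAP and LIME
# refine this into per-sample rationales for individual alerts.
for i, imp in enumerate(clf.feature_importances_):
    print(f"feature_{i}: {imp:.3f}")
```

In practice one would replace the synthetic matrix with the preprocessed UNSW-NB15 feature table and pass the fitted model to an explainer (e.g. a SHAP tree explainer) to obtain per-flow attributions.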
How to Cite This Article
Mohammed, Shatha J. and Nema, Bashar M. (2025) "Threat Detection Based on Explainable AI (XAI) and Hybrid Learning," Mesopotamian Journal of CyberSecurity: Vol. 5: Iss. 2, Article 10.
DOI: https://doi.org/10.58496/MJCS/2025/029
Available at: https://map.researchcommons.org/mjcs/vol5/iss2/10