Document Type

Article

Keywords

Explainable AI (XAI), Machine Learning, Botnet IDS, IoT, Random forest

Abstract

The proliferation of IoT networks has necessitated advanced botnet intrusion detection systems beyond conventional security measures. This research addresses the critical challenge of "black box" machine learning models in network security by integrating explainable AI (XAI) with hybrid learning approaches. We developed and evaluated three hybrid ML classifiers—XGBoost, decision tree, and random forest—using the UNSW-NB15 dataset to distinguish between benign and malicious network traffic patterns. The performance metrics demonstrated that our classifier-comparison methodology effectively enhanced botnet detection capabilities within organizational network streams. By applying XAI techniques through the scikit-learn, LIME, ELI5, and SHAP libraries, we transformed opaque ML models into interpretable systems with clear decision rationales. The results confirm that XAI integration is both feasible and beneficial, offering network security professionals transparent insight into threat detection decisions while maintaining high performance. This research bridges the gap between advanced ML detection capabilities and the interpretability requirements essential for practical security deployments.
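To illustrate the kind of workflow the abstract describes, the sketch below trains a random forest classifier and then ranks features by a model-agnostic explanation technique (permutation importance from scikit-learn, used here as a stand-in for the SHAP/LIME/ELI5 analyses the paper performs). The data is synthetic: the feature values, the number of features, and the label rule are all assumptions, not the UNSW-NB15 dataset itself.

```python
# Hypothetical sketch only: synthetic data stands in for UNSW-NB15,
# and permutation importance stands in for the paper's SHAP/LIME/ELI5 analyses.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_flows = 1000
# Four synthetic flow features (e.g., duration, byte counts) -- assumed names.
X = rng.normal(size=(n_flows, 4))
# Labels driven mostly by feature 0, mimicking one dominant botnet indicator.
y = (X[:, 0] + 0.1 * rng.normal(size=n_flows) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Permutation importance: shuffle each feature on held-out data and measure
# how much accuracy drops; larger drops indicate more influential features.
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Because the synthetic labels depend almost entirely on feature 0, it should rank first; on real traffic data the same procedure surfaces which flow attributes drive a botnet verdict, which is the transparency the abstract argues for.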
