21st AIAI 2025, 26 - 29 June 2025, Limassol, Cyprus

Explainable AI-Driven Feature Selection for Improved Intrusion Detection Systems in the Internet of Medical Things

Yacoubi Mohammed, Moussaoui Omar, Drocourt Cyril

Abstract:

  Explainability and evaluation of AI models are crucial in cybersecurity. This work tackles feature selection in Intrusion Detection Systems (IDS), leveraging Explainable AI (XAI) techniques to enhance interpretability and performance. We propose XAI-based methods using SHAP and LIME to identify key features and improve model transparency. Initially, using the full feature set, we achieved 99.87% accuracy with Random Forest, 95.02% with CatBoost, 99.74% with LightGBM, and 99.80% with XGBoost. However, this high-dimensional feature space increased model complexity and training time. To optimize efficiency, we applied XAI for feature selection. The refined models maintained high accuracy: Random Forest (99.41%), CatBoost (99.20%), LightGBM (99.80%), and XGBoost (99.54%), while reducing computational costs. This demonstrates that XAI-driven selection improves IDS efficiency without compromising detection capability. By enhancing transparency in decision-making, our approach fosters trust and reliability in IDS applications. These findings advance intrusion detection technologies, highlighting the benefits of XAI-driven feature selection for more efficient and explainable cybersecurity solutions.
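
The abstract only outlines the pipeline; the sketch below illustrates what SHAP-driven feature selection for an IDS classifier could look like in practice. It is a minimal, hedged example, not the authors' exact protocol: synthetic data from scikit-learn's make_classification stands in for the IoMT intrusion dataset, only Random Forest is shown (the paper also uses CatBoost, LightGBM, and XGBoost), and the top_k = 15 cut-off is an arbitrary assumption.

    # Minimal sketch of SHAP-driven feature selection for an IDS classifier.
    # Assumptions (not from the paper): synthetic placeholder data, a single
    # Random Forest model, and an arbitrary top_k = 15 feature cut-off.
    import numpy as np
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Placeholder for the real intrusion-detection dataset (benign vs. attack).
    X, y = make_classification(n_samples=2000, n_features=40, n_informative=12,
                               random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                        random_state=42)

    # 1) Baseline model trained on the full feature set.
    full_model = RandomForestClassifier(n_estimators=200, random_state=42)
    full_model.fit(X_train, y_train)

    # 2) Explain the baseline with TreeSHAP and rank features by mean |SHAP|.
    explainer = shap.TreeExplainer(full_model)
    sv = explainer.shap_values(X_train)
    sv = np.stack(sv, axis=-1) if isinstance(sv, list) else np.asarray(sv)
    if sv.ndim == 3:                      # shape (samples, features, classes)
        sv = np.abs(sv).mean(axis=2)      # average magnitude over classes
    importance = np.abs(sv).mean(axis=0)  # one importance score per feature

    # 3) Keep the top-k most important features and retrain a lighter model.
    top_k = 15
    keep = np.argsort(importance)[::-1][:top_k]
    reduced_model = RandomForestClassifier(n_estimators=200, random_state=42)
    reduced_model.fit(X_train[:, keep], y_train)

    print("full-feature accuracy   :",
          accuracy_score(y_test, full_model.predict(X_test)))
    print("reduced-feature accuracy:",
          accuracy_score(y_test, reduced_model.predict(X_test[:, keep])))

On a real IDS dataset, the same ranking step can be cross-checked with LIME explanations before deciding which features to drop, which is the transparency angle emphasised in the abstract.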
