IMPROVING THE EXPLAINABILITY AND TRANSPARENCY OF DEEP LEARNING MODELS IN INTRUSION DETECTION SYSTEMS

Authors

  • Daim Ali, Department of Computer Science, NFCIET, Multan, Pakistan
  • Muhammad Kamran Abid, Department of Computer Science, NFCIET, Multan, Pakistan
  • Muhammad Baqer, Department of Computer Engineering, BZU, Multan, Pakistan
  • Yasir Aziz, Department of Computer Engineering, BZU, Multan, Pakistan
  • Naeem Aslam, Department of Computer Science, NFCIET, Multan, Pakistan
  • Nasir Umer, Department of Computer Science, NFCIET, Multan, Pakistan

DOI:

https://doi.org/10.71146/kjmr284

Keywords:

Intrusion Detection Systems, deep learning, LIME, SHAP, CNN, RNN

Abstract

Conventional Intrusion Detection Systems (IDS) need to evolve because they fail to detect modern cybersecurity threats adequately. The benefits of machine learning (ML) and deep learning (DL) models for IDS are limited by their inability to provide explanations, which prevents cybersecurity professionals from validating their decisions. This research analyzes the performance and interpretability of DL-based IDS using the NSL-KDD dataset. A screening process selected candidate models that combined high reliability and accuracy: feedforward neural networks (FNN), convolutional neural networks (CNN), and recurrent neural networks (RNN). The findings revealed that CNN achieved the highest accuracy of 94.2% and an AUC of 0.97, outperforming FNN (91.3%) and RNN (93.8%). CNN's stronger performance stems from its effective extraction of spatial features from network traffic data. However, the "black-box" nature of CNNs and other DL models makes their decisions difficult for users to understand. The research therefore integrated local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP) to interpret decisions at the feature level. While these methods did not substantially improve accuracy, they made classification decisions more trustworthy and understandable by indicating the features that drove them. These techniques nonetheless face technical obstacles, including high processing costs and the need to balance detection performance against deployment speed. Future research should focus on explainability techniques that maintain both high performance and strong interpretability in IDS.
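As an illustration of the feature-level explanation step described above, the minimal sketch below applies LIME and SHAP to a trained classifier. It is not the paper's implementation: it uses synthetic data in place of NSL-KDD, a scikit-learn MLP as a stand-in for the paper's CNN, and hypothetical feature names; any DL model that exposes a probability-prediction function could be substituted.

```python
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for NSL-KDD (41 features, binary normal/attack label).
X, y = make_classification(n_samples=2000, n_features=41, n_informative=15,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]  # hypothetical names

# Stand-in model: a small MLP; the paper's CNN/RNN would slot in the same way
# as long as it provides a predict_proba-style function.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(X_train, y_train)

# --- LIME: local surrogate explanation for one prediction -------------------
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["normal", "attack"], mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=10)
print("LIME top features:", lime_exp.as_list())

# --- SHAP: model-agnostic Shapley-value attribution --------------------------
# KernelExplainer is slow, so a small background sample keeps the cost manageable.
background = shap.sample(X_train, 100, random_state=0)
shap_explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = shap_explainer.shap_values(X_test[:5])
print("SHAP values shape:", np.shape(shap_values))
```

The background-sampling step illustrates the trade-off noted in the abstract: Kernel SHAP's cost grows with the number of features and background samples, which is one source of the processing overhead that must be balanced against deployment speed.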


Published

2025-02-22

Issue

Section

Engineering and Technology

How to Cite

IMPROVING THE EXPLAINABILITY AND TRANSPARENCY OF DEEP LEARNING MODELS IN INTRUSION DETECTION SYSTEMS. (2025). Kashf Journal of Multidisciplinary Research, 2(02), 149-164. https://doi.org/10.71146/kjmr284
