Explainable AI for Cyber Security: Interpretable Models for Malware Analysis and Network Intrusion Detection
DOI: https://doi.org/10.3126/nprcjmr.v1i9.74177
Keywords: AI, cyber security, malware analysis
Abstract
The rise of sophisticated cyber threats, such as malware and network intrusions, necessitates the use of Artificial Intelligence (AI) for efficient and accurate detection. However, traditional AI models often operate as black boxes, leaving security analysts without insights into the reasoning behind critical decisions. Explainable AI (XAI) addresses this challenge by providing interpretability and transparency in AI-driven cybersecurity solutions. This paper explores the role of XAI in malware analysis and network intrusion detection, highlighting how interpretable models enhance trust, improve decision-making, and facilitate regulatory compliance. It examines state-of-the-art XAI techniques, including Shapley Additive Explanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and rule-based systems, for their application in identifying malicious software and detecting network anomalies. Furthermore, the paper discusses challenges, such as computational overhead and scalability, while presenting future directions for integrating XAI in real-time and hybrid security frameworks. By advancing the adoption of interpretable AI, cybersecurity systems can achieve greater effectiveness and reliability, addressing both technical and organizational needs in combating evolving cyber threats.
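To illustrate the kind of per-decision interpretability the abstract describes, the sketch below applies SHAP's TreeExplainer to a random-forest network-intrusion classifier. It is a minimal, hypothetical example, not the authors' experimental setup: the flow features, dataset, and model are synthetic stand-ins used only to show how Shapley values attribute an individual "intrusion" verdict to specific input features.

```python
# Minimal, illustrative sketch (assumed setup, not the paper's implementation):
# explaining a network-intrusion classifier with SHAP.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic "network flow" data: 10 numeric features, binary label
# (0 = benign traffic, 1 = intrusion). Feature names are hypothetical.
feature_names = [
    "duration", "src_bytes", "dst_bytes", "wrong_fragment", "num_failed_logins",
    "count", "srv_count", "serror_rate", "same_srv_rate", "dst_host_count",
]
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box detector: a random forest intrusion classifier.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# SHAP's TreeExplainer computes per-feature Shapley contributions
# to each individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Older shap releases return a list (one array per class); newer ones return
# a single 3-D array. Select the values for the "intrusion" class either way.
if isinstance(shap_values, list):
    sv = shap_values[1]
else:
    sv = shap_values[..., 1]

# Explain one flagged flow: which features pushed the model toward "intrusion"?
i = 0
contributions = sorted(zip(feature_names, sv[i]),
                       key=lambda t: abs(t[1]), reverse=True)
print(f"Prediction for sample {i}: {model.predict(X_test[i:i+1])[0]}")
for name, value in contributions[:5]:
    print(f"{name:>20s}: {value:+.3f}")
```

In a real deployment, an analyst would read the signed contributions as the model's rationale for a single alert (e.g., a high error rate or failed-login count pushing the score toward "intrusion"), which is the trust- and triage-supporting behavior the paper attributes to XAI techniques such as SHAP and LIME.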
License
Copyright (c) 2024 The Author(s)
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
This license enables reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator.