Beyond Accuracy and Classification: XAI Driven Interpretability in Cervical Cancer

Authors

  • Saugat Kafle Department of Computer Science, Samriddhi College, Bhaktapur, Nepal
  • Prakash Paudel Department of IT, NCIT, Lalitpur, Nepal
  • Mohan Bhandari Department of Computer Science, Samriddhi College, Bhaktapur, Nepal

DOI:

https://doi.org/10.3126/injet.v3i1.87023

Keywords:

Cervical cancer, eXplainable AI, LIME, SHAP, ELI5

Abstract

Cervical cancer, a prevalent malignancy linked to HPV infection, necessitates accurate and timely diagnosis to mitigate its high mortality rate. Traditional diagnostic methods, such as Pap smears and colposcopy, are often laborious and subjective, highlighting the need for advanced computational approaches. This study applies machine learning (ML) to enhance cervical cancer detection, evaluating models including Multi-Layer Perceptron (MLP), Gaussian Naive Bayes (GaussianNB), Bagging, Random Forest (RF), and K-Nearest Neighbors (KNN). The MLP classifier achieved 99.59% accuracy, while the other algorithms surpassed 97% AUC, underscoring their clinical viability. To ensure interpretability, Explainable AI (XAI) techniques (SHAP, LIME, and ELI5) are used to explain feature contributions and decision pathways, thereby fostering clinician trust. The integration of high-accuracy ML models with transparent XAI frameworks not only improves diagnostic precision but also facilitates the ethical deployment of AI in healthcare, paving the way for reliable, data-driven clinical decision making.
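The following is a minimal sketch (not the authors' code) of the kind of pipeline the abstract describes: training an MLP classifier on tabular cervical cancer risk-factor data and explaining its predictions with SHAP and LIME. It assumes scikit-learn, shap, and lime are installed, and uses a hypothetical data file `cervical.csv` with a binary `Biopsy` target; ELI5 is omitted for brevity.

```python
# Sketch: MLP classifier + SHAP/LIME explanations on cervical cancer data.
# Assumptions: scikit-learn, shap, lime installed; "cervical.csv" and the
# "Biopsy" target column are hypothetical placeholders.
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("cervical.csv")                      # hypothetical dataset file
X, y = df.drop(columns=["Biopsy"]), df["Biopsy"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Scale features and fit the MLP classifier.
model = make_pipeline(StandardScaler(),
                      MLPClassifier(max_iter=500, random_state=42))
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))

# SHAP: model-agnostic KernelExplainer over the positive-class probability.
background = shap.sample(X_train.values, 50)
explainer = shap.KernelExplainer(
    lambda d: model.predict_proba(d)[:, 1], background)
shap_values = explainer.shap_values(X_test.values[:5])

# LIME: local explanation for a single test instance.
lime_explainer = LimeTabularExplainer(
    X_train.values, feature_names=list(X.columns),
    class_names=["negative", "positive"], discretize_continuous=True)
explanation = lime_explainer.explain_instance(
    X_test.values[0], model.predict_proba, num_features=10)
print(explanation.as_list())
```

The sketch keeps scaling inside a pipeline so both explainers can call `predict_proba` on raw feature values, which is the usual way to make SHAP and LIME attributions refer to the original, clinician-readable features rather than standardized ones.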


Published

2025-12-24

How to Cite

Kafle, S., Paudel, P., & Bhandari, M. (2025). Beyond Accuracy and Classification: XAI Driven Interpretability in Cervical Cancer. International Journal on Engineering Technology, 3(1), 207–220. https://doi.org/10.3126/injet.v3i1.87023
