When Should Educational AI Stay Silent? Abstention-Aware Student Risk Prediction Using Learning Analytics
DOI: https://doi.org/10.3126/jacem.v12i01.93932

Keywords: Learning Analytics, Student Risk Prediction, Abstention-Aware Classification, Trustworthy AI, Educational Data Mining, Probability Calibration, Human-in-the-Loop Systems, Predictive Analytics in Education

Abstract
Universities increasingly rely on machine learning systems to identify students at risk of academic failure or withdrawal, where such predictions directly influence intervention decisions, resource allocation, and students’ academic trajectories. However, in these high-stakes educational settings, conventional models produce predictions for all students regardless of confidence, which can result in incorrect risk identification, unnecessary student anxiety, and misallocation of academic support. This study introduces an abstention-aware prediction approach that allows the model to withhold decisions when confidence is insufficient. The approach is evaluated using the Open University Learning Analytics Dataset (OULAD). Logistic regression is adopted due to its well-calibrated probabilistic outputs, achieving a ROC-AUC of 0.889. Comparisons with Random Forest and XGBoost show that while these models achieve slightly higher discrimination, logistic regression provides more consistent probability estimates for uncertainty-aware prediction. The results demonstrate a clear risk–coverage trade-off, where uncertain cases are deferred while confident predictions remain accurate. The findings support the use of selective prediction as a practical mechanism for improving the reliability of AI-assisted decision-making in education.
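The abstention mechanism described above can be sketched as a selective classifier: a model issues a prediction only when its confidence clears a threshold, deferring uncertain students to human review. The sketch below is illustrative only, using synthetic data in place of OULAD; the 0.75 confidence threshold is an assumed operating point, not a value reported in the paper.

```python
# Minimal sketch of abstention-aware (selective) prediction with a
# calibrated-style probabilistic model. Synthetic data stands in for
# OULAD; the threshold is a hypothetical operating point.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
confidence = clf.predict_proba(X_te).max(axis=1)  # confidence of predicted class

THRESHOLD = 0.75                 # assumed abstention threshold
accept = confidence >= THRESHOLD  # predict only on confident cases; defer the rest

coverage = accept.mean()  # fraction of students who receive a prediction
selective_acc = (clf.predict(X_te)[accept] == y_te[accept]).mean()
full_acc = (clf.predict(X_te) == y_te).mean()
print(f"coverage={coverage:.2f}, "
      f"selective accuracy={selective_acc:.2f}, "
      f"full-coverage accuracy={full_acc:.2f}")
```

Sweeping the threshold from 0.5 to 1.0 traces the risk–coverage trade-off the abstract refers to: lower coverage (more deferrals) generally buys higher accuracy on the retained predictions.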
License
JACEM reserves the copyright for the published papers. Authors retain the right to use the content of their published paper, in part or in full, for their own work.