Learning Analytics and Predictive Modeling: Enhancing Student Success through Data-Driven Insights
Keywords: Academic Performance Prediction, Explainable AI (SHAP), Learning Analytics, Predictive Modeling, Student Engagement

Abstract
In the evolving landscape of data-informed education, predictive modeling has become a powerful tool for identifying students at risk of academic failure or withdrawal. This study investigates the use of learning analytics techniques to predict student outcomes using the Open University Learning Analytics Dataset (OULAD), a comprehensive, publicly available dataset that includes demographic profiles, continuous assessment records, and detailed interaction logs from a virtual learning environment (VLE). By integrating and preprocessing these data sources, the authors developed a rich set of behavioral and temporal features, with particular focus on total click activity, which serves as a proxy for student engagement. The prediction task was framed as a binary classification problem: distinguishing students who completed a course (pass or distinction) from those who failed or withdrew. Although the specific classification algorithm is not explicitly identified, the trained model achieved a classification accuracy of 71% and an area under the receiver operating characteristic curve (ROC-AUC) of 0.79, indicating reasonable discriminative ability. A key strength of the study is its use of SHAP (SHapley Additive exPlanations) values to interpret the model's output, making transparent how individual features influenced its predictions. The analysis showed that engagement-related features, especially VLE click counts, had the greatest predictive power, while demographic variables such as gender and age contributed little, suggesting a reduced risk of bias from protected attributes. These findings underscore the practical value of interpretable predictive models in supporting early warning systems and learner support strategies in higher education. Additionally, the study addresses important ethical considerations by emphasizing fairness, privacy, and the need for explainable AI. While some methodological details are missing, notably the choice of algorithm and the validation procedure, the research provides valuable insights into designing transparent, ethical, and actionable learning analytics tools.
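Because the paper does not disclose its classifier or exact feature set, the pipeline described above (aggregating VLE clicks into an engagement feature, labeling course completion as a binary target, training a classifier, and interpreting it with SHAP) can only be illustrated as a sketch. In the snippet below, the gradient-boosting model, the train/test split, and the three selected features are placeholder assumptions, not the authors' implementation; only the file and column names follow the published OULAD schema, and the resulting metrics will not match the reported 71% accuracy or 0.79 ROC-AUC.

```python
# Minimal, hypothetical sketch of an OULAD completion-prediction pipeline with SHAP.
# The classifier and feature choices are illustrative assumptions.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

# studentInfo.csv and studentVle.csv are part of the public OULAD distribution.
info = pd.read_csv("studentInfo.csv")
vle = pd.read_csv("studentVle.csv")

# Engagement proxy: total clicks per student per module presentation.
clicks = (
    vle.groupby(["id_student", "code_module", "code_presentation"])["sum_click"]
    .sum()
    .rename("total_clicks")
    .reset_index()
)
data = info.merge(clicks, on=["id_student", "code_module", "code_presentation"], how="left")
data["total_clicks"] = data["total_clicks"].fillna(0)

# Binary target: completion (Pass or Distinction) vs. Fail or Withdrawn.
data["completed"] = data["final_result"].isin(["Pass", "Distinction"]).astype(int)

# Placeholder feature set; the paper's full behavioral/temporal features are not listed.
features = ["total_clicks", "num_of_prev_attempts", "studied_credits"]
X_train, X_test, y_train, y_test = train_test_split(
    data[features], data["completed"], test_size=0.2, random_state=42, stratify=data["completed"]
)

model = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("ROC-AUC:", roc_auc_score(y_test, proba))

# SHAP values expose each feature's contribution to individual predictions,
# which is how engagement features can be shown to dominate demographics.
explainer = shap.Explainer(model, X_train)
shap_values = explainer(X_test)
shap.plots.beeswarm(shap_values)
```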