Article Open Access

Towards Intelligent Performance Monitoring for Blockchain-Based Learning Systems: A Multi-Class Classification Approach

Aditya Galih Sulaksono, Syaad Patmanthara, Harits Ar Rosyid

Abstract


This study proposes a multi-class classification framework for monitoring blockchain system performance as a step toward integration within blockchain-based learning management systems (LMS). Reliable performance monitoring is essential because smart contracts in educational settings depend on timely and accurate system responses to ensure valid grading and credential issuance. A dataset of 3,081 transactional logs was generated from a simulated blockchain testbed, capturing throughput, latency, block size, and send rate. Throughput values were discretized into seven qualitative categories, ranging from “Very Poor” to “Very Good”, using quantile-based binning. Preprocessing involved data cleaning, categorical encoding, Z-score normalization, and label encoding to ensure model compatibility. Five algorithms, namely Logistic Regression, Decision Tree, Random Forest, Support Vector Machine (SVM), and K-Nearest Neighbors (KNN), were trained and evaluated using stratified 80–20 partitioning and 5-fold cross-validation with grid search for hyperparameter tuning. Performance metrics included accuracy, macro precision, recall, and F1-score. Random Forest achieved the best results with 91.35% accuracy, 0.910 macro precision, 0.911 recall, and 0.910 F1-score, outperforming the other models by capturing complex feature interactions and reducing variance. Decision Tree offered strong interpretability (88.32% accuracy), while Logistic Regression (84.97%) and SVM (84.86%) provided stable performance. KNN showed balanced results (87.78%) but incurred high computational cost. The findings demonstrate that multi-class stratification provides more actionable insights than binary methods, supporting low-latency decision-making for smart contract execution in decentralized LMS ecosystems. The novelty of this research lies in applying multi-class classification instead of binary methods, enabling more nuanced monitoring. Future work will validate the framework in real blockchain-LMS deployments.
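As a minimal illustration of the pipeline the abstract describes (quantile binning of throughput into seven classes, Z-score normalization, label encoding, a stratified 80–20 split, and 5-fold grid search over a Random Forest), the following scikit-learn sketch runs on synthetic data; the feature distributions and the hyperparameter grid are assumptions for illustration, not the authors' dataset or settings.

```python
# Sketch of the described pipeline on synthetic transactional logs.
# Throughput is discretized into seven quantile bins and predicted from
# the remaining features (latency, block size, send rate).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler

rng = np.random.default_rng(42)
n = 3081  # matches the reported log count; the values themselves are synthetic
df = pd.DataFrame({
    "latency_ms": rng.gamma(2.0, 50.0, n),
    "block_size": rng.integers(10, 500, n),
    "send_rate": rng.uniform(50, 300, n),
})
# Synthetic throughput loosely tied to the features so the task is learnable.
df["throughput"] = df["send_rate"] - 0.5 * df["latency_ms"] + rng.normal(0, 10, n)

# Quantile-based binning into seven qualitative categories.
labels = ["Very Poor", "Poor", "Below Avg", "Average", "Above Avg", "Good", "Very Good"]
df["perf_class"] = pd.qcut(df["throughput"], q=7, labels=labels)

X = df[["latency_ms", "block_size", "send_rate"]].to_numpy()
y = LabelEncoder().fit_transform(df["perf_class"])

# Stratified 80-20 split; Z-score scaler fitted on the training fold only.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# 5-fold grid search over an illustrative (not the paper's) parameter grid.
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    {"n_estimators": [100, 200], "max_depth": [None, 10]},
    cv=5,
    scoring="f1_macro",
)
grid.fit(X_tr, y_tr)
pred = grid.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.3f} "
      f"macro-F1={f1_score(y_te, pred, average='macro'):.3f}")
```

Because quantile binning yields near-balanced classes, accuracy and macro-averaged metrics stay comparable here, which is why the abstract can report both without distortion.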


Keywords


Blockchain, Machine Learning, Multi-class Classification, Performance Evaluation, Random Forest

References


S. Nakamoto, “Bitcoin: A Peer-to-Peer Electronic Cash System.” Accessed: Jul. 07, 2025. [Online]. Available: www.bitcoin.org

I. G. I. Sudipa, W. Aditama, and C. P. Yanti, “Blockchain Technology Model on Virtual Museum as an Effort to Enhance Balinese Cultural Metaverse,” International Journal of Engineering, Science and Information Technology, vol. 5, no. 3, pp. 394–401, Jun. 2025, doi: 10.52088/ijesty.v5i3.968.

A. Alammary, S. Alhazmi, M. Almasri, and S. Gillani, “Blockchain-Based Applications in Education: A Systematic Review,” Applied Sciences, vol. 9, no. 12, 2019, doi: 10.3390/app9122400.

M. Turkanović, M. Hölbl, K. Košič, M. Heričko, and A. Kamišalić, “EduCTX: A Blockchain-Based Higher Education Credit Platform,” IEEE Access, vol. 6, pp. 5112–5127, 2018, doi: 10.1109/ACCESS.2018.2789929.

S. Ma’arif, F. Maufiquddin, B. Dhevyantoa, K. Krisdiana, and A. Setiawan, “Decentralized Finance: Bibliometric Analysis and Research Trends,” International Journal of Engineering, Science and Information Technology, vol. 5, no. 2, pp. 474–484, May 2025, doi: 10.52088/ijesty.v5i2.886.

L. Barreñada, P. Dhiman, D. Timmerman, A.-L. Boulesteix, and B. Van Calster, “Understanding overfitting in random forest for probability estimation: a visualization and simulation study,” Diagnostic and Prognostic Research, vol. 8, p. 14, 2024, doi: 10.1186/s41512-024-00177-1.

S. Lai, Q. Yang, W. He, Y. Zhu, and J. Wang, “Image Retrieval Method Combining Bayes and SVM Classifier Based on Relevance Feedback with Application to Small-scale Datasets,” Tehnički vjesnik, vol. 29, no. 4, pp. 1236–1246, Jun. 2022, doi: 10.17559/TV-20210925093644.

M. A. Mohammed, M. Boujelben, and M. Abid, “A Novel Approach for Fraud Detection in Blockchain-Based Healthcare Networks Using Machine Learning,” Future Internet, vol. 15, no. 8, p. 250, Jul. 2023, doi: 10.3390/FI15080250.

S. Y. Meng, A. Orvieto, D. Y. Cao, and C. De Sa, “Gradient Descent on Logistic Regression with Non-Separable Data and Large Step Sizes,” Jun. 2024, doi: 10.48550/arXiv.2406.05033.

A. Hirsi, L. Audah, A. Salh, M. A. Alhartomi, and S. Ahmed, “Detecting DDoS Threats Using Supervised Machine Learning for Traffic Classification in Software Defined Networking,” IEEE Access, vol. 12, pp. 166675–166702, 2024, doi: 10.1109/ACCESS.2024.3486034.

U. Pujianto, H. A. Rosyid, and A. C. Putra, “Performance Comparison of Ensemble-based k-Nearest Neighbor and CART Classifiers for the Classification of Adaptive e-learning User Knowledge Levels,” in 1st UMGESHIC International Seminar on Health, Social Science and Humanities (UMGESHIC-ISHSSH 2020), 2021, pp. 243–25, doi: 10.2991/assehr.k.211020.037.

Z. Chen, “Machine Learning-Based Decision Support to Secure Internet of Things Sensing,” Université d’Ottawa/University of Ottawa, 2023.

H. A. Rosyid, U. Pujianto, and M. R. Yudhistira, “Classification of Lexile Level Reading Load Using the K-Means Clustering and Random Forest Method,” Kinetik: Game Technology, Information System, Computer Network, Computing, Electronics, and Control, vol. 5, no. 2, pp. 139–146, 2020, doi: 10.22219/kinetik.v5i2.897.

D. U. Soraya, S. Patmanthara, and G. D. K. Ningrum, “Growing Teaching Motivation for Future Teachers Through Microteaching,” Letters in Information Technology Education (LITE), vol. 5, no. 2, pp. 71–75, Nov. 2022, doi: 10.17977/UM010V5I22022P71-75.

N. M. Yusof, T. Manogaran, and H. Thiagu, “Microteaching as a Method in Developing Teaching Skills,” Journal of Creative Practices in Language Learning and Teaching, vol. 11, no. 2, 2023, doi: 10.24191/CPLT.V11I2.2033.

Z. Zulhimma, Z. Zulhammi, and A. Abdurrahman, “Teachers’ self-efficacy: through micro teaching, practical field experience, and motivation,” Jurnal Konseling dan Pendidikan, vol. 10, no. 3, pp. 444–460, Sep. 2022, doi: 10.29210/190000.

S. Patmanthara, O. D. Yuliana, F. A. Dwiyanto, and A. P. Wibawa, “The Use of Ladder Snake Games to Improve Learning Outcomes in Computer Networking,” International Journal of Emerging Technologies in Learning (iJET), vol. 14, no. 21, pp. 243–249, Nov. 2019, doi: 10.3991/IJET.V14I21.10953.

N. A. Saran and F. Nar, “Fast binary logistic regression,” PeerJ Comput Sci, vol. 11, p. e2579, 2025, doi: 10.7717/peerj-cs.2579.

N. Šarlija, A. Bilandžić, and M. Stanić, “Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models,” Croatian Operational Research Review, pp. 631–652, 2017.

C. El Morr, M. Jammal, H. Ali-Hassan, and W. El-Hallak, “Logistic regression,” in Machine Learning for Practical Decision Making: A Multidisciplinary Perspective with Applications from Healthcare, Engineering and Business Analytics, Springer, 2022, pp. 231–249.

C. Ngufor and J. Wojtusiak, “Extreme logistic regression,” Adv Data Anal Classif, vol. 10, pp. 27–52, 2016, doi: 10.1007/s11634-014-0194-2.

N. Senaviratna, T. Cooray, et al., “Diagnosing multicollinearity of logistic regression model,” Asian Journal of Probability and Statistics, vol. 5, no. 2, pp. 1–9, 2019, doi: 10.9734/ajpas/2019/v5i230132.

T.-N. Do, “Towards simple, easy to understand, an interactive decision tree algorithm,” College of Information Technology, Can Tho University, Can Tho, Vietnam, Tech. Rep., pp. 1–6, 2007.

K. Kim and J. Hong, “A hybrid decision tree algorithm for mixed numeric and categorical data in regression analysis,” Pattern Recognit Lett, vol. 98, pp. 39–45, 2017, doi: 10.1016/j.patrec.2017.08.011.

J. Su and H. Zhang, “A fast decision tree learning algorithm,” in AAAI, 2006, pp. 500–505.

M. Bramer, “Avoiding overfitting of decision trees,” in Principles of Data Mining, Springer, 2007, pp. 119–134, doi: 10.1007/978-1-84628-766-4_8.

R.-H. Li and G. G. Belford, “Instability of decision tree classification algorithms,” in Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2002, pp. 570–575, doi: 10.1145/775047.775131.

A. B. Shaik and S. Srinivasan, “A brief survey on random forest ensembles in classification model,” in International Conference on Innovative Computing and Communications: Proceedings of ICICC 2018, Volume 2, 2019, pp. 253–260, doi: 10.1007/978-981-13-2354-6_27.

M. A. Salam, A. T. Azar, M. S. Elgendy, and K. M. Fouad, “The effect of different dimensionality reduction techniques on machine learning overfitting problem,” Int. J. Adv. Comput. Sci. Appl., vol. 12, no. 4, pp. 641–655, 2021, doi: 10.14569/IJACSA.2021.0120480.

M.-H. Roy and D. Larocque, “Robustness of random forests for regression,” J Nonparametr Stat, vol. 24, no. 4, pp. 993–1006, 2012, doi: 10.1080/10485252.2012.715161.

K. J. Archer and R. V. Kimes, “Empirical characterization of random forest variable importance measures,” Comput Stat Data Anal, vol. 52, no. 4, pp. 2249–2260, 2008, doi: 10.1016/j.csda.2007.08.015.

M. Aria, C. Cuccurullo, and A. Gnasso, “A comparison among interpretative proposals for Random Forests,” Machine Learning with Applications, vol. 6, p. 100094, 2021, doi: 10.1016/j.mlwa.2021.100094.

T. P. Dinh, C. Pham-Quoc, T. N. Thinh, B. K. Do Nguyen, and P. C. Kha, “A flexible and efficient FPGA-based random forest architecture for IoT applications,” Internet of Things, vol. 22, p. 100813, 2023, doi: 10.1016/j.iot.2023.100813.

Z. El Mrabet, N. Sugunaraj, P. Ranganathan, and S. Abhyankar, “Random forest regressor-based approach for detecting fault location and duration in power systems,” Sensors, vol. 22, no. 2, p. 458, 2022, doi: 10.3390/s22020458.

N. Ardeshir, C. Sanford, and D. J. Hsu, “Support vector machines and linear regression coincide with very high-dimensional features,” Adv Neural Inf Process Syst, vol. 34, pp. 4907–4918, 2021, doi: 10.48550/arXiv.2105.14084.

M. S. Reza, U. Hafsha, R. Amin, R. Yasmin, and S. Ruhi, “Improving SVM performance for type II diabetes prediction with an improved non-linear kernel: Insights from the PIMA dataset,” Computer Methods and Programs in Biomedicine Update, vol. 4, p. 100118, 2023, doi: 10.1016/j.cmpbup.2023.100118.

F. A. K. Q. Alnagashi, N. A. Rahim, S. A. A. Shukor, and M. H. A. Hamid, “Mitigating Overfitting in Extreme Learning Machine Classifier Through Dropout Regularization,” Applied Mathematics and Computational Intelligence (AMCI), vol. 13, no. 1, pp. 26–35, 2024, doi: 10.58915/amci.v13iNo.1.561.

Z. Akram-Ali-Hammouri, M. Fernández-Delgado, E. Cernadas, and S. Barro, “Fast support vector classification for large-scale problems,” IEEE Trans Pattern Anal Mach Intell, vol. 44, no. 10, pp. 6184–6195, 2021, doi: 10.1109/TPAMI.2021.3085969.

P. Ramadevi and R. Das, “An extensive analysis of machine learning techniques with hyper-parameter tuning by Bayesian optimized SVM kernel for the detection of human lung disease,” IEEE Access, 2024, doi: 10.1109/ACCESS.2024.3422449.

S. Zhang, “Challenges in KNN classification,” IEEE Trans Knowl Data Eng, vol. 34, no. 10, pp. 4663–4675, 2021, doi: 10.1109/TKDE.2021.3049250.

F. Acito, “k nearest neighbors,” in Predictive Analytics with KNIME: Analytics for Citizen Data Scientists, Springer, 2023, pp. 209–227.

B.-B. Jia and M.-L. Zhang, “MD-KNN: An instance-based approach for multi-dimensional classification,” in 2020 25th International Conference on Pattern Recognition (ICPR), 2021, pp. 126–133, doi: 10.1109/ICPR48806.2021.9412974.

S. Adhikary and S. Banerjee, “Introduction to distributed nearest hash: On further optimizing cloud based distributed KNN variant,” Procedia Comput Sci, vol. 218, pp. 1571–1580, 2023, doi: 10.1016/j.procs.2023.01.135.

H. Zhu et al., “Visualizing large-scale high-dimensional data via hierarchical embedding of KNN graphs,” Visual Informatics, vol. 5, no. 2, pp. 51–59, 2021, doi: 10.1016/j.visinf.2021.06.002.

H. Henderi, T. Wahyuningsih, and E. Rahwanto, “Comparison of Min-Max normalization and Z-Score Normalization in the K-nearest neighbor (kNN) Algorithm to Test the Accuracy of Types of Breast Cancer,” International Journal of Informatics and Information Systems, vol. 4, no. 1, pp. 13–20, 2021, doi: 10.47738/ijiis.v4i1.73.




DOI: https://doi.org/10.52088/ijesty.v5i4.1138



Copyright (c) 2025 Aditya Galih Sulaksono, Syaad Patmanthara, Harits Ar Rosyid

International Journal of Engineering, Science, and Information Technology (IJESTY) eISSN 2775-2674