Arch Hellen Med, 42(2), March-April 2025, 236-242

ORIGINAL PAPER

Evaluation of academic courses using a chatbot: An innovative approach in medical education

G. Tzitziou, I. Dratsiou, I. Tsoupouroglou, E. Dafli, P.D. Bamidis
OBJECTIVE The purpose of this study was to apply aspects of artificial intelligence (AI) in medical education, using conversational agents to improve the evaluation process of academic courses. More specifically, the goal was to develop a chatbot based on the academic course evaluation questionnaire of the Quality Assurance Unit (MODIP) of the Aristotle University of Thessaloniki (AUTH), and to investigate its usability and the medical students' experience with it in comparison with the electronic evaluation questionnaire.
METHOD The Rasa Open-Source platform was used to design and develop the chatbot, which was named "Thalia". The experimental procedure was carried out after the completion of the academic course "Medical Education" at the School of Medicine (AUTH), in which the students were asked to evaluate the course using either the MODIP electronic questionnaire (control group, n=23) or the chatbot "Thalia" (experimental group, n=17). The degrees of "perceived enjoyment", "perceived ease of use", "perceived usefulness", "perceived reliability", "attitude towards participation" and "intention to participate" were then investigated using a validated scale, and the usability of the chatbot was assessed with the Chatbot Usability Questionnaire (CUQ).
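For readers unfamiliar with the platform, Rasa assistants typically combine YAML training data (intents, stories, responses) with optional Python custom actions. The fragment below is a minimal illustrative sketch only, not code from the study: it assumes a hypothetical custom action named action_record_course_rating and a slot course_rating to show how a questionnaire answer could be captured and acknowledged via the Rasa SDK.

```python
# Illustrative sketch (not from the paper): a hypothetical Rasa custom action
# that reads an assumed "course_rating" slot and confirms it to the student.
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ActionRecordCourseRating(Action):
    def name(self) -> Text:
        # Action name referenced from the assistant's domain/stories (assumed)
        return "action_record_course_rating"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        # Retrieve the student's answer stored in the (assumed) slot
        rating = tracker.get_slot("course_rating")
        # Acknowledge the response; persisting it to a database would go here
        dispatcher.utter_message(text=f"Thank you, your rating of {rating} has been recorded.")
        return []
```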
RESULTS The results of the study indicated that chatbot technology constitutes an innovative application that can be successfully utilized for the assessment of academic courses, achieving higher scores than the electronic questionnaire across all of the usability dimensions explored. In addition, the CUQ usability evaluation of the chatbot yielded a mean score of 63.2±11.0, indicating a relatively satisfactory performance.
CONCLUSIONS The findings of the study highlight the acceptance of and positive response to the use of the chatbot by medical students for the evaluation of academic courses. The students' successful interaction with chatbot technology lays the foundation for its adoption as an innovative method of academic assessment, potentially leading to higher student response rates and the collection of richer qualitative feedback.
Key words: Evaluation of academic courses, Medical education, Online course evaluation questionnaire, Usability evaluation.