A Fine-Grained Model to Assess Learner-Content and Methodology Satisfaction in Distance Education

Magdalena Cantabella, Raquel Martínez-España*, Belén López, Andrés Muñoz
Escuela Politécnica, Universidad Católica de Murcia (Spain)

Received 3 March 2020 | Accepted 9 September 2020 | Published 14 September 2020
DOI: 10.9781/ijimai.2020.09.002
* Corresponding author. E-mail address: rmartinez@ucam.edu

Keywords: E-learning, Learning Management System, Learning Analytics, Student's Success, User Monitoring.

Abstract

Learning Management System (LMS) platforms have led to a transformation in universities in the last decade, helping them to adapt and expand their services in the face of new technological challenges. These platforms have made possible the expansion of distance education. A current trend in this area focuses on the evaluation and improvement of student satisfaction. In this work a new tool based on emoticons (smileys) is proposed to assess student satisfaction with the quality of the learning content and the methodology at unit level, for any course and at any time. The results indicate that the assessment of student satisfaction is sensitive to the period when the survey is performed and to the student's study level. Moreover, the results of this new proposal are compared to the satisfaction results obtained from traditional surveys, showing different outcomes due to the greater accuracy and flexibility of the tool proposed in this work.

I. Introduction

In the new era of education, the use of Information Technologies (IT) has already consolidated itself as a fundamental asset. The application of these technologies has enabled, among other advances, the comprehensive development of distance education, especially at the higher education level. One of the tools that has made this development possible is the Learning Management System (LMS) platform. These platforms allow students to work independently, facilitating the interaction with other users by means of collaborative tools and providing new methods for resource management that help to strengthen new teaching and learning models. Moreover, LMS platforms act as content managers that provide students with a wide variety of resources.

A current trend regarding distance education is concerned with evaluating the quality of this methodology [1]-[3]. The main items considered in this process are the quality of the available resources, the quality of student-student and student-teacher interactions, and the flexibility and ease of use of the LMS. A conventional approach to obtaining the data to evaluate these items is through surveys, interviews and/or focus groups. However, these methods do not seem well-suited for the distance education methodology due to, among other reasons, the absence of objective measures, the lack of evaluation of specific distance education items, or the imposition of a fixed period to complete the survey while the virtual environment is available 24/7 [4]-[5]. An alternative approach is based on the use of Learning Analytics (LA) methods [6]-[7].
This strategy focuses on analyzing the information about the users' activity that is recorded in the LMS, which makes it possible to establish behavioral profiles not only for students but also for lecturers [8]-[11]. While both approaches foster the gathering of complementary information to assess the quality of teaching, a lack of fine-grained information about the student's satisfaction with respect to the content provided by the lecturer is detected. In other words, there is a gap in the methods applied to obtain detailed information about the quality of each specific resource in each lesson of any subject, namely text documents, multimedia contents, self-assessment tests, assignments, etc. In this paper a simple and effective method based on the use of smileys integrated into an LMS is proposed to collect these data. As a result, the evaluation of these data can help lecturers to detect not only content that should be improved but also what type of content is most enjoyed by the students. As a proof of concept, we evaluate the application of this method in a case study at the Catholic University of Murcia (UCAM), Spain.

The main objective of this study is to develop a method to measure the student's satisfaction with respect to the available resources and methodology in each of the units of each subject. With these assessments an evaluation is carried out to, on the one hand, analyze the students' behavior with respect to this method depending on factors such as their study level or the time period when they perform the proposed evaluation and, on the other hand, compare it with data from traditional surveys answered by the same group of students, so as to check whether there are significant differences between the results of both methods.

The rest of the paper is structured as follows. Related works in the area are reviewed in Section II. The methodology followed in this work and the method proposed for student satisfaction evaluation are explained in Section III. Section IV discusses the results obtained in the case study developed to evaluate our proposal. Finally, Section V summarizes the findings obtained in this work.

II. Related Work

In order to evaluate the quality of the methodologies applied in distance education, it is necessary to distinguish the most relevant factors that allow an integral analysis of the main milestones in such methodologies. Initially, the evaluation applied to distance learning focused only on the quality of the software or the LMS technologies in use [12]-[13], leaving aside the evaluation of the methodology and contents of the courses, both of which are crucial aspects for measuring quality. Later on, Plomp and Ely [14] included a more complete evaluation of the quality of distance education courses by suggesting four categories to evaluate: (1) course design, (2) resource selection, (3) methodology and (4) software training for lecturers. One of the main criticisms of this work was that it proposed the instructors as the only ones responsible for reviewing and verifying the quality of the contents, making it questionable whether other agents were needed in this process or whether the instructors had enough time for this laborious procedure [15].
There have also been efforts to define standards and frameworks to evaluate the quality of distance education, for example by adapting software evaluation standards such as ISO 9241-210 "Ergonomics of Human-System Interaction", highlighting the fact that the user experience plays a fundamental role in evaluating the functionality, reliability, usability, efficiency, portability and maintainability of the LMS [16]-[17]. On the other hand, the SCORM (Sharable Content Object Reference Model) proposal [18] includes a set of rules for the reuse of content between LMSs in order to achieve a learning process with a common structure. It uses a set of standards and specifications that analyze the relationships and levels of granularity between the materials of different units in order to automatically manage the content of those units and reuse it between different platforms. A third proposal reflects on how to describe, capture, and communicate more effectively the complex and iterative nature of data visualization design throughout the research, design, development, and deployment of visualization systems and tools [19].

Recent studies have identified that the quality of the available services and the students' satisfaction are of great relevance for measuring the quality of distance education. Many researchers agree that student satisfaction is an important factor to be valued because it is in many cases linked to academic performance and the university experience [20]-[23]. With the use of LMS platforms, this student satisfaction in higher education, reflected in the university experience, is considered a key component, since a student who is not satisfied with some component of an online course is more likely to transfer to another institution [24]-[25]. The lecturer's feedback is also a very important factor, as shown by several research works stating that developing new tools in the LMS, or updating them, without taking into account the satisfaction of the instructors negatively affects the results of the distance learning course [26]-[28].

Different models, surveys and questionnaires have been used to measure student satisfaction, see for example [24], [29]-[31] to name just a few. Despite the differences in such evaluation items, it is possible to identify a common set of main factors affecting student satisfaction. These factors include student-lecturer interaction, student-student interaction, the learning content (resources), and system flexibility and support [24]. In this regard, the use of an LMS positively impacts student satisfaction, highlighting the availability of resources, system accessibility and its tools as the determinant factors of LMS self-efficacy [31]. The inclusion of the learning content among these factors is noteworthy, since it emphasizes that the effective configuration of curriculum content and pedagogical content is necessary to create an effective learning experience. In this work we investigate how to evaluate this learning content in a more detailed manner.

New assessment models have also been considered in this study to identify student satisfaction. Currently it is a challenge to develop new tools that specifically assess the resources available to students in the LMS. Findings on this research line will help institutions by providing them with psychometric properties that add pedagogical value to distance learning.
In [32], a framework is designed to guide institutions in improving student satisfaction and further strengthening their e-learning implementation. The authors show that satisfaction can be predicted mostly from students' interactions. Another interesting project can be found in [33], where the authors introduce an intelligent classroom system that is able to classify student satisfaction by examining the parameters of the physical environment obtained with different intelligent devices. Whilst these works focus on particular interaction and physical context information, they do not evaluate the resources and methodology of the course. Following this research line on measuring the student's satisfaction, the work proposed in this paper advances the evaluation of learning content by providing a method to assess different types of specific content at a more fine-grained level compared to the works reviewed above.

III. Materials and Methods

This work proposes a new method to evaluate students' satisfaction with respect to specific elements of learning content in distance courses. A case study to evaluate this method has been developed at the Catholic University of Murcia (UCAM), where several blended and online courses are available. In particular, Sakai (https://sakaiproject.org/) has been used as the LMS platform for this study, since it is the one adopted at UCAM.

A regulation for these blended and online modalities exists to ensure that lecturers use a common framework with the aim of, on the one hand, providing students with quality resources and, on the other hand, ensuring that students remain engaged in the course, thereby reducing drop-out rates. To achieve these two objectives, the regulation establishes certain parameters that have been designed by the Vice-Rectorate of Virtual Education at UCAM. Among the most important parameters, lecturers have a maximum number of days to correct tasks or to answer forums or private messages. This allows students to know beforehand what the waiting times are and thus to plan accordingly. To keep track of these answer times and the lecturers' compliance with the regulation, the university uses a tool called Online Data [34], which is integrated within the Sakai LMS.

This LMS also provides a tool for organizing content called "Lesson Builder" that allows students to browse learning content of various types organized by topics or units. This content includes every available resource from text material to audiovisual material, as well as direct access to assignments, forums, videoconferences or self-assessments. However, the Lesson Builder tool does not allow gathering students' opinions about each of the provided contents. For this reason, a specific tool based on the representation of smileys has been developed in this work to allow students to evaluate such contents organized in their corresponding units. Fig. 1 shows an example of the smileys tool integrated into a Lesson Builder unit. As can be observed, at the top of each unit three smiley emoticons are displayed. Each of these emoticons allows students to express their satisfaction with the learning content of that unit. The students can express their satisfaction in five dimensions that are directly related to each unit: media resources, text resources, assignments, self-assessment and methodology.

Fig. 1. A screenshot of a Lesson Builder unit integrating the smileys tool (upper right corner). Content is intentionally blurred for the sake of the lecturer's privacy.
These five dimensions are grouped into two general categories, namely "Resources" and "Methodology". The Resources category groups the first four dimensions, while the Methodology category is composed of the dimension with the same name. The Resources category aims to analyze student satisfaction with respect to the available resources, assignments and self-assessment items. The Methodology category evaluates the lecturer's follow-up of that unit. Each dimension is assessed using a Likert-type scale from 1 (strong negative perception) to 5 (strong positive perception). Thus, when the student clicks on any of the smileys, a satisfaction evaluation screen is displayed (see Fig. 2).

Fig. 2. A screenshot showing how the student's satisfaction is gathered for the learning content of each unit once a smiley emoticon has been clicked on.

If the chosen option is the sad smiley, all the dimensions are marked with one star. If the neutral smiley is chosen, three stars are marked for each dimension (exactly as it appears in Fig. 2). If the student selects the happy smiley, then five stars are highlighted for each dimension. Regardless of the selected smiley, the student can modify the satisfaction for each item individually and may even write comments to justify the evaluation or suggestions to improve the contents. Students can anonymously evaluate each unit at any time during the academic year. Evaluations can be updated at any time; however, the system only stores the most recent evaluation. Observe also that a student can evaluate the different units of a subject separately, so the same subject can have several evaluations from the same student (one for each unit in the subject).

Once the data is gathered using the smileys tool, a statistical analysis is performed to find out whether there are any significant differences in the results of student satisfaction according to the students' study levels and according to the different periods of the academic year (divided into months). These analyses distinguish between student satisfaction with respect to the Resources category and the Methodology category. The non-parametric Kruskal-Wallis test, the Dunn-Bonferroni post hoc test and the Kolmogorov-Smirnov test are used for the statistical analysis.

On the other hand, prior to the implementation of this new tool based on smileys, only traditional surveys had been used at UCAM to evaluate student satisfaction (an example of the questionnaire used in this survey can be found in Annex I). In these traditional surveys, students are asked to evaluate four dimensions with respect to the lecturers' performance in the LMS: Methodology, Planning, Resources and General Overview. These dimensions are graded by the students following a Likert-type scale from 1 (strong negative perception) to 5 (strong positive perception). The activation of these traditional surveys takes place only in the last month of each academic quarter, prior to the final exams. Responses to these traditional surveys are also anonymous and a student can only take one survey per subject (unlike the smileys method, where the evaluation takes place at the unit level and there can be more than one response by the same student for the same subject).
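To make the rating model just described more concrete, the following minimal Python sketch shows one way the smiley-to-Likert mapping and the "keep only the most recent evaluation per student and unit" behavior could be represented. This is not the actual Sakai integration; all class, function and field names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, Optional, Tuple

# The five dimensions evaluated for each unit (see Section III).
DIMENSIONS = ("media_resources", "text_resources", "assignments",
              "self_assessment", "methodology")

# Default Likert score pre-filled when a smiley is clicked:
# sad -> 1 star, neutral -> 3 stars, happy -> 5 stars.
SMILEY_DEFAULTS = {"sad": 1, "neutral": 3, "happy": 5}


@dataclass
class UnitEvaluation:
    """One anonymous student evaluation of one unit."""
    scores: Dict[str, int]              # dimension -> 1..5
    comment: Optional[str] = None
    timestamp: datetime = field(default_factory=datetime.now)


class SmileyStore:
    """Keeps only the most recent evaluation per (student token, subject, unit)."""

    def __init__(self) -> None:
        self._store: Dict[Tuple[str, str, int], UnitEvaluation] = {}

    def evaluate(self, token: str, subject: str, unit: int, smiley: str,
                 overrides: Optional[Dict[str, int]] = None,
                 comment: Optional[str] = None) -> UnitEvaluation:
        # Pre-fill every dimension from the clicked smiley...
        scores = {d: SMILEY_DEFAULTS[smiley] for d in DIMENSIONS}
        # ...then let the student adjust individual dimensions.
        scores.update(overrides or {})
        evaluation = UnitEvaluation(scores=scores, comment=comment)
        # Overwrites any previous evaluation of the same unit by the same student.
        self._store[(token, subject, unit)] = evaluation
        return evaluation


# Example: a neutral smiley with the self-assessment dimension lowered to 2.
store = SmileyStore()
store.evaluate("anon-42", "Hypothetical Subject", unit=3, smiley="neutral",
               overrides={"self_assessment": 2}, comment="More practice tests, please")
```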
In order to search for differences between the results of both types of evaluation of student satisfaction, a preliminary comparative study between the results for the Resources and Methodology dimensions of each method is carried out in this work using the visualization tool QlikSense [35].

To perform the case study, a total of 245 students from 49 different blended and online courses have participated during the 2018/19 academic year, classified into four study levels: Degree, Master, Ph.D. and Own Degree (i.e., university-specific degrees endorsed by the prestige of the university but without the official recognition of the State granted to the rest of the degrees). For the evaluation of the learning content, one subject from each of the 49 courses has been selected. Table I shows the number of subjects and students involved in the case study classified by study level, along with the number of responses gathered at each level through the smileys tool (Resp. Smileys) and the number of responses obtained in traditional surveys (Resp. Traditional). It should be noted that the students who have participated in both surveys are the same, but since the surveys are anonymous, it is not possible to match the students' answers in both surveys in order to make an association between the two responses (i.e., they cannot be considered as samples of related responses). For this reason, the available data set is considered a set of independent samples, hence the justification for the application of the Kruskal-Wallis statistical test in the analysis of results.

TABLE I. Number of Subjects, Students and Responses Involved in Each Study Level. The Study Has Been Performed During the 2018/19 Academic Year

Level               Degree   Master   Ph.D.   Own Degree   Total
Students                50      100      20           75     245
Subjects                10       20       4           15      49
Resp. Smileys          582      211      63           63     919
Resp. Traditional       49       97      19           75     240

The data shown in Table I comprise the dataset for the study performed in this work. Specifically, the dataset consists of the following attributes:
• Type of survey: Traditional or smileys method.
• Subject: Subject that receives the evaluation.
• Study level: Level of studies to which the subject corresponds.
• Assessment for methodology: Assessment on the scale of 1 to 5 made for the methodology section.
• Assessment for resources: Assessment on the scale of 1 to 5 made for the resource items.
• Date: Date of the survey.

Using this dataset, in this paper we consider the following research questions:
• RQ1. Are there differences in the resource evaluation according to the students' study level when using the smileys tool?
• RQ2. Are there differences in the methodology evaluation according to the students' study level when using the smileys tool?
• RQ3. Are there differences in the resource evaluation according to the period of the survey response when using the smileys tool?
• RQ4. Are there differences in the methodology evaluation according to the period of the survey response when using the smileys tool?
• RQ5. Are there differences in students' satisfaction results about resources and methodology depending on the type of survey (traditional vs. smileys)?

The first four RQs perform a study focused only on the proposed smileys tool.
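As a reference for the analyses that follow, the sketch below shows one way such a response dataset could be represented in Python with pandas. The column names and example rows are illustrative only, not the actual UCAM data.

```python
import pandas as pd

# Illustrative rows following the six attributes described above
# (survey type, subject, study level, methodology score, resources score, date).
responses = pd.DataFrame(
    [
        {"survey_type": "smileys", "subject": "Subject A", "study_level": "Degree",
         "methodology": 4, "resources": 3, "date": "2019-02-11"},
        {"survey_type": "traditional", "subject": "Subject A", "study_level": "Degree",
         "methodology": 5, "resources": 4, "date": "2019-05-20"},
    ]
)
responses["date"] = pd.to_datetime(responses["date"])
responses["month"] = responses["date"].dt.month  # monthly grouping used for RQ3/RQ4
```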
RQ1 studies the differences in the evaluation of the resources dimension depending on the level of studies the student is taking. RQ2 studies the differences in the methodology dimension depending on the level of studies the student is taking. RQ3 studies whether there are differences in the evaluation of the resources dimension depending on the academic period in which the student answers the survey. RQ4 studies whether there are differences in the evaluation of the methodology dimension depending on the academic period in which the student responds to the survey. Finally, RQ5 analyzes at a global level whether there are differences in the evaluation of the resource and methodology dimensions when the evaluation is done by means of the traditional surveys compared to the new smileys tool.

IV. Results

This section explores the results of the analysis proposed in Section III. Firstly, a general view of the results obtained by means of the smileys tool is displayed in Fig. 3. It shows the average values obtained for each dimension evaluated through this tool (see Section III) along the academic year for all the study levels.

Fig. 3. Average values for each dimension (media resources, text resources, assignments, self-assessment tests and methodology) evaluated through the smileys tool along the academic year.

Although the data are shown from January to December, it is important to bear in mind that Fig. 3 reflects the two academic quarters in the Spanish academic year: the first quarter begins in mid-September and ends in January (with the final exams for this period) and the second quarter begins in mid-February and ends at the end of June (again with the final exams for this second period). September was the month for the remedial exams at UCAM in the 2018/19 academic year.

Analyzing the results obtained for each dimension, in general they follow the same trend. The best valued dimension is the media resources, followed by the text resources, while the worst valued one is the self-assessment content. The highest average values coincide with the initial months of each quarter, namely September and February, and with the examination periods corresponding to January, May and the end of August/beginning of September. The lowest averages appear in the Christmas period (December) and summer (June, July and August), both included in the holiday periods at UCAM.

Next, a statistical analysis of the results of the student satisfaction gathered by means of the smileys tool is performed to evaluate research questions RQ1-RQ4. For all of them, the Kolmogorov-Smirnov test is applied first to check the normality of the data. The test returns a p-value = 0.0 for each research question, therefore it can be stated with a 95% confidence level that the data do not follow a normal distribution for any research question. Therefore, non-parametric tests must be applied, and the Kruskal-Wallis test is used as the non-parametric alternative to the one-way ANOVA. In order to adjust the p-value and obtain the significant differences at a general level, the Dunn-Bonferroni post hoc test is applied when necessary.

A. Research Questions RQ1 & RQ2

The null hypothesis to tackle RQ1 indicates that there are no significant differences in the student satisfaction regarding the Resource category for any study level (with a 95% confidence level). The Kruskal-Wallis test returns a p-value = 0.048, and therefore the null hypothesis is rejected. Hence, there are significant differences in the student satisfaction regarding the Resource category depending on the study level with a 95% confidence level.
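The test sequence applied throughout RQ1-RQ4 (Kolmogorov-Smirnov normality check, Kruskal-Wallis omnibus test and, when it rejects the null hypothesis, Dunn's post hoc comparisons with Bonferroni adjustment) could be implemented, for example, as in the following sketch. SciPy and the scikit-posthocs package are assumptions here, since the paper does not state which statistical software was used.

```python
from scipy import stats
import scikit_posthocs as sp


def analyse(df, value_col, group_col, alpha=0.05):
    """Normality check, Kruskal-Wallis test and, if significant, Dunn-Bonferroni post hoc."""
    # Kolmogorov-Smirnov check of the standardized scores against a normal distribution.
    z = (df[value_col] - df[value_col].mean()) / df[value_col].std(ddof=0)
    ks_p = stats.kstest(z, "norm").pvalue

    # Kruskal-Wallis omnibus test across the groups (e.g. study levels or months).
    groups = [g[value_col].values for _, g in df.groupby(group_col)]
    kw_p = stats.kruskal(*groups).pvalue

    dunn = None
    if kw_p < alpha:
        # Pairwise Dunn comparisons with Bonferroni-adjusted p-values.
        dunn = sp.posthoc_dunn(df, val_col=value_col, group_col=group_col,
                               p_adjust="bonferroni")
    return ks_p, kw_p, dunn


# Example call on the illustrative 'responses' frame sketched in Section III:
# ks_p, kw_p, dunn = analyse(responses[responses.survey_type == "smileys"],
#                            "resources", "study_level")
```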
Table II shows the p-value and the adjusted p-value obtained after using the Dunn-Bonferroni post hoc adjustment test for RQ1. This table only shows the combinations of pairs that have significant differences in the p-value with a confidence level of 95%. As can be seen at the individual level (p-value without adjustment), there are significant differences between the student satisfaction related to resources of Own-degree and Master students, as well as between Degree and Master students. As can be seen in Table IV, Own-degree students are more satisfied with the available resources (slightly above 4 points) than the Master students, whose satisfaction is lower and stands at 3.4 points. Table IV also shows the difference in satisfaction about resources between Master and Degree students. In this case, the difference is somewhat smaller, since the average satisfaction of the latter is 3.7 points. At the general level (adjusted p-value) there are no significant differences in satisfaction between Degree and Master students, but there are still significant differences between the satisfaction of Own-degree and Master students with a confidence level of 93%.

TABLE II. P-Value Results with 95% Confidence Level Obtained with the Kruskal-Wallis Test and Adjusted P-Value Obtained with the Dunn-Bonferroni Post Hoc Test for the Student's Satisfaction Regarding the Resource Category Grouped by Study Levels

Pairs of study levels    Own-degree / Master    Degree / Master
P-value                  0.011                  0.048
Adjusted p-value         0.065                  0.288

Regarding RQ2, the null hypothesis indicates that there are no significant differences in student satisfaction regarding the Methodology category for any study level (with a 95% confidence level). The Kruskal-Wallis test returns a p-value = 0.01, and therefore the null hypothesis is rejected. Hence, there are significant differences in the student satisfaction regarding the Methodology category depending on the study level with a 95% confidence level.

Table III indicates the p-value and the adjusted p-value obtained after using the Dunn-Bonferroni post hoc adjustment test for RQ2. This table only shows the combinations of pairs that have significant differences in the p-value with a confidence level of 95%. In the pairwise comparisons, there are significant differences regarding the satisfaction about the methodology between the study levels of Own-Degree and Degree, Degree and Master, and Degree and Ph.D. Note that at the general level they are not significant, as shown by the adjusted p-values.

TABLE III. P-Value Results with 95% Confidence Level Obtained with the Kruskal-Wallis Test and Adjusted P-Value Obtained with the Dunn-Bonferroni Post Hoc Test for the Student's Satisfaction Regarding the Methodology Category Grouped by Study Levels

Pairs of study levels    Own-degree / Master    Degree / Master    Degree / Ph.D.
P-value                  0.035                  0.039              0.017
Adjusted p-value         0.209                  0.234              0.101

For the smileys tool, Table IV shows the mean values (standard deviation in parentheses) for the resource and methodology evaluation, only showing the pairs of study levels for which there are significant differences.

TABLE IV. Average Results of the Student Satisfaction Regarding Resources and Methodology Grouped by Study Levels (Data Gathered Through the Smileys Tool). Standard Deviations Are Given in Parentheses

Study Level    Degree      Master      Ph.D.       Own Degree
Resources      3.7 (1.8)   3.4 (1.7)   3.6 (1.7)   4.0 (1.3)
Methodology    3.8 (1.8)   3.5 (1.9)   3.3 (2.0)   3.6 (1.9)
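Per-level averages and standard deviations of the kind reported in Table IV can be obtained from a response table such as the illustrative one sketched in Section III with a simple grouping; the column names below are those hypothetical ones, not the actual UCAM schema.

```python
# Mean and standard deviation of the smileys ratings per study level,
# analogous to Table IV (illustrative 'responses' frame from Section III).
smileys = responses[responses.survey_type == "smileys"]
table_iv = (smileys.groupby("study_level")[["resources", "methodology"]]
            .agg(["mean", "std"])
            .round(1))
print(table_iv)
```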
B. Research Questions RQ3 & RQ4

The null hypothesis related to RQ3 indicates that there are no significant differences in the student satisfaction for any month of the year regarding the Resource category (with a 95% confidence level). The Kruskal-Wallis test returns a p-value = 0.0, and therefore the null hypothesis is rejected. Thus, there are significant differences in the student satisfaction with respect to the Resource category depending on the month of the year in which the evaluation is conducted. Table V shows the p-value and the adjusted p-value obtained after using the Dunn-Bonferroni post hoc adjustment for RQ3. The first column shows the number of the month instead of its name, indicating with a '/' which pairs of months have significant differences. This table only shows the combinations of pairs that have significant differences in the p-value with a confidence level of 95%.

TABLE V. P-Value Results with 95% Confidence Level Obtained with the Kruskal-Wallis Test and Adjusted P-Value Obtained with the Dunn-Bonferroni Post Hoc Test for the Student's Satisfaction in the Resource Category Throughout the Academic Year

Pairs of months    P-value    Adjusted p-value
7/4                0.004      0.03
7/5                0.0        0.001
7/6                0.009      0.09
7/9                0.0        0.002
7/10               0.0        0.004
7/11               0.0        0.0
6/9                0.042      0.4
6/5                0.01       0.1

Among the individual pair comparisons (according to the p-value) there are significant differences between July and the following months: April, May, June, September, October and November; and also between June and September, and between May and June. In a general interpretation of the results (observing the adjusted p-value), there are only significant differences between July and the following months: April, May, September, October and November. This result coincides with the data displayed in Fig. 4, July being the month in which students give the worst evaluation of resources compared to the other months.

Fig. 4. Average values for the Resource and Methodology categories throughout the academic year (in months) obtained from the smileys tool.

Finally, the null hypothesis related to RQ4 indicates that there are no significant differences in the student satisfaction for any month of the year regarding the Methodology category (with a 95% confidence level). The Kruskal-Wallis test returns a p-value = 0.0, and therefore the null hypothesis is rejected. Thus, there are significant differences in the student satisfaction with respect to the Methodology category depending on the month of the year in which the evaluation is conducted.

TABLE VI. P-Value Results with 95% Confidence Level Obtained with the Kruskal-Wallis Test and Adjusted P-Value Obtained with the Dunn-Bonferroni Post Hoc Test for the Student's Satisfaction in the Methodology Category Throughout the Academic Year

Pairs of months    P-value    Adjusted p-value
7/1                0.03       0.2
7/2                0.004      0.03
7/3                0.0        0.001
7/4                0.01       0.1
7/5                0.0        0.0
7/6                0.001      0.01
7/9                0.0        0.001
7/10               0.001      0.007
7/11               0.0        0.0
7/12               0.01       0.1
4/11               0.023      0.2

Table VI presents the p-value and the adjusted p-value obtained after using the Dunn-Bonferroni post hoc adjustment test for RQ4. Again, the first column shows the month number instead of its name.
This table only presents the combinations of pairs that have significant differences in the p-value with a confidence level of 95%. In the individual comparison of pairs, it can be seen that there are significant differences in the assessment of student satisfaction in the Methodology dimension between July and the following months: January, February, March, April, May, June, September, October, November and December. There are also significant differences between June and May and between June and September. This can be seen in Fig. 4, since in June the satisfaction values regarding the methodology are lower than in May and September. Analyzing the global results with the adjusted p-value, there are only significant differences between July and the following months: February, March, May, June, September, October and November. These differences can be appreciated visually in Fig. 4.

In summary, the significant differences in the student satisfaction values for the Resources and Methodology dimensions vary across the months of the academic year, with July being the worst rated month in terms of satisfaction in both the Resources and Methodology categories.

C. Research Question RQ5

Fig. 5 shows a bar graph which compares the results obtained by the two methods of satisfaction evaluation presented in Section III, namely traditional surveys and the smileys tool. The error bars represent the 95% confidence interval for the average values. The results are analyzed with respect to the two comparable categories of each method (Resources and Methodology) and grouped by the different study levels. For the traditional survey method, blue represents the average value for Resources and yellow the average value for Methodology. For the smileys tool method, green represents the average value for Resources and red the average value for Methodology. A more in-depth statistical analysis cannot be applied at the moment, as the disaggregated data for the traditional survey method have not been made available, only the mean values and standard deviations of each evaluated category for each study level.

Fig. 5. Comparison of average student satisfaction results obtained through traditional surveys and the smileys tool with respect to the Resource and Methodology categories, grouped by study level. The error bars represent the 95% confidence interval for the average values.

All study levels have been considered for this study. However, it is important to highlight that, at the time of this study, the university had not yet implemented quality assessment processes in Ph.D. courses by means of traditional surveys due to their recent introduction (from 2015). In this case it has not been possible to perform a comparative study with the results obtained by the smileys tool method.

It is observed that, in general, the results obtained through traditional surveys show a greater student satisfaction than those obtained with the smileys tool. This fact may be due to several factors. Firstly, traditional surveys are designed to evaluate the learning content and the methodology of any subject in a global manner, whereas the smileys tool allows for a more fine-grained evaluation. Therefore, if the content or the methodology of a specific unit is evaluated less positively than the rest of the units, it may affect the global evaluation.
Secondly, traditional surveys are activated during a limited period of time and in a specific interval of dates. On the other hand, the smileys tool allows students to evaluate (and re-evaluate) any content at any time, including the period after the examinations. In this way, students have more occasions to reflect on their satisfaction with the learning content or the methodology. A last factor might be related to the fact that, according to some authors, memory is more important than actuality [36]. Thus, the results of traditional surveys may be influenced by a general positive memory of the course combined with the temporal distance (which usually highlights positive memories and occludes negative ones) between this memory and the moment when the survey is answered, usually two or three weeks after the course has been completed. In contrast, the smileys approach proposed in this work does not benefit from this memory factor, since it allows students to express their satisfaction at any time.

Examining the graph across study levels, the results obtained by the traditional surveys do not present significant differences between the average values of the Resources and Methodology dimensions for the different study levels. However, according to the results obtained by means of the smileys tool, it can be observed that the resources are better valued than the methodology in Own Degree and Ph.D. courses, and the opposite is found for Master and Degree courses.

D. Discussion and Limitations

After analyzing the quantitative results of our study, some reflections and limitations are now discussed. In the first two RQs, we differentiate the assessments of the resources and methodology dimensions carried out by the students with the smileys tool depending on their level of study. The results show that students at the Degree and Own Degree levels give higher assessments of resources and methodology than students at the other study levels. This is assumed to be due to the fact that Master and Ph.D. students are more demanding, have less time available and look for more elaborate materials. However, these students are provided with materials such as scientific articles to encourage research activities, and these resources, together with the methodology followed, require more time than these students expect.

With respect to RQ3 and RQ4, related to the period in which the surveys are answered, the results indicate that July is the month with the worst results. Analyzing this result and consulting the statistics on the number of students who access the subject in July but not during the rest of the year, these students largely coincide with those who make the most negative assessments. This implies that these students have not followed the subject and do not have a full perspective of it, which causes the evaluations to fall both for the resources and for the methodology of the subject.

Finally, regarding RQ5, where the results of the traditional surveys are compared with those of the new smileys tool, it is concluded that the latter is more effective, since students can express their opinion at any time and a finer-grained picture of the methodology and resources of a subject is obtained by carrying out unit-by-unit evaluations instead of a single global one.

Regarding the limitations of the study, they are mainly derived from the comparison between the two survey methods studied.
Despite the fact that similar criteria are measured in both methods, we found differences in the results that may be caused by several limiting factors. On the one hand, the assessment of each unit using the smileys tool does not provide a global vision of the subject, and there may be significant differences between units, based on objective aspects such as the fact that the lecturer may not dedicate the same time to the elaboration of the materials in every unit, or may have more or less knowledge of certain topics of the subject. On the other hand, the general overview provided by traditional surveys may be influenced by factors such as the students' grades, the perceived quality of the correction of assignments, or the level of difficulty of the exam (partial or final), and it also depends on the time at which the survey is conducted. Another limitation of this comparison is given by the anonymous method of data collection, which, while we believe it to be the correct manner of collecting students' opinions, does not allow us to make a more accurate study using related samples, as we cannot match the answers of the same student in both methods of assessing his/her satisfaction.

V. Conclusion and Future Work

According to the current trends in quality evaluation applied to distance education, there is a main focus on the assessment of student satisfaction, differentiating four categories to be evaluated: (1) student-student and student-lecturer interactions, (2) learning resources, (3) methodology and (4) flexibility and ease of use of the Learning Management System software. The main methods to perform the evaluation of these categories are based on surveys containing general questions and activated during a specific and limited time. Although the data obtained using these surveys are valuable for a global overview of the quality of distance courses, there is a lack of fine-grained methods for obtaining the student satisfaction with specific learning contents and methodologies in these courses.

This work proposes a new evaluation model through a tool based on the representation of smileys, which allows students to evaluate in a simple and intuitive manner the learning resources and the methodology applied in each unit of any subject, in any course and at any time during the academic year. A case study has been carried out to evaluate this proposal and the results demonstrate that both the student's study level and the period when the satisfaction evaluation is performed are sensitive factors to take into account when interpreting the evaluation results. Hence, it has been observed that better ratings of the learning content are obtained during non-holiday periods. Likewise, it has been detected that the resources and methodologies related to two specific study levels, namely the Degree and Own Degree levels, are better valued than the rest.
Finally, by comparing student satisfaction results obtained from traditional surveys with the results obtained through the tool proposed in this paper, it can be observed that the latter seems to be more accurate, since the way of gathering data is more specific and flexible.

As future work, we are investigating the use of semantic analysis to detect the most highlighted topics and sentiments in the opinions written by the students when using the smileys tool. It is also important to design a process to transfer the results obtained with this new tool to the lecturers in a simple and effective manner, with the aim of improving the lecturer's awareness of the quality of these elements. We are also planning to analyze students' changes of opinion throughout the course. To do this we will extend the smileys tool so that it can store a log of the changes in the evaluations performed by the students.

Annex I. Questionnaire for Traditional Survey

STUDENT SATISFACTION SURVEY - LECTURER ASSESSMENT
Please rate your satisfaction with the following aspects with a score of 1 to 5 (from 1: Strongly disagree to 5: Strongly agree).

PLANNING
1. The planning (date, duration, etc.) of the activities in the Teaching Guide (Syllabus) seems to me to be adequate and useful for the development of the subject.
2. The development of the course programme is in accordance with the commitments made in the Teaching Guide (Syllabus).
3. The lecturer encourages self-learning, guides me in task planning and gives me correct guidance in the development of tasks.

METHODOLOGY
4. The lecturer motivates active participation and generates interest in the subject.
5. The lecturer organizes, structures and clearly explains the content in his/her classes.
6. The lecturer encourages the development of the capacity for reflection, analysis, synthesis and reasoning.
7. The tutoring of the subject by the lecturer is adequate.
8. The lecturer applies the evaluation systems set out in the course's Teaching Guide.
9. The lecturer promotes teamwork to develop communication and relationship skills.

RESOURCES
10. The teaching resources (audiovisual media, virtual campus material, etc.) used by the lecturer are adequate to facilitate learning.
11. The study materials (books, articles, electronic resources, etc.) used in the course are appropriate.

GENERAL OVERVIEW
12. The lecturer is an expert in the subject.
13. Evaluate globally the work developed by the lecturer in the subject, considering all the previous aspects.

Acknowledgment

This work is supported by the Spanish innovation project funded by UCAM under grant PID-05/19. The authors would like to thank the Online Vice-Rectorate of this University for their participation in this paper.

References

[1] A. Corbi and D. Burgos, "Review of current student-monitoring techniques used in e-learning focused recommender systems and learning analytics. The Experience API & LIME model case study", International Journal of Interactive Multimedia and Artificial Intelligence, vol. 2, no. 7, pp. 44-52, 2014.
[2] S. K. Parahoo, M. I. Santally, Y. Rajabalee and H. L. Harvey, "Designing a predictive model of student satisfaction in online learning", Journal of Marketing for Higher Education, vol. 26, no. 1, pp. 1-19, 2016.
[3] S. N. Uribe and M. Vaughan, "Facilitating student learning in distance education: a case study on the development and implementation of a multifaceted feedback system", Distance Education, vol. 38, no. 3, pp. 288-301, 2017.
[4] M. A. O'Neill and A. Palmer, "Importance-performance analysis: a useful tool for directing continuous quality improvement in higher education", Quality Assurance in Education, vol. 12, no. 1, pp. 39-52, 2004.
[5] V. Teeroovengadum, T. Kamalanabhan and A. K. Seebaluck, "Measuring service quality in higher education: Development of a hierarchical model (HESQUAL)", Quality Assurance in Education, vol. 24, no. 2, pp. 244-258, 2016.
[6] R. Ferguson, "Learning analytics: drivers, developments and challenges", International Journal of Technology Enhanced Learning, vol. 4, no. 5/6, pp. 304-317, 2012. Retrieved from http://oro.open.ac.uk/36374/
[7] V. A. Secades and O. Arranz, "Big Data & eLearning: A Binomial to the Future of the Knowledge Society", International Journal of Interactive Multimedia and Artificial Intelligence, vol. 3, no. 6, pp. 29-33, 2016.
[8] M. Cantabella, B. López, A. Caballero and A. Muñoz, "Analysis and evaluation of lecturers' activity in Learning Management Systems: Subjective and objective perceptions", Interactive Learning Environments, vol. 26, no. 7, pp. 911-923, 2018.
[9] M. Cantabella, R. Martínez-España, B. Ayuso, J. A. Yáñez and A. Muñoz, "Analysis of student behavior in learning management systems through a Big Data framework", Future Generation Computer Systems, vol. 90, pp. 262-272, 2019.
[10] A. Espasa and J. Meneses, "Analysing Feedback Processes in an Online Teaching and Learning Environment: An Exploratory Study", Higher Education, vol. 59, no. 3, pp. 277-292, 2010.
[11] B. E. Shelton, J.-L. Hung and P. R. Lowenthal, "Predicting student success by modeling student interaction in asynchronous online courses", Distance Education, vol. 38, no. 1, pp. 59-69, 2017.
[12] W. S. Cheung, "A new role for teachers: software evaluator", in Proceedings of the IFIP TC3/WG3.5 International Working Conference on Exploring a New Partnership: Children, Teachers and Technology, 1994, pp. 191-199.
[13] D. Hawkridge, "Software for schools: British reviews in the late 1980s", Computers and Learning, pp. 88-108, 1990.
[14] T. Plomp and D. P. Ely, "International encyclopedia of educational technology", ERIC, 1996.
[15] V. F. Sharp, "Computer education for teachers: Integrating technology into classroom teaching", Wiley, 2008.
[16] W. Nakamura, L. Marques, L. Rivero, E. Oliveira and T. Conte, "Are generic UX evaluation techniques enough? A study on the UX evaluation of the Edmodo Learning Management System", in Brazilian Symposium on Computers in Education (Simpósio Brasileiro de Informática na Educação - SBIE), vol. 28, no. 1, p. 1007, 2017.
[17] G. Ssekakubo, H. Suleman and G. Marsden, "Designing mobile LMS interfaces: Learners' expectations and experiences", Interactive Technology and Smart Education, vol. 10, no. 2, pp. 147-167, 2013.
[18] C. Fallon and S. Brown, "E-learning standards: a guide to purchasing, developing, and deploying standards-conformant e-learning", CRC Press, 2016.
[19] S. P. McKenna, "The Design Activity Framework: Investigating the Data Visualization Design Process", Doctoral dissertation, The University of Utah, 2017.
[20] H. Al-Samarraie, B. K. Teng, A. I. Alzahrani and N. Alalwan, "E-learning continuance satisfaction in higher education: a unified perspective from instructors and students", Studies in Higher Education, vol. 43, no. 11, pp. 2003-2019, 2018.
[21] T. Markova, I. Glazkova and E. Zaborova, "Quality issues of online distance learning", Procedia - Social and Behavioral Sciences, vol. 237, pp. 685-691, 2017.
[22] Q. T. Pham and T. P. Tran, "Impact factors on using of e-learning system and learning achievement of students at several universities in Vietnam", in O. Gervasi et al. (Eds.), International Conference on Computational Science and Its Applications - ICCSA 2018, pp. 394-409, Springer International Publishing, 2018.
[23] I. S. Weerasinghe and R. L. Fernando, "Students' satisfaction in higher education", American Journal of Educational Research, vol. 5, no. 5, pp. 533-539, 2017.
[24] E. Martínez-Caro, J. G. Cegarra-Navarro and G. Cepeda-Carrión, "An application of the performance-evaluation model for e-learning quality in higher education", Total Quality Management & Business Excellence, vol. 26, no. 5-6, pp. 632-647, 2015.
[25] W. Sahusilawane and L. S. Hiariey, "The Role of Service Quality Toward Open University Website on The Level of Student Satisfaction", Journal of Education and Learning, vol. 10, no. 2, pp. 85-92, 2016.
[26] I. Almarashdeh, "Sharing instructors' experience of learning management system: A technology perspective of user satisfaction in distance learning course", Computers in Human Behavior, vol. 63, pp. 249-255, 2016.
[27] N. C. Gee, "The Impact of Lecturers' Competencies on Students' Satisfaction", Journal of Arts and Social Science, vol. 1, no. 2, pp. 74-86, 2018.
[28] M. Kangas, P. Siklander, J. Randolph and H. Roukema, "Teachers' engagement and students' satisfaction with a playful learning environment", Teaching and Teacher Education, vol. 63, pp. 274-284, 2017.
[29] R. Yilmaz, "Exploring the role of e-learning readiness on student satisfaction and motivation in flipped classroom", Computers in Human Behavior, vol. 70, pp. 251-260, 2017.
[30] X. Zhai, J. Gu, H. Liu, J. C. Liang and T. Chin-Chung, "An experiential learning perspective on students' satisfaction model in a flipped classroom context", Journal of Educational Technology & Society, vol. 20, no. 1, pp. 198-210, 2017.
[31] R. Prifti, "Self-efficacy and student satisfaction in the context of blended learning courses", Open Learning: The Journal of Open, Distance and e-Learning, pp. 1-15, 2020.
[32] M. Asoodar, S. Vaezi and B. Izanloo, "Framework to improve e-learner satisfaction and further strengthen e-learning implementation", Computers in Human Behavior, vol. 63, pp. 704-716, 2016.
[33] A. Uzelac, N. Gligorić and S. Krčo, "System for recognizing lecture quality based on analysis of physical parameters", Telematics and Informatics, vol. 35, no. 3, pp. 579-594, 2018.
[34] M. Cantabella, B. López-Ayuso, A. Muñoz and A. Caballero, "Una herramienta para el seguimiento del profesorado universitario en entornos virtuales de aprendizaje" [A tool for monitoring university lecturers in virtual learning environments], Revista Española de Documentación Científica, vol. 39, no. 4, p. 153, 2016.
[35] M. García and B. Harmsen, QlikView 11 for Developers. Packt Publishing Ltd, 2012.
[36] D. A. Norman, "The way I see it. Memory is more important than actuality", Interactions, vol. 16, no. 2, pp. 24-26, 2009.

Magdalena Cantabella

Magdalena Cantabella obtained her B.S. in Computer Science at the Catholic University of Murcia in 2008 and her M.S. in New Technologies in Computer Science Applied to Biomedicine at the University of Murcia in 2012. She obtained her PhD in Computer Science in 2018 at the Catholic University of Murcia.
Since 2010 she has been an associate professor at the Polytechnic School, within the Department of the Degree in Computer Engineering of the Catholic University of Murcia. Her areas of research include massive statistical analysis of data, e-learning and the definition of user profiles.

Belén López

Belén López obtained her M.S. in Computer Science from the University of Murcia and her PhD in Computer Science at the same university. She has 18 years of teaching experience in both Degree and Master courses at university level, including e-learning methodology. She has participated in several educational innovation projects from which publications in the area of educational innovation have been obtained. At the moment she is the Dean of the Degree in Computer Engineering of the Catholic University of Murcia and Head of the Online Department at this university. Her areas of research include teaching assessment and e-learning methodology evaluation.

Raquel Martínez-España

Raquel Martínez-España is an associate professor in the Technical School at the Catholic University of Murcia (UCAM), Spain. She obtained her M.S. in Computer Science in 2009 and her PhD in Computer Science in 2014 at the University of Murcia. She has worked on several research projects in artificial intelligence and education. Raquel has participated in various academic and industry projects. Her research interests include data mining, big data, soft computing, artificial intelligence and intelligent data analysis.

Andrés Muñoz

Andrés Muñoz is a senior lecturer in the Technical School at the Catholic University of Murcia (UCAM), Spain. He obtained his PhD in Computer Science in 2011 at the University of Murcia. He has worked on several research projects in artificial intelligence and education. His main research interests include argumentation in intelligent systems, Semantic Web technologies, and Ambient Intelligence and Intelligent Environments applied to education.