title: Influence of COVID-19 confinement on students' performance in higher education authors: Gonzalez, T.; de la Rubia, M. A.; Hincz, K. P.; Comas-Lopez, M.; Subirats, Laia; Fort, Santi; Sacha, G. M. date: 2020-10-09 journal: PLoS One DOI: 10.1371/journal.pone.0239490 This study analyzes the effects of COVID-19 confinement on the autonomous learning performance of students in higher education. Using a field experiment with 458 students from three different subjects at the Universidad Autónoma de Madrid (Spain), we study the differences in assessments by dividing students into two groups. The first group (control) corresponds to the academic years 2017/2018 and 2018/2019. The second group (experimental) corresponds to students from 2019/2020, the group whose face-to-face activities were interrupted by the confinement. The results show a significant positive effect of the COVID-19 confinement on students' performance. This effect is also significant in activities whose format did not change when performed after the confinement. We find that the effect is significant both in subjects that increased the number of assessment activities and in subjects that did not change the student workload. Additionally, an analysis of students' learning strategies before confinement shows that students did not study on a continuous basis. Based on these results, we conclude that COVID-19 confinement changed students' learning strategies toward a more continuous habit, improving their efficiency. For these reasons, better scores in students' assessments are to be expected under COVID-19 confinement, explained by an improvement in their learning performance. The coronavirus COVID-19 outbreak disrupted life around the globe in 2020. As in any other sector, the COVID-19 pandemic affected education in many ways.
Government actions have followed a common goal of reducing the spread of coronavirus by introducing measures limiting social contact. Many countries suspended face-to-face teaching and exams and placed restrictions on immigration affecting Erasmus students [1]. Where possible, traditional classes are being replaced with books and materials taken from school. Various e-learning platforms enable interaction between teachers and students, and, in some cases, national television shows or social media platforms are being used for education. Some education systems announced exceptional holidays to better prepare for this distance-learning scenario. In terms of the impact of the COVID-19 pandemic on different countries' education systems, many differences exist. This lack of homogeneity is caused by factors such as the start and end dates of academic years and the timing of school holidays. While some countries suspended in-person classes from March/April until further notice, others were less restrictive, and universities were only advised to reduce face-to-face teaching and replace it with online solutions wherever practicable. In other cases, depending on the academic calendar, it was possible to postpone the start of the summer semester [2]. Fortunately, there is a range of modern tools available to face the challenge of distance learning imposed by the COVID-19 pandemic [3]. Using these tools, the migration of content that was previously taught face-to-face is easily conceivable. There are, however, other important tasks in the learning process, such as assessment or autonomous learning, that can still be challenging without the direct supervision of teachers. All these arguments converge on a common question: how can we ensure that assessment adequately measures students' progress? That is, how can teachers interpret students' results if they differ from previous years?
On one hand, if students achieve higher scores than in previous years, this could be linked to cheating in online exams or to changes in the format of the evaluation tools. On the other hand, lower grades could also be caused by the change of evaluation format or be attributable to autonomous learning being a less effective teaching method. The objective of this article is to reduce the uncertainty in the assessment process in higher education during the COVID-19 pandemic. To achieve this goal, we analyze students' learning strategies before and after confinement. Altogether, our data indicate that autonomous learning in this scenario has increased students' performance and that higher scores should be expected. We also discuss the reasons underlying this effect. We present a study involving more than 450 students enrolled in three subjects from different degrees at the Universidad Autónoma de Madrid (Spain) during three academic years, including data obtained in the 2019/2020 academic year, when the restrictions due to the COVID-19 pandemic were in force. E-learning has experienced significant change due to the exponential growth of the internet and information technology [4]. New e-learning platforms are being developed for tutors to facilitate assessments and for learners to participate in lectures [4, 5]. Both assessment processes and self-evaluation have been proven to benefit from technological advancement. Courses that offer solely online content, such as Massive Open Online Courses (MOOCs) [6, 7], have also become popular. The inclusion of e-learning tools in higher education implies that a greater amount of information can be analyzed, improving teaching quality [8-10]. In recent years, many studies have analyzed the advantages and challenges of massive data analysis in higher education [11]. For example, a study by Gasevic et al.
[12] indicates that time management tactics had significant correlations with academic performance. Jovanovic et al. also demonstrated that assisting students in their management of learning resources is critical for the correct management of their learning strategies in terms of regularity [13]. Within a few days, the COVID-19 pandemic enhanced the role of remote working, e-learning, video streaming, etc. on a broad scale [14]. In [15], we can see that the most popular remote collaboration tools are private chat messages, followed by two-participant calls, multi-person meetings, and team chat messages. In addition, several recommendations to help teachers in the process of online instruction have appeared [16]. Furthermore, mobile learning has become a suitable alternative for some students with fewer technological resources. Regarding students' feedback on e-classes, some studies [17] point out that students were satisfied with the teachers' way of delivering lectures and that the main problem was poor internet connection. Related to autonomous learning, many studies have addressed the concept of self-regulated learning (SRL), in which students are active and responsible for their own learning process [18, 19] as well as being knowledgeable, self-aware and able to select their own approach to learning [20, 21]. Some studies indicated that SRL significantly affected students' academic achievement and learning performance [22-24]. Researchers indicated that students with strongly developed SRL skills were more likely to be successful both in classrooms [25] and in online learning [26]. These studies, and the development of adequate tools for the evaluation and self-evaluation of learners, have become especially necessary in the COVID-19 pandemic in order to guarantee good performance in e-learning environments [27].
Linear tests, which require all students to take the same assessment in terms of the number and order of items during a test session, are among the most common tools used in computer-based testing. Computerized adaptive testing (CAT), based on item response theory, was formally proposed by Lord in 1980 [28-30]. Some platforms couple the advantages of CAT-specific feedback with multistage adaptive testing [38]. The use of CAT is also increasingly being promoted in clinical practice to improve patient quality of life. Over the decades, different systems and approaches based on CAT have been used in the educational space to enhance the learning process [39, 40]. When CAT is used as a learning tool, establishing the knowledge of the learner is crucial for personalizing the difficulty of subsequent questions. CAT does have some negative aspects, such as continued test-item exposure, which allows learners to memorize the test answers and share them with their peers [41, 42]. As a solution to limit test-item exposure, a large question bank has been suggested. This solution is infeasible in most cases, since most CAT models already require more items than comparable linear tests [43]. The aim of this study is to identify the effect of COVID-19 confinement on students' performance. This main objective leads to the first hypothesis of this study, which can be formulated as H1: COVID-19 confinement has a significant effect on students' performance. Confirming this hypothesis requires ruling out potential side effects, such as students cheating in remotely administered assessments. Moreover, a further analysis should investigate which factors of COVID-19 confinement are responsible for the change. A second hypothesis is H2: COVID-19 confinement has a significant effect on the assessment process. The aim of the project was therefore to investigate the following questions: 1.
Is there any effect (positive or negative) of the COVID-19 confinement on students' performance? 2. Is it possible to be sure that the COVID-19 confinement is the origin of any difference in performance? 3. What are the reasons for the differences (if any) in students' performance? 4. What are the expected effects of the differences in students' performance (if any) on the assessment process? We have used two online platforms. The first is e-valUAM [44], an online platform that aims to increase the quality of tests by improving the objectivity, robustness, security and relevance of assessment content. e-valUAM implements all the CAT tests described in the following sections. The second online platform used in this study is the Moodle platform provided by the Biochemistry Department of the Universidad Autónoma de Madrid, where all the tests that do not use adaptive questions are implemented. Adaptive tests have been used in the subjects "Applied Computing" and "Design of Water Treatment Facilities". Traditional tests have been used in the subject "Metabolism". 2.1.1 CAT theoretical model. Let us consider a test composed of N_Q items. In its most general form, the normalized grade S_j obtained by a student in the j-th attempt is a function of the weights α of all the questions and the normalized scores φ (S_j = S_j(α, φ)), and can be defined as:

$$S_j(\alpha, \varphi) = \sum_{i=1}^{N_Q} \alpha_i \varphi_i \qquad (1)$$

where φ_i is defined as

$$\varphi_i = \delta_{A_i R_i} \qquad (2)$$

where δ is the Kronecker delta, A_i the correct answer and R_i the student's answer to the i-th question. By using this definition, we limit φ_i to only two possible values, 1 and 0: φ_i = 1 when the student's answer is correct and φ_i = 0 when the student gives a wrong answer. This definition is valid for both open-answer and multiple-choice tests. In the case of a multiple-choice test with N_R possible answers, φ_i can be redefined to account for the effect of random guessing. In this case:

$$\varphi_i = \frac{N_R\,\delta_{A_i R_i} - 1}{N_R - 1} \qquad (3)$$

Independently of using Eq 2 or 3, to be sure that S_j(α, φ) is normalized (i.e.
0 ≤ S_j(α, φ) ≤ 1), we must impose the following additional condition on α:

$$\sum_{i=1}^{N_Q} \alpha_i = 1 \qquad (4)$$

When a final grade (FG) between 0 and a certain value M is needed, where M typically takes values such as 10 or 100, we just need to rescale the S_j(α, φ) value obtained in our model by a factor K, i.e. FG_j = K S_j(α, φ). We now include the option of having questions with an additional parameter L, related to the level of relevance of the question. L is a number that we assign to all the questions included in the repository of the test (i.e. the pool of questions from which the questions of a j-test are selected). The concept of relevance can take different meanings depending on the context and the opinion of the teachers. In our model, the questions with lower L values are shown to the students first; when a student correctly answers a certain number of questions with the lower L value, the system starts proposing questions from the next L value. Defining N_L as the number of possible L values, the L value of the k-th question of the j-test can be defined as:

$$L_k = \operatorname{trunc}\!\left[\frac{N_L}{N_Q}\sum_{i=1}^{k-1}\varphi_i\right] + 1 \qquad (5)$$

where trunc means the truncation of the value between brackets. It is worth noting that L_k is proportional to the sum of the student's scores on all the previous questions in the test. This means that, in our model, L_k depends on the full history of answers given by the student. L_k is inversely proportional to N_Q, which means that it takes a higher number of correct answers to increase L_k. Once L_k is defined, a randomly selected question of that level is shown to the student. Another important consequence of using Eq 5 in the adaptive test is that we will never have L_k
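As an illustration only, the scoring and level-selection rules of Eqs 1-3 and 5 can be sketched in Python. This is our own sketch, not the e-valUAM implementation, and all function and variable names are ours:

```python
def phi_exact(answer, correct):
    """Eq 2: Kronecker delta -- 1 if the student's answer is correct, else 0."""
    return 1.0 if answer == correct else 0.0

def phi_guess_corrected(answer, correct, n_r):
    """Eq 3: multiple-choice score corrected for random guessing among
    N_R possible answers (1 if correct, -1/(N_R - 1) if wrong)."""
    delta = 1.0 if answer == correct else 0.0
    return (n_r * delta - 1.0) / (n_r - 1.0)

def score(phi, alpha):
    """Eq 1: normalized grade S_j = sum_i alpha_i * phi_i.
    Eq 4: the weights alpha must sum to 1 for the grade to be normalized."""
    assert abs(sum(alpha) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(a * p for a, p in zip(alpha, phi))

def level(phi_history, n_l, n_q):
    """Eq 5: level L_k of the next question, from the truncated, rescaled
    sum of the scores of all previously answered questions."""
    return int((n_l / n_q) * sum(phi_history)) + 1
```

A final grade between 0 and M is then obtained by rescaling, FG_j = K * score(phi, alpha). Note that `level` starts at 1 for a fresh test (empty history) and, with N_L levels and N_Q questions, cannot exceed N_L even when every answer is correct.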