title: Student input on the effectiveness of the shift to emergency remote teaching due to the COVID crisis: Structural equation modeling creates a more complete picture
authors: Buttler, Tim; George, Darren; Bruggemann, Kira
date: 2021-12-31
journal: International Journal of Educational Research Open
DOI: 10.1016/j.ijedro.2021.100036

A study was conducted to assess student reaction to the shift to Emergency Remote Teaching (ERT) due to the COVID crisis in March of 2020. Four hundred students were randomly selected from a small private university database in central Alberta, Canada. A 65.5% response rate resulted in a final N of 262. These students responded to a 32-item questionnaire that assessed a number of factors that impacted four criterion variables: professor performance, quality of learning, effect on the final grade, and likelihood of returning in the Fall if their university was online. Results showed that the greatest predictors of the criterion variables were: professor support, professor caring, satisfaction with the final exam format, a relaxed schedule, quality of presentation, emotional response, adequate technological resources, and student input. Structural equation modeling creates a model that sorts out the relative impact of predictors on each criterion variable.

The topic of synchronous versus asynchronous video presentation is addressed in many studies, with the surprising result that either can be equally effective. Wilcox and Vignat (2020) found that both synchronous and asynchronous videos could be a very effective mode of presentation; the active agent was the quality of the instructor performance. Olson and colleagues (2020) supported this result with the finding that there was no difference in the effectiveness of learning environments; the quality of presentation was the determining factor. Further support documented that instructional techniques are less important than how well instructors use those techniques to increase access to interactive formats (Gillis & Krull, 2020).

Problems of presentation are also documented. For instance, student satisfaction was found to be lower due to difficulty in understanding the lectures (Aboagye et al., 2020; Bai et al., 2020). Wilcox and Vignat (2020) added a new twist with the finding that lectures moved too quickly because an instructor in a virtual PowerPoint setting does not need to write on the board. Finally, workload increases were an additional problem that students identified (Aguilera-Hermida, 2020).

Wiltse and colleagues (2020) found that creation of structure was essential to effective learning. They further discovered that students craved organizational structure while maintaining the flexibility to address their education in the context of other aspects of their lives. Watermeyer and colleagues (2020) added support to the finding that the presence of clarity helps students achieve their educational objectives. Mseleku (2020) identified two necessary components of effective presentation:
1. Technology is simple: assignments, quizzes, and resources are easy to access, downloading and uploading are easy, and the delivery method is consistent.
2. Class processes are straightforward: logistics for assignments are clear, lectures tie into assessment, and there is an absence of "creativity" innovations that cause more confusion than clarity.
Research also finds that the absence of such clarity hinders student learning. Aboagye and colleagues (2020) identified factors that detract from student learning: (a) low quality of online materials, (b) lack of clear learning expectations, (c) instructors inadequate at conveying information, and (d) delays of course materials. The necessity of instructors providing clarity is supported by student comments that irrelevant material on, for instance, a PowerPoint page is distracting and interferes with learning (Wilcox & Vignat, 2020). Aguilera-Hermida (2020, p. 7) provides a statement that summarizes the research findings that follow: "Educators must be mindful of [student] circumstances and promote a positive attitude, encourage motivation, and invite students to rely on their previous knowledge. The more that members of higher education institutions understand the circumstances students are facing, the better we can respond to them."

We find that the literature on this topic renders three concepts (support, caring, contact with students) functionally equivalent as an essential component of online learning. O'Keefe et al. (2020) found that opportunities for interaction with students should be taken advantage of on a regular basis for maximum learning outcomes. Wiltse et al.'s (2020) innovative study found that clear and frequent interaction between students and the professor/university was the most appreciated of the university's responses. Bai et al. (2020) discovered that the perceived frequency of interactions between professors and students dramatically increased student satisfaction. Further, difficulties in communication between students and professors result in poorer educational outcomes (Mseleku, 2020). Watermeyer and colleagues (2020) note that professors in the UK found they were spending increasing amounts of time on the "pastoral" side of student interaction; that is, they were much more involved in dealing with students' fears and emotional concerns than in a face-to-face setting. The strong implication is that these interactions are critical to effective learning in an ERT context.

Aguilera-Hermida (2020) provided context by noting that students' attachment to conventional approaches made online challenges frightening, yielding levels of emotional distress that often resulted in resistance to change. Add the fear and uncertainty due to the pandemic, and the outcome was often lack of motivation and unwillingness to continue. This phenomenon is illustrated by the survey statement "Online learning cannot help to achieve course objectives," which students rated 4.22 on a 5-point scale, indicating agreement (Aboagye et al., 2020). Mseleku (2020) found that the stress of ERT often resulted in depression, anxiety, and other mental-health issues that seriously impacted student learning and satisfaction. Other researchers support these findings (Gillis & Krull, 2020; Aguilera-Hermida, 2020; Bozkurt et al., 2020).

Students' responses make it clear that opportunities for students to engage with their peers and the instructor need to be built into both synchronous and asynchronous instruction (Wilcox & Vignat, 2020). Shim and Song (2020) found that the absence of interaction between students and professors reduced their satisfaction with ERT. Bai and colleagues (2020) found that most students preferred the physical classroom because of the absence of interaction in an online environment.
Other researchers support these findings (e.g., Aboagye et al., 2020; Aguilera-Hermida, 2020; Jeffery & Bauer, 2020; Wiltse et al., 2020).

Students' likelihood of returning to their university (or any university) in an online setting is addressed in the present study; however, research on this topic is thin. Aboagye et al. (2020) found that the two primary factors that predicted a return to one's university in an online environment were social and interactive issues and presentation quality. Mseleku (2020) identified the same two: social interaction and the quality of the lectures.

This study includes four dependent variables: the effect of ERT on Grades, Professor Performance, student Learning, and the likelihood of Attending given an online setting. Based on the literature just reviewed, the following hypotheses are presented:

Hypothesis 1: Professor Performance will be predicted by (a) support provided by the professor, (b) the amount of care shown, and (c) the quality of synchronous and/or asynchronous presentation.
Hypothesis 2: The student rating of Learning will be predicted by (a) professor support, (b) professor care, (c) simple logistics, and (d) positive emotions.
Hypothesis 3: The students' grade will be predicted by (a) easy logistics, (b) professor support, and (c) quality of video presentation.
Hypothesis 4: The students' likelihood to return (given an online environment) will be predicted by (a) opportunity for interaction and (b) quality of presentation.

The present study explores the impact of three additional variables not addressed in other studies: student satisfaction with the final exam format, the impact of student input on criterion variables, and the impact of a relaxed schedule in the online environment.

In mid-June 2020 (five weeks after completing final exams), 400 students were randomly selected from among students of a small university in central Alberta, Canada, and SurveyMonkey questionnaires were sent to all 400 with an email request to participate. Follow-up reminders were sent after three days and seven days. Eventually, 262 students responded (a 65.5% response rate), the final N for the study. The gender balance was 34% male (N = 88) and 66% female (N = 174). Physical location of students included 63% (N = 165) from Alberta, 30% (N = 79) from other provinces in Canada, and 7% (N = 18) from outside Canada. The distribution of academic majors in the sample fairly closely paralleled the actual distribution of majors for the entire student body.

An introduction informed students about the nature of the study and reassured them regarding research ethics: confidentiality/anonymity, informed consent (they participated of their own free will and could stop at any point if they wished), and debriefing (full disclosure was offered to any interested student). The university's Institutional Review Board approved the study prior to the questionnaires being sent out.

The questionnaire included 32 Likert-style questions assessed on 7-point scales. Several of these questions allowed additional comment. The only open-ended question asked what security measures were implemented to minimize cheating on exams (with a 55% response rate). The questions administered were based on a pilot study (Buttler, 2020) conducted by the first author, concerns raised by students during the shift to ERT, and other items the authors felt were logically consistent.
There was no possibility of considering other instruments because, at the time of administration, no studies on ERT due to the COVID pandemic had yet been published in professional journals. The following topics were assessed. Demographics: gender, living location (Alberta, other Canadian provinces, outside Canada), and academic major. ERT experience: experiences with asynchronous presentation, synchronous presentation, social interaction, professor contact, student input on assignments and assessment methods, professor support, professor caring, emotional response to online learning, technology issues, logistics of class processes, questions about the final exam (format, administration, satisfaction, security measures), and the effect of a relaxed schedule. Criterion variables: rating of Professor Performance, the effect of ERT on Grades (assignment, quiz, final exam, overall), Learning of class content, and the likelihood of Attending university if online (their university or any university). Table 1 identifies all variables used in the study and, when needed, how they were computed.

This study follows a logical sequence in exploring the impact of predictors on the criterion variables:
1. Standard psychometric assessment of the variables in the equations.
2. Bivariate correlations of key predictors with each other and, particularly, correlations with the four criterion variables.
3. Stepwise regression using each of the four criterion variables to find the unique contribution of each predictor.
4. This positions us to conduct Structural Equation Modeling (SEM) to see how the unique variance sorts itself out when all four criterion variables are included in the same model. Note that SEM is used to form a path analysis only, as there is no rationale for the use of latent variables.
5. Since no other published research has included SEM, the SEM model could be labeled exploratory; the final step is therefore a confirmatory factor analysis to see whether the factor structure can support the final structural model.
6. The only sum-of-squares operations are t-tests that explore gender differences among variables.

All variables fell within the "excellent" designation for skewness (-1 < skewness < 1) and kurtosis (-1 < kurtosis < 1) (see George & Mallery, 2020) except three, the likelihood-to-attend variables: their university, any university, and the mean of the two, if instruction were online in the fall semester. Skewness values were fine (.46, .34, .45), but the corresponding kurtosis values of 2.17, 3.10, and 2.73 are outside the generally acceptable range. Closer examination of the distributions (7-point scale) found that between 63% and 72% responded "doesn't make any difference" (4), while 5% rated "far less likely" (1) and 8% "far more likely" (7). With the extreme centrality of answers, log or natural-log transformations would not improve psychometrics. Because of this, those variables are used only with caution.

Correlations explored the bivariate impact of different variables on four criterion variables: Grades (a combination of grades on assignments, quizzes, the final exam, and the overall grade), the Professor Performance rating, quality of Learning, and a combination of the three (weighted equally) called Educational Outcome. We also looked at correlates of the problematic Attend variable. The N for all correlations is 262, and the variables mentioned in the text are all significant at the .001 level.
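To make the screening and correlation steps concrete, here is a minimal Python sketch of the same logic. It is an illustration only, not the authors' original workflow: the file name ert_survey.csv and the column names (e.g., grades) are hypothetical placeholders for the survey data.

```python
import pandas as pd
from scipy import stats

# Hypothetical data file: one row per respondent (N = 262),
# one numeric column per questionnaire variable.
df = pd.read_csv("ert_survey.csv")

# Step 1: psychometric screening. Flag variables outside the
# "excellent" range of -1 < skewness < 1 and -1 < kurtosis < 1
# (scipy reports excess kurtosis, matching that convention).
for col in df.select_dtypes("number").columns:
    sk = stats.skew(df[col])
    ku = stats.kurtosis(df[col])
    if abs(sk) > 1 or abs(ku) > 1:
        print(f"{col}: skew = {sk:.2f}, kurtosis = {ku:.2f} -- use with caution")

# Step 2: bivariate correlations of every predictor with one
# criterion variable, reporting only those significant at .001.
criterion = "grades"
for col in df.columns.drop(criterion):
    r, p = stats.pearsonr(df[col], df[criterion])
    if p < .001:
        print(f"{col}: r = {r:.3f}")
```

The same loop would be repeated with each of the other criterion variables (Professor Performance, Learning, Attend, and Educational Outcome).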
Significant correlates (α = .001) of Grades included: satisfaction with the final exam format (r = .498), easy logistics (r = .441), professor support (r = .375), professor caring (r = .343), relaxed exam schedule (r = .339), student input (r = .300), positive emotions (r = .263), and quality of presentation (r = .225).

Significant correlates (α = .001) of effective Learning included: satisfaction with the final exam format (r = .471), professor support (r = .441), professor caring (r = .418), easy logistics (r = .360), student input (r = .344), positive emotions (r = .337), relaxed exam schedule (r = .323), and quality of presentation (r = .264).

Correlates (α = .001) of a high rating of Professor Performance included: professor support (r = .857), professor caring (r = .800), student input (r = .587), quality of presentation (r = .566), satisfaction with the final exam format (r = .564), emails to the class (r = .382), relaxed exam schedule (r = .350), total video time (note: this contrasts with email instructions and attached assignments) (r = .297), easy logistics (r = .284), and emails to individuals (r = .266).

Table 1. All variables used in analyses. Note: all questions refer to the final three weeks of the semester (the ERT) and the final exam period that followed. Each item uses a 7-point scale.

Total video time (per week) used following ERT shift: (1) none to (7) more than 2 hours
Quality of video presentation (both sync and async): (1) very problematic to (7) very effective
How much social interaction did you experience?: (1) much less than f2f to (7) much more than f2f
Number of messages to the whole class: (1) none to (7) more than 6
Number of messages to you individually: (1) none to (7) more than 6
Student input on assignments and quizzes/exams: (1) no input allowed to (7) input encouraged
Satisfaction with level of support by professor: (1) not at all satisfied to (7) very satisfied
Did professor care about you and your success?: (1) not at all to (7) cared a great deal
How stable was your technology for this class?: (1) very problematic to (7) entirely adequate
Clear logistics for quizzes and assignments?: (1) difficult to (7) straightforward
Satisfaction with the final-exam experience?: (1) entirely unsatisfied to (7) entirely satisfied
Relaxed schedule for final exams was helpful: (1) actually hindered to (7) very helpful
Quality of Professor Performance, final 3 weeks?: (1) disaster! to (7) extraordinary!
How did ERT affect GRADE (assignments, quizzes, final exam, overall)?: 4 questions weighted equally
How did ERT affect your LEARNING of class content?: (1) hurt! to (7) helped!
Emotional response to all classes being online: (1) deeply disturbed! to (7) whoopee!
Educational Outcomes (mean of Grade, Learn & PP): (1) low to (7) high
If online, likelihood of attending own university: (1) decreases to (7) increases
If online, likelihood of attending any university: (1) decreases to (7) increases
ATTENDING (the former two equally weighted): (1) decreases to (7) increases

Correlates (α = .001) with the combined variable Educational Outcome included: professor support (r = .715), professor caring (r = .667), satisfaction with the final exam format (r = .638), student input (r = .523), quality of video presentation (r = .452), easy logistics (r = .443), relaxed exam schedule (r = .420), messages to the class (r = .325), positive emotions (r = .299), and interaction among professors and students (r = .235). The psychometrically challenged likelihood-to-attend variable produced the weakest results, with only three significant correlations at the .001 level: positive emotions (r = .448), good technology (r = .254), and easy logistics (r = .244).

We ran stepwise regressions using the same criterion variables and all independent variables with significant bivariate correlations. This procedure identifies each variable's unique contribution after prior variables have been accounted for. Here we include the overall R, the R², the degrees of freedom, and the significant predictors (α = .08) along with their β weights.

Five variables predicted a high Professor Performance rating, in order of magnitude: good class support (β = .437), care for the student (β = .287), quality of video presentation (β = .146), relaxed final-exam schedule (β = .084), and satisfaction with the final exam format (β = .081); R = .892, R² = .793, df = (1, 256). All hypothesized relationships are included in the model (support, caring, good-quality video); two new variables, satisfaction with the final-exam format and a relaxed schedule, were also significant predictors.

Six variables predicted better student Learning, in order of magnitude: positive emotions (β = .197), satisfaction with the final exam format (β = .193), easy logistics (β = .153), care for the student (β = .136), good class support (β = .131), and the relaxed final-exam schedule (β = .119); R = .604, R² = .365, df = (1, 255). The hypothesized relationships were all included in this model (support, caring, logistics, and emotions); and, just as for Professor Performance, satisfaction with the final-exam format and the relaxed schedule were also predictors.

Five variables predicted the composite Grades variable, in order of magnitude: satisfaction with the final exam format (β = .309), easy logistics (β = .284), relaxed final-exam schedule (β = .164), student input on assignments and exams (β = .116), and gender, with being male associated with better grades (β = .106); R = .615, R² = .378, df = (1, 256). Of the hypothesized predictors, only easy logistics was included in the final model.

Three variables predicted the Attend variable, in order of magnitude: positive emotions (β = .396), good technology (β = .167), and a relaxed schedule (β = .122); R = .497, R² = .247, df = (1, 258). Neither of the two hypothesized relationships was confirmed; the interaction and quality-of-video variables were included as potential predictors, and neither even approached significance.
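For readers who want to reproduce this kind of analysis outside SPSS, the sketch below implements forward selection with statsmodels, which has no built-in stepwise routine. It is a simplification (true stepwise selection also allows removal of already-entered variables), the p-to-enter threshold of .08 mirrors the significance level reported above, and the DataFrame and its column names are hypothetical; variables are z-scored so the coefficients correspond to standardized β weights.

```python
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df, criterion, candidates, p_enter=0.08):
    """Enter, one at a time, the candidate with the smallest p-value,
    stopping when no remaining candidate clears the entry threshold."""
    selected, remaining = [], list(candidates)
    while remaining:
        pvals = {}
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            pvals[var] = sm.OLS(df[criterion], X).fit().pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:
            break
        selected.append(best)
        remaining.remove(best)
    return sm.OLS(df[criterion], sm.add_constant(df[selected])).fit()

# z-score everything so coefficients are standardized (beta) weights
df = pd.read_csv("ert_survey.csv")            # hypothetical file
z = (df - df.mean()) / df.std()
fit = forward_stepwise(z, "prof_performance",
                       ["support", "caring", "video_quality",
                        "sat_final", "relaxed_schedule", "logistics"])
print(fit.params.round(3))
print(round(fit.rsquared, 3))   # compare with the R-squared values reported above
```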
Seven variables predicted a strong overall effect on the three criterion variables (Grades, Professor Performance, Learning) weighted equally, in order of magnitude: good class support (β = .276), satisfaction with the final exam format (β = .224), care for the student (β = .203), easy logistics (β = .183), relaxed final-exam schedule (β = .142), positive emotions (β = .105), and student input on assignments and exams (β = .081); R = .830, R² = .689, df = (1, 253).

A summary comment on the regression results before we turn to structural equation modeling: One predictor shows up in all five regressions: a new variable, relaxed schedule. One variable is a significant predictor in four of the regressions: another new variable, satisfaction with the final-exam format. Four variables are significant predictors in three of the regression equations: professor support, professor caring, easy logistics, and positive emotions. The amount of variance explained (R²) was exceptionally high for two of the criterion variables: predictors of Professor Performance explained almost 80% (.793) of the total variance, and predictors of Educational Outcome explained almost 70% (.689) of the variance.

Recall that SEM serves at least three masters in constructing a model. First, you want a model that fits the data well, and many fit indices allow the researcher to assess the quality of fit. Second, you want your model to be as parsimonious as possible without the loss of valuable information; if you connect all possible significant links, you can get an exceptionally good fit, but the model will often be too complex to interpret. Finally, you want a model that has good face validity: it needs to make sense to the reader, even a reader who is not fluent in SEM.

The sample size (N = 262) is entirely adequate based on the Bentler and Chou criterion of a 5:1 ratio of participants to free parameters (Bentler & Chou, 1987). The 27 free parameters in the model suggest that the sample size should be at least 135; ours is nearly double that. In the first model, we included only standardized regression coefficients (β weights) greater than .10. Although a better fit was possible by including other (even statistically significant) variables, that would violate the law of parsimony. See Fig. 1 for the final structural model. Fit indices include χ²(23, N = 262) = 25.63, p = .32; the point estimate of the Root Mean Square Error of Approximation (RMSEA) was .021, with a 90% CI from 0 to .06; and the Comparative Fit Index (CFI) was .996. These values indicate an excellent model fit (Hu & Bentler, 1999).

This model includes the Attend variable, which operates quite independently of the other three; notice the absence of covariance between the residual of Attend and the residuals of the other three dependent variables. The covariances between the residuals of Professor Performance, Learning, and Grades are quite instructive. These values (.121, .222, .106) illustrate that even after most of the variance is accounted for, there remain strong interactive relationships between Professor Performance and Learning/Grades and between Learning and Grades. It would be possible to include direct arrows between the three, but at the unacceptable cost of diminishing clarity about which variables contribute to each dependent variable.
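Before summarizing the paths, it may help to see how such a path model can be specified in code. The sketch below uses semopy, a Python SEM package; the regression equations are approximated from the structural model as described in this section and in the discussion (Fig. 1 remains the authoritative specification), and all variable names are hypothetical placeholders.

```python
import pandas as pd
import semopy

# Path-analysis specification: observed variables only (no latent
# factors, matching the paper's rationale), with residual covariances
# among Professor Performance, Learning, and Grades but not Attend.
desc = """
prof_performance ~ support + caring + video_quality + sat_final
learning ~ sat_final + positive_emotions + logistics + support + caring
grades ~ sat_final + logistics + relaxed_schedule + student_input
attend ~ positive_emotions + technology + relaxed_schedule
prof_performance ~~ learning
prof_performance ~~ grades
learning ~~ grades
"""

df = pd.read_csv("ert_survey.csv")   # hypothetical data file
model = semopy.Model(desc)
model.fit(df)

# Global fit: chi-square, CFI, and RMSEA, the indices reported in the text.
print(semopy.calc_stats(model)[["DoF", "chi2", "chi2 p-value", "CFI", "RMSEA"]])
print(model.inspect())               # path estimates and covariances
```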
Here is the summation. Predictors of Grades: satisfaction with the final exam format (β = .33), easy logistics (β = .27), relaxed schedule (β = .17), and student input (β = .10). Predictors of likelihood to Attend: positive emotions (β = .36), good technology (β = .17), and relaxed schedule (β = .15).

First, a word about the innovations required to conduct confirmatory factor analysis (CFA) in the present setting. What we sought was a factor structure that supported the validity of the structural model. To accomplish this, all variables, endogenous and exogenous, were included in the analysis. After the rotated matrix was calculated, four factors were selected that included the highest factor loading for each of the four dependent variables: Factor 1 had the highest factor loading for Professor Performance (.541), Factor 2 the highest loading for Attend (.635), Factor 3 the highest for Grades (.587), and Factor 4 the highest for Learning (.440). Then the factor loadings for the other variables were examined to see to what extent they reflected the structural model.

Professor Performance would be expected to have the highest factor loadings because its residual (.225) was so low that most of the variance would be explained by the predictors. The three other criterion variables would be expected to have lower factor loadings, as their residuals were much higher: Learn (.650), Grades (.655), and Attend (.770). Factor loadings for the supporting variables of the dependent variables Learn, Grades, and Attend would be expected to be in the .1 to .3 range due to a smaller portion of the variance being explained. The factor loadings among the four dependent variables are ignored; however, the covariances between the residuals in the structural model would be expected to show that the Attend variable operates independently of the other three (hence very low factor loadings) and that there should be higher factor loadings among Professor Performance, Learn, and Grades. Finally, if the factor structure fits well, the supporting variables in the structural model should show higher factor loadings than non-predictors. The actual factor loadings are not of primary importance; what matters is that the included variables rank higher than non-included variables.

Factor analysis was conducted with the 14 variables described earlier. The variables were well suited for factor analysis, with a KMO rating of .880 and Bartlett's Test of Sphericity yielding χ²(91) = 1559.4, p < .001. Even the most stringent of criteria place the sample size as excellent: a 19:1 ratio of participants to variables (Mundfrom et al., 2005). Factors were extracted by the Generalized Least Squares procedure and rotated to a final solution by the Equamax method with Kaiser normalization.

Results reflected what was predicted. The Professor Performance factor had the highest factor loadings. All supporting variables in each of the four factors were within a .1 to .7 factor-loading range. The Attend factor showed very low loadings on the other three criterion variables, which, in turn, showed moderate correlations with each other. The only violation was that three variables not in the structural model showed factor loadings higher than included variables; for instance, for the Professor Performance factor, the non-included variable student input (.209) showed a higher factor loading than the included variables SatFinal (.168) and good-quality video (.134). In general, the factor structure provides good support for the structural model.
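The factor-analytic check can be approximated with the factor_analyzer package, as sketched below. One substitution is required: factor_analyzer does not offer generalized least squares extraction, so maximum likelihood is used in its place; the data file and column names are hypothetical placeholders for the 14 variables.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

df = pd.read_csv("ert_survey.csv")   # hypothetical file, 14 columns

# Suitability checks reported above: Bartlett's sphericity and KMO.
chi2, p = calculate_bartlett_sphericity(df)
_, kmo_overall = calculate_kmo(df)
print(f"Bartlett chi2 = {chi2:.1f}, p = {p:.4f}; KMO = {kmo_overall:.3f}")

# Four factors with equamax rotation; "ml" (maximum likelihood)
# substitutes for SPSS's generalized least squares extraction.
fa = FactorAnalyzer(n_factors=4, method="ml", rotation="equamax")
fa.fit(df)
loadings = pd.DataFrame(fa.loadings_, index=df.columns,
                        columns=[f"Factor {i}" for i in range(1, 5)])
print(loadings.round(3))  # check that model-included variables rank higher
```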
Those who have frequently conducted factor analysis will attest that the "perfect model" never occurs; the occasional violation, and the additional information that may be gleaned from the factor structure, are all part of the process. Table 3 shows the final rotated solution with the four factors described above.

There were only four areas in which men and women differed significantly. In contrast to many studies that measure gender differences, in each of the four, men were more generous in their ratings: Men (M = 4.95) rated the importance of student input on quizzes and assignments higher than women (M = 4.32), t(260) = 2.529, p = .012. Men (M = 5.45) felt that greater support was provided than did women (M = 4.93), t(260) = 2.093, p = .038. Men (M = 5.89) felt that the professor cared more than women did (M = 5.34), t(260) = 2.379, p = .018. Finally, men (M = 5.45) rated the professor's performance higher than women did (M = 4.90), t(260) = 2.411, p = .017.

Recall that a primary purpose of the present study is to provide support for, or contrast with, the growing literature that identifies factors that impact ERT quality following the COVID crisis, and then to provide new insights into the active agents that influence key criterion variables in the ERT context. A challenge in the present comparisons is the contrast of dependent variables. In our study, there are several dependent variables, all of which are based on student opinion: How did ERT affect their Grades, their rating of Professor Performance, their Learning, and their likelihood of Attending in the coming year if their university was online? We also created a composite variable called Educational Outcomes that weighted Grades, Professor Performance, and Learning equally. Table 2 provides comparisons between this study and studies cited in the literature; it may be instructive to consult that table while reading the text that follows.

Prior studies identified technology issues as a serious challenge with ERT; however, the present study did not find this issue a major player in student ratings. In the structural model, technology issues impacted only the likelihood of Attending. In regression (and even in bivariate correlations), technology was, once again, a significant predictor only of returning to the university. It makes good sense that technology challenges would diminish enthusiasm about returning in an online setting, but in the present study they did not impact Grades, Learning, Professor Performance, or the composite Educational Outcome variable.

Like many studies cited earlier, ours did not find that synchronous and asynchronous presentation differed significantly in their impact on educational outcomes. Consistent with other studies, the quality of presentation played a significant role. In the structural model, the quality of video presentation was a predictor of Professor Performance only. In correlations, it was a significant predictor of all four; however, in regressions, just as in the structural model, it impacted only Professor Performance. These results suggest that, while good-quality video presentation is appreciated, students are sufficiently resourceful to perform well even when presenters are challenged in that department.

The present study supported the research that identifies the importance of keeping logistics simple when faced with ERT. In the structural model, easy logistics was the second greatest predictor of both Grades and Learning.
Regression analysis included these two but also significantly predicted the composite variable Educational Outcomes. Professor Performance was significantly predicted in correlations, but that variance was consumed by other variables in the regression equations. Past and present studies urge that logistics be kept simple and straightforward for the best educational outcomes.

Prior research was confirmed in that both support and caring had a robust impact on Professor Performance and Learning. In the structural model, support had a greater effect (on Professor Performance) than any other variable in the analyses, with caring the second greatest. In bivariate correlations, the impact of support and caring was greater than .8 as predictors of Professor Performance, a correlation value rarely seen in human-subjects research. Curiously, support and caring had no impact on student grades in either the structural model or the regressions, despite significant bivariate correlations.

Communication between the university and students, and between professors and students, has been identified in prior studies as having a significant impact on student performance. Despite the questions being asked directly in the present study, neither contact with the whole class nor contact with individual students played much of a role. Neither variable was a significant predictor in the structural model or the regressions, and they showed up in correlations as predictors of only Professor Performance. All correlations were positive but did not suggest the magnitude of impact found in other studies.

Studies on ERT due to COVID implicate emotions as seriously impacting student performance, with positive emotions predicting success.

Much prior research has documented students lamenting the loss of interaction between themselves and professors and other students. Although a variety of distance-learning techniques allow a rich dynamic of interaction, the loss is still noted. In the structural model, interaction did significantly predict Learning but was not a significant predictor in any of the regression equations. In bivariate correlations, interaction was a significant predictor of only the combined Educational Outcomes variable.

The present results contrasted with other research. Aboagye et al. (2020) and Mseleku (2020) found that the two greatest predictors of returning to one's university in an online environment were social and interactive issues and the quality of presentation. Our research found a clear pattern that held in correlations, regressions, and the structural model: return to university (given an online environment) was predicted by positive emotions, a flexible schedule, and the quality of one's technology.

As results progressed from correlations, through regressions, and finally to the structural model, a fairly clear and intuitive model emerged. When looking at each predictor's unique contributions to the four dependent variables, the model, in addition to an exceptionally good fit of the data, provided excellent face validity: a model that could be understood by non-SEM experts. Note the four dependent variables: Professor Performance was predicted by four factors: professor support, professor caring, high-quality lectures, and satisfaction with the final-exam format. Student perception of quality Learning was impacted by an intriguing array of emotional and cognitive factors (from greatest to least influence): satisfaction with the final exam format, positive emotions, easy logistics, professor support, and professor caring.
Grades were impacted by a logically congruent set: satisfaction with the final-exam format, easy logistics, a relaxed schedule, and student input opportunity. And, as mentioned previously, students were more likely to return if their emotions were positive, their technology was good, and the online environment provided flexibility to fit other activities.

Three influential variables in this study, not addressed in the literature, were (a) the format of the final exam, (b) the relaxed schedule, and (c) student input. A word on how this university handled the final exam following the emergency shift to distance learning is in order. The campus was closed on Thursday, March 12, 2020; the following week was the university spring break. When classes started up again (now in distance format), there were three weeks left in the semester. The university stayed with the published schedule for regular classes. However, the final exams, scheduled for completion by April 27, were extended an additional ten days (until May 7) to allow professors and students to adjust to their different world. Faculty members discussed with each other and, in many instances, with their students how they were going to conduct the final exam or exercise. The "relaxed schedule" in our study refers to the additional ten days allowed to complete the final exam. Student input was assessed by two questions, both on a scale from not allowed (1) through allowed (4) to encouraged (7), that measured to what extent students had input on class activities (assignments and exercises) and assessment (quizzes and exams); in the present study, both were combined into a single variable.

Satisfaction with the final exam format (SatFinal). This variable played a far more significant role in the study than anticipated. In simple correlations, SatFinal was the greatest predictor of Grades and Learning (rs = .498 and .471) and a robust predictor of Professor Performance and Educational Outcome (rs = .564 and .638). In regressions, SatFinal was the number one predictor for Grades and the number two predictor for Learning (βs = .309 and .193). It remained a robust predictor of the composite Educational Outcome (β = .237) and a weak but significant predictor of Professor Performance (β = .081). In the structural model, SatFinal was the top predictor of Grades and the third most influential for Learning. Why should satisfaction with the final exam format play such a large role? There is no serious collinearity with either emotions or a relaxed schedule. Perhaps, as professors, we lose track of how intimidating final exams are for students; no event in the academic year seems to strike the same terror. This study points to the centrality of the final exams in the student experience, which introduces a related construct, the relaxed schedule.

Relaxed schedule. Like the final-exam format, the relaxed schedule was a robust predictor of all five dependent variables in both the correlational analyses and all five regression equations. In the structural model, a relaxed schedule significantly impacted Learning and the likelihood of Attending. Once again, perhaps professors forget the sense of time-pressure tension students feel as final exams approach. Clearly, adding ten days to complete finals was one of the university's most appreciated moves.
Student input. Student input did not have the magnitude of impact of the prior two variables; however, in the correlational analysis it did have a significant influence on all four of the criterion variables, with correlation values ranging from .30 to .59. In regressions, input retained significance as a predictor of Grades and Educational Outcome, with β weights of .10 and .12. In the structural model, input remained a significant predictor of Grades. A wealth of research outside of education documents that when employees participate in a decision-making process that directly impacts their lives, a higher level of motivation and commitment occurs, leading to greater productivity, the equivalent of greater learning in the educational context (Humphrey et al., 2007).

So, what have we learned, and how might it apply to our present world? The last pandemic was 102 years ago: the Spanish Flu, which killed 60 million. Five hundred years earlier was the Black Death, which killed 200 million, one-third of the world population. So pandemics are infrequent phenomena. However, there are many times that circumstances, such as natural disasters, might force a shift to ERT in localized areas (note Czerniewicz et al., 2019). In such settings, lessons learned from the COVID crisis could be applied profitably. If we rank-order the findings of the present study (based on correlations, regressions, and the structural model), the big five appear to be:

1) Provide support that allows students to have all necessary resources to complete their classes effectively. This includes standard academic fare such as assignments, notes, quizzes, timely feedback, and consistent communication with students.
2) Show that you care. Whether the crisis is a pandemic or a more localized catastrophe, students are often frightened and insecure, and need assurance that the professor (and the university) care about them.
3) Format of the final exam, the most unexpected of the present study's findings. Allow students to become involved in the creation of the final-exam format so that they feel they have a say in their final evaluation.
4) Relaxed schedule. In an online setting, students are often trying to fit their academics into a work schedule. Allowing more time seems to be much appreciated.
5) Simple logistics. Professors and university administration should bend over backward to make sure the LMS is straightforward, that students understand how to use it, and that assignments, quizzes, and exams are easily accessed and processed. As Mseleku (2020) suggested, curb your urge to be creative if clarity is lost.

There is an array of other factors, supported by this study and prior studies, that should also be considered: high-quality presentation, promoting interaction in the many ways that online programs allow, encouraging positive emotions, making sure student technology is adequate, and contacting classes and individual students frequently. Combine this with the rich literature on distance learning from long before the pandemic, and a framework can be formed that allows maximum learning for students in the face of ERT.

Hindsight is 20-20, as the old saying goes. When the questionnaires were sent out, no research on ERT due to the COVID crisis had yet been published in professional journals; thus, there was very little guidance to instruct the present venture. The growing volume of literature on ERT due to the pandemic is now sufficient to provide better insights for future studies.
And, not far down the road, meta-analyses of this literature can create useful consensus; this study is one of many that will be included in such analyses. The present study's unique contributions include the impact of the final-exam format, the appreciation of a relaxed schedule, the importance of student input, and a structural model that forms a composite picture that creates order for these and other data. In future studies, it might be well to explore further how student input and the final-exam format predict quality learning and student satisfaction in an online context.

References

COVID-19 and E-learning: The challenges of students in tertiary institutions
College students' use and acceptance of emergency online learning due to COVID-19
A study of the effectiveness of remote instruction from students' perspectives
Practical issues in structural modeling
A global outlook to the interruption of education due to COVID-19 pandemic: Navigating in a time of uncertainty and crisis
Stability under pressure: How a teacher educator sought to align beliefs and practices during a pandemic
Online teaching in response to student protests and campus shutdowns: Academics' perspectives
IBM SPSS statistics 26 step by step: A simple guide and reference
COVID-19 remote learning transition in Spring 2020: Class structures, student perceptions, and inequality in college courses
Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives
Integrating motivational, social, and contextual work-design features: A meta-analytic summary and theoretical extension of the work-design literature
Students' responses to emergency remote online teaching reveal critical factors for all teaching
Temporal experiences of e-learning by distance learners
Examining the relationship among student perception of support, course satisfaction, and learning outcomes in online learning
Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Association for Learning Technology
Internationalisation at a distance and at home: Academic and social adjustment in a South African distance learning context
Emergency remote teaching during coronavirus pandemic: The current trend and future directive at Middle East College Oman
A literature review of e-learning and e-teaching in the era of COVID-19 pandemic
Minimum sample size recommendations for conducting factor analyses
Delivering high-quality instruction online in response to COVID-19: Faculty playbook
Transferring interactive activities in large lectures from face-to-face to online settings
College students' experience of emergency remote teaching due to COVID-19
E-learning and m-learning: Challenges and barriers in distance education group assignment collaboration
A grounded theory of professional learning in an authentic online professional development program. International Review of Research in Open and Distributed Learning
Should teachers be trained in emergency remote teaching? Lessons learned from the COVID-19 pandemic
COVID-19 and digital disruption in UK universities: Afflictions and affordances of emergency online migration
Emergency remote teaching environment: A conceptual framework for responsive online teaching in crises
Recommendations for emergency remote teaching based on the student experience. The Physics Teacher
Assessment of the impact of COVID-19 on honors student learning, institutional connections, and intent to return to campus