title: An assessment of student satisfaction with e-learning: An empirical study with computer and software engineering undergraduate students in Turkey under pandemic conditions
authors: Giray, Görkem
date: 2021-03-04
journal: Educ Inf Technol (Dordr)
DOI: 10.1007/s10639-021-10454-x

As COVID-19 reached Turkey in March 2020, all universities switched to e-learning in a very short period, and computer and software engineering (CE/SE) undergraduate students who had been studying on campus moved to e-learning. This paper seeks to understand the e-learning experience of CE/SE undergraduate students. A questionnaire was created and administered to CE/SE undergraduate students at Turkish universities, and the data were analyzed using quantitative and qualitative techniques. The questionnaire received 290 usable responses. The highlights of the findings include: the participants (1) used video recordings intensively for e-learning and found them useful; (2) found face-to-face lectures more beneficial than digital live lectures; (3) used external online resources to improve their learning performance in courses; (4) thought that the materials and methods used for assessment should be adapted to e-learning for a better and fairer evaluation; (5) perceived significantly less instructor support and classmate interaction and collaboration in e-learning than in on-campus education settings; (6) rated their perceived satisfaction with e-learning at 2.85, slightly below the midpoint of the 5-point Likert scale; and (7) perceived instructor support, student interaction and collaboration, and student autonomy as noteworthy factors in high-quality e-learning.

Hypothesis 1c: Student autonomy positively affects CE/SE students' satisfaction with e-learning.

Previous research suggests that perceptions of instructor support, student interaction and collaboration, and student autonomy differ among education approaches. For instance, students perceived less instructor support when face-to-face interaction with instructors was absent (Kuh and Hu 2001; Simonson et al. 2019). Interpersonal dialogue with instructors positively affects student engagement and satisfaction (Chigeza and Halbert 2014; Nortvig et al. 2018). In the area of student interaction and collaboration, student perceptions favor the face-to-face environment (Carver 2014). Some prior studies suggest that students who were higher in independence and autonomy showed greater success in online classes than students who were lower in those traits (Milligan and Buckenmeyer 2008; Carver 2014; Seiver and Troja 2014). The second group of hypotheses aimed at testing these previous findings for CE and SE students.

Hypothesis 2a: CE/SE students perceive that instructors support them less in e-learning compared to on-campus education.
Hypothesis 2b: CE/SE students perceive that they interact and collaborate less with other students in e-learning compared to on-campus education.
Hypothesis 2c: CE/SE students perceive that they act less autonomously in e-learning compared to on-campus education.

Course material is another important factor in students' learning experience (Martínez-Argüelles et al. 2013; Simonson et al. 2019). Some researchers found that students learn better in face-to-face lectures than in online lectures (Adams et al. 2015; Powers et al. 2016).
In contrast, some other researchers claim that a better academic outcome may be obtained by combining relevant materials and methods in an online environment (Northey et al. 2015). To understand the usefulness of course materials and methods in the CE/SE education context, the following hypotheses were formed:

Fig. 1 The relationships between instructor support, student interaction and collaboration, student autonomy, and student satisfaction

Hypothesis 3a: CE/SE students perceive live lectures as less useful in e-learning compared to on-campus education.
Hypothesis 3b: CE/SE students perceive video recordings as less useful in e-learning compared to on-campus education.
Hypothesis 3c: CE/SE students perceive lecture notes as less useful in e-learning compared to on-campus education.
Hypothesis 3d: CE/SE students perceive reading materials as less useful in e-learning compared to on-campus education.
Hypothesis 3e: CE/SE students perceive digital interaction environments as less useful in e-learning compared to on-campus education.

In addition to the hypotheses, four research questions (RQ) were formed to obtain insight into the challenges, the positive and negative aspects of e-learning, and some recommendations on how to improve it.

RQ1 What are the challenges faced by CE/SE students in e-learning?
RQ2 What are the positive and negative aspects of e-learning, according to CE/SE students?
RQ3 What else can be done to improve the satisfaction of CE/SE students?
RQ4 What did CE/SE students do to improve their learning performance during e-learning?

2 Method

2.1 Participants

CE/SE undergraduate students were invited to fill out an online questionnaire through the www.surveymonkey.com platform in May 2020. In total, 290 participants completed the questionnaire: 18% were first-year students (n = 52), 32% second-year students (n = 92), 28% third-year students (n = 81), and 22% fourth-year students (n = 65). While 80% of the participants were enrolled in public universities, the rest were enrolled in private universities. The majority of the participants, 93%, were studying CE; the rest were studying SE. The Scientific Research and Publication Ethics Legislation (The Council of Higher Education 2016) published by the Council of Higher Education was taken into account when conducting this study. The professors disseminating the questionnaire were given clear information about the objective of the study. Written consent was obtained from all participants in the first section of the online survey before they filled out the questionnaire. It was made clear that the collected empirical data would be used only for academic purposes and that the questionnaire did not include any question addressing participants' identity, gender, age, or university name. The questionnaire started with a brief explanation of the objective and anonymity of the study. A filter question was used to exclude participants who did not meet the inclusion criterion, i.e., currently being an undergraduate student in a CE/SE department of a university in Turkey. The second section aimed at identifying which materials and methods students used in on-campus education and e-learning and how useful they perceived these to be in both settings. For on-campus education, the predefined list of materials and methods included face-to-face lectures in the classroom, video recordings, lecture notes, reading materials, and digital interaction environments.
For e-learning, face-to-face classroom lectures were replaced by online live lectures. For all of these materials and methods, participants were asked to select "0" if they did not use the corresponding material or method. If they used it, they were asked to rate how useful they perceived it to be on a Likert scale ranging from "very useful" to "not useful at all". The third section was intended to quantify participants' perceptions of instructor support, student interaction and collaboration, and student autonomy. The Turkish versions (Özkök et al. 2009) of three psychological scales designed and validated by Walker and Fraser (2005) were used to measure students' perceptions of these three independent variables for on-campus education and e-learning. The participants used a 5-point Likert scale ranging from "always" to "never" to provide their responses. The fourth section, involving seven questions, aimed at measuring participants' perceived satisfaction with e-learning; here the participants used a 5-point Likert scale ranging from "strongly agree" to "strongly disagree". The last section included open-ended questions to enable participants to submit their opinions on the challenges, positive and negative aspects, and improvement opportunities of e-learning. All questions except the open-ended ones were mandatory, to reduce the number of incomplete questionnaires. Although it is possible to use statistical techniques to estimate the values of missing data (Little and Rubin 2019), such techniques usually become inappropriate when the amount of missing data is excessive (Kitchenham and Pfleeger 2003). A two-step process was followed to ensure content validity. First, two professors from two distinct CE departments reviewed the questionnaire; they evaluated whether the questions successfully captured the topic, and the questionnaire was revised based on their feedback. Second, two academicians from a psychology department, who are experts in conducting surveys in the field, reviewed the questionnaire and ensured that it did not contain common errors, such as leading and confusing questions. A pilot study was performed to reduce the chance of misleading questions and poor instructions (Kitchenham and Pfleeger 2003) by distributing the questionnaire in a CE department at a public university. No feedback requiring a revision of the questionnaire was received. The target population for this study was undergraduate students in CE/SE departments of Turkish public and private universities. The questionnaire was shared with 15 professors from different universities via email. This type of sampling is known as convenience sampling, the dominant survey and experimental approach in CE/SE (Sjøberg et al. 2005). The main drawback of convenience sampling is that it may result in a homogeneous sampling frame, and such a sample may have limited generalizability to the broader population (Valerio et al. 2016). Therefore, snowball sampling (Goodman 1961) was employed to obtain a more heterogeneous sample: the professors were asked to forward the questionnaire to other instructors they knew, and some of the participants were asked to recruit friends who met the participant criteria. Parametric and nonparametric statistics are two broad classifications of statistical analysis (Hoskin 2012).
Parametric tests are used where data are normally distributed (Van Belle 2011). The most widely used parametric tests are the t-test (paired or independent), ANOVA (one-way non-repeated or repeated; two-way; three-way), linear regression, and Pearson correlation (Hoskin 2012). The skewness and kurtosis indexes were used to assess the normality of the data. Table 1 shows the indexes; the results suggested that the deviation from normality was not severe, as the absolute values of the skewness and kurtosis indexes were below three and ten, respectively (Kline 2015). Besides, a larger sample size increases statistical power by reducing sampling error, and for sample sizes of 200 or more, the impact of departures from normality may be negligible (Hair et al. 2006). Based on the skewness and kurtosis indexes, together with the large sample size (n = 290), the collected data were treated as normally distributed and hence appropriate for parametric analysis. Each scale, namely instructor support, student interaction and collaboration, student autonomy, and student satisfaction, was assessed for internal consistency. Table 1 displays Cronbach's alpha coefficients for each of the scales used in the questionnaire. It is suggested that reliability should ideally be high and should not be lower than 0.70 (Carmines and Zeller 1979). The Cronbach's alpha coefficients of the scales ranged from 0.82 to 0.94, a range considered good to excellent (George and Mallery 2010). In addition, Average Variance Extracted (AVE) and Composite Reliability (CR) values are reported in Table 1: AVE for all constructs is above 0.5, and CR for all constructs is above 0.7. Given the high Cronbach's alpha coefficients and the AVE and CR values above their thresholds, it can be concluded that the scales are reliable (these reliability and normality checks are illustrated in the sketch at the end of this subsection). Construct validity refers to the degree to which a scale measures the construct it was designed to measure. Construct validity was evaluated using a principal component analysis (PCA) with Varimax rotation and Kaiser normalization. For the three scales used for on-campus education, Bartlett's test is significant (chi-square = 2936; df = 171; p < 0.001) and the KMO measure of sampling adequacy is high (0.897). For the four scales used for e-learning, Bartlett's test is significant (chi-square = 5372; df = 325; p < 0.001) and the KMO measure of sampling adequacy is high (0.920). These results indicated that the data are appropriate for factor analysis. As Table 2a shows, for on-campus education, 19 items loaded on the three dimensions, and as Table 2b displays, for e-learning, 26 items loaded on four dimensions. The resulting factor structure demonstrated that the factors that this questionnaire was designed to assess were measured within each of the instrument's scales. All free-text answers were recorded in a spreadsheet file, and each answer was initially read multiple times to identify meaning units, i.e., the different patterns the participants used to share their opinions. The meaning units were then inductively coded (Guest et al. 2012). A code symbolically assigns a summative or evocative attribute to a portion of qualitative data (Miles et al. 2018). Coding was done in cycles: in the first cycle, any emerging patterns of similarity or contradiction were identified; in the second cycle, the codes were collapsed and expanded to clarify these patterns.
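Returning to the quantitative checks described above, the following minimal Python sketch computes Cronbach's alpha and the skewness and kurtosis indexes for one scale. It is illustrative only: the CSV file name and the item column names (is1 through is8) are hypothetical, since the study's data layout is not published.

```python
import pandas as pd
from scipy.stats import skew, kurtosis

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one scale (rows = respondents, columns = Likert items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical layout: one column per item of the instructor support scale.
responses = pd.read_csv("questionnaire_responses.csv")
scale_items = responses[["is1", "is2", "is3", "is4", "is5", "is6", "is7", "is8"]]

# Reliability: alpha should ideally be at least 0.70 (Carmines and Zeller 1979).
print(f"Cronbach's alpha: {cronbach_alpha(scale_items):.2f}")

# Normality screening: a scale score is treated as approximately normal when
# |skewness| < 3 and |kurtosis| < 10 (Kline 2015); scipy reports excess kurtosis.
scale_score = scale_items.mean(axis=1)
print(f"skewness = {skew(scale_score):.2f}, kurtosis = {kurtosis(scale_score):.2f}")

# Construct validity (Bartlett's test, KMO) can be computed with the third-party
# factor_analyzer package: calculate_bartlett_sphericity() and calculate_kmo().
```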
After the main themes and codes were extracted, the codes assigned to each response were revised, and frequencies were calculated to report the results. Some examples from the answers were also extracted to exemplify the themes that emerged; these examples are presented with the results.

Table 3 shows the results of the paired samples t-test, which was conducted to understand whether the participants perceive a difference in the levels of instructor support, student interaction and collaboration, and student autonomy between on-campus education and e-learning. On average, the perceived level of instructor support was higher in on-campus education (M = 3.70, SD = 0.77) than in e-learning (M = 3.15, SD = 0.92). This difference, 0.55, 95% CI [0.43, 0.66], was statistically significant, t(289) = 9.30, p < 0.001. Similarly, the perceived level of student interaction and collaboration was higher in on-campus education (M = 3.82, SD = 0.85) than in e-learning (M = 2.84, SD = 1.10). This difference, 0.97, 95% CI [0.82, 1.13], was statistically significant. For student autonomy, in contrast, no statistically significant difference was found between the two settings.

Table 3 The perceived levels of instructor support, student interaction and collaboration, and student autonomy in on-campus education and e-learning

Finding: Based on the paired samples t-test, hypotheses 2a and 2b were supported, while hypothesis 2c was rejected.

Associations between student satisfaction with e-learning and the three factors (instructor support, student interaction and collaboration, and student autonomy) were explored using Pearson correlation analysis and regression analysis. Based on the Pearson correlation analysis, instructor support (r = 0.45, p < 0.01), student interaction and collaboration (r = 0.47, p < 0.01), and student autonomy (r = 0.46, p < 0.01) are strongly related to student satisfaction. To ascertain which factors are independently related to student satisfaction when all other factors are mutually controlled, the regression coefficients were examined. A significant regression equation was found (F(3, 286) = 52.016, p < 0.001), with an R-square value of 0.346. The perceived satisfaction of the participants is equal to −0.353 + 0.328 (instructor support) + 0.256 (student interaction and collaboration) + 0.370 (student autonomy), where all three factors were measured on a 5-point Likert scale ranging from "always" to "never". According to the results, all three factors are independently, positively, and significantly related to student satisfaction, and based on the R-square value they explain about 35% of the variation in student satisfaction (a computational sketch of these analyses appears at the end of this section).

Finding: Pearson correlation and regression analyses suggest that a higher level of each factor is associated with greater student satisfaction in e-learning. Hypotheses 1a, 1b, and 1c were supported.

The participants perceived digital interaction environments as the most useful medium for learning in both on-campus education (M = 3.83, SD = 1.04) and e-learning settings (M = 3.75, SD = 1.15). Lecture notes were recognized as being as valuable as digital interaction environments in on-campus education (M = 3.83, SD = 0.95). While face-to-face lectures were perceived as the third most useful method in on-campus education (M = 3.72, SD = 1.09), digital live lectures were regarded as the least helpful method in e-learning (M = 3.27, SD = 1.26). Reading materials were seen as beneficial in both settings (on-campus: M = 3.65, SD = 0.94; e-learning: M = 3.50, SD = 1.11).
While the participants found video recordings useful in both environments (on-campus: M = 3.56, SD = 0.95; e-learning: M = 3.60, SD = 1.13), a significant percentage of the participants did not use video recordings in on-campus education: 56% and 90% of the participants reported that they used video recordings in on-campus education and e-learning settings, respectively. A paired samples t-test was conducted to understand whether the participants perceived the usefulness of a material or method differently in on-campus education and e-learning. To conduct a paired samples t-test, the responses of participants who did not use a material/method in both on-campus education and e-learning were filtered out; 269, 155, 281, 250, and 223 participants out of 290 used live lectures, video recordings, lecture notes, reading materials, and digital interaction environments in both settings, respectively. Table 4 shows the results of the paired samples t-test. On average, the perceived usefulness of face-to-face lectures in on-campus education was higher than that of digital live lectures in e-learning (Table 4).

Finding: Based on the paired samples t-test, hypotheses 3a, 3c, and 3d were supported, while hypotheses 3b and 3e were rejected.

In total, 227 participants reported one or more challenges of e-learning as a free-text answer. Figure 2 shows the ten challenges most frequently faced by the participants. The most commonly encountered challenge was problems related to Internet connection and infrastructure (CHA1): the participants complained heavily about Internet connection stability and speed, as well as power cuts. The participants stated that they had difficulties with self-motivation and concentration on courses (CHA2) and with maintaining self-discipline (CHA7). Exams (CHA3), too many and overly difficult assignments (CHA5), and insufficient course materials (CHA6) also challenged the participants in e-learning. One participant commented on the exams: "The exams are improper for e-learning and exam durations are not sufficient." The participants encountered difficulties in interacting with instructors (CHA4) and with their classmates (CHA8). One participant commented: "In the past, when I needed to ask my professors questions, it was enough to go to their office, but now I have to send an e-mail. This slows down communication." Another remarked: "When I have questions about homework, I have difficulty getting quick answers and cannot cooperate with my friends." Performing team assignments (CHA10) was another challenge, mostly due to problems in communicating with classmates. Some participants thought that instructors expected too much from students and that the workload in e-learning had increased compared to on-campus education. To explore the positive and negative aspects of e-learning perceived by the participants, 217 and 213 free-text answers were analyzed, respectively. As Fig. 3 shows, the most positively viewed aspect of e-learning is on-demand access to course materials (PAS1). One student commented: "Unlike face-to-face lectures, [they are] accessible at any time and place." Another participant remarked: "The only positive aspect is that we can revisit the lecture recordings." Also related to on-demand access, the participants appreciated flexible scheduling (PAS3) and location convenience (PAS5). Not having to travel (PAS2) and spending less money (PAS7) were perceived positively by the participants. Also, some participants found e-learning safer (PAS10) considering the extraordinary conditions of the COVID-19 pandemic.
Some participants considered e-learning more time-efficient (PAS4). Some participants improved their self-learning and research competencies (PAS6), and some felt less stress, since assignments replaced exams for course assessments (PAS8) in most cases. As Fig. 4 shows, interaction with instructors (NAS1) was the most mentioned negative aspect, although four participants perceived better instructor support (PAS9) in e-learning. Interaction in general (NAS4) and interaction with classmates (NAS8) were also negatively perceived aspects of e-learning. Closely related to interaction, lack of social life was another handicap (NAS5); this finding might have been affected by the restrictions imposed due to the COVID-19 pandemic. Some participants faced focus and motivation (NAS3) and self-discipline (NAS7) problems, and some were not satisfied with their learning performance (NAS6) in e-learning. The participants also perceived assessment (NAS2) as a downside of e-learning. Most of the responses emphasized unfairness due to illegitimate cooperation among some students during assessments, especially in take-home exams. One participant commented: "Exams are not fair and reliable." Another student emphasized: "Insufficient exam durations and hard exams to prevent cheating." Some students complained about a heavier workload (NAS9), and some participants saw lab sessions (NAS10) as a negative side of e-learning.

In total, 177 participants reported their opinions on what could be done to improve their satisfaction with e-learning. As Fig. 5 shows, the most frequently mentioned opportunity was improving the materials and methods used in courses (ISA1). One participant commented: "More course resources can be provided." One student asked for more diverse resources: "More online interactive course materials can be offered." Another participant remarked: "Lecture slides should be updated for the e-learning environment." The participants asked for the adaptation of assessments (ISA2) and assignments (ISA3) to e-learning. According to some participants, instructors should rearrange lectures (for example, their duration, breaks, and content) (ISA7) and promote student involvement (ISA9). Not surprisingly, the learning platform (ISA4) is an essential component of e-learning and was identified as an area for improvement. The participants also called for more opportunities to interact with instructors (ISA5); some stated that Q&A sessions (ISA8) and office hours (ISA10) could improve their interaction with instructors and hence their learning performance. A minority of the participants had lost hope in the success of e-learning and were in favor of switching back to on-campus education (ISA6).

In total, 166 participants reported what they did to improve their learning performance in e-learning. As Fig. 6 shows, more than half of the participants who responded to this question used other online resources (ILP1), such as Udemy and YouTube, to enhance their learning performance. Some of them revisited video and lecture recordings (ILP2), taking advantage of on-demand access. While some participants preferred self-study (ILP9), others interacted with their classmates (ILP8). Regular study (ILP3), doing more research (ILP4), taking notes (ILP5), repetition (ILP6), and studying more (ILP7) were techniques used by some participants. A minority of the participants tried to apply the knowledge (ILP10) obtained from courses to various problems.
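Before turning to the discussion, the core quantitative analyses of this section, the paired samples t-test with a 95% confidence interval and the regression of satisfaction on the three perceived factors, can be reproduced in outline with scipy and statsmodels. The data file and column names below are hypothetical; the final function simply applies the regression equation reported above, with illustrative input values.

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("questionnaire_responses.csv")  # hypothetical file and columns

# Paired samples t-test: perceived instructor support on campus vs. in e-learning.
diff = df["support_campus"] - df["support_elearning"]
t_stat, p_value = stats.ttest_rel(df["support_campus"], df["support_elearning"])
ci_low, ci_high = stats.t.interval(0.95, len(diff) - 1,
                                   loc=diff.mean(), scale=stats.sem(diff))
print(f"t({len(diff) - 1}) = {t_stat:.2f}, p = {p_value:.4f}, "
      f"95% CI [{ci_low:.2f}, {ci_high:.2f}]")

# Multiple regression of e-learning satisfaction on the three perceived factors.
predictors = df[["instructor_support", "interaction_collaboration", "autonomy"]]
model = sm.OLS(df["satisfaction"], sm.add_constant(predictors)).fit()
print(model.summary())  # reports the F-statistic, R-squared, and coefficients

def predicted_satisfaction(support, interaction, autonomy):
    """Satisfaction predicted by the regression coefficients reported in the text."""
    return -0.353 + 0.328 * support + 0.256 * interaction + 0.370 * autonomy

# Illustrative inputs only (not values from the study's dataset).
print(predicted_satisfaction(3.15, 2.84, 3.5))
```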
This study aims to understand how CE/SE undergraduate students perceived their e-learning experience during the COVID-19 pandemic. The participants rated their perceived satisfaction with e-learning at 2.85, slightly below the midpoint of the 5-point Likert scale. The reason behind this finding could be the moderate perceived instructor support and student interaction and collaboration in e-learning: since the transition from on-campus education to e-learning happened unexpectedly and in a short period, instructors might not have been able to adapt their courses to the new conditions. This subsection unifies the results presented in the previous section and gives some sample practices from the literature for enhancing the learning experience using ICT within the scope of CE/SE education. The participants stated that they applied various methods to improve their learning performance, including self-study, interacting with their classmates via digital environments (Çakıroğlu 2014), conducting more research, exploring other online resources, and revisiting lecture/video recordings prepared by the course instructor. This finding is consistent with learning styles theory, which emphasizes that students learn more when the educational experience is geared toward their learning styles (Shih and Mills 2007). Instructors can support students in discovering the proper learning style for themselves, especially by guiding them toward suitable materials and methods. The participants faced challenges with the sufficiency of course materials and methods, and 66 participants considered that improving course materials and methods would result in higher student satisfaction. Another critical point is that the participants found on-demand access to course materials to be the most positive aspect of e-learning. They also appreciated the flexible scheduling and location convenience of e-learning, in line with findings in the literature, such as Arbaugh (2000), Ersoy (2003), Djenic et al. (2010), and Hannay and Newvine (2006). These findings support the value of integrating digital content into course materials. Fox and Patterson (2013), who are SE professors, claimed that online courses and electronic books might become the textbook of the twenty-first century. There is ample evidence of the positive effects of blending a wide variety of course materials and methods to support learning effectively (Hannay and Newvine 2006; Djenic et al. 2010; Alammary 2019). Instructors may provide a wide range of course materials to guide students in discovering their learning styles and to address the challenge of the sufficiency of course materials; students can then use a suitable subset of the course material based on their preferences. Instructors should specify the materials covering the core mandatory course content in advance to guarantee that all students agree on the minimum learning outcomes of a specific course. Such a specification may also prevent students from feeling that instructors expect too much of them. Moreover, 85 participants reported that they used other online resources to improve their learning performance. Instructors can provide an additional list of online resources, such as relevant MOOCs, digital materials, and websites; such a list could be a good starting point for students who need varied course materials to learn. Another vital type of course material identified by this study is lecture/video recordings.
Participants reported that they used lecture/video recordings in e-learning settings (90%) and found them useful, with a mean of 3.60 out of 5. Video recordings have been perceived as one of the four most positive aspects of a blended learning environment for a programming fundamentals course (Djenic et al. 2010). Video lectures were one of the fundamental components of an SE course delivered in a flipped classroom format (Erdogmus and Péraire 2017). In an SE MOOC, the instructors found video lectures highly efficient in conveying information, and they also observed that students could cope with dense information by pausing and reviewing videos at any point (Fox and Patterson 2013). The participants of this questionnaire stated that they revisited lecture/video recordings, took notes, and repeated the content to improve their learning performance. In this study, we can also observe that the use of lecture/video recordings increased from 56% to 90% after students switched from on-campus education to e-learning. The main reason for this could be that lecture/video recordings are rare in on-campus education at Turkish universities. Switching from on-campus education to e-learning may have revealed that students can benefit from lecture/video recordings to improve their learning performance.

Suggestion 1: Instructors should provide an inventory of learning content involving various course materials. For instance, the author used an extensive list of course material in a Software Project Management course offered in Spring 2020. The list contained book chapters (Villafiorita 2014; Cobb 2015; Goodpasture 2010; Greene and Stellman 2018), videos from MOOCs (AdelaideX 2020; Grushka-Cockayne 2020; Johnson 2020; Meloni 2020; Orso 2020), lecture slides, and pre-recorded videos prepared by the instructor.

The participants found face-to-face lectures more useful than digital live lectures. Some instructors might have directly reused the content and methods they use in the classroom without any adaptation to the e-learning environment. This type of use can be inferred from the improvement suggestions on lecture arrangement (such as duration, breaks, and content) and on promoting student involvement. Q&A sessions may also increase the perceived usefulness of live sessions: in an SE course, Erdogmus and Péraire complemented video lectures covering theoretical content with short live Q&A sessions (Erdogmus and Péraire 2017).

Suggestion 2: Instructors should adapt their course materials to the e-learning environment. They can use cloud-based tools to promote student involvement. For instance, lecture slides can be shared via zeetings.com and enriched with questions asked to students during the lecture. In addition, students can be engaged with live polls published on mentimeter.com and sli.do, and online quizzes can be published in a gamified format using kahoot.com (Compton and Allen 2018).

Last but not least, the learning platform is also significant for delivering a course in a blended or e-learning environment (Erdogmus and Péraire 2017). Some of the participants identified the learning platform as one of the components to be improved. In addition, the root causes of the connection and infrastructure problems faced by students should be identified by the universities.

Suggestion 3: Universities should analyze their infrastructure and learning platform based on the data collected during the COVID-19 pandemic.
For instance, Favale et al. (2020) analyzed the changes in traffic patterns on the Politecnico di Torino campus in Italy and shared the challenges faced and the solutions applied during the COVID-19 pandemic. Other universities can benefit from these experiences and form their own "education continuity plan" for pandemic conditions.

Prior research revealed that instructor support is an essential factor affecting student satisfaction (Bolliger 2004; Walker and Fraser 2005; Özkök et al. 2009). Based on the quantitative analysis results, the participants perceived significantly less instructor support in e-learning compared to on-campus education. In line with the quantitative results, the qualitative analysis showed that 14% of the participants who reported challenges of e-learning perceived interaction with the instructor as a challenge. Furthermore, communication with the instructor was the most frequently mentioned negative aspect of e-learning, and 13 participants thought that better interaction with instructors would improve the quality of e-learning. There are many factors, such as communication, feedback, encouragement, accessibility, and professionalism, that can affect perceived instructor support (Bolliger 2004). Earlier research found that students expressed a desire for more interaction on the topics of their education and their professional careers (Kilicay-Ergin and Laplante 2012).

Suggestion 4: Instructors can organize office hours to talk about students' career plans and learning performance. In addition, Q&A sessions can be a good medium for interacting with students.

While more interaction between instructors and students seems to improve student satisfaction, all of these actions bring extra workload for instructors. Therefore, it is vital to select instructors who are interested in teaching in blended and e-learning settings (Sun et al. 2008). Moreover, professional expertise should not be the sole criterion in choosing online instructors; their attitudes toward using ICT in delivering education also impact students' attitudes and affect their performance (Sun et al. 2008). In this respect, CE/SE instructors may be somewhat better positioned than instructors in other disciplines due to their expertise in ICT.

Interaction among students plays an essential role in student learning in e-learning (Moore 1989). A minority of participants stated that they interacted with their classmates to improve their learning performance. On the other hand, based on the quantitative analysis results, the participants perceived significantly less interaction and collaboration with their classmates in e-learning compared to on-campus education. Complementing these findings, the participants mentioned interaction with classmates as both a challenge and a negative aspect. Moreover, the participants faced challenges in performing team assignments, mainly due to interaction difficulties.

Suggestion 5: Instructors should set up a communication platform for their courses and announce it in the first session. CE/SE instructors have successfully utilized such platforms in a requirements engineering course (Kilicay-Ergin and Laplante 2012) and in some CE laboratory courses involving hardware (Digital Design, Computer Networks, VLSI Design) (Saniie et al. 2015).

Suggestion 6: Instructors can use collaborative coding platforms to promote student interaction.
Teiniker et al. (2011) used Google Code as the development and communication platform in their SE course to enable student collaboration. Google Colaboratory can be used in various courses in the CE/SE curriculum, including coding (Tock 2019), artificial intelligence (Nelson and Hoover 2020), and robotics programming (Cañas et al. 2020).

Suggestion 7: Instructors can use SE practices, such as reviews and retrospectives, to promote collaborative learning in SE-related courses (Teiniker et al. 2011; Kropp et al. 2016).

Suggestion 8: Instructors can organize virtual collaborative problem-solving events, i.e., hackathons, to foster student interaction and collaborative learning. Hackathons have proved successful for SE-related tasks (Porras et al. 2018; Valença et al. 2019; Falk Olesen and Halskov 2020) but are also applicable to other types of tasks involving teamwork.

Although the questionnaire did not include any questions regarding assessment, the qualitative analysis identified assessment as one of the main themes. In total, 33 participants (15%) identified taking exams as a challenge, and 32 participants (15%) perceived assessment as a downside of e-learning. Most of the responses emphasized unfairness due to illegitimate cooperation among some students during assessments, especially in take-home exams. Furthermore, 36 participants thought that assessment is an improvement area for e-learning. In line with these findings, Erdogmus and Péraire are seeking fairer ways of assessing individual performance while encouraging better collaboration among students in their SE course (Erdogmus and Péraire 2017).

Suggestion 9: Instructors can use auto-graders for some programming tasks to achieve fair assessment and decrease the grading burden (Fox and Patterson 2013); one such tool is nbgrader (Blank et al. 2019).

Thurmond et al. stated that diversity in assessment influences e-learning satisfaction considerably (Thurmond et al. 2002). Besides, Sun et al. (2008) found that diversified assessment tools and methods increase student satisfaction due to the various types of feedback from each assessment.

Suggestion 10: Instructors can use just-in-time assessments to improve students' preparedness for live sessions and their knowledge retention (Erdogmus et al. 2019).

Suggestion 11: Instructors can use an evaluation framework, such as the one proposed by Tubino et al. (2020), to assess student outcomes individually in team assignments.

This study is subject to some threats to construct and content validity. To enhance construct validity, validated scales were used for instructor support, student interaction and collaboration, and student autonomy. As a result of the reliability analysis, Cronbach's alpha, Average Variance Extracted, and Composite Reliability revealed high internal consistency of the responses on instructor support, student interaction and collaboration, student autonomy, and student satisfaction. Moreover, construct validity was evaluated from another perspective using a principal component analysis (PCA) with Varimax rotation and Kaiser normalization; the results revealed that the factors that the questionnaire was designed to assess were measured within each of the instrument's scales without exception. To ensure the content validity of the questionnaire, two professors from two distinct CE departments reviewed the questionnaire and evaluated whether the questions successfully captured the topic.
Also, two academicians from a psychology department, who are experts in conducting surveys in the field, checked the questionnaire against common errors such as leading and confusing questions. Another potential threat, to internal validity, is that the participants might not have had a common understanding of the questions; to minimize this threat, a pilot study was conducted in a CE department. One potential threat to external validity in this study is selection bias. Convenience sampling has some drawbacks, such as its potential to lead to a homogeneous sampling frame and to produce results with limited generalizability to the broader population (Valerio et al. 2016). To minimize these drawbacks, convenience sampling was combined with snowball sampling by asking the instructors to share the questionnaire with their academic networks. Four of the instructors contributed to snowball sampling by confirming that the questionnaire was announced in the CE/SE departments of six other universities. Nevertheless, the limited sample size constrains the generalizability of the findings.

The COVID-19 pandemic forced instructors to integrate new materials and methods into their courses in a very short period to maintain the quality of education under the limitations imposed by the pandemic. This study analyzed the self-reported opinions and perceptions of Turkish CE/SE undergraduate students regarding on-campus education and e-learning, as measured by responses to an online questionnaire. The COVID-19 pandemic has created unique conditions for conducting such a study, since thousands of CE/SE undergraduate students switched from on-campus education to e-learning. Obtaining feedback from students is an essential part of identifying what has worked and where improvements could be made in the future. The author hopes that this study inspires more research on the development of the CE/SE curriculum by using ICT and blending various materials and methods. Potential future work is to listen to the other critical stakeholders of education, i.e., instructors.

References

A tale of two sections: An experiment to compare the effectiveness of a hybrid versus a traditional lecture format in introductory microbiology
Introduction to Project Management
Blended learning models for introductory programming courses: A systematic review
How blended learning reduces underachievement in higher education: An experience in teaching computer sciences
Virtual classroom characteristics and student satisfaction with internet-based MBA courses
What matters in college? Four critical years revisited
Student retention and satisfaction: The evolution of a predictive model
nbgrader: A tool for creating and grading assignments in the Jupyter Notebook
Key factors for determining student satisfaction in online courses
Analyzing the effect of learning styles and study habits of distance learners on learning performances: A case of an introductory programming course. The International Review of Research in Open and Distributed Learning
Open-source drone programming course for distance engineering education
Reliability and validity assessment
Analysis of student perceptions of the psychosocial learning environment in online and face-to-face career and technical education courses
Navigating E-learning and blended learning for pre-service teachers: Redesigning for engagement, access and efficiency
The project manager's guide to mastering agile: Principles and practices for an adaptive approach
Student response systems: A rationale for their use and a comparison of some cloud-based tools
Blended learning of programming in the internet age
Involvement, ability, performance, and satisfaction as predictors of college attrition
Flipping a graduate-level software engineering foundations course
Introducing low-stakes just-in-time assessments to a flipped software engineering course
Blending online instruction with traditional in the programming language course: A case study
10 years of research with and on hackathons
Campus traffic and e-learning during COVID-19 pandemic
Software engineering curriculum technology transfer
SPSS for Windows Step by Step: A Simple Guide and Reference 17.0 Update
Snowball sampling
Project management the agile way: Making it work in the enterprise
Head First PMP: A learner's companion to passing the Project Management Professional exam
Applied thematic analysis
Multivariate Data Analysis (6th ed.)
Perceptions of distance learning: A comparison of online and traditional learning
Parametric and nonparametric: Demystifying the terms
Applied Scrum for Agile Project Management
Türkiye'de Dijital Dönüşüm ve Dijital Okuryazarlık [Digital Transformation and Digital Literacy in Turkey]
An online graduate requirements engineering course
Principles of survey research part 6: Data analysis
Principles and practice of structural equation modeling
Teaching agile collaboration skills in the classroom
The effects of student-faculty interaction in the 1990s
Statistical analysis with missing data
The online learning environment of a technology-rich secondary college
Transitioning from face-to-face to blended and full online learning engineering master's program
Dimensions of perceived service quality in higher education virtual learning environments
Initiating and Planning Projects
Qualitative data analysis: A methods sourcebook
Assessing students for online learning
Editorial: Three types of interaction
Notes on using Google Colaboratory in AI education
Increasing student engagement using asynchronous learning
A literature review of the factors influencing E-learning and blended learning in relation to learning outcome, student satisfaction and engagement
Software Development Process
Reliability and validity of a Turkish version of the DELES
Hackathons in software engineering education: Lessons learned from a decade of events
Testing the efficacy of MyPsychLab to replace traditional instruction in a hybrid course
Transforming computer engineering laboratory courses for distance learning and collaboration
Satisfaction and success in online learning as a function of the needs for affiliation, autonomy, and mastery. Distance Education
Setting the new standard with mobile computing in online learning. The International Review of Research in Open and Distributed Learning
Employers' perceptions of online degree programs
Teaching and learning at a distance: Foundations of distance education
A survey of controlled experiments in software engineering
What drives a successful e-learning? An empirical investigation of the critical factors influencing learner satisfaction
A practical software engineering course with distributed teams
Evaluation of student satisfaction: Determining the impact of a web-based environment by controlling for student characteristics. The American Journal of Distance Education
Google CoLaboratory as a platform for Python coding with students
Authentic individual assessment for team-based software engineering projects
On the benefits of corporate hackathons for software ecosystems: A systematic mapping study
Comparing two sampling methods to engage hard-to-reach communities in research priority setting
Introduction to software project management
Development and validation of an instrument for assessing distance education learning environments in higher education: The Distance Education Learning Environments Survey (DELES)
Blended learning: The convergence of online and face-to-face education. Promising Practices in Online Learning, North American Council for Online Learning

Acknowledgments: I would like to thank Dr. Muazzez Deniz Giray for her feedback on the questionnaire and her support with the statistical analyses, and Dr. Duygu Güngör Culha for her feedback on the questionnaire. I am grateful to all faculty members who distributed the questionnaire and to all students who responded to it.