Interaction in asynchronous discussion boards: a campus-wide analysis to better understand regular and substantive interaction

Gasell, Crystal; Lowenthal, Patrick R.; Uribe-Flórez, Lida J.; Ching, Yu-Hui
Educ Inf Technol (Dordr), 2021-09-25. DOI: 10.1007/s10639-021-10745-3

Abstract

Discussion boards can provide a glimpse into the regular and substantive interaction required in online courses. Advances in technology and an increased interest in learning analytics now provide researchers with billions of data points about instructor and student interaction within a learning management system (LMS). This study used LMS data to explore the frequency of interaction between instructors and students in discussion boards in online courses at one institution. Overall, 415 courses spanning two semesters were analyzed. The average number of posts by an instructor was 32.9, the average instructor interaction rate was 1.49 instructor posts per student, and 23% of courses had no instructor posts. Student posts averaged 470 per course, or 19.9 posts per student. Discussion board activity was highest during the first two weeks of the semester. Results also suggested that there is no relationship between student satisfaction and the total number of posts in a course. The paper concludes with implications for research and practice.

Introduction

The number of online students in higher education has increased steadily during the last decade (Seaman et al., 2018). Traditional and non-traditional students are choosing online courses, among other reasons, to fit within their busy schedules (Ortagus, 2017). This study explored the frequency of interaction between instructors and students in discussion boards in online courses and how this interaction is related to student satisfaction. In the following paper, we present the results of our inquiry and implications for future research and practice.

Moore (1989) identified three types of interaction: learner-content interaction, learner-instructor interaction, and learner-learner interaction. Learner-content interaction refers to the learner's interaction with the subject matter. Moore (1989) described it as "... the process of intellectually interacting with the content that results in changes in the learner's understanding, the learner's perspective, or the cognitive structures of the learner's mind" (p. 2). Learner-instructor interaction refers to the dialogue between the instructor and student, but also includes how the instructor motivates learners, presents or demonstrates information, provides feedback, and supports and encourages learners (Moore, 1989). The separation of instructor and student in online courses creates gaps in communication and psychological challenges for the student (Moore, 1997). To address these challenges, Moore (1997) suggested that increased dialogue between student and instructor can decrease the sense of transactional distance. Finally, the third type, learner-learner interaction, is, according to Moore (1989), important in the learning process and challenges traditional ideas of teaching and learning. Together, the three types of interaction provide a framework that can enable educators to be more thoughtful and purposeful about how they teach online (Falloon, 2011).
Although all three types of interaction are important in online learning, learner-instructor interaction has been found to be the strongest predictor of satisfaction (Hong, 2002; Jung et al., 2002; Kuo et al., 2014; Swan, 2004). Hong (2002) concluded that "interaction with the instructor was the most significant contributor to satisfaction and learning in web-based courses" (p. 278) and suggested that active participation by the instructor could increase student participation and, in turn, learning. Similarly, Dennen et al. (2007) found that "posting to discussion board" was ranked by students as the second most important instructor action, below checking email (p. 74). Dennen et al. (2007) therefore recommended that instructors prioritize interactions and focus on maintaining frequency of contact, having a regular presence in class discussion spaces, and making expectations clear to learners.

Moore's theory offers a lens for identifying the ways in which students and instructors interact. Interactions can occur synchronously or asynchronously, and instructors can facilitate them with a variety of technologies, such as web conferencing, chat, discussion boards, and email (Lowenthal & Moore, 2020; Lowenthal et al., 2021; Sher, 2009). Discussion boards are widely used in online teaching, allowing interaction to occur without being limited by time or space (Hew et al., 2010). In discussion boards, participants can see the posts of others, organized by author, topic, and date/time, and respond to them on their own time (Brown & Green, 2009). Research suggests that when instructors participate in discussion boards, students are more motivated (Xie et al., 2006) and more satisfied (Sher, 2009), and that instructor participation is highly valued by students (Nandi et al., 2012; Lowenthal & Dunlap, 2020). This increase in dialogue between student and instructor can not only reduce transactional distance but also help meet the regular and substantive interaction requirements for online courses.

Methods

We used a quantitative exploratory research design to answer the following research questions:

1. How do instructors interact in asynchronous discussion boards in online courses?
2. How do students interact in asynchronous discussion boards in online courses?
3. How do students and instructors interact each week in asynchronous discussion boards in online courses?
4. Is there a relationship between interaction in asynchronous discussion boards and student satisfaction?

This study utilized archival data from two sources: Canvas Data and end-of-course evaluation data. Data from Canvas was exported from the Amazon cloud and imported into Exasol, a high-performance, in-memory database. End-of-course evaluation data was downloaded from a publicly accessible database. A query was run in Exasol to create a comprehensive list of online courses offered during the period of the study. We first identified all courses in a single academic year (N = 6152). Next, we filtered the list to include only online courses (N = 675). Then we removed courses with multiple sections, courses with multiple instructors or teaching assistants, and courses with fewer than five students. This left 415 courses in the initial dataset, representing six schools or colleges (see Table 1).
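To make these filtering steps concrete, they can be expressed as a short sequence of dataframe operations. The following is a minimal sketch in Python/pandas, assuming the Exasol query was exported to a flat course-level table; the file name and column names (is_online, n_sections, n_instructors, n_tas, n_students) are hypothetical, not the actual Canvas Data schema.

```python
import pandas as pd

# Hypothetical course-level export from the Exasol query; the file and
# column names are illustrative assumptions, not the Canvas Data schema.
courses = pd.read_csv("courses_academic_year.csv")
print(len(courses))  # all courses in the academic year (N = 6152 in the study)

# Keep only online courses (N = 675 in the study).
online = courses[courses["is_online"]]

# Drop multi-section courses, courses with multiple instructors or any TAs,
# and courses with fewer than five students (N = 415 remained).
filtered = online[
    (online["n_sections"] == 1)
    & (online["n_instructors"] == 1)
    & (online["n_tas"] == 0)
    & (online["n_students"] >= 5)
]
print(len(filtered))
```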
The end-of-course evaluation consisted of eight items, each rated on a scale of 1 (low) to 6 (high):

- Personal Interest: Rate your personal interest in this material before you enrolled.
- Instructor Effectiveness: Rate your instructor's effectiveness in encouraging interest in the subject.
- Instructor Availability: Rate your instructor's availability for course-related assistance such as email, office hours, individual appointments, phone contact, etc.
- Intellectual Challenge: Rate the intellectual challenge of this course.
- Learning: Rate how much you have learned in the course.
- Instructor Respect: Rate the instructor's respect and professional treatment of all students.
- Course Overall: Rate the course overall.
- Instructor Overall: Rate the instructor overall.
- Student Satisfaction Score (calculated field, 1 to 6): Average of the eight end-of-course evaluation items.

We then pulled specific data for each course. Course information from Canvas Data was combined with end-of-course evaluation data (see Tables 2 and 3) to create the data set for this study. Because no single question asked about student satisfaction, the scores on the eight questions were averaged to create a student satisfaction score; previous research suggests that while end-of-course student evaluations may not be good measures of teaching quality, they are an adequate indicator of student satisfaction (see Lowenthal & Davidson-Shivers, 2019).

Preliminary data analysis was performed in Tableau to explore the variables in the dataset through frequencies, descriptive statistics, and cross-tabulations. Once the initial analysis was complete, the dataset was exported to an Excel file and imported into IBM SPSS Statistics for the statistical analysis. Descriptive statistics were used to answer the first three research questions about how instructors and students interact in discussion boards. Correlation testing was then used to determine whether a relationship existed between discussion board interaction measures and student satisfaction; because the assumptions regarding normality were not met, a Spearman's rho test was selected. Table 4 lists the data source and type of analysis used to answer each research question.
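As a sketch of this data preparation step, the eight evaluation items can be averaged into the satisfaction score and joined to the filtered course list from the previous sketch. The item column names and the course_id join key are assumptions, not the actual evaluation export format.

```python
import pandas as pd

# Hypothetical end-of-course evaluation export: one row per course with the
# mean score for each of the eight items on the 1 (low) to 6 (high) scale.
evals = pd.read_csv("course_evaluations.csv")

item_cols = [
    "personal_interest", "instructor_effectiveness", "instructor_availability",
    "intellectual_challenge", "learning", "instructor_respect",
    "course_overall", "instructor_overall",
]

# Student satisfaction score = average of the eight evaluation items.
evals["satisfaction"] = evals[item_cols].mean(axis=1)

# Join to the filtered course list from the previous sketch on a shared key.
dataset = filtered.merge(evals[["course_id", "satisfaction"]], on="course_id")
```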
Results

The purpose of this study was to explore the frequency of interaction between instructors and students in discussion boards in online courses, and whether this interaction is related to student satisfaction, in order to establish a baseline for this institution and others for understanding regular and substantive interaction. We report the results of our inquiry in the following section.

A total of 415 online courses, taught over a single academic year across six schools and colleges, were identified for the study (see Table 5). Most of the courses (44.6%) were taught in the College of Liberal Arts and Sciences, which not only serves a diverse student population but also offers a diverse array of online programs at the university; the other schools and colleges made up the remaining 55.4%. Overall, 82% of the instructors were non-tenure-track and 18% were tenure-track. However, the Business School (27%), the College of Arts and Media (23%), and the School of Education and Human Development (26%) had slightly higher percentages of tenure-track faculty teaching online courses than the other schools and colleges.

The distribution of course levels is shown in Table 5. Courses were categorized as lower-division undergraduate (27.71%, N = 115), upper-division undergraduate (38.07%, N = 158), and graduate (34.22%, N = 142). All courses in the study had a single instructor and no teaching assistants (TAs); courses with multiple instructors were excluded, and courses with TAs were removed because a TA can play a range of roles in a course, from designer to facilitator to teacher. The number of students in a course ranged from five to 79 (N = 415, M = 25.43, SD = 11.3).

To better understand how instructors and students interact in discussion boards, it was important to analyze the number of discussions in a course. The total number of discussion boards per course ranged from 0 to 140. Twenty-three courses had no discussions and were removed from further analysis, leaving 392 courses.

Total posts refers to the total number of posts to any discussion board in a course, where a post is a reply to the discussion topic or to another post, made by either the instructor or a student. This number describes the amount of interaction in a course because posting in a discussion board resembles a face-to-face discussion in which students and instructors exchange ideas by taking turns speaking. The minimum number of posts in a course was two and the maximum was 2,468, with an average of 503.21 (SD = 447.2) posts per course.

Research question one asked, "How do instructors interact in asynchronous discussions in online courses?" The results provide baseline data on the frequency of discussion board posts and the rate of interaction for instructors in online courses. The data set does not indicate whether the instructor or a student created the initial discussion board; regardless of who created it, however, interaction occurs through a series of posts, or replies, between the instructor and students. The number of posts by an instructor ranged from 0 to 347, with the average instructor posting 32.90 times during a course. An instructor post is a response either to the initial discussion board or to a student in the course. We found that 63.7% of courses (250 out of 392) had instructors who posted fewer than 32 times (the mean in this sample) during the semester. Of those 250 courses, 28.8% had no instructor posts at all.

It is important to note that the total number of posts an instructor makes in an online course provides only a glimpse into their interactions with students. While it is helpful to know whether an instructor is posting below the institutional average (e.g., to identify absentee instructors), the number does not account for situational factors such as class size. For instance, we contend that 32 posts by an instructor is more impactful in a course with 25 students than in a course with 75 students.
Thus, researchers and practitioners need a way to better understand how active instructors are in a course. One method was created by Bliss and Lawrence (2009a, b): instructor participation is calculated as the total number of instructor posts divided by the number of students in the course. Under this method, a course with five students and an instructor who posted 80 times during the semester would have an interaction rate of 16 posts per student, while a course with 25 students and the same 80 instructor posts would have an interaction rate of 3.2 posts per student.

Instructor interaction rate was calculated for each course in the study. It ranged from 0 to 18.9, with a mean of 1.49 and a standard deviation of 2.33. These results indicate varied approaches to discussion boards. A closer look at the distribution (see Fig. 1) shows that although most courses had an instructor interaction rate of less than one post per student, there was a large spread, with some instructors posting more than ten times per student. This spread could reflect differing strategies: some instructors may post less frequently in discussions but use other channels, such as summarizing discussions each week, sending individual emails, or using synchronous forms of communication. The wide variety of tools available within and outside the learning management system means that interaction is not limited to discussion boards. With that caveat, based on the Canvas data, instructors in this study posted an average of 1.49 times per semester for every student in their class.

Research question two asked, "How do students interact in asynchronous discussions in online courses?" The results provide baseline data about student use of discussion boards in online courses. In an online course, discussion boards serve as a primary opportunity for person-to-person interaction (Lieberman, 2019); a student's post or reply to a discussion board, or to another person's post, is meant to simulate a conversation in a face-to-face classroom. Descriptive statistics were used to analyze the number of student posts. The total number of student posts per course ranged from 0 to 2,438 (N = 392, M = 470.31, SD = 432.8). When assessing the shape of the distribution (see Fig. 2), almost half of the courses (N = 194) had over 350 student posts throughout the semester, while 48 courses (12.2%) had very few student posts.

Since each course has a different number of students, it is difficult to determine from total posts alone whether a course had a lot of interaction. Therefore, it was important to look at the average number of posts per student in addition to totals. Our analysis revealed that the average number of posts per student was 19.9 per course (SD = 18.1), meaning that on average a student posted in the discussion boards approximately 19 times per semester. Given that the semester is 15 weeks plus finals week, this averages out to each student posting a little more than once a week. We also found that 25% of courses (N = 98) had an average of fewer than five posts per student. Based on these results, students who post more than 20 times per semester post at an above-average rate.
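The two per-course rates described above reduce to simple divisions. Below is a minimal continuation of the pandas sketch, assuming the merged dataset also carries per-course post counts (instructor_posts, student_posts) and enrollment (n_students); all three column names are illustrative assumptions.

```python
# Bliss and Lawrence's (2009a, b) instructor interaction rate: total
# instructor posts divided by the number of students in the course.
dataset["instructor_rate"] = dataset["instructor_posts"] / dataset["n_students"]

# The analogous per-student measure: total student posts per student.
dataset["posts_per_student"] = dataset["student_posts"] / dataset["n_students"]

# Worked example from the text: 80 instructor posts in a 5-student course
# yields a rate of 16.0 posts per student; with 25 students it yields 3.2.
assert 80 / 5 == 16.0 and 80 / 25 == 3.2

# Summary statistics comparable to those reported in the study.
print(dataset[["instructor_rate", "posts_per_student"]].describe())
```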
Baseline information like this could be used by instructors or administrators to identify students who may need additional support or encouragement to fulfill the requirement of regular interaction. For example, an instructor may identify students who have posted only a few times during the first two weeks of the semester and reach out to them about the expectation of regular interaction.

Research question three asked, "How do students and instructors interact each week in asynchronous discussions in online courses?" The results provide baseline data on weekly discussion board activity in online courses. This data could be used to identify, early in the semester, courses with low levels of discussion board interaction; an instructor or administrator may wish to identify students or instructors with low levels of interaction in order to promote regular learner-instructor interaction. To answer this research question, weekly totals of discussion posts were calculated: for each week, the number of student posts and instructor posts was reported for each course. The courses in the data set were offered in the fall or spring semester and were assumed to follow the university's traditional 15-week schedule plus finals week. All courses are expected to take part in finals week, either by giving an exam or by fulfilling two contact hours of instruction.

Table 6 shows the weekly totals of posts for all courses, as well as the totals for instructors and for students, along with the average number of posts per course and the percentage of overall posts for each week. Most discussion board interaction occurred during the first two weeks of the semester, for both students and instructors. After that, there was a steady decrease in the number of posts. The least interaction occurred during finals week and spring or winter break (depending on the semester), and the last few weeks of the semester had about a third of the interaction of the first week (see Table 6).

As discussed previously, class size can influence interaction. Therefore, using the average class size of the courses in the study (M = 25.43), the average instructor interaction rate and average posts per student were calculated for each week. These numbers provide a baseline that could be used to identify courses with low interaction rates. Since this data could be particularly helpful during the first few weeks of the semester, to encourage participation from students and ensure that instructors are practicing regular interaction, Table 7 shows the average instructor interaction rate and average posts per student for the first four weeks of the semester; after that, average interaction drops off. Based on these averages, during week one an instructor should aim to post roughly once per every three students in their class, and a student should post at least twice; during week two, an instructor should post roughly once per every seven students, and a student should post at least once. These baseline numbers could help instructors set targets for maintaining regular interaction with their students.
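A sketch of this weekly aggregation, and of the early-warning use just described, follows. It assumes a post-level export with a course id, an author role, and a timestamp; the column names, role labels, term start date, and flagging thresholds are all illustrative assumptions rather than values from the study.

```python
import pandas as pd

# Hypothetical post-level export: one row per discussion post.
posts = pd.read_csv("discussion_posts.csv", parse_dates=["created_at"])

# Map each post to a semester week (week 1 = first week of the term);
# the term start date here is an assumed placeholder.
term_start = pd.Timestamp("2019-08-19")
posts["week"] = (posts["created_at"] - term_start).dt.days // 7 + 1

# Weekly post totals per course, split by instructor vs. student posts.
weekly = (
    posts.groupby(["course_id", "week", "role"])
    .size()
    .unstack("role", fill_value=0)
)

# Early-warning check: total posts over the first two weeks, flagging
# courses below an illustrative baseline for outreach.
early = weekly[weekly.index.get_level_values("week") <= 2].groupby("course_id").sum()
low_interaction = early[(early["instructor"] < 5) | (early["student"] < 20)]
print(low_interaction)
```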
The two semesters in the study showed similar results for interaction: term 1 had 207 courses and term 2 had 185 courses. Total posts by term, and the week-by-week decline for both students and instructors, are shown in Fig. 3. This decrease in posts may indicate a reduction in interaction throughout the semester; however, additional research would be needed to determine whether interaction was occurring in different ways at different points in the semester.

Research question four asked, "Is there a relationship between asynchronous discussion interaction measures and student satisfaction?" This question focuses on whether total posts (i.e., interaction) in a course correlate with student satisfaction. Understanding whether total posts are associated with student satisfaction matters because, if a correlation were found, course design and delivery methods could be modified to increase student satisfaction. For the 392 courses with discussions, total posts ranged from 2 to 2,468, with a mean of 503.21 (SD = 447.2); student satisfaction ranged from 2.625 to 6.0 on the 1 (low) to 6 (high) scale, with a mean of 4.96 (SD = 0.5).

To determine the appropriate statistical technique, tests of normality were used to assess the distributions of the scores (Pallant, 2013). The Kolmogorov-Smirnov and Shapiro-Wilk tests returned significance values of .000 for both total posts and student satisfaction, suggesting violation of the assumption of normality; inspection of the normal probability plots confirmed non-normal distributions for both variables. Several attempts were made to normalize the data, including removing outliers and transforming the variables. Since student satisfaction was already a derived variable, created by averaging the scores of eight end-of-course evaluation questions, transforming it further seemed excessive; in addition, there is "considerable controversy" concerning transforming variables (Pallant, 2013, p. 96). Removing outliers produced correlation results similar to those obtained without removing them. Therefore, a non-parametric technique was selected, as non-parametric tests are useful when the assumptions required for parametric tests are not met (Pallant, 2013, p. 221).

A Spearman's rank-order correlation was run to assess the relationship between student satisfaction score and total posts across the 392 courses. Preliminary analysis showed the relationship to be non-monotonic, as assessed by visual inspection of a scatterplot. There was no statistically significant correlation between student satisfaction scores and total posts, r_s = −.060, p = .240 (see Table 8).
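The correlation analysis can be reproduced with SciPy rather than SPSS. Below is a minimal sketch, continuing the dataset from the earlier sketches and assuming a total_posts column (an assumed name, like the others).

```python
from scipy import stats

# Normality check (Shapiro-Wilk, one of the two tests the study used);
# a significance value near zero suggests a non-normal distribution.
print(stats.shapiro(dataset["total_posts"]))
print(stats.shapiro(dataset["satisfaction"]))

# Spearman's rank-order correlation between total posts and satisfaction.
# The study reported r_s = -.060, p = .240 across 392 courses.
rho, p = stats.spearmanr(dataset["total_posts"], dataset["satisfaction"])
print(f"Spearman's rho = {rho:.3f}, p = {p:.3f}")
```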
Discussion

Findings from this study are intended to provide insight into how instructors and students interact in discussion boards. The exploratory nature of this research was meant to provide baseline data that can help instructors, department chairs, and administrators better understand how instructors and students interact in online courses.

Research question one explored how instructors interact in discussion boards in online courses. Research suggests that instructors should play an active role in online discussions and that regular interaction between students and instructors encourages discussion and improves learner satisfaction (Darabi et al., 2013; Dennen, 2005; Moller, 1998; Nandi et al., 2012). Results from this study showed that instructor interaction varies greatly from course to course: in some courses, instructors did not post at all, while in others, instructors posted over 200 times. On average, an instructor posted 33 times during the semester. In addition, the instructor interaction rate, calculated by dividing the total number of instructor posts by the number of students in the course, ranged from 0 to 18.9 with a mean of 1.49 (SD = 2.3). Given this wide range, it is possible that instructors used different approaches to discussion boards, or used other tools beyond discussion boards to facilitate interaction; at the same time, it is also possible that some instructors were simply absent from the course.

Although there is no magic number of posts an instructor should make in a course, research indicates, and regulation requires, that regular interaction from the instructor matters for students' perceived learning, satisfaction, and engagement (U.S. Department of Education, 2014; Hrastinski, 2008; Jung et al., 2002; Swan, 2004). Online discussions create opportunities for collaboration, knowledge sharing, and social interaction (Fleming, 2008; Rovai, 2002; Thompson, 2006). Specifically regarding instructor interaction, Ringler et al. (2015) found that "there is a positive relationship between the number of instructor posts and the number of posts per student" (p. 23): the more often instructors participated, the more discussion occurred. The assumption is that more discussion means greater learning and a stronger sense of community. However, depending on teaching style, an instructor may post more or less often (Quitadamo & Brown, 2001). An instructor who posted infrequently may have been writing (or recording) longer posts of higher quality, or choosing to summarize discussions at the end of the week (Rovai, 2007). Or perhaps an instructor found that posting too frequently caused students to shut down or merely wait for the instructor to respond instead of responding to fellow students, and therefore believed that posting less frequently actually stimulated student-student discussion (Mazzolini & Maddison, 2003). This variety of facilitation strategies makes it difficult to judge the quality of a course from the number of instructor posts alone.

In addition to instructor posts, the number of discussion boards also varied greatly from course to course: some courses had no discussion boards, while others had over 100. The average number of discussion boards in a course was 14, roughly one per week. The design of the course and the beliefs of the instructor likely influenced how many discussion boards a course had. According to Covelli (2017), there are several techniques that can be applied to the course, or by the instructor, to encourage effective discussions.
Research suggests that facilitating discussions may not come naturally to instructors, and therefore instructors should engage in professional development on facilitating effective discussions (Covelli, 2017). For example, learning how to incorporate audio and video into discussions can add texture and personality to them (Covelli, 2017). Additionally, the course design may offer opportunities for small-group or whole-class discussions, which can help build community within the course (Covelli, 2017). The institution at which this study was conducted has a faculty-driven development and delivery model, meaning that courses are designed and taught by instructors with little or no assistance from an instructional designer. This was common practice during the early years of online learning as a way to increase the production of online courses; faculty were provided release time or a stipend in exchange for developing and delivering them (Oblinger & Hawkins, 2006). However, this decentralized approach to course design also means that some instructors may have received no training or limited support, which can lead to very different approaches to course design and, specifically, to the design and facilitation of online discussions.

Research suggests there are many factors that influence student contribution in online discussions (Hew et al., 2010; Xie et al., 2006). This study found that the frequency of student posts ranged from zero to over two thousand per course during the semester, with a mean of 470 posts per course. Because of differences in class size, the average number of posts per student was calculated by dividing the total number of student posts by the number of students in the course. The average was 19 posts per student per semester, or just barely more than once per week. One challenge with this measure is that it assumes every student participated in the discussions (Bliss & Lawrence, 2009a).

Like instructor posts, the total number of student posts tells only part of the story. Other factors, such as instructor expectations, the design of the discussion, and extrinsic motivation, can influence the number of posts or the level of student engagement in online discussions (Rovai, 2007). These factors are reflected in popular online learning standards. For example, Chickering and Gamson's (1987) seven principles of good teaching include communicating high expectations. Specifically for online discussions, Rovai (2007) suggests clearly communicating the requirements for active participation; a discussion rubric can help set those expectations (Rovai, 2007). Popular online learning standards also treat the design of learning activities, and specifically online discussions, as an important component of effective online courses. Maddix (2012) argued that discussion questions should be open-ended and encourage critical and creative thinking. Related to design, the size of the discussion group can also affect participation: Reonieri (2006) found that 10-15 students was the ideal size for an effective online discussion, and Bliss and Lawrence (2009b) found that students participated more frequently in small-group discussions than in whole-class discussions. Finally, extrinsic motivation can affect discussion participation, and all of the popular online learning standards include assessment.
Best practices for discussion boards recommend evaluating and grading discussion board interactions in online classes (Maddix, 2012; Rovai, 2007). Rubrics can assist not only in the grading process but also in communicating expectations for participation (Ringler et al., 2015).

Research question three looked at weekly interaction between students and instructors in discussion boards. It is one thing to understand how instructors and students interacted across an entire semester once it is over; it is another to understand how these interactions occur each week. Total posts, average posts per course, and the percentage of overall posts were calculated for each week. The most interaction occurred during the first two weeks of the semester. After the first week, interaction dropped nearly every week for both instructors and students: it dropped 25% in week two and 20% in week three, and after the first three weeks it dropped about 4% per week on average. The lowest number of interactions occurred during semester break and finals week.

Best practices for online learning often recommend an "introductory discussion" where students and the instructor can introduce themselves and become acquainted (Gunawardena & Zittle, 1997; Rovai, 2007); these introductory discussions are meant to spark a sense of community (Gunawardena & Zittle, 1997). However, as in other studies (Pham et al., 2014), interactions in this data set dropped over the semester. Pham et al. (2014) found that after a high level of engagement at the beginning of a course, momentum faded as the semester continued. Research has highlighted the importance of online instructors creating motivation throughout the semester to keep students engaged in discussions (Rovai, 2007); without extrinsic motivation, even the most motivated student may have a hard time staying engaged in an online course. One strategy identified by researchers to increase extrinsic motivation is to assign a grade for discussion participation, ranging from 10 to 35% of the overall course grade (Rovai, 2007). Rovai (2007) points out that students should be clear on what and how they are being graded: some instructors use discussion board rubrics to help students self-assess their participation and to provide clear expectations, while others simply require a minimum number of posts each week. Other strategies for maintaining motivation and increasing interaction throughout the semester include tying discussions directly to course objectives, using small-group discussions to encourage participation from students who may be reluctant to post in larger discussions, and providing tutorials or detailed instructions for those who may not be familiar with discussion board technology (Suler, 2004). Finally, many researchers believe that the instructor should actively participate in discussions, but without taking over or responding too quickly (Bliss & Lawrence, 2009a).

Research suggests that learner-instructor interaction plays an important role in student satisfaction; therefore, research question four looked at the possible relationship between asynchronous discussion interaction measures and student satisfaction scores based on end-of-course evaluations.
Although a large body of research suggests that classroom participation and engagement are positively associated with student satisfaction (Hrastinski, 2008; Jung et al., 2002; Sher, 2009; Swan, 2004), this study found no such association. There are several possible explanations. First, there could be issues with using an average of the end-of-course evaluation items as a measure of student satisfaction. Another possibility is that discussions were not the only place instructors and students interacted: Huang and Hsiao (2012) identified seven communication tools that facilitate online interaction between learners and instructors, including email, discussion boards, announcements, blogs, streaming audio/video, chat, and web conferencing. A variety of communication tools may have been in use in these online courses, and additional research is needed to fully understand the effects of interaction on student satisfaction.

The U.S. Department of Education has identified regular and substantive interaction between the instructor and students as a standard, and required, practice for online education to be eligible for federal funding (U.S. Department of Education, 2014). Best practices for online education also acknowledge the importance of interaction (Lowenthal & Davidson-Shivers, 2019; Richardson & Swan, 2003; Swan, 2004). And although there are an increasing number of ways to facilitate this interaction, asynchronous discussion boards remain the most popular (Lieberman, 2019). Therefore, this study sought to explore and better understand the frequency of interaction between instructors and students in discussion boards in online courses.

The first major finding was that numbers alone do not tell the whole story. Although LMS data has become more readily available and accessible for analysis, differences in course design and facilitation made it difficult to generalize across all courses. Courses in this study had wildly different discussion practices: some had no discussions at all, while others had over 100. Due to the decentralized development model for online courses at this institution, the differences in the number of discussions cannot be explained from the data. One possibility is that courses with many discussions break students into discussion groups or even pairs, duplicating each discussion so that groups or pairs respond to one another rather than to the entire class. Although an LMS offers other ways of accomplishing this, an instructor may be unaware of them, depending on training; there may also be pedagogical reasons for making group discussions visible to other groups in the course. Without a deeper analysis of course design and facilitation, LMS data tells only part of the story. Therefore, if department chairs or administrators want to use discussion board activity to inform evaluation or student interventions, we suggest they identify appropriate levels of interaction for the courses offered in their programs and consider adding other data points.

Another major finding was that the total posts in a course were not correlated with student satisfaction.
However, additional research would need to be conducted to confirm these results in other contexts and with other instruments for measuring student satisfaction. In the meantime, it is logical to continue to follow best practices, which include participating regularly in discussions, setting expectations, and assigning grades for discussion participation.

In addition, this research makes use of LMS data, which historically has been difficult to obtain. With growing interest in using student data to improve teaching and learning, this research serves as an example of how advances in technology and reduced data storage costs have allowed institutions to take advantage of the tremendous amount of data available in the LMS (Viberg et al., 2018). However, this research also raises questions about whether Canvas data should be used at all. Viberg et al. (2018) suggest that concerns about data privacy, security, and informed consent should be considered as institutions scale research efforts using learning data. Although the data in this study was anonymized and the study was exploratory in nature, it raises questions about how institutions should ensure ethical practices as future research is conducted.

In practical terms, the results from this study could inform department chairs and administrators about general discussion board practices. Using this information, they could target courses with a low number of discussions, or instructors and students with fewer-than-average discussion posts, during the first few weeks of class. By catching low levels of interaction early, support and guidance can be provided to instructors or students to increase interaction throughout the semester. These results could also help instructional designers guide recommendations for future training and support, and could be shared with instructors as a baseline for the minimum interaction that should be occurring in their online classes.

Limitations

The results of this study are limited by its size and scope. The courses, instructors, and students came from a single university using a common LMS, Canvas, and the actual teaching methods used varied. Additionally, this research took a campus-wide view of discussion interactions and did not consider situational variables (e.g., class size, subject matter, faculty experience). We also did not have access to other datasets, such as course grades or retention rates, which would be worthwhile to investigate beyond student satisfaction. Finally, given the exploratory nature of this study, additional research is needed to more fully understand how students and instructors interact in online courses.

Another limitation is that this study focused only on the quantity, and not the quality, of discussion board posts, which in effect addresses only "regular" and not necessarily "substantive" interaction. Analyzing the quality of posts would be needed to truly assess the substance of discussion board interactions; however, such metrics would require significant resources and are not readily available from current learning analytics data. Future research could expand to include the quality of posts, the length of posts, and the extent of threading.
Bliss and Lawrence (2009a) recommend using multi-factor metrics to provide a more complete view of how interactions occur in online discussion boards. Additionally, given the array of best practices for discussion boards, it would be valuable to explore whether the use of best practices, like providing clear guidelines for discussions or grading discussions, has any effect on the quantity or quality of posts. Although not examined in this research, the impact of faculty training on the quantity of interactions may also provide guidance or direction for faculty development organizations.

With access to Canvas data, there are many possibilities to explore. Discussion boards are just one tool for interacting in online courses, and this single metric is not adequate for measuring or ensuring that online courses meet the "regular and substantive" interaction requirement set by the U.S. Department of Education. Future research could look more broadly at the toolset used for communication in online courses to establish metrics for measuring interaction.

Conclusion

The findings of this study build upon current research and theory related to the importance of interaction between students and instructors in online courses. Discussion board activity of students and instructors varied greatly depending on the course, and there was no relationship between the number of discussion board interactions and student satisfaction, as tested in this study. The study nevertheless contributes to research and practice in online education by extending the research on asynchronous discussion boards, and it serves as a proof of concept for additional research that uses data available from the LMS to continue the work of improving online education.

Declarations

Availability of data and material: Not able to share.
Code availability: Not applicable.
Ethics approval: Approved (Protocol Number: 101-SB19-175).
Consent for publication: Not applicable.

References

Online report card: Tracking online education in the United States
Interaction online: A reevaluation
From posts to patterns: A metric to characterize discussion board activity in online courses
Is the whole greater than the sum of its parts? A comparison of small group and whole class discussion board activity in online courses
Time students spend reading threaded discussions in online graduate courses requiring asynchronous participation. International Review of Research in Open and Distributed Learning
Implementing the seven principles: Technology as lever
Seven principles of good practice for undergraduate education
Online discussion boards: The practice of building community for adult learners
Effectiveness of online discussion strategies: A meta-analysis
From message posting to learning dialogues: Factors affecting learner participation in asynchronous discussion
Instructor-learner interaction in online courses: The relative perceived importance of particular instructor actions on performance and satisfaction. Distance Education
Making the connection: Moore's theory of transactional distance and its relevance to the use of a virtual classroom in postgraduate online teacher education
Using best practices in online discussion and assessment to enhance collaborative learning
Critical inquiry in a text-based environment: Computer conferencing in higher education
Social presence as a predictor of satisfaction within a computer mediated conferencing environment
An application of the seven principles of good practice to online courses
Student contribution in asynchronous online discussion: A review of the research and empirical exploration
Relationships between students' and instructional variables with satisfaction and learning from a web based course
Asynchronous and synchronous e-learning
Synchronous and asynchronous communication in an online environment: Faculty experiences and perceptions
Survey of faculty attitudes on technology. Inside Higher Ed and Gallup
Effects of different types of interaction on learning achievement, satisfaction and participation in web-based instruction
A case study of integrating interwise: Interaction, internet self-efficacy, and satisfaction in synchronous online learning environments. International Review of Research in Open and Distance Learning
Examining the relationship among student perception of support, course satisfaction, and learning outcomes in online learning
Legal Information Institute (n.d.). 20 U.S. Code 1003
Discussion boards: Valuable? Overused? Discuss
Strategies used to evaluate online education
Social presence and online discussions: A mixed method investigation
Exploring student perceptions of Flipgrid in online courses
Faculty perceptions of using synchronous video-based communication technology
Generating and facilitating effective online learning through discussion
Sage, guide or ghost? The effect of instructor intervention on student participation in online discussion forums
Designing communities of learners for asynchronous distance education
Three types of interaction
Theory of transactional distance
Evaluating the quality of interaction in asynchronous discussion forums in fully online courses
The myth about online course development
From the periphery to prominence: An examination of the changing profile of online students in American higher education. The Internet and Higher Education
SPSS survival manual
Frequency and pattern of learner-instructor interaction in an online English language learning environment in Vietnam
Assessing online faculty: More than student surveys and design rubrics
Interpreting what is required for "regular and substantive" interaction. WCET Frontiers
Does online education live up to its promise? A look at the evidence and implications for federal policy
Effective teaching styles and instructional design for online learning environments
Optimizing the number of students for an effective online discussion board learning experience. Unpublished master's thesis
Examining social presence in online courses in relation to students' perceived learning and satisfaction
Improving the asynchronous online learning environment using discussion boards. I-manager's
Building a sense of community at a distance
Facilitating online discussions effectively
Grade increase: Tracking distance education in the United States
Assessing the relationship of student-instructor and student-student interaction to student learning and satisfaction in web-based online learning environment
Quantitative social research methods
The effectiveness of teaching and learning process in online education as perceived by university faculty and instructional technology professionals
Bridging the transactional distance gap in online learning environments
In-class and online: Using discussion boards in teaching
Relationships between interactions and learning in online environments. Sloan Consortium
Learning analytics: Where information science and the learning sciences meet. Information and Learning Sciences
Best practices in asynchronous online course discussions
Defining 'regular and substantive' interaction in the online era
DCL id: GEN-14-23. Subject: Competency-based education programs-questions and answers
The current landscape of learning analytics in higher education
Extending the traditional classroom through online discussion: The role of student motivation