
Frontline Learning Research Vol. 6 No. 2 (2018) 92-113
ISSN 2295-3159

Promoting deep learning through online feedback in SPOCs

Renée M. Filius (a), Renske A.M. de Kleijn (c), Sabine G. Uijl (d), Frans J. Prins (c), Harold V.M. van Rijen (b), Diederick E. Grobbee (a)

(a) University Medical Center Utrecht, Julius Center for Health Sciences and Primary Care, the Netherlands
(b) University Medical Center Utrecht, Biomedical Sciences Department, the Netherlands
(c) Utrecht University, Department of Education, the Netherlands
(d) Utrecht University, University College, the Netherlands

Article received 8 March 2018 / revised 18 April / accepted 6 September / available online 12 November

Abstract

Higher education aims for deep learning and increasingly uses a specific form of online education: Small Private Online Courses (SPOCs). The use of feedback may have significant potential for overcoming the challenges instructors face in promoting deep learning through this format. We interviewed eleven instructors and four students and organized a focus group to formulate scalable design propositions for instructors in SPOCs to promote deep learning. Propositions were formulated according to the CIMO-logic. This study identified four mechanisms by which the desired outcome (deep learning) can be achieved, which we describe here along with proposed interventions. Results show that the “online learning interaction model” can be deepened with these mechanisms: 1) Feeling personally committed, 2) Asking and providing relevant feedback, 3) Probing back and forth, and 4) Understanding one’s own learning process. To activate these mechanisms, scalable feedback interventions are described in three categories. Results in the relatively young field of SPOCs also show that feedback as a dialogical process may contribute to solving the current challenges instructors in SPOCs face in achieving deep learning with their students.

Keywords: online learning; deep learning; peer feedback; SPOCs; teaching/learning strategies

Corresponding author: r.m.filius@uu.nl. DOI: 10.14786/flr.v6i2.350

1. Introduction

Deep learning involves critical thinking, integrating what the student is learning with what he or she already knows, and creating new connections (Biggs, Kember, & Leung, 2001; Entwistle, 1991; Marton & Saljo, 1997; Hounsell, 1997), and is related to higher-order thinking skills. Promoting deep learning is an important task for higher education (Biggs & Tang, 2011; Nicolls, 2002; Lynch, McNamara, & Seery, 2012; Ramsden, 1992), which is increasingly conducted online (Geitz, Brinke, & Kirschner, 2015). Small Private Online Courses (SPOCs) are a distinctive form of online education used in higher education, especially over the last decade (Uijl, Filius, & Ten Cate, 2017). Recently, Filius, de Kleijn, Uijl, Prins, van Rijen and Grobbee (2018) found that instructors experience specific challenges when trying to promote deep learning in SPOCs. Their study resulted in a description of five main challenges for instructors: alignment of learning activities, insights into student needs, adaptivity of teaching strategy, social cohesion, and creation of dialogue. These challenges are due to a lack of facial contact and visual cues, as online learning tends to involve mostly asynchronous written interaction, and to the fact that the course material is usually developed and set before the start of the course.

To overcome such challenges to promoting deep learning in online education, the incorporation of feedback as a pedagogical strategy may have significant potential, which is currently not optimally exploited (Lynch, McNamara, & Seery, 2012; Rushton, 2005). Following Carless (2011), we take a broad definition of feedback as “all dialogue to support learning in both formal and informal situations” (Askew & Lodge, 2000, p. 1). This illustrates that we view feedback as a two-way form of interaction and not as a one-way comment from one person to another. It is generally agreed that feedback plays an important role in higher education (Nicol & Macfarlane-Dick, 2006). Feedback supports the development of higher-order skills (Davies & Berrow, 1998) and helps students connect new knowledge to what they already know, contributing to knowledge construction (Nicol, 2009). Engaging students in peer feedback helps develop skills for reflection, self-regulation, and critical thinking (Boud, 2001; Dochy, Segers, & Sluijsmans, 1999; Lin, Liu, & Yuan, 2001; P. M. Sadler & Good, 2006).

Feedback in SPOCs may be even more important than in face-to-face classes, because it increases student-instructor and student-student interaction, and thus compensates for the potential geographical disconnect in online courses that may affect student retention (Dennen, Aubteen Darabi, & Smith, 2007; Richardson, Koehler, Besser, Caskurlu, Lim, & Mueller, 2015). However, instructors are nowadays under pressure to provide high-quality feedback to students promptly, often to large and diverse cohorts (Allan & Bentley, 2012; Nicol, 2009; Planar & Moya, 2016). And even though SPOCs involve small groups, the number of parallel courses running at the same time and the diversity of students may be high, which makes the provision of feedback very time-consuming (Crook et al., 2012). Therefore, this study explores how the challenges instructors face when providing feedback can be overcome, by developing design propositions for instructors in SPOCs to promote deep learning.

2. Design propositions to promote deep learning in SPOCs

2.1 Deep learning

The distinction between deep learning and surface learning as students’ approaches to studying has been supported by the results of previous research (Biggs, Kember, & Leung, 2001; Entwistle, 1991; Marton & Saljo, 1997). Deep and surface learning are considered to be two extremes of a continuum. Surface learning indicates that the learner simply memorizes new ideas. Deep learning is defined as the process of actively integrating new ideas into the existing cognitive structure through critical thinking, integrating what is learned with what was already known, and creating new connections between concepts (Aharony, 2006; Biggs, 1999; Hall, Ramsay, & Raven, 2004). According to Garrison, Anderson, and Archer (2001), in order to promote deep learning, the whole person should be engaged (cognitively, socially, and affectively) in the learning process.

A deep learning approach is more likely to result in better retention and transfer of knowledge (Ramsden & Moses, 1992) and to lead to high-quality learning outcomes such as a good understanding of the discipline and critical thinking skills (Athanassiou, McNett, & Harvey, 2003; Biggs, 1999; Booth, Luckett, & Mladenovic, 1999; Lindblom-Ylänne, 1999; Ramsden & Entwistle, 1983; Trigwell, Prosser, & Waterhouse, 1999). Students are unlikely to experience high-quality learning outcomes or develop appropriate skills and competences through a surface approach to learning (Hall et al., 2004).

2.2 Using feedback to promote deep learning

There are several instruments that instructors can use to promote deep learning, such as concept maps (Hay, 2015), cross-cultural chat (Osman & Herring, 2007), podcasting (Pegrum, Bartle, & Longnecker, 2014), and online asynchronous discussions (Du, Havard, & Li, 2005). One of the most powerful instruments for instructors to influence learning is feedback (Hattie & Timperley, 2007; Kluger & DeNisi, 1996). We argue that feedback through dialogue between instructors and students, among peers, or perhaps even between student and computer may promote deep learning. The purpose of feedback is to reduce the discrepancies between the student’s current understanding or performance and the understanding or performance being aimed for (Hattie & Timperley, 2007). According to Hattie and Timperley, feedback is information provided by a source (e.g., teacher, peer, book, parent, self, experience) regarding aspects of one’s performance or understanding. However, once the feedback has been provided, the receiver needs to process and respond to it. The way the student receives the feedback is just as important as how the provider intended it (Ilgen, Fisher, & Taylor, 1979). Ilgen and colleagues composed a model that divides the student’s processing of feedback into different stages. Emphasis was put on those aspects of feedback that influence: a) the way feedback is perceived, b) its acceptance by the recipient, and c) the willingness of the recipient to respond to it (Ilgen et al., 1979). In line with this model, according to Nicol (2010), Carless, Salter, Yang, and Lam (2011), Boud and Molloy (2013), and Planar and Moya (2016), feedback can be viewed as two-directional and needs to constitute a dialogue between the person who facilitates it and the one who receives it. It must explicitly promote self-regulation and a proactive attitude on the part of the student towards it; at the same time, it needs to focus on the learning process and involve peers. According to Geitz et al. (2015), feedback should be supported by dialogue and by activities that not only inform students about their current performance, but also teach them to seek and ask for feedback on future performances. This puts students more in control. It also enables them to add meaning to the feedback and to discuss it as equals with their peers.

2.3 The role of the instructor and the student

This student-centered approach assumes that the student, rather than the instructor, is the center of the learning process; the instructor becomes a facilitator who guides that process. Garrison, Anderson, and Archer (2000) developed the Community of Inquiry framework, which sheds more light on the instructor’s role in influencing students’ deep learning approaches. In order to promote deep learning, the instructor should aim at three interdependent structural elements of the framework: social, cognitive, and teaching presence. Social presence reflects the development of climate and interpersonal relationships in the community. Cognitive presence describes the progressive phases of practical inquiry leading to resolution of a problem or dilemma. Teaching presence provides leadership throughout the course or study. These three elements that the instructor should focus on in online education show similarities with the “online learning interaction model” of Ke and Xie (2009). Whereas Garrison and colleagues (2000) focus on the teaching activities of the instructor, Ke and Xie focus on the learning activities of the students. Both view interaction as a core indicator of deep learning. Ke and Xie (2009) distinguish three types of student interaction in an online course: 1) social interaction, 2) knowledge construction, and 3) regulation of learning. Their model is based on concepts for deep learning in adult education and helps to examine the quality of online education.

Even though instructors may view interaction as essential to deep learning, given the high student-staff ratio it can be difficult for the instructor to engage in dialogue with students. Thus, instructors look for alternative feedback strategies that are efficient, effective, and less time-consuming (Allan & Bentley, 2012) and that can be implemented in SPOCs. For example, peer feedback strategies have been shown to be beneficial to deep learning (Anderson & Rourke, 2002; Boud, Cohen, & Sampson, 1999; Moon, 2013). The combination of feedback strategies and the specific context of SPOCs leads to a set of design propositions specifically useful for promoting deep learning in SPOCs.

2.4 Design propositions and CIMO-logic

Design propositions are heuristic statements about how and why a pedagogical intervention works in a certain context (Plomp & Nieveen, 2009). A design proposition is intended to be transparent and comprehensive, and described in such a way as to make clear under which conditions it lends itself to generalization to other contexts. In this study, the design propositions are formulated according to the CIMO-logic (van Aken, 2007; van den Akker, 1999) used in the design literature (e.g., Denyer, Tranfield, & van Aken, 2008) and several recent studies (Bronkhorst, Meijer, Koster, & Vermunt, 2011; Brouwer, Brekelmans, Nieuwenhuis, & Simons, 2012; Dobber, Akkerman, Verloop, Admiraal, & Vermunt, 2012). A design proposition describes the specific Context to which it applies, the Intervention proposed, and the Mechanism by which the desired Outcome is achieved: CIMO-logic. The causal relation between the intervention and outcome in the context is (potentially) more plausible when all CIMO components are described (Brouwer et al., 2012). This inclusion of context dependency and of the mechanisms triggered is why the CIMO-logic is preferred over other ways of specifying design propositions in the literature, which are often limited to specifying intervention and outcome.

CIMO-logic determines that a design principle has the following structure: “In this class of problematic contexts, use this intervention type to invoke these generative mechanism(s), to deliver these outcome(s)” (Denyer et al., 2008, p. 395). For example: “If you have a SPOC in which you want students to respond to each other’s contributions and try to look for common understanding (context), support group work (intervention type) to promote deep learning (intended outcome) through probing back and forth (mechanism).” Figure 1 shows how the CIMO-logic has been applied in this study. The context is defined by the specific challenges that instructors in SPOCs experience when aiming to promote deep learning; it elucidates the context dependency of the intervention. Interventions are purposeful measures (products, processes, or activities) formulated by the designer (or instructor) in order to solve a design problem or need (Denyer et al., 2008; Midgley, 2000), for example the need for deep learning. Van Aken (2004) indicates that the key question is not so much whether the intervention works, but what it is about the intervention that makes it work: why does an intervention lead to a certain outcome in a specific context? This is described in the mechanisms. Outcomes are the results of the interventions.

Figure 1. CIMO logic (based on van den Akker, 1999).
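To make the CIMO structure concrete, the worked example above can be written down as a simple record. The following sketch is ours, not part of the CIMO literature; it merely encodes the four components as fields of a Python dataclass.

```python
from dataclasses import dataclass

@dataclass
class DesignProposition:
    """A design proposition structured according to the CIMO-logic."""
    context: str       # C: the class of problematic contexts
    intervention: str  # I: the intervention type proposed
    mechanism: str     # M: the generative mechanism(s) the intervention invokes
    outcome: str       # O: the outcome the intervention should deliver

# The worked example from the text, expressed as a CIMO record.
example = DesignProposition(
    context=("A SPOC in which you want students to respond to each other's "
             "contributions and look for common understanding"),
    intervention="Support group work",
    mechanism="Probing back and forth",
    outcome="Deep learning",
)
```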

2.5 Research question

We believe that taking a design proposition perspective in which interventions, outcomes, and mechanisms are investigated in relation to each other is rather unique, and will provide contextualized conclusions that have both practical and conceptual value. Therefore, this paper addresses the question: “How and why can deep learning in higher education SPOCs be promoted using scalable feedback interventions?” Feedback interventions consist of information that is externally generated and includes tips for improvement (Kluger & DeNisi, 1996). In this study, only scalable feedback interventions have been included; scalable refers to the requirement that it must be possible to increase the number of students involved without increasing the workload of the instructors. By investigating the mechanisms, we aim to answer why an intervention will (or will not) promote deep learning.

3. Methods

3.1 Design

The study design was qualitative and exploratory and used individual interviews with instructors in SPOCs from different fields of study, as well as with students. Since this study focuses on design propositions for instructors, the interviews with the students were used solely to substantiate the interviews with the instructors. This triangulation of the findings supported multiple perspectives rather than only the instructors’ perspective. Moreover, a focus group representing experts from different disciplines was added. According to Powell and Single (1996), in cases where the existing knowledge of a subject is inadequate, as is the case here, a focus group is especially useful and can be employed to gather diverse ideas about possible feedback interventions. The supportive, congenial, non-judgmental setting offered by the focus group enhanced the likelihood of collecting the diverse and spontaneous opinions that eluded the in-depth interviews (Powell & Single, 1996).

The study was approved by the Dutch Ethical Board for research in education (NVMO, the Netherlands Association for Medical Education, Approval No. 210). The NVMO is an independent association that carries out activities for anyone involved in medical and health care education in the Netherlands and Flanders (Belgium).

3.2 Participants

3.2.1 Individual interviews

For the individual interviews with the instructors, the data used in this study were taken from the same dataset as a previous study (Filius et al., 2018); each study used different parts of this dataset. Concerning the selection of participants, we aimed for maximal variation and theoretical sampling (Guba, 1981). Therefore, the first author asked the heads of the Education and IT departments at four institutions to recommend instructors and students from their institutions with experience in teaching or participating in SPOCs. From these recommendations we selected instructors and students in SPOCs with varying years of experience in teaching or following SPOCs. We expected age and experience to be relatively large influencers, more so than, for example, backgrounds. In addition, we included instructors whom we considered to be experts and who are known as keynote speakers at relevant international conferences on online education. We expected them to have a broad view of developments among instructors and to increase the chance that we included as many experiences as possible. The participating instructors and students represent different universities and virtual learning environments: no university and no virtual learning environment was represented more than twice, which resulted in 10 different universities and 8 different virtual learning environments. The purposive sample size for instructors was determined by data saturation, as the collection of more data appeared to have no additional interpretive worth (Guest, Bunce, & Johnson, 2006). In the case of the students, we were looking for counter-evidence to the findings of the interviews with the instructors. After four interviews we had not found any counter-evidence, and we therefore decided not to conduct any additional interviews.

All of the 11 invited instructors and 4 invited students agreed to be interviewed. All instructors (4 female and 7 male) were involved in teaching online courses in higher education. The average age of instructors was 51.8 years (SD=20.0); their average teaching experience was 15 years (SD=19.6), and their average experience with SPOCs was 10.4 years (SD=6.8). Six instructors had 2 years or less of experience with SPOCs; the other 5 instructors had 10 years or more of experience with SPOCs and online distance education. Two of the instructors are also researchers in the field of online education. Additionally, 4 students (3 female and 1 male) were involved, ranging in age from 28 to 52, with an average age of 39 years (SD=9). Two of them had participated in just one SPOC; the others had participated in several SPOCs, varying in duration and study load.

3.2.2 Focus group session

In total, 10 professionals other than the interviewed instructors engaged in the focus group session. They were selected using specific-criterion sampling, a type of purposive sampling in which one concentrates on people with specific characteristics (Palys, 2008). We selected professionals from multiple disciplines and areas of expertise who are known for their open-mindedness, in order to fill in the gaps with more unconventional interventions. Their ages ranged from 23 to 52 years. All of them work in art, technology, and/or education, a number of them at the intersection of several disciplines; the three disciplines were evenly represented. Their job positions were: journalist, artist, product manager of MOOCs, researcher, educational platform manager, educational technologist, and game designer. Some of them were also students or instructors. All participants took part on a voluntary basis.

3.3 Procedure

3.3.1 Individual interviews

Participants were informed of the study’s purpose and approach both in the invitation e-mail and at the start of the interview. This included an explanation of the outcome ‘deep learning’ and of ‘scalable interventions’. During the interviews, the interviewer asked each participant to name several examples of deep learning and compared these with findings in the literature to determine whether their understanding corresponded to that in our previous study; hardly any differences emerged in this respect. Each participant signed a consent form. Interviews were based on an open interview scheme following a qualitative approach (Cohen, Manion, & Morrison, 2013; Creswell, 2007). This was done to do justice to the complexity of the topic as well as to the nature of encapsulated expert knowledge (Berliner, 2001), since the in-depth nature of open interviewing allows informants to answer from their own frame of reference (Bogdan & Biklen, 2003; Cohen et al., 2013). The interviews lasted an average of one hour.

The interview questions for instructors are shown in Table 1. The same questions were asked of students, but from their perspective. Questions were related to the CIMO-logic by asking for the context, the intervention used, the mechanism activated, and the outcome achieved. The deep learning process was operationalized as the initiation of critical thinking, integrating what the student is learning with what he or she already knows, and creating new connections. These three deep learning activities are mental processes which, when initiated, are considered the ‘deep learning outcome’. Specific attention was paid to which interventions had been used and which mechanisms triggered the deep learning activities. By subsequently asking for three statements or golden rules about providing feedback to promote deep learning, participants were encouraged to speak freely about their ideas on what might help to promote deep learning in SPOCs and why. Further questions were asked of all participants to prompt and/or probe for additional information.

Table 1

Interview questions supplemented with probing questions

3.3.2 Focus group session

During the focus group session, a short introduction was provided to present the results of the interviews and to explain the definitions of SPOCs, feedback, and deep learning. Participants were informed about the summarized results of the interviews in terms of contexts, mechanisms, and desired outcomes. They were then asked to brainstorm in three rounds about the results of the interviews, with a different group composition in every round. Following their suggestions in the small groups, they collaboratively discussed the interventions and mechanisms in more depth in order to conclude how feedback may promote deep learning in SPOCs.

3.4 Analysis

The analysis of the data proceeded in stages, using NVivo to code and retrieve the data. First, the interviews and focus group session were audio-recorded and transcribed. To avoid misrepresentation and misinterpretation of interviewees’ statements, the transcript and a summary of the transcription were sent to each participant for member checking (Poortman & Schildkamp, 2012). The focus group participants received a report for verification, created on the basis of the transcript and notes. All participants agreed with the transcribed content. Second, the transcripts of the interviews and focus group session were inductively coded into meaningful categories by the first author, using open coding (Creswell, 2007). Fragments whose code was debatable according to the first researcher, which came to less than 2.5% of all texts, were discussed by the full research team. Next, each meaningful category was classified under the theme Intervention or the theme Mechanism, according to the CIMO-logic. The first author then moved to more selective coding stages in an iterative process: based on the previous round of analysis, the codes were revised, and on the basis of the data some codes were merged, deleted, or reformulated. Subsequently, all data were analyzed again with the new codes. Considering the open and grounded nature of this analysis (Bogdan & Biklen, 2003), at every coding stage all categories were discussed by the research team until agreement on the categories’ content, as well as on the codes, was reached.

To enhance reliability in coding, an independent researcher also analyzed a random sample of approximately 10 percent of the data in order to calculate the inter-rater reliability; the percentage of agreement was 93%. Internal validity was further enhanced by the context-rich, meaningful, and thick description of the results. External validity was promoted by including respondents’ quotes and by describing the coherence with the theoretical framework.
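Percentage of agreement here refers to the standard proportion-of-agreement measure (the underlying counts are not reported in this paper):

```latex
\[
\text{percentage agreement} =
  \frac{\text{number of fragments assigned the same code by both raters}}
       {\text{total number of double-coded fragments}} \times 100\%
\]
```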

Reasoning from the perspective of the CIMO-logic, the interventions that were derived from the data as meaningful categories are suggestions from respondents on how online feedback could overcome the problems mentioned in the specific context of a SPOC. In light of the constraints of shrinking staff budgets and expanding student numbers, only interventions that are scalable, that is, not very time-consuming, were selected as meaningful categories. For example, feedback interventions such as direct conversations between instructor and student using videoconferencing tools were excluded for this reason, despite their potential for achieving deep learning.

The mechanisms derived from the data shed light on why interventions lead to the desired outcome, which is deep learning. Each mechanism was classified into one of the categories of Ke and Xie’s (2009) online learning interaction model: 1) social interaction, 2) knowledge construction, and 3) regulation of learning.

To ensure quality in all of the steps described, an independent researcher audited all steps of data gathering and analysis (Akkerman, Admiraal, Brekelmans, & Oost, 2008). The audit had both a formative and a summative function: the auditor assessed the steps taken several times during the study and again at the end of the study. This resulted in an audit report with questions and answers, mostly about the analysis of the data, and consequently some adjustments were made to the description of the analysis in this article. Thereafter, the auditor reviewed the study again and affirmed it as visible, comprehensible, and transparent: decisions were explicated and communicated, substantiated, and acceptable according to standards, values, and norms.

4. Results

In the results of this study, we describe design propositions to overcome challenges in promoting deep learning in SPOCs according to the CIMO-logic. The design propositions consist of the Context (specific challenges in SPOCs) in which feedback Interventions trigger student Mechanisms that lead to the desired Outcome (deep learning). We start by describing four main student mechanisms by which deep learning (the desired outcome) can be achieved in SPOCs (context). After that, we address how these mechanisms can be triggered by feedback interventions. The letter after each quote refers to either an instructor (I) or a student (S). Where suggestions were additional to those of instructors, we explicitly mention that they originated from students.

4.1 Student Mechanisms

The mechanisms in this study are the processes that are internal to the student. They describe how students engage in learning activities, which largely determines the quality of the learning outcomes they attain (Vermunt & Verloop, 1999). Knowledge concerning the mechanisms sheds light on why interventions lead to the desired outcome, which is deep learning. Mechanisms are: 1) Feeling personally committed, 2) Asking and providing relevant feedback, 3) Probing back and forth, and 4) Understanding one’s own learning process. Each of the mechanisms has been categorized according to the Online Learning Interaction Model (Ke & Xie, 2009) as a) Social, b) Knowledge construction, or c) Regulation.

4.1.1 Mechanism 1: Feeling personally committed (category: social)

If students are personally addressed, they feel personally committed and accept the feedback more easily. According to the instructors, possibilities to do so in online education have not been optimally utilized yet. One of the instructors said: “One of the benefits of online learning, I think, is the transparency. Because students write assignments, receive and give feedback, it is easy to get the picture: he is there, they are there, and those guys over there still don’t get it” (I8). Another instructor explained, “Here’s what I find is the benefit: in a classroom situation you rarely have the opportunity to ask, to focus on what every single student thinks or what every student is thinking about that question. In a classroom you only have a limited amount of time and you may have three, four, five students answer that question, but you don’t know what every student is thinking. In an online course you have the opportunity to get that student to respond to that–every single student to respond to that question and you have the opportunity to provide one-on-one feedback and ask those questions. In a face-to-face classroom I would never know those students who weren’t thinking… You only see the stars, basically” (I7).

Students are more likely to choose a deep learning approach once they feel personally committed, which can be achieved through tailored feedback: “The individualization, the differentiation that you can give to students in an online environment is so much greater than you can do in a face-to-face classroom” (I4).

4.1.2 Mechanism 2: Asking and providing relevant feedback (category: knowledge construction)

To learn how to focus on a deep learning approach, students indicate that it helps them to learn how to ask for feedback, but also how to provide peer feedback that promotes deep learning. Instructors confirm this. One of them adds: “I think it is very instructional for students to provide feedback, for themselves. That they learn how to grade such a piece of work, what criteria are being used. And they will have to keep doing so, later in their life, when they are working at the university or elsewhere” (I6). Students said that they had not been taught how to provide meaningful feedback and that it is hard to learn on one’s own; instruction will thus be useful.

Compared to face-to-face education, students tended to ask for feedback more frequently, simply because it is easier: the opportunity seems to be there 24 hours per day. Both instructors and students also tend to provide feedback faster in online education, because the virtual learning environment enables them to be very quick. Both instructors and students think that this fast way of asking for and providing feedback may promote more of a surface approach to learning. And because the number of feedback requests is so high, it is difficult for all students to get involved in a dialogue with the instructor. An instructor explains how he deals with the large number: “We selected the most important issues and also some examples, and that was what we discussed” (I5). Thus, according to the members of the focus group, it may help students to learn how to prioritize feedback requests.

4.1.3 Mechanism 3: Probing back and forth (category: knowledge construction)

In order for deep learning to occur, students and instructors experienced a need for back-and-forth probing. By presenting ideas and getting feedback on them, ping-ponging back and forth with peers and/or the instructor, students thought deeply and got the opportunity to combine what they already knew with new knowledge. This required an environment in which students felt safe and interacted comfortably with each other and with the instructor. According to a student: “You need to build a relationship with each other in order to be motivated and to be able to accept the feedback, so someone must be open to receiving feedback” (S1). Another instructor explained: “The feedback that works best is the feedback in which you can keep asking questions after each response from the student. As a dialogue. Because that really forces the student to think deeply” (I1). Back-and-forth probing can be either synchronous or asynchronous, but most respondents preferred it to be synchronous: “It becomes snappier, it is easier to ask questions right away, to help the student to take the necessary steps and to think deeper” (I8).

4.1.4 Mechanism 4: Understanding one’s own learning process (category: learning regulation)

Both instructors and students expressed the view that deep learning can be promoted by letting students apply their knowledge, for example in a scenario or case study. Students have to try to apply new information in other contexts, which enables them to create new knowledge and to make connections between prior knowledge and new concepts. They go through various steps and receive feedback on each step. By doing so, they engage themselves in meaningful ways that enable them to reflect deeply on the learning activity and the feedback they have received.

Creating the right feedback for each step requires forward thinking. One of the instructors explains: “I found that very little deep learning occurs online anyway, unless there is some type of a scenario, or they have to apply it in a case study. In other words, it’s forward thinking. I would call it that the deep learning occurs when you have opportunities for forward thinking, forward looking. ‘What would you do if…? What would happen if…? What’s the projection if this?’ And it’s a little bit of what-if/then kind of thinking, that I think precedes all of the other learning. And without that, I don’t think that it really progresses further” (I4). This mechanism prepares students to develop the capacity to regulate their own learning as they progress through higher education.

4.2 Triggering mechanisms through feedback interventions

The mechanisms described above can be triggered by several feedback interventions, which are described in the following three categories: 1) Feedback management, 2) Peer feedback, and 3) Automatic feedback. The mechanisms and interventions are summarized in Figure 2.

Figure 2. Interventions and mechanisms according to the instructors and students.

4.2.1 Feedback management interventions

Feedback management interventions describe how to manage the online monitoring and provision of feedback to and among students in such a way that deep learning is promoted. For each intervention, the dominant mechanisms identified in this study are indicated in italics.

Intervention A: Collect student information in advance

In order to make students feel personally committed and to estimate what feedback is needed, it helps to collect student data before the start of the course. Student data involved learning characteristics, such as education level, results on a pre-test, information on expectations, personal learning objectives, and motivation. Collecting these data benefited the feedback provided, because it enabled instructors to adjust their feedback to the needs of the students and thus make it more specific. The relatively convenient availability of student data in SPOCs compared to face-to-face education may compensate in part for the lack of facial contact. For instructors in SPOCs, knowing more about their students helped them to tailor their feedback to the student’s needs. Specific suggestions for implementing this intervention, mentioned in the interviews and/or focus group, are given in Table 2.

Table 2

Specific suggestions of Intervention A

Intervention B: Monitor progress using a dashboard

Instructors monitored students’ progress during the course using a dashboard. The dashboard provided the instructor with an analysis of student data such as contributions to assignments and discussion forums, questions, completion rates, and grades. It enabled instructors to intervene during the course and provide specific personalized formative feedback, for example when students skipped certain necessary steps or tended to think in the wrong direction. According to instructors, receiving personalized feedback helps students to feel more personally committed and may help them to understand their own learning progress better, especially when the dashboard is visible to the students themselves, as members of the focus group suggested. Specific suggestions for implementing this intervention, mentioned in the interviews and/or focus group, are given in Table 3.

Table 3

Specific suggestions of Intervention B
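As an illustration only: the study does not prescribe a dashboard implementation, but one flagging rule of the kind described above might look like the following sketch, in which the step names and data layout are hypothetical.

```python
# Hypothetical sketch of one dashboard rule: flag students who skipped a
# required course step, so the instructor can follow up with personalized
# formative feedback. Step names and data layout are invented for illustration.

REQUIRED_STEPS = {"pre_test", "assignment_1", "peer_feedback_1"}

def flag_students(completions: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per student, the required steps they have not yet completed."""
    return {
        student: REQUIRED_STEPS - done
        for student, done in completions.items()
        if not REQUIRED_STEPS <= done  # i.e., some required step is missing
    }

completions = {
    "student_1": {"pre_test", "assignment_1", "peer_feedback_1"},
    "student_2": {"pre_test"},
}
print(flag_students(completions))  # flags student_2 with its two skipped steps
```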

Intervention C: Bring requests back to the essentials

Participants in the focus group suggested reducing the number of feedback requests and letting students prioritize the issues they want to receive feedback on. Instructors suggested teaching students how to ask for the right feedback and guiding them during this learning process by reflecting on the type of feedback questions they ask. Instructors in SPOCs expect this to be useful in aligning the learning activities with the learning goals and the assessment goals so that they all promote deep learning. Moreover, it will help students to ask for (more) relevant feedback. Specific suggestions for implementing this intervention, mentioned in the interviews and/or focus group, are given in Table 4.

Table 4

Specific suggestions of Intervention C

Intervention D: Discuss and rate the quality of the feedback

Students suggest that teaching them how to provide relevant feedback may promote deep learning. To do so, participants in the focus group suggested letting students discuss and rate the quality of the feedback they provide and receive. By discussing and rewarding the quality of the feedback provided, students learn how to focus on deep learning and how to increase the quality of their feedback. This might also give students recognition for the effort they make to provide good feedback.

According to the interviewed respondents, feedback to promote deep learning should include many questions that elicit deep learning. A discussion could start with an instruction on, for example, what questions are helpful to promote deep learning, such as what-if/then questions: “What would you do if…?” “What would happen if…?” “What’s the projection if this...?” Specific suggestions for implementing this intervention, mentioned in the interviews and/or focus group, are given in Table 5.

Table 5

Specific suggestions of Intervention D

4.2.2 Peer feedback types

Amongst the participants, peer feedback is considered an appropriate and scalable intervention to activate the mechanism “asking and providing relevant feedback” and, when delivered in dialogue form, the mechanism “probing back and forth.” However, it may also trigger other useful mechanisms. Dominant mechanisms are indicated in italics below. Different types of peer feedback to promote deep learning can be distinguished.

Intervention E: Encourage asynchronous oral peer feedback (audio or video)

Instructors encouraged the involvement of peers in feedback processes and invited them to provide their feedback in spoken form. Even though nearly all interviewed instructors and students had used only written feedback in online education, several instructors and students mentioned the expected potential of oral peer feedback. It was quicker and more personal, and using voice and inflection made it easier to be critical, to deliver bad and good news, and to add nuances. In contrast to written feedback, it added the richness of tone of voice and, when using video, even of facial expressions, which made students feel personally committed and more connected to the course material. And according to one of the instructors, students listened to it more, because they typically accessed it on their smartphones and tablets. Specific suggestions for implementing this intervention, mentioned in the interviews and/or focus group, are given in Table 6.

Table 6

Specific suggestions of Intervention E

Intervention F: Encourage written asynchronous peer feedback

Teaching students how to provide written peer feedback that is focused on deep learning and provided as a dialogue was recommended by both instructors and students and confirmed by members of the focus group. This creates awareness about the type of feedback that can be given and stimulates critical thinking, questioning, and reflecting. When providing feedback in written form, there is more time to think about it thoroughly and to formulate it carefully. By doing so in dialogue form, probing back and forth, students can ask each other questions, reflect, and respond to each other, which encourages deep learning. Compared to oral feedback, written peer feedback was found to promote deep learning even more effectively, because of the more precise feedback students are able to provide.

According to both the interviewed instructors and students, students often learn more from providing feedback than from receiving it. Specific suggestions for implementing this intervention, mentioned in the interviews and/or focus group, are given in Table 7.

Table 7

Specific suggestions of Intervention F

Intervention G: Support group work

Both instructors and students mentioned online group work as a learning method in which deep learning can be promoted through feedback. Instructors steered the students towards different group assignments and stimulated personal commitment and interaction. By doing so, students felt motivated and encouraged to be engaged, to reflect, and to explicate what they had learned. The instructor taught students to suspend their opinions in order to create a dialogue and to construct questions in such a way that higher-order thinking is necessary for the others to answer them. This not only stimulated back-and-forth probing, but also made students feel personally committed, which may have motivated them to work just a little harder. Specific suggestions for implementing this intervention, mentioned in the interviews and/or focus group, are given in Table 8.

Table 8

Specific suggestions of Intervention G

Intervention H: Provide organized synchronous feedback

Instructors organized sessions in which students discussed their work and their feedback. The synchronous character made students feel more personally committed than written feedback did, and the simultaneous communication enabled back-and-forth probing. The prompt feedback gave students the opportunity to adjust their performance immediately. Both students and instructors said that they appreciated the opportunity to ask for immediate clarification, in such a way that the feedback process became a dialogue. Specific suggestions for implementing this intervention, mentioned in the interviews and/or focus group, are given in Table 9.

Table 9

Specific suggestions of Intervention H

4.2.3 Automatic feedback

Intervention I: Add scenario-based multiple choice questions

Add scenario-based multiple choice questions, aimed at deep learning, to the course design. Scenario-based multiple choice questions contain follow-up questions and may be represented by a tree structure. Questions should be asked in such a way that students are encouraged to synthesize information, draw conclusions, support findings, and reflect on them.

Online students appreciate multiple choice questions because of the active method and the immediate feedback, which provides them with an understanding of their own learning process. Even though none of the respondents have experience with multiple choice questions specifically aimed at deep learning, most think it will be possible. It requires much precision and very careful thinking about the questions and the responses, and will therefore be time-consuming during the development phase. However, if the number of students is large enough, the time investment will be worth it. A specific suggestion for implementing this intervention, mentioned in the interviews and focus group, has been added in Table 10.

Table 10

Specific suggestion of Intervention I
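To illustrate the tree structure mentioned above, the following sketch (hypothetical, not taken from the study) represents a scenario-based question whose answer options each carry immediate feedback and, optionally, a follow-up question, so that a chain of what-if/then questions forms a tree.

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioQuestion:
    """A node in a scenario-based multiple choice tree."""
    prompt: str
    # Per answer option: (immediate automatic feedback, follow-up question or None).
    options: "dict[str, tuple[str, ScenarioQuestion | None]]" = field(default_factory=dict)

# A two-level branch with invented content.
follow_up = ScenarioQuestion("What would happen if the student still did not respond?")
root = ScenarioQuestion(
    "What would you do if a student skipped the peer feedback step?",
    options={
        "Ignore it": ("Consider what this means for the feedback dialogue.", follow_up),
        "Ask the student a probing question": ("This invites the student back into the dialogue.", None),
    },
)

# Walking one branch: the chosen answer yields feedback and, here, a deeper question.
feedback, next_question = root.options["Ignore it"]
print(feedback)
print(next_question.prompt if next_question else "End of scenario")
```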

5. Discussion

Promoting deep learning is an important task for higher education, which is increasingly conducted online. SPOCs may be a form of online learning with much potential for deep learning because of their small groups and relatively many interaction possibilities. In a previous study (Filius et al., 2018), we showed that instructors experience specific challenges when trying to promote deep learning in SPOCs. That study resulted in a description of five main challenges: alignment of learning activities, insights into student needs, adaptivity of teaching strategy, social cohesion, and creation of dialogue. To meet these challenges, the incorporation of feedback may have significant potential.

Therefore, the aim of this study was to provide scalable design propositions for instructors in SPOCs to promote deep learning through online feedback. Design propositions have been formulated according to the CIMO-logic. Specific attention was paid to the mechanisms behind the interventions as they are central to the plausibility of a design principle (van Aken, 2004).

The results match the categorization used in the Online Learning Interaction Model of Ke and Xie (2009), which also aims at deep learning. Their three categories can be extended with the mechanisms found in this study. We suggest that interaction in the category “social interaction” may promote deep learning if it makes students feel personally committed; receiving adapted and individualized feedback helps students feel that way. Online learning interaction in the category “knowledge construction” should, in order to promote deep learning, be aimed at probing back and forth, as a dialogical process. This is in line with previous studies such as the work of Rakoczy, Harks, Klieme, Blum, and Hochweber (2013), who indicate that receiving feedback is just as important as providing feedback. To fully exploit the feedback, students should be actively engaged in the feedback dialogue. In that same “knowledge construction” category, we argue that it is important for students to learn how and when to ask for relevant feedback. This is supported by Nicol (2010), who argues that getting students to request feedback, to respond to feedback, and to actively connect feedback to their assignments might result in students paying more attention to, and being able to use, instructor feedback. Geitz et al. (2015) suggest that this may be explained by the fact that learning how and when to ask for exactly what type of feedback helps students to be more in control and to add personal meaning to the feedback. The quality of feedback is important, but the quality of the interaction with the feedback may be even more important. Moreover, it helps instructors to manage their time effectively.

Regarding the third category, “regulation of learning,” interaction to promote deep learning is especially useful when it provides students with more insight into their own learning process. This has been confirmed by other research: students must be equipped with the skills to think for themselves, to set their own goals, and to make improvements to their work while it is being produced (Andrade, Du, & Mycek, 2010; Molloy & Boud, 2013; Narciss, 2013; D. R. Sadler, 2013). Students need to develop awareness and responsiveness so they can detect anomalies or problems for themselves (D. R. Sadler, 2013). According to Topping (1998), these self-regulation skills will serve students not only during their higher education, but also during their future life. Students who are more effective at self-regulation produce better feedback or are more able to use the feedback they generate to achieve their desired goals (Butler & Winne, 1995). Interestingly, it has been shown that peer feedback helps students to obtain these self-regulation skills even better than instructor feedback does (Planar & Moya, 2016). And peer feedback may be useful for instructors to manage their time effectively.

With the current high student-staff ratio, it may be difficult for instructors to engage in dialogue with students. Therefore we specifically aimed for scalable interventions, which possibly excluded several instructor-student interventions. Results suggest that scalability occurs in three categories of interventions. The first category concerns feedback management, which seeks to reduce the range of tasks of the instructor or to better facilitate the instructor. As feedback should be adaptive in order to be effective (Nicol, 2010), and adaptive feedback is considered a challenge in the specific context of SPOCs (Filius et al., 2018), the interventions in this category make the provision of adaptive feedback more feasible. The second category concerns peer feedback, which has been shown to have much potential for promoting deep learning. Boud et al. (1999) suggested that working with peers rather than with the instructor may promote higher-order thinking. Anderson and Rourke (2002) confirmed that discussions by peers were useful in achieving higher-order, but not lower-order, learning objectives, because the controversial perspectives offered by peers disturbed students’ initial understanding of the content and therefore prompted them to process it thoroughly. Based on the results of this study, we suggest that the mechanisms found may play an important role in determining whether the peer feedback interventions will lead to deep learning. Automatic feedback is the third category. Although automatic feedback can be provided for most constructed-response items (Benson, 2010), its use specifically to promote deep learning has, to the best of our knowledge, not yet been fully explored. Since both students and instructors expect that it may lead to deep learning, this paper may prompt further investigation of the use of automatic feedback to promote deep learning in SPOCs.

How might instructors use the findings in this paper? One practical proposal is that instructors in SPOCs examine current feedback practices in relation to the interventions and mechanisms described above. In particular, we expect that combinations of several feedback interventions, triggering multiple mechanisms, may support deep learning in SPOCs. An examination of this kind might help identify where feedback practices could be strengthened. However, the design propositions presented here do not exhaust all interventions that instructors might perform to promote deep learning in SPOCs. They merely provide a starting point and emphasize the importance of framing feedback as a dialogical process with active engagement of students. The research challenge is to refine these design propositions, identify gaps, and gather further evidence about the potential of feedback to promote deep learning.

Learning in an online environment can constitute a positive springboard to the new role that instructors need to take on in an online education model where the student is at the center of the learning process (Planar & Moya, 2016). Given this new role, it is crucial to develop and analyze learning methods that enable a greater amount of dialogue among the students in the learning process (Planar & Moya, 2016). The present study provides relevant insights into how and why deep learning can be promoted in SPOCs. Since this study is exploratory in nature, we recommend that subsequent research examine the findings on a larger scale. For a follow-up study, we also recommend using different methods for assessing deep learning, such as grades or academic performance in general. Furthermore, we deliberately chose to focus primarily on the perspective of the instructors; the perspective of the students was used only as a supplement, and we therefore limited the number of students to four. A next study could include the perspective of the students and compare it with the findings of this study. Future research should address which feedback interventions are better suited to promoting deep learning, while also taking into account the specific learning mechanisms that should be activated within the different contexts and the workload that instructors experience. Future research could also include combinations with instruments other than feedback, such as collaborative assignments, and integrate earlier research on, for example, concept maps (Hay, 2015), cross-cultural chat (Osman & Herring, 2007), podcasting (Pegrum, Bartle, & Longnecker, 2014), and online asynchronous discussions (Du, Havard, & Li, 2005).

As we explore the relatively young field of SPOCs, the results of this study show that feedback as a dialogical process may contribute to solving the current challenges instructors in SPOCs face in achieving deep learning with their students. Specific attention has been paid to the mechanisms that are internal to the student and can be triggered by feedback interventions. Findings concerning the mechanisms shed light on why interventions lead to the desired outcome, which is deep learning. It is essential to continue this line of research and to systematically explore the implementation of the design principles, with respect to both learning processes and learning performance.

Acknowledgements

The authors would like to thank Rianne Bouwmeester, PhD.

Keypoints

References


Aharony, N. (2006). The use of deep and surface learning strategies among students learning English as a foreign language in an internet environment. British Journal of Educational Psychology, 76(4), 851-866. http://dx.doi.org/10.1348/000709905X79158
Akkerman, S., Admiraal, W., Brekelmans, M., & Oost, H. (2008). Auditing quality of research in social sciences. Quality & Quantity, 42(2), 257-274. http://dx.doi.org/10.1007/s11135-006-9044-4
Allan, R., & Bentley, S. (2012, April). Feedback mechanisms: Efficient and effective use of technology or a waste of time and effort? Paper presented at the STEM Annual Conference, Imperial College, London.
Anderson, T., & Rourke, L. (2002). Using peer teams to lead online discussions. Journal of Interactive Media in Education, 1, 1-21.
Andrade, H. L., Du, Y., & Mycek, K. (2010). Rubric-referenced self-assessment and middle school students’ writing. Assessment in Education: Principles, Policy & Practice, 17(2), 199-214. http://dx.doi.org/10.1080/09695941003696172
Askew, S., & Lodge, C. (2000). Gifts, ping-pong and loops: Linking feedback and learning. In S. Askew (Ed.), Feedback for learning (1st ed., pp. 1-17). London: Routledge Falmer. http://dx.doi.org/10.4324/9780203017678
Athanassiou, N., McNett, J. M., & Harvey, C. (2003). Critical thinking in the management classroom: Bloom's taxonomy as a learning tool. Journal of Management Education, 27(5), 533-555. http://dx.doi.org/10.1177/1052562903252515
Benson, A. D. (2010). Assessing participant learning in online environments. Facilitating learning in online environments: New Directions for Adult and Continuing Education, (100), 69. http://dx.doi.org/10.1002/ace.120
Berliner, D. C. (2001). Learning about and learning from expert teachers. International Journal of Educational Research, 35(5), 463-482. http://dx.doi.org/10.1016/S0883-0355(02)00004-6
Biggs, J. (1999). What the student does: Teaching for enhanced learning. Higher Education Research & Development, 18(1), 57-75. http://dx.doi.org/10.1080/07294360.2012.642839
Biggs, J., Kember, D., & Leung, D. Y. (2001). The revised two-factor study process questionnaire: R-SPQ-2F. British Journal of Educational Psychology, 71(1), 133-149. http://dx.doi.org/10.1348/000709901158433
Biggs, J., & Tang, C. (2011). Teaching for quality learning at university. Berkshire: The Society for Research into Higher Education and Open University Press.
Bogdan, R., & Biklen, S. K. (2003). Qualitative research for education: An introduction to theories and methods. New York: Pearson.
Booth, P., Luckett, P., & Mladenovic, R. (1999). The quality of learning in accounting education: The impact of approaches to learning on academic performance. Accounting Education, 8(4), 277-300. http://dx.doi.org/10.1080/096392899330801
Boud, D. (2001). Peer learning and assessment. In D. Boud, R. Cohen, & J. Sampson (Eds.), Peer learning in higher education (1st ed., pp. 67-84). London: Kogan Page Limited.
Boud, D., Cohen, R., & Sampson, J. (1999). Peer learning and assessment. Assessment & Evaluation in Higher Education, 24(4), 413-426. http://dx.doi.org/10.1080/0260293990240405
Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698-712. http://dx.doi.org/10.1080/02602938.2012.691462
Bronkhorst, L. H., Meijer, P. C., Koster, B., & Vermunt, J. D. (2011). Fostering meaning-oriented learning and deliberate practice in teacher education. Teaching and Teacher Education, 27(7), 1120-1130. http://dx.doi.org/10.1016/j.tate.2011.05.008
Brouwer, P., Brekelmans, M., Nieuwenhuis, L., & Simons, R. (2012). Fostering teacher community development: A review of design principles and a case study of an innovative interdisciplinary team. Learning Environments Research, 15(3), 319-344. http://dx.doi.org/10.1007/s10984-012-9119-1
Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65(3), 245-281. http://dx.doi.org/10.3102/00346543065003245
Carless, D., Salter, D., Yang, M., & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36(4), 395-407. http://dx.doi.org/10.1080/03075071003642449
Cohen, L., Manion, L., & Morrison, K. (2013). Research methods in education. London: Routledge. http://dx.doi.org/10.4324/9781315456539
Creswell, J. W. (2007). Qualitative inquiry and research design: Choosing among five approaches. London: SAGE Publishing.
Crook, A., Mauchline, A., Maw, S., Lawson, C., Drinkwater, R., Lundqvist, K., ... & Park, J. (2012). The use of video technology for providing feedback to students: Can it enhance the feedback experience for staff and students? Computers & Education, 58(1), 386-396. http://dx.doi.org/10.1016/j.compedu.2011.08.025
Davies, R., & Berrow, T. (1998). An evaluation of the use of computer supported peer review for developing higher-level skills. Computers & Education, 30(1), 111-115. http://dx.doi.org/10.1016/S0360-1315(97)00086-9
Dennen, V. P., Aubteen Darabi, A., & Smith, L. J. (2007). Instructor-learner interaction in online courses: The relative perceived importance of particular instructor actions on performance and satisfaction. Distance Education, 28(1), 65-79. http://dx.doi.org/10.1080/01587910701305319
Denyer, D., Tranfield, D., & van Aken, J. E. (2008). Developing design propositions through research synthesis. Organization Studies, 29(3), 393-413. http://dx.doi.org/10.1177/0170840607088020
Dobber, M., Akkerman, S. F., Verloop, N., Admiraal, W., & Vermunt, J. D. (2012). Developing designs for community development in four types of student teacher groups. Learning Environments Research, 15(3), 279-297. http://dx.doi.org/10.1007/s10984-012-9116-4
Dochy, F., Segers, M., & Sluijsmans, D. (1999). The use of self-, peer and co-assessment in higher education: A review. Studies in Higher Education, 24(3), 331-350. http://dx.doi.org/10.1080/03075079912331379935
Du, J., Havard, B., & Li, H. (2005). Dynamic online discussion: task‐oriented interaction for deep learning. Educational Media International, 42(3), 207-218. http://dx.doi.org/10.1080/09523980500161221
Entwistle, N. J. (1991). Approaches to learning and perceptions of the learning environment. Higher Education, 22(3), 201-204. http://dx.doi.org/10.1007/BF00132287
Fan, X., Miller, B. C., Park, K. E., Winward, B. W., Christensen, M., Grotevant, H. D., & Tai, R. H. (2006). An exploratory study about inaccuracy and invalidity in adolescent self-report surveys. Field Methods, 18(3), 223-244. http://dx.doi.org/10.1177/1525822X06289161
Filius, R.M., de Kleijn, R.A.M., Uijl, S.G., Prins, F.J., van Rijen, H.V.M. and Grobbee, D.E. (2018). Challenges concerning deep learning in SPOCs. International Journal of Technology Enhanced Learning, 10(1-2), 111-127. http://dx.doi.org/10.1504/IJTEL.2018.088341
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. Internet and Higher Education, 2(2–3), 87−105. http://dx.doi.org/10.1016/S1096-7516(00)00016-6
Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7-23. http://dx.doi.org/10.1080/08923640109527071
Geitz, G., Brinke, D. J., & Kirschner, P. A. (2015). Goal orientation, deep learning, and sustainable feedback in higher business education. Journal of Teaching in International Business, 26(4), 273-292. http://dx.doi.org/10.1080/08975930.2015.1128375
Guba, E. G. (1981). Criteria for assessing the trustworthiness of naturalistic inquiries. ECTJ, 29(2), 75-91.
Guest, G., Bunce, A., & Johnson, L. (2006). How many interviews are enough? An experiment with data saturation and variability. Field Methods, 18(1), 59-82.
Hall, M., Ramsay, A., & Raven, J. (2004). Changing the learning environment to promote deep learning approaches in first-year accounting students. Accounting Education, 13(4), 489-505. http://dx.doi.org/10.1080/0963928042000306837
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112. http://dx.doi.org/10.3102/003465430298487
Hay, D.B. (2007). Using concept maps to measure deep, surface and non-learning outcomes. Studies in Higher Education, 32(1), 39-57. http://dx.doi.org/10.1080/03075070601099432
Hounsell, D. (1997). Contrasting conceptions of essay-writing. In F. Marton, D. Hounsell, & N. Entwistle (Eds.), The experience of learning (pp. 106-125). Edinburgh: Scottish Academic Press.
Ilgen, D. R., Fisher, C. D., & Taylor, M. S. (1979). Consequences of individual feedback on behavior in organizations. Journal of Applied Psychology, 64(4), 349-371. http://dx.doi.org/10.1037/0021-9010.64.4.349
Ke, F., & Xie, K. (2009). Toward deep learning for adult students in online courses. The Internet and Higher Education, 12(3), 136-145. http://dx.doi.org/10.1016/j.iheduc.2009.08.001
Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254-284. http://dx.doi.org/10.1037/0033-2909.119.2.254
Lin, S. S., Liu, E. Z., & Yuan, S. (2001). Web-based peer assessment: Feedback for students with various thinking-styles. Journal of Computer Assisted Learning, 17(4), 420-432. http://dx.doi.org/10.1046/j.0266-4909.2001.00198.x
Lindblom-Ylänne, S. (1999). Studying in a traditional medical curriculum: Study success, orientations to studying and problems that arise. Helsinki: Printing House.
Lynch, R., McNamara, P. M., & Seery, N. (2012). Promoting deep learning in a teacher education programme through self- and peer-assessment and feedback. European Journal of Teacher Education, 35(2), 179-197. http://dx.doi.org/10.1080/02619768.2011.643396
Marton, F., & Saljo, R. (1997). Approaches to learning. In F. Marton, D. Hounsell, & N. J. Entwistle (Eds.), The experience of learning: Implications for teaching and studying in higher education (1st ed., pp. 39-58). Edinburgh: Scottish Academic Press.
Midgley, G. (2000). Systemic intervention. In G. Midgley (Ed.), Systemic intervention: Philosophy, methodology, and practice (1st ed., pp. 113-133). New York: Springer US. http://dx.doi.org/10.1007/978-1-4615-4201-8
Molloy, E., & Boud, D. (2013). Changing conceptions of feedback. In E. Molloy & D. Boud (Eds.), Feedback in higher and professional education: Understanding it and doing it well (1st ed., pp. 11-33). London: Routledge.
Moon, J. A. (2013). Reflection in learning and professional development: Theory and practice. London: Routledge. http://dx.doi.org/10.4324/9780203822296
Narciss, S. (2013). Designing and evaluating tutoring feedback strategies for digital learning. Digital Education Review, 23, 7-26.
Nicol, D. (2009). Assessment for learner self-regulation: Enhancing achievement in the first year using learning technologies. Assessment & Evaluation in Higher Education, 34(3), 335-352. http://dx.doi.org/10.1080/02602930802255139
Nicol, D. (2010). From monologue to dialogue: Improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35(5), 501-517. http://dx.doi.org/10.1080/02602931003786559
Nicolls, G. (2002). Developing teaching and learning in higher education. London: Routledge Falmer. http://dx.doi.org/10.4324/9780203469231
Osman, G., & Herring, S. C. (2007). Interaction, facilitation, and deep learning in cross-cultural chat: A case study. The Internet and Higher Education, 10(2), 125-141. http://dx.doi.org/10.1016/j.iheduc.2007.03.004
Palys, T. (2008). Purposive sampling. The Sage Encyclopedia of Qualitative Research Methods, 2, 697-698.
Pegrum, M., Bartle, E., & Longnecker, N. (2015). Can creative podcasting promote deep learning? The use of podcasting for learning content in an undergraduate science unit. British Journal of Educational Technology,46(1), 142-152. http://dx.doi.org/10.1111/bjet.12133
Planar, D., & Moya, S. (2016). The effectiveness of instructor personalized and formative feedback provided by instructor in an online setting: Some unresolved issues. Electronic Journal of E-Learning, 14(3), 196-203.
Plomp, T., & Nieveen, N. (Eds.) (2009). An introduction to educational design research: Proceedings of the seminar conducted at the East China Normal University, Shanghai. Enschede, The Netherlands: SLO – The Netherlands Institute for Curriculum Development.
Poortman, C., & Schildkamp, K. (2012). Alternative quality standards in qualitative research? Quality & Quantity, 46(6), 1727-1751. http://dx.doi.org/10.1007/s11135-011-9555-5
Powell, R. A., & Single, H. M. (1996). Focus groups. International Journal for Quality in Health Care, 8(5), 499-504. http://dx.doi.org/10.1093/intqhc/8.5.499
Rakoczy, K., Harks, B., Klieme, E., Blum, W., & Hochweber, J. (2013). Written feedback in mathematics: Mediated by students’ perception, moderated by goal orientation. Learning and Instruction, 27, 63-73. http://dx.doi.org/10.1016/j.learninstruc.2013.03.002
Ramsden, P. (1992). Learning to teach in higher education. London: Routledge. http://dx.doi.org/10.4324/9780203413937
Ramsden, P., & Entwistle, N. (1983). Understanding student learning. Kent: Croom Helm.
Ramsden, P., & Moses, I. (1992). Associations between research and teaching in Australian higher education. Higher Education, 23(3), 273-295. http://dx.doi.org/10.1007/BF00145017
Richardson, J. C., Koehler, A. A., Besser, E. D., Caskurlu, S., Lim, J., & Mueller, C. M. (2015). Conceptualizing and investigating instructor presence in online learning environments. The International Review of Research in Open and Distributed Learning, 16(3), 256-297. http://dx.doi.org/10.19173/irrodl.v16i3.2123
Rushton, A. (2005). Formative assessment: A key to deep learning? Medical Teacher, 27(6), 509-513. http://dx.doi.org/10.1080/01421590500129159
Sadler, D. R. (2013). Opening up feedback. In S. Merry, M. Price, D. Carless, & M. Taras (Eds.), Reconceptualising feedback in higher education: Developing dialogue with students (1st ed., pp. 54-63). London: Routledge. http://dx.doi.org/10.4324/9780203522813
Sadler, P. M., & Good, E. (2006). The impact of self- and peer-grading on student learning. Educational Assessment, 11(1), 1-31. http://dx.doi.org/10.1207/s15326977ea1101_1
Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68(3), 249-276. http://dx.doi.org/10.3102/00346543068003249
Trigwell, K., Prosser, M., & Waterhouse, F. (1999). Relations between teachers’ approaches to teaching and students’ approaches to learning. Higher Education, 37(1), 57-70.
Uijl, S., Filius, R., & Ten Cate, O. (2017). Student interaction in small private online courses. Medical Science Educator, 1-6. http://dx.doi.org/10.1007/s40670-017-0380-x
van Aken, J. E. (2004). Management research based on the paradigm of the design sciences: The quest for field-tested and grounded technological rules. Journal of Management Studies, 41(2), 219-246. http://dx.doi.org/10.1111/j.1467-6486.2004.00430.x
van Aken, J. E. (2007). Developing organization studies as an applied science using a triple learning approach. Paper presented at the third organization studies summer workshop, Greece.
van den Akker, J. J. H. (1999). Principles and methods of development research. In J. J. H. van den Akker, R. M. Branch, K. Gustafson, N. Nieveen, & T. Plomp (Eds.), Design approaches and tools in education and training (1st ed., pp. 1-14). Dordrecht: Springer Netherlands. http://dx.doi.org/10.1007/978-94-011-4255-7
Vermunt, J. D., & Verloop, N. (1999). Congruence and friction between learning and teaching. Learning and Instruction, 9(3), 257-280. http://dx.doi.org/10.1016/S0959-4752(98)00028-0