Validating an Approach to Examining Cognitive Engagement Within Online Groups

Peter K. Oriogun
Department of Computing, Communications Technology and Mathematics, London Metropolitan University

Andrew Ravenscroft and John Cook
Learning Technology Research Institute, London Metropolitan University

Existing tools for measuring cognitive engagement in online discussion have been concerned only with an individual participant's cognitive engagement, not with the cognitive engagement of a group as a whole. There remains a serious need for a scheme that measures the cognitive engagement of groups and for validation of such a scheme against existing methods. The SQUAD approach, a semistructured approach to computer-mediated communication (CMC) discourse whose coding categories scaffold online groups' engagement, invites students within their respective groups to post messages in five given categories: (a) suggestion, (b) question, (c) unclassified, (d) answer, and (e) delivery. In this article, the authors validate the SQUAD approach at the message level against an established framework, the practical inquiry model, for assessing cognitive presence in CMC discourse. They adopt the alignments suggested by one of the developers of the sentence-level Transcript Analysis Tool to assess students' cognitive engagement within online groups in three case studies presented in this article. The authors argue that the cognitive presence attributed to the SQUAD approach has been empirically validated with respect to cognitive engagement within groups online. The three case studies illustrate the authors' approach to negotiating and reconciling problem-solving task requirements for software engineering online. The three groups of students made effective use of all the message categories for cognitive engagement within groups online.

The American Journal of Distance Education, 19(4), 197–214. Copyright © 2005, Lawrence Erlbaum Associates, Inc.

Correspondence should be sent to Peter K. Oriogun, London Metropolitan University, Department of Computing, Communications Technology and Mathematics, Learning Technology Research Institute, 2-16 Eden Grove, London N7 8EA, England. E-mail: p.oriogun@londonmet.ac.uk

It has been suggested that the collaborative learning that occurs while learners interact to create a collective solution to a given task or problem yields a cognitive benefit (Johnson and Johnson 1996). In such situations, learners may be encouraged to foster positive social interdependence, for example by helping each other within the group to realize their potential through continuous and sustained feedback. Consequently, a collaborative, problem-based learning process can help create an atmosphere in which learners are able to reflect on their own progress within the group and within a collective dedicated to completing a given task. Such a group communication medium can provide learners with the opportunity to exchange ideas with one another and receive feedback from their peers. One way of engaging learners in online collaborative learning is to create an environment in which knowledge emerges and is shared.
The onus is therefore on the tutor/instructor to (1) create an environment in which knowledge emerges and is shared through the collaborative work within a group of students and (2) facilitate the sharing of information and knowledge among members of a learning team instead of controlling the delivery and pace of course content. The SQUAD (suggestion/question/unclassified/answer/delivery) approach to online discourse (Oriogun 2003b, 2005) adopts a problem-based learning model (Barrows 1996; Bridges 1992; Oriogun, French, and Haynes 2002), allows groups of learners to interact for the purpose of creating a collective solution to a given task or problem, and provides a way of measuring students' online learning levels of engagement (Oriogun 2003b) by

• creating the atmosphere that will motivate students to learn in a group setting online (where students are able to trigger a discussion within their respective groups);
• promoting group interactions and participation over the problem to be solved by the group online (where students can explore various possibilities within the group by actively contributing to the group);
• helping learners to build up a knowledge base of relevant facts about the problem to be solved online (where students can begin to integrate their ideas to influence others within their group);
• allowing the newly acquired knowledge to be shared by the group online with the aim of solving the given problem collaboratively and collectively (where students can resolve issues relating to the assigned work to be completed collectively); and
• delivering various artifacts leading to a solution or a number of solutions to the problem to be solved online (where students can both integrate and resolve aspects of the problem to be solved collectively).

Garrison, Anderson, and Archer's (2001) definition and use of trigger, exploration, integration, and resolution is in line with the SQUAD approach's usage of these same terms. This is why we have opted to validate SQUAD against Garrison et al.'s (2001) framework.

An examination of the existing literature to date has revealed that there are no tools for measuring the cognitive elements of groups of people working on a particular task or problem online, such as a group's coursework for a module or course. There are tools available for investigating the cognitive elements of individuals working online (Fahy 2002; Garrison, Anderson, and Archer 2001; Hara, Bonk, and Angeli 2000; Henri 1992; Oriogun 2003a; Oriogun and Cook 2003). In this article, we adopt the theoretical framework of two recently developed tools commonly used for analyzing students' cognitive elements online at the individual level (Fahy 2002; Garrison, Anderson, and Archer 2000, 2001) to validate, at the group level, the cognitive engagement of groups of students working within the SQUAD approach. We adopted Fahy's (2002) three suggested alignments of the Transcript Analysis Tool (TAT) categories with Garrison, Anderson, and Archer's (2001) model as a framework for realizing cognitive presence in the SQUAD approach (Oriogun 2003b, 2005). We used three case studies from three groups of master's computing students who used the SQUAD environment (the software tool supporting this new approach) to negotiate and reconcile software requirements online during the two semesters of the 2003–2004 academic year at London Metropolitan University. Each of the three case studies covered a period of twelve consecutive weeks.
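In practical terms, every message a student posts is tagged with exactly one of the five SQUAD categories, and the per-category tallies for each group become the raw data for the analysis reported later. The following sketch is only an illustration of that bookkeeping under our reading of the approach; the names (Posting, tally_by_category) are hypothetical and this is not the SQUAD software tool itself.

```python
# Illustrative sketch of SQUAD bookkeeping: each posting carries exactly one
# of the five message categories, and a group's postings are tallied per
# category. Names are hypothetical, not taken from the SQUAD tool.
from collections import Counter
from dataclasses import dataclass

SQUAD_CATEGORIES = {
    "S": "Suggestion",
    "Q": "Question",
    "U": "Unclassified",
    "A": "Answer",
    "D": "Delivery",
}

@dataclass
class Posting:
    group: str
    author: str
    category: str  # one of "S", "Q", "U", "A", "D"
    text: str

def tally_by_category(postings):
    """Count how many postings fall in each SQUAD category."""
    counts = Counter(p.category for p in postings)
    return {cat: counts.get(cat, 0) for cat in SQUAD_CATEGORIES}

# Example: the first two postings of a hypothetical group.
postings = [
    Posting("Group 1", "student_a", "Q", "Which artifacts are due in week 3?"),
    Posting("Group 1", "student_b", "S", "I suggest we split the use cases."),
]
print(tally_by_category(postings))
# {'S': 1, 'Q': 1, 'U': 0, 'A': 0, 'D': 0}
```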
The first group of students posted a total of 725 messages, the second group posted 143 messages, and the third group posted 171 messages. The unit of transcript analysis for the SQUAD approach was the message level. By message level we mean a unit of online transcript analysis that is objectively identifiable; unlike other units of online transcript analysis, the message-level unit allows multiple coders to agree consistently on the total number of cases. It also produces a manageable set of cases. If the cognitive presence realized in this article for the SQUAD approach is accepted, then by using Fahy's (2002) alignments within Garrison et al.'s (2001) framework, together with the case studies we present in support of our argument, we have provided a way of empirically validating Oriogun's (2003b) SQUAD approach with respect to cognitive engagement within online groups.

Cognitive Presence in Fahy et al.'s (2000) Transcript Analysis Tool

A number of researchers have developed analytical tools for analyzing online transcripts. Fahy et al. (2000) developed the TAT, based on Zhu's (1996) earlier work; it operates at the sentence level of analysis, comparing the frequencies and proportions of five categories, or sentence types, in a particular data set. Fahy et al.'s five coding categories are shown in Figure 1. When Fahy (2002) examined the cognitive presence model, he realized that the categories of the TAT might be aligned with the phases in Garrison, Anderson, and Archer's (2001) model, with the resulting alignments reflecting different assumptions about the linguistic and social behavior associated with the model's phases. From three such alignments an analysis was produced, allowing a comparison of both the analytic processes involved and the resulting richness of the insights provided. In aligning the TAT with the phases of the cognitive presence model, interpretation was required. Garrison, Anderson, and Archer (2001, 14) found that elements fit multiple categories; three different alignments of the TAT categories with the model were therefore produced, based on different assumptions about what interactive behavior is apparent in the four phases of cognition (Fahy 2002). The Transcript Analysis Tool alignments with the phases of the model are shown in Table 1; the equivalent mapping of the SQUAD categories is shown in Table 3. These alignments are the basis of this article.

Cognitive Indicators in Oriogun's (2003b) SQUAD Approach to CMC Discourse

The SQUAD approach (Oriogun 2003b) to computer-mediated communication (CMC) discourse provides a means through which statistics compiled from students' online discourse can be used to generate objective estimations of their degree of learning engagement. The cognitive indicators of the SQUAD approach are based on Henri's (1992) cognitive indicators. The cognitive descriptors, adapted from Hara, Bonk, and Angeli (2000), are shown in Table 2.

Mapping the TAT Categories to the SQUAD Categories

Our use of mapping in this article refers to the tools being equivalent for measurement purposes. The following section explains how we have mapped the SQUAD categories within Fahy's (2002) TAT alignments to realize our SQUAD alignments with Garrison, Anderson, and Archer's (2001) framework.

The TAT category 1A includes vertical questions, which assume that a "correct" answer exists and that the question can be answered if the appropriate individual is asked or the right source contacted.
The TAT category 1B comprises horizontal questions, for which there may not be one right answer; others are invited to help provide a plausible or alternative answer or to help shed light on the question (Fahy 2002). The SQUAD category Q is a form of words addressed to a person to elicit information or evoke a response. An example of a question within the SQUAD framework is when students seek clarification from the tutor or other students in order to make appropriate decisions relating to the group coursework (Oriogun 2003b). We can, therefore, comfortably infer that the horizontal and vertical questions of the TAT model equate to the definition offered for category Q within the SQUAD framework.

Figure 1. Fahy et al.'s (2000) Transcript Analysis Tool Coding Categories. (Figure reprinted by permission of the Alberta Journal of Educational Research, from Patrick J. Fahy, Gail Crawford, Mohamed Ally, Peter Cookson, Verna Keller, and Frank Prosser, "The Development and Testing of a Tool for Analysis of Computer Mediated Conferencing Transcripts," Alberta Journal of Educational Research, Vol. 46, No. 1, 2000, pp. 85–88.)

Table 1. Alignments of the Cognitive Presence Model (Garrison, Anderson, and Archer 2000, 2001) With the Transcript Analysis Tool Categories (Fahy 2002)

Alignment   Triggers      Exploration   Integration   Resolution
1           1A, 1B        2A, 4         2B, 5A, 5B    3
2           1A, 1B, 2B    2A            4, 5A, 5B     3
3           1A, 1B, 2B    2A, 4         3             5A, 5B

From "Assessing Critical Thinking Processes in a Computer Conference," by P. J. Fahy, 2002. Used by permission.

The TAT category 2A includes non-referential statements, which contain little self-revelation and usually do not invite response or dialogue; the main intention is to impart facts or information. The speaker may take a matter-of-fact, a didactic, or even a pedantic stance, providing information or correction to an audience that he or she appears to assume is uninformed or in error, but curious and untested or otherwise open to information or correction. This type of statement may contain implicit values or beliefs, but usually these are inferred and are not as explicit as they are in TAT type 3 reflections (Fahy 2002). The SQUAD category U covers messages that are not in the list of categories stipulated by the instigator of the task at hand. This tends to happen at the start of the online postings, when students may be unsure of what the message is supposed to convey; in most cases, such a message falls within one of the four classified categories (Oriogun 2003b). It is, therefore, reasonable to infer that the U category within the SQUAD framework has a direct mapping to the 2A category within the TAT model.

The TAT category 2B, referential statements, comprises direct answers to questions or comments that refer to specific preceding statements (Fahy 2002). The SQUAD category A is a reply, either spoken or written, to a question, request, letter, or article. Students are expected to respond to this type of message with a range of possible solutions or alternatives.
Also, the SQUAD category S is the process whereby the mere presentation of an idea to a receptive individual leads to the acceptance of the idea; students engage with other students within their coursework groups by offering advice, a viewpoint, or an alternative viewpoint to a current one (Oriogun 2003b). It is reasonable to accept that the SQUAD categories A and S equate to the TAT category 2B.

The TAT category 3, reflections, shows the speaker expressing thoughts, judgments, opinions, or information that are personal and are usually guarded or private. The speaker may also reveal personal values, beliefs, doubts, convictions, and ideas acknowledged as personal. The listener/reader receives both information about some aspect of the world (in the form of opinions) and insights into the speaker. Listeners are assumed to be interested in and empathic toward these personal revelations and are expected to respond with understanding and acceptance. The speaker implicitly welcomes questions (even personal ones), as well as self-revelations in turn, and other supportive responses (Fahy 2002). The SQUAD category S described earlier is focused on what the group has to deliver for their group coursework and does not necessarily deal with significant personal revelation in the sense of the TAT definition. However, an individual's personal thoughts on the group's coursework deliverables are part of what is dealt with here.

The SQUAD S category also encourages what is described within the TAT model as category 4, scaffolding/engaging. Students are expected to initiate, continue, or acknowledge interpersonal interaction, and/or "warm" and personalize the discussion. They do this by agreeing with, thanking, or otherwise recognizing someone else and encouraging or recognizing the helpfulness, ideas and comments, capabilities, and experience of others.

Table 2. The SQUAD Approach: Cognitive Indicators Coding Categories Descriptors (Oriogun 2003b)

S (Suggestion). Description: the process whereby the mere presentation of an idea to a receptive individual leads to the acceptance of the idea. Example: students engage with other students within their coursework groups by offering advice, a viewpoint, or an alternative viewpoint to a current one. Cognitive indicators: elementary classification; in-depth classification; inferencing; judgment; application of strategies.

Q (Question). Description: a form of words addressed to a person to elicit information or evoke a response. Example: students may seek clarification from the tutor or other students to make appropriate decisions relating to the group coursework. Cognitive indicators: elementary classification; in-depth classification.

U (Unclassified). Description: not in the list of categories of messages stipulated by the instigator of the task at hand; this tends to happen at the start of the online postings. Example: students may be unsure of what the message is supposed to convey; in most cases, it falls within one of the four classified categories. Cognitive indicators: elementary classification.

A (Answer). Description: a reply, either spoken or written, to a question, request, letter, or article. Example: students are expected to respond to this type of message with a range of possible solutions/alternatives. Cognitive indicators: elementary classification; in-depth classification; inferencing; judgment.

D (Delivery). Description: the act of distribution of goods, mail, and so on. Example: students are expected to produce a piece of software at the end of the semester, and all have to participate in delivering aspects of the artifacts making up the software. Cognitive indicators: elementary classification; in-depth classification; inferencing; judgment; application of strategies.

Reprinted by permission from "Towards Understanding Online Learning Levels of Engagement Using the SQUAD Approach to CMC Discourse," by P. K. Oriogun, Australian Journal of Educational Technology, Vol. 19, No. 3, 2003, pp. 371–387. Available online at http://www.ascilite.org.au/ajet/ajet19/oriogun.html
The SQUAD category D is the act of distribution of goods, mail, and other items. This is where students are expected to produce a piece of software at the end of the semester; they all have to participate in delivering aspects of the artifacts making up the software (Oriogun 2003b). At this point, students may show their appreciation of part of the group coursework deliverable by responding with comments with little real substantive meaning (phatic communion, elevator/weather talk, salutations/greetings, and closings/signatures) and devices such as obvious rhetorical questions and emoticons (Fahy 2002).

The TAT categories 5A and 5B deal with quotations/citations; these relate to quotations or fairly direct paraphrases of sources and to citations or attributions of quotations or paraphrases. Within the SQUAD framework, category S deals with quotations/citations in exactly the same way as in the TAT model.

Table 3 shows our proposed alignments of cognitive presence (Garrison, Anderson, and Archer 2000, 2001) in Oriogun's (2003b) SQUAD approach, obtained by adopting the TAT model (Fahy 2002) coding categories on the basis of the mapping articulated earlier. Please note that the SQUAD alignments with the TAT are such that, for each alignment, more than one SQUAD category may appear within any of the four phases of the practical inquiry model we are considering in this article.

Table 3. Proposed Alignment of Cognitive Presence (Garrison, Anderson, and Archer 2000, 2001) in Oriogun's SQUAD Approach by Adopting the Transcript Analysis Tool Model (Fahy 2002) Coding Categories

Alignment   Triggers   Exploration   Integration   Resolution
1           Q          U, S          A, S          S, D
2           Q, A       U             S, D          S, D
3           Q, A       U, S          S, D          S

Note: SQUAD = Suggestion, Question, Unclassified, Answer, Delivery.

Method

A second version of a tool supporting the SQUAD approach has now been developed: SQUAD v2.0 (Oriogun and Ramsay 2005). In this article, we report on a pilot study that was conducted to investigate the application of the TAT alignments to the SQUAD approach within the practical inquiry model (Garrison, Anderson, and Archer 2001). The purpose of this undertaking was to develop a framework capable of describing group-level cognitive engagement. The first study corpus was the transcript of two groups of software engineering students in a master's program in computing in the first semester of 2004–2005. By the end of the study, in week 12, the first group had posted a total of 725 messages, and the second group had posted a total of 143 messages. The second study corpus consisted of five part-time evening master's computing students; during the second semester of 2004–2005, they posted a total of 171 messages over the first twelve weeks of the study. The three case studies over the year and their contributions to the SQUAD message categories are shown in Table 4. A total of 1,039 messages were posted throughout the academic year. Table 5 shows the results of applying the TAT alignment to the SQUAD approach with the phases of the practical inquiry model for Case Study 1; Table 6 shows the corresponding results for Case Study 2.
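The percentages reported in Tables 5 and 6 follow directly from the category counts in Table 4 and the alignments in Table 3: for a given alignment, each phase's figure is the number of the group's postings in the SQUAD categories mapped to that phase, divided by the group's total postings. The sketch below is a minimal reading of that calculation, with names of our own choosing rather than those of the SQUAD tool; it reproduces the SQUAD No. 1 column of Table 5 from the Case Study 1 counts.

```python
# Illustrative sketch of how the Table 5 percentages follow from Table 3 and
# Table 4; function and variable names are ours, not taken from the SQUAD tool.
SQUAD_ALIGNMENTS = {  # Table 3: phase -> SQUAD categories, per alignment
    1: {"Triggers": ["Q"], "Exploration": ["U", "S"],
        "Integration": ["A", "S"], "Resolution": ["S", "D"]},
    2: {"Triggers": ["Q", "A"], "Exploration": ["U"],
        "Integration": ["S", "D"], "Resolution": ["S", "D"]},
    3: {"Triggers": ["Q", "A"], "Exploration": ["U", "S"],
        "Integration": ["S", "D"], "Resolution": ["S"]},
}

def phase_percentages(counts, alignment):
    """Percentage of a group's postings falling in each practical inquiry phase.

    counts    -- dict of SQUAD category -> number of postings (a Table 4 row)
    alignment -- 1, 2, or 3, selecting a row of Table 3
    """
    total = sum(counts.values())
    mapping = SQUAD_ALIGNMENTS[alignment]
    return {phase: round(100 * sum(counts[c] for c in cats) / total, 1)
            for phase, cats in mapping.items()}

# Case Study 1 (Table 4): S=132, Q=105, U=243, A=157, D=88, total 725.
case_study_1 = {"S": 132, "Q": 105, "U": 243, "A": 157, "D": 88}
print(phase_percentages(case_study_1, 1))
# {'Triggers': 14.5, 'Exploration': 51.7, 'Integration': 39.9, 'Resolution': 30.3}
# Matches the SQUAD No. 1 column of Table 5. Because a category such as S maps
# to more than one phase, the four percentages need not sum to 100.
```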
Since suggesting his TAT alignments (Fahy 2002), Fahy (2005) has published detailed results from a study consisting of 462 postings, comprising 3,126 sentences containing approximately 54,000 words, generated by a group of thirteen students and an instructor/moderator engaged in a thirteen-week distance education graduate credit course delivered totally at a distance. We have seized the opportunity to compare Fahy's (2005) findings with our TAT alignment of Oriogun's (2003b) SQUAD approach as described earlier (see Table 3), using the two methods for assessing critical thinking in CMC transcripts (Fahy 2005). Table 7 shows our Case Study 1, with students from Group 1; these students posted a total of 725 messages over a period of twelve weeks using the SQUAD approach. Table 8 shows the results for our Case Study 2, Group 2, which posted a total of 143 messages over the twelve weeks of the study. Table 9 shows the results for our Case Study 3, Group 3, which posted a total of 171 messages over the first twelve weeks of the second semester of 2004–2005.

Table 4. Total Number of SQUAD Postings by Master's Computing Students (2004–2005 Academic Semesters)

Case Study     S     Q     U     A     D   Total
1            132   105   243   157    88     725
2             21    14    66    10    32     143
3             55    18    27    26    45     171

Note: SQUAD = Suggestion, Question, Unclassified, Answer, Delivery.

Table 5. Case Study 1 Results Applying Transcript Analysis Tool Alignment to the SQUAD Approach Using the Practical Inquiry Model

Phases of Practical Inquiry Model   SQUAD No. 1   SQUAD No. 2   SQUAD No. 3
Triggers                                  14.5          36.1          36.1
Exploration                               51.7          33.5          51.7
Integration                               39.9          30.3          30.3
Resolution                                30.3          30.3          18.2

Note: All table values are percentages. SQUAD = Suggestion, Question, Unclassified, Answer, Delivery.

Discussion

When we compare the phases of the practical inquiry model with Fahy's (2005) practical inquiry/TAT results and our three case studies' SQUAD TAT alignments (see Tables 7, 8, and 9), we observe results that are more favorable to the SQUAD approach. Because the SQUAD is a semistructured approach to CMC discourse at the message level, it helps to scaffold students' online learning. There is no need to perform an interrater reliability measure with the SQUAD approach, as the students had to use one of the SQUAD message categories.

In our first case study, with a total of 725 message postings, the SQUAD results applying TAT alignment SQUAD No. 2 show that the group's overall average contribution to each phase was 32.6% (the average of the percentages in Table 7, column 6). This is indeed an ideal result, on the basis that this particular group of students made effective use of all the message categories.

In our second case study, with a total of 143 message postings, the SQUAD results applying TAT alignment SQUAD No. 2 show that the group's overall average contribution to each phase was 34.3% (the average of the percentages in Table 8, column 6).

In our third case study, with a total of 171 message postings, the SQUAD results applying TAT alignment SQUAD No. 1 show that the group's overall average contribution to each phase was 41.1% (the average of the percentages in Table 9, column 5). Overall, Case Study 3 implies that this group of students contributed, on average, 40.6% of their postings to each of the phases of the practical inquiry model (the average of the percentages in Table 9, columns 5–7). This is indeed a much better result than the results from the first semester of 2004–2005.
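These group averages are simply the mean of the four phase percentages for the chosen alignment. Continuing the earlier sketch (same illustrative names, not the authors' tool), the 41.1% figure for Case Study 3 under alignment SQUAD No. 1 can be reproduced as follows; small differences in rounding conventions may shift an individual phase value by a tenth of a percentage point relative to the published tables.

```python
# Continuing the earlier sketch: a group's average contribution per phase
# for one alignment (the mean of the four phase percentages).
def average_contribution(counts, alignment):
    percentages = phase_percentages(counts, alignment)
    return round(sum(percentages.values()) / len(percentages), 1)

case_study_3 = {"S": 55, "Q": 18, "U": 27, "A": 26, "D": 45}  # Table 4, row 3
print(average_contribution(case_study_3, 1))  # 41.1, as reported for SQUAD No. 1
```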
Table 6. Case Study 2 Results Applying Transcript Analysis Tool Alignment to the SQUAD Approach Using the Practical Inquiry Model

Phases of Practical Inquiry Model   SQUAD No. 1   SQUAD No. 2   SQUAD No. 3
Triggers                                   9.8          16.8          16.8
Exploration                               60.8          46.2          60.8
Integration                               21.7          37.1          37.1
Resolution                                37.1          37.1          14.7

Note: All table values are percentages. SQUAD = Suggestion, Question, Unclassified, Answer, Delivery.

Table 7. Comparison of Phases of the Practical Inquiry Model With the Present Fahy (2005) Practical Inquiry/TAT Results and Case Study 1 TAT Alignments

Phase         Garrison et al. (2001)   Fahy (2005) Pilot   Fahy (2005) TAT   SQUAD No. 1   SQUAD No. 2   SQUAD No. 3
Triggers                       12.5                  9.4               6.4          14.5          36.1          36.1
Exploration                    62.5                 74.2              76.4          51.7          33.5          51.7
Integration                    18.8                 14.6              14.7          39.9          30.3          30.3
Resolution                      6.3                  1.8               2.5          30.3          30.3          18.2

Note: All table values are percentages. Garrison et al. (2001) = practical inquiry model results, Garrison, Anderson, and Archer (2001); Fahy (2005) Pilot = initial pilot practical inquiry model results, Fahy (2005); Fahy (2005) TAT = present study TAT results, Fahy (2005); SQUAD No. 1–3 = SQUAD results applying TAT alignments 1, 2, and 3. TAT = Transcript Analysis Tool; SQUAD = Suggestion, Question, Unclassified, Answer, Delivery.

Table 8. Comparison of Phases of the Practical Inquiry Model With the Present Fahy (2005) Practical Inquiry/TAT Results and Case Study 2 TAT Alignments

Phase         Garrison et al. (2001)   Fahy (2005) Pilot   Fahy (2005) TAT   SQUAD No. 1   SQUAD No. 2   SQUAD No. 3
Triggers                       12.5                  9.4               6.4           9.8          16.8          16.8
Exploration                    62.5                 74.2              76.4          60.8          46.2          60.8
Integration                    18.8                 14.6              14.7          21.7          37.1          37.1
Resolution                      6.3                  1.8               2.5          37.1          37.1          14.7

Note: All table values are percentages; column headings as in Table 7.

Table 9. Comparison of Phases of the Practical Inquiry Model With the Present Fahy (2005) Practical Inquiry/TAT Results and Case Study 3 TAT Alignments

Phase         Garrison et al. (2001)   Fahy (2005) Pilot   Fahy (2005) TAT   SQUAD No. 1   SQUAD No. 2   SQUAD No. 3
Triggers                       12.5                  9.4               6.4          10.5          25.7          25.7
Exploration                    62.5                 74.2              76.4          47.9          15.8          47.9
Integration                    18.8                 14.6              14.7          47.4          58.5          58.5
Resolution                      6.3                  1.8               2.5          58.5          58.5          32.2

Note: All table values are percentages; column headings as in Table 7.

One of the reasons the groups of students in our three studies (a total of thirteen in the three groups) made effective use of the SQUAD categories at the message level is that, out of the total marks awarded to the group coursework for collaborating and negotiating software requirements during the semester, 7.5% of the marks were for using the SQUAD approach (extrinsic motivation). In fact, at the end of the semester the students reported that if no marks had been attached to adopting the SQUAD approach, they would most probably have used other forms of communication, including publicly available online collaborative systems.
Results from a quantitative analysis of the 1,039 total message postings showed that the three groups contributed, on average, 32.6% (Case Study 1), 34.3% (Case Study 2), and 41.1% (Case Study 3) of their postings to each phase of the practical inquiry model. On the basis of these and related findings, we conclude that the three groups of students made effective use of all the message categories for cognitive engagement within online groups.

Conclusion

The results from the initial pilot of the practical inquiry model in Garrison, Anderson, and Archer's (2001) study, the practical inquiry results from Fahy's (2005) study, and the SQUAD results applying the TAT alignments all showed that exploration was clearly the most common type of posting (see Tables 7, 8, and 9). The TAT results and the initial practical inquiry model results showed that the next most common type of posting was integration. This is where the SQUAD approach showed much better results: if one looks at the average posting within each of the phases of the practical inquiry model, one sees that, on average, each group contributed approximately the same number of postings to each of the phases. The main reason for this could be that both the practical inquiry model and the SQUAD TAT alignments use the message as the unit of measurement. Furthermore, the SQUAD approach does not require an interrater reliability measure, as it is a semistructured method for scaffolding students' learning. Although we do not have a similar concern in this study regarding the category of "other" within the practical inquiry model, this category warrants further investigation.

It is worth noting that, in Fahy's (2002) suggested TAT alignments, multiple message categories were not permitted (e.g., in the case of TAT No. 1, the sum total of all the categories is 100% across triggers, exploration, integration, and resolution; see Table 1). However, because of the cognitive indicators governing the SQUAD framework, multiple message categories are permitted (e.g., in the case of SQUAD No. 1, message category S appears under exploration, integration, and resolution; see Table 3). Perhaps Fahy's (2002) alignments are too restrictive at the sentence level. Further testing of the practical inquiry model is required to ascertain its robustness and validity. There is a real need to develop Garrison et al.'s (2001) framework, especially by testing it empirically against actual transcripts of online communications.

We believe that, through the theorizing and empirical work described herein, we have substantially supported our argument that the cognitive presence realized in this article for the SQUAD approach, using Fahy's (2002) three alignments within Garrison et al.'s (2001) framework together with our three case studies of master's computing students at London Metropolitan University, is a way of empirically validating the cognitive engagement of the SQUAD approach to CMC discourse within groups.

References

Barrows, H. 1996. Problem-based learning in medicine and beyond: A brief overview. In Bringing problem-based learning to higher education: Theory and practice, ed. L. Wilkerson and W. Gijselaers, 3–11. San Francisco: Jossey-Bass.
Bridges, E. M. 1992. Problem-based learning for administrators. Eugene, OR: ERIC Clearinghouse on Educational Management. ERIC, ED 347617.
Fahy, P. J. 2002. Assessing critical thinking processes in a computer conference. Centre for Distance Education, Athabasca University, Athabasca, Alberta, Canada. Unpublished manuscript. Available online at http://cde.athabascau.ca/softeval/reports/mag4.pdf
———. 2005. Two methods for assessing critical thinking in computer-mediated communications (CMC) transcripts. International Journal of Instructional Technology and Distance Learning. Available online at http://www.itdl.org/Journal/Mar_05/article02.htm
Fahy, P. J., G. Crawford, M. Ally, P. Cookson, V. Keller, and F. Prosser. 2000. The development and testing of a tool for analysis of computer mediated conferencing transcripts. Alberta Journal of Educational Research 46 (1): 85–88.
Garrison, R., T. Anderson, and W. Archer. 2000. Critical inquiry in a text-based environment: Computer conferencing in higher education. Internet and Higher Education 11 (2): 1–14.
———. 2001. Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education 15 (1): 7–23.
Hara, N., C. Bonk, and C. Angeli. 2000. Content analysis of online discussion in an applied educational psychology course. Instructional Science 28 (2): 115–152.
Henri, F. 1992. Computer conferencing and content analysis. In Online education: Perspectives on a new environment, ed. A. R. Kaye, 115–136. New York: Praeger.
Johnson, D. W., and R. T. Johnson. 1996. Cooperation and the use of technology. In Handbook of research for educational telecommunications and technology, ed. D. H. Jonassen, 1017–1044. New York: Simon & Schuster.
Oriogun, P. K. 2003a. Content analysis of online inter-rater reliability using the transcript reliability cleaning percentage: A software engineering case study. Paper presented at the ICEIS 2003 conference, April, Angers, France.
———. 2003b. Towards understanding online learning levels of engagement using the SQUAD approach to CMC discourse. Australian Journal of Educational Technology 19 (3): 371–387. Available online at http://www.ascilite.org.au/ajet/ajet19/oriogun.html
———. 2005. Introducing a dedicated tool for facilitating a semi-structured approach to CMC messaging. Workshop on e-Learning Online Communities (eLOC 2005), CD-ROM proceedings of the 3rd ACS/IEEE International Conference on Computer Systems and Applications (AICCSA-05), January, Cairo, Egypt.
Oriogun, P. K., and J. Cook. 2003. Transcript Reliability Cleaning Percentage: An alternative interrater reliability measure of message transcripts in online learning. American Journal of Distance Education 17 (4): 221–234.
Oriogun, P. K., F. French, and R. Haynes. 2002. Using the enhanced Problem-Based Learning Grid: Three multimedia case studies. Paper presented at the ASCILITE conference, December, Auckland, New Zealand. Available online at http://www.ascilite.org.au/conferences/auckland02/proceedings/papers/040.pdf
Oriogun, P. K., and E. Ramsay. 2005. Introducing a dedicated prototype application tool for measuring students' online learning levels of engagement in a problem-based learning context. Paper presented at the IASTED special session, International Conference on Education and Technology (ICET 2005), July, Calgary, Alberta, Canada.
Zhu, E. 1996. Meaning negotiation, knowledge construction, and mentoring in a distance learning course. In Proceedings of selected research and development presentations at the 18th National Conference of the Association for Educational Communications and Technology, Indianapolis, IN. ERIC, ED 397849.