
Editor’s Note: Researchers are not content to know what works; they explore the underlying mechanisms to determine why and how cognition, processing and learning take place. Higher levels of learning, such as critical thinking, are the subject of many studies. The present research relates an experiment using Peter Oriogun’s SQUAD method to the Practical Inquiry model that underlies Garrison’s and Fahy’s studies of cognitive presence.

Assessing Critical Thinking in a New Approach
to Computer-Mediated Communication (CMC) Transcripts

Peter K. Oriogun
United Kingdom

Abstract

Critical thinking involves analysis, critique and some evaluation of the information gathered in order to reach a reflective and well-founded conclusion.  It is therefore very important to understand that critical thinking ultimately affects all forms of communication, including speaking, writing, listening and reading.  Critical thinking in online communication is particularly challenging, as it puts the emphasis on students’ comprehension and knowledge of the elements of an argument as they interact with different ideas and with one another.  In this article, the author assesses critical thinking in a new semi-structured approach to computer-mediated communication, the SQUAD approach (Oriogun, 2003; Oriogun, Ravenscroft and Cook, 2005), using the practical inquiry (PI) model (Garrison, Anderson, & Archer, 2001) as a framework. The phases of the SQUAD approach are mapped directly (Oriogun, Ravenscroft and Cook, 2006) onto the practical inquiry model’s cognitive presence phases.  From this mapping, the author then compares Fahy’s (2005) latest study of the cognitive presence model, and the initial pilot study by Garrison et al. (2001), with the three SQUAD case studies presented.  It is argued in this article that the SQUAD approach is superior to using an interrater reliability measurement of online transcripts when using the Practical Inquiry (PI) model to assess the critical thinking or cognitive presence of online groups.  It is further argued that there was an insufficient number of postings (24) by the four students over a period of one week in the initial pilot study by Garrison et al. (2001) to draw any concrete conclusion from the study.  The author, however, concurs with Garrison et al.’s (2001) conclusion ‘that the practical inquiry model could serve as a framework for future research in a quest to better understand the cognitive nature of the teaching and learning transaction in an asynchronous text-based conferencing environment’.

Keywords: CMC; community of inquiry; cognitive presence; critical thinking; practical inquiry model; Transcript Analysis Tool (TAT); SQUAD Approach; content analysis

Introduction

The challenge in online learning and distance education is for educators to develop educational strategies that meet the diversity of students’ needs in the modern world.  New technologies afford students the opportunity to experience a number of new learning environments in which they are able to communicate irrespective of time or geographical location. Some critics argue that the type of learning that occurs in distance education is insufficient to develop critical thinking, and furthermore, that learners should be empowered to critically examine and construct meaning through their own prior experiences (Garrison, 1993; Lauzon, 1992).  Consequently, Lauzon (1992) suggests that distance educators should actively promote dialogue in order for learners to take ownership of the knowledge gained in such a learning environment.  A common approach adopted by a number of higher education institutions offering online learning and/or distance education is the use of discussion forums to foster critical thinking skills.  Unfortunately, research has shown that not all students involved in online discussion forums have the necessary latent projective skills to participate and interact fully within such a teaching and learning environment.

Critical Thinking Skills Online

Critical thinking involves analysis, critique and some evaluation of the information gathered in order to reach a reflective and well-founded conclusion.  It is important to understand that critical thinking ultimately affects all forms of communication, including speaking, writing, listening and reading.  According to Bullen (1998), critical thinking skills during online discussion involve four components, namely cognitive maturity, the teaching style of the lecturer/instructor, the student’s prior experiences, and the degree of understanding of the critical thinking process.  Jones (1996) summarised Meyers’ (1986) suggestion that critical thinking across the disciplines has features in common, namely:

1.     Critical thinking is a learnable skill with teachers and peers serving as resources.

2.     Problems, questions, and issues serve as source of motivation for the learner.

3.     Courses are assignment-centred rather than text or lecture oriented.

4.     Goals, methods, and evaluation emphasize using content rather than simply acquiring it.

5.     Students collaborate to learn and enhance their thinking.

Oriogun (2003) adapted Henri’s (1992) classification of critical thinking into what he called cognitive indicators when he developed his semi-structured approach to online discourse, SQUAD.  These cognitive indicators are categorized as Elementary clarification, In-depth clarification, Inferencing, Judgement and Application of strategies (see Table 1).

Table 1
Cognitive Indicators of the SQUAD Approach (Oriogun, 2003)

Reasoning skill             Definition
Elementary clarification    Observing or studying a problem, identifying its elements, and observing their linkages in order to come to a basic understanding.
In-depth clarification      Analysing and understanding a problem to come to an understanding which sheds light on the values, beliefs, and assumptions which underlie the statement of the problem.
Inferencing                 Induction and deduction, admitting or proposing an idea on the basis of its link with propositions already admitted as true.
Judgement                   Making decisions, statements, appreciations, evaluations and criticisms. Sizing up.
Application of strategies   Proposing coordinated actions for the application of a solution, or following through on a choice or a decision.

In a more recent article, Oriogun, Ravenscroft and Cook (2006) mapped the cognitive indicators of the SQUAD approach onto the method proposed by Garrison et al. (2001) for detecting triggering events, exploration, integration and resolution, called Cognitive Presence.  Cognitive Presence can be summarised as having four phases of critical thinking: the Triggering Event phase deals with starting, inviting or soliciting a particular discussion; the Exploration phase is when information is exchanged between the learning participants; the Integration phase is when participant learners construct meaning and propose possible solutions; and finally, the Resolution phase is when the proposed solution(s) is/are tested out (Garrison et al., 2001, p. 11).

Recent Tools Developed at London Metropolitan University for Measuring Students’ Critical Thinking within a CMC Environment

In this modern, technology-driven society, online communication is exceptionally challenging for students and educators.  In recent years, we have seen the widespread adoption of computer-mediated communication (CMC) in education, including extensive interest in using online communications to facilitate asynchronous dialogues, e.g. online teamwork. Consequently, recent research, for example on dialogue analysis, has attempted to explore the relationship between online dialogue features (e.g. roles, strategies, form and content) and learning (Pilkington, 2001). Such analysis can provide useful insights into the nature of the learning processes from the perspective of, for example, a speaker's intention in a transmitted message and what the receiver perceives has been communicated by the message. However, a problem arises if we wish to investigate specific categories or variables of the learning process, e.g. participation, interaction, and social, cognitive and metacognitive elements (Henri, 1992).  It is hoped that recent tools developed at London Metropolitan University will assist educators in engaging their students online, as well as aid the measurement of students’ critical thinking skills, or what Oriogun, Ravenscroft and Cook (2005) have termed cognitive engagement.

The Learning Technology Research Institute at London Metropolitan University recently developed a tool called ‘InterLoc’ (http://www.interloc.org/).  This tool supports digital dialogue games for learning. It structures, scaffolds and supports multimedia dialogues that are highly engaging and that foster ‘reasoned’ discourse and critical thinking through ‘live’ peer interaction. It incorporates an environment that supports a multi-phased activity (e.g. preparation, interaction and summary) and the use of particular dialogue games (e.g. critical discussion and reasoning, exploratory dialogue and creative thinking) that foster 'academic' discourse and thinking. The approach is particularly suited to groups of 4-8 students. The activities and dialogue games can be reused or adapted to address particular educational problems and contexts. The tool is highly flexible, and can support a range of pedagogical approaches, from informal student-centred learning activities to more formal course-related exercises. The key features of the tool are (http://www.interloc.org/):

- A game design that promotes motivation, confidence and engagement

- Integration of multimedia artefacts

- Structured interaction through coordinating activities, dialogues, conversations and replies

- Message openers (e.g. 'I think...', 'I disagree because...', 'Is there any evidence?...') that promote coherent dialogue, thinking and deep learning

- Reusable and adaptable learning activity and dialogue game templates

Recently, Oriogun (2006) used content analysis of online transcripts to study quality of interaction, participation, and cognitive engagement. New tools developed at London Metropolitan University were used to improve inter-rater reliability.  One of the tools is a software prototype supporting the SQUAD approach.  The SQUAD approach to CMC discourse invites students to post messages based on five given categories, namely Suggestion, Question, Unclassified, Answer and Delivery. The approach to online discourse adopts problem-based learning (Barrows, 1996; Bridges, 1992; Oriogun et al., 2002) as an instructional method, with the goal of solving real problems by (Oriogun, 2003) (a minimal code sketch of this tagging scheme follows the list below):

- Creating an atmosphere that will motivate students to learn in a group setting online;

- Promoting group interactions and participation over the problem to be solved by the group online;

- Helping learners to build a knowledge base of relevant facts about the problem to be solved online;

- Sharing newly acquired knowledge with a group online with the aim of solving the given problem collaboratively and collectively;

- Delivering various artifacts leading to a solution or a number of solutions to the problem to be solved online.
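To make the tagging scheme concrete, the following minimal Python sketch (an illustration only, not the actual SQUAD software prototype; the Posting class and squad_tally function are names invented here) shows how category-tagged postings might be tallied per student to produce statistics of the kind reported in Tables 2-4 below. Because each student selects the category at posting time, no post-hoc coder, and hence no interrater reliability measure, is needed.

    from collections import Counter
    from dataclasses import dataclass

    # The five SQUAD message categories (Oriogun, 2003):
    # Suggestion, Question, Unclassified, Answer, Delivery.
    CATEGORIES = ("S", "Q", "U", "A", "D")

    @dataclass
    class Posting:
        student: str   # e.g. "S1"
        category: str  # one of CATEGORIES, chosen by the student at posting time
        text: str

    def squad_tally(postings):
        """Count postings per student and per category, as in Tables 2-4."""
        tally = {}
        for p in postings:
            if p.category not in CATEGORIES:
                raise ValueError(f"unknown SQUAD category: {p.category}")
            tally.setdefault(p.student, Counter())[p.category] += 1
        return tally

    # Illustrative usage with two hypothetical postings:
    demo = [Posting("S1", "Q", "Is the incremental model justified here?"),
            Posting("S2", "A", "Both RAD and XP favour incremental development.")]
    for student, counts in squad_tally(demo).items():
        row = [counts.get(c, 0) for c in CATEGORIES]
        print(student, row, "total:", sum(row))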

Related Work

In order to enhance students’ participation, interaction and cognitive engagement online, Oriogun, Ravenscroft and Cook (2005) suggested that “one way of engaging learners in online collaborative learning is to create an environment in which knowledge emerges and is shared. The onus is therefore on the tutor/instructor to (1) create an environment in which knowledge emerges and is shared through the collaborative work within a group of students, and (2) facilitate sharing of information and knowledge among members of a learning team instead of controlling the delivery and pace of course content”.  A methodological framework developed by Oriogun (2003), called the SQUAD approach, was used to develop their argument (Oriogun, Ravenscroft and Cook, 2005) when they validated the cognitive engagement of postgraduate software engineering students at London Metropolitan University during the two academic semesters of 2004-2005.

Existing literature at the time of that study (Oriogun, Ravenscroft and Cook, 2005) revealed that there were no tools for measuring the cognitive engagement of groups of people working on a particular task or problem online, such as a group’s coursework for a module or course, although there are tools available for investigating the cognitive elements of individuals working online (Henri, 1992; Hara, Bonk, and Angeli, 2000; Fahy, 2002; Garrison et al., 2001; Oriogun, 2003; Oriogun and Cook, 2003).  In that article (Oriogun, Ravenscroft and Cook, 2005), the authors adopted the theoretical framework of two recently developed tools commonly used for analyzing students’ cognitive elements online at an individual level (Fahy, 2002; Garrison, Anderson, and Archer, 2000, 2001) in order to validate, at group level, the cognitive engagement of groups of students working within the SQUAD approach.

The Study

In this article, the author uses the SQUAD statistics gathered from two groups of Masters Software Engineering students and one group of Masters Computing students from 20th June 2006 until 31st August 2006, a total of 73 days, to measure the cognitive engagement of the students according to the mapping of the SQUAD approach onto the Cognitive Presence model (Oriogun, Ravenscroft and Cook, 2006).  The first group of Masters Software Engineering students was composed of 4 students; they posted a total of 23 messages over the 73 days of the study.  The second group, also of Masters Software Engineering students, had 5 members; they posted a total of 80 messages over the 73 days of this study.  The third and final group, of Masters Computing students, also had 5 members; they posted a total of 206 messages over the same period (see Table 4).  Table 2 shows the SQUAD statistics for the Masters Software Engineering students (Group 1).

Table 2
Case Study 1 - Masters Software Engineering Students (Group 1)

Student    S    Q    U    A    D    Total
S1         1    1    3    1    0        6
S2         6    0    0    4    0       10
S3         2    0    0    0    0        2
S4         3    2    0    0    0        5
TOTAL     12    3    3    5    0       23


These students were completing a group assignment in a module called Software Project Management, a designated or optional module on both Masters courses.  This component of the module is very practical: students were given a practical project management problem to solve using PRINCE2 as a methodology, template or vehicle by which to solve the problem.  If they pass the module, it counts towards the total of 6 taught modules and a dissertation, the latter being worth the equivalent of 3 core or compulsory modules.  Of the 6 taught modules, 4 are core.  The group assignment is worth 50% of the Software Project Management module; the other 50% is an open-book test, which is more theoretical in nature. These students were, at the time of the study, working from England, India, Nigeria and Pakistan.  All were full-time mature students.  Table 3 shows the SQUAD statistics for the Masters Software Engineering students (Group 2).

Table 3
Case Study 2 - Masters Software Engineering Students (Group 2)

Student    S    Q    U    A    D    Total
S5         1    0    1    0    1        3
S6         7    2    0    7    7       23
S7         4    2    1    6    0       13
S8         5    2    1   10   21       39
S9         1    1    0    0    0        2
TOTAL     18    7    3   23   29       80


The SQUAD environment was used to facilitate these students’ group coursework online because all of the students were full-time students sharing the same designated or optional module on their Masters programmes.  Another reason for getting the students to use the tool was that they had already used the SQUAD environment from September 2005 until January 2006, when they first enrolled on the module, and so should know their way around the software tool.  The final rationale for getting the students to use the tool was to evaluate their collaborative group effort spent on the assignment, as well as to obtain some qualitative measure of each student’s cognitive engagement when mapped to Garrison et al.’s Cognitive Presence categories.  Table 4 shows the SQUAD statistics for the Masters Computing students (Group 3).

Table 4
Case Study 3 - Masters Computing Students (Group 3)

Student    S    Q    U    A    D    Total
S10       24    0    8    4    7       43
S11        5    0    1    0    0        6
S12        5    0   19    6    7       37
S13       32    0    0    0    0       32
S14        7    1   66   13    1       88
TOTAL     73    1   94   23   15      206

 

Mapping Phases of the Practical Inquiry Model’s Cognitive Presence directly onto the Phases of the SQUAD Approach

The SQUAD approach (Oriogun, 2003) to CMC discourse provides a means through which statistics compiled from students’ online discourse can be used to generate objective estimates of their degree of learning engagement.  The cognitive indicators of the SQUAD approach are based on Henri’s (1992) cognitive indicators.  The following section explains how the SQUAD approach has been mapped onto Garrison et al.’s (2001) framework; our use of ‘mapping’ in this article means that the tools are treated as equivalent for measurement purposes.

The SQUAD category S described above is focused on what the group has to deliver for their group coursework, and does not necessarily deal with significant personal revelation.  It also encourages students to initiate, continue or acknowledge interpersonal interaction, and to “warm” and personalize the discussion through scaffolding/engaging comments that connect or agree with, thank, or otherwise recognize someone else, and that encourage or recognize the helpfulness, ideas, comments, capabilities and experience of others.  The phases of the Practical Inquiry model capable of being mapped to SQUAD category S are Triggers and Exploration (see Table 5).

The SQUAD category Q is a form of words addressed to a person in order to elicit information or evoke a response.  An example of a question within the SQUAD framework is when a student seeks clarification from the tutor or other students in order to make appropriate decisions relating to the group coursework (Oriogun, 2003).  The phases of the Practical Inquiry model capable of being mapped to SQUAD category Q are Triggers and Exploration (see Table 5).

The SQUAD category U is for messages that do not fall into the list of categories stipulated by the instigator of the task at hand.  This tends to happen at the start of the online postings, when students may be unsure of what a message is supposed to convey; in most cases the content actually falls within one of the four classified categories (Oriogun, 2003). The phase of the Practical Inquiry model capable of being mapped to SQUAD category U is Other.  Analysis of the 24 message transcripts by Garrison et al. (2001) showed that one-third (8) of the postings did not relate to any of the four phases of the critical thinking model (p. 19); as such, they categorised these as Other (see Table 5).

Table 5
Matrix Mapping the Five Phases of the Practical Inquiry Model’s Cognitive Presence to the Phases of the SQUAD Approach to CMC Discourse (Oriogun, Ravenscroft and Cook, 2006)

Phases of the            S (Suggestion)  Q (Question)  U (Unclassified)  A (Answer)  D (Delivery)
Practical Inquiry Model
Triggers                 x               x                               x
Exploration              x               x
Integration                                                                          x
Resolution                                                               x           x
Other                                                  x

The SQUAD category A is a reply, either spoken or written, to a question, request, letter or article.  Students are expected to respond to this type of message with a range of possible solutions or alternatives.  The SQUAD category S, by contrast, is the process whereby the mere presentation of an idea to a receptive individual leads to the acceptance of that idea; students engage with other students within their coursework groups by offering advice, a viewpoint, or an alternative viewpoint to a current one (Oriogun, 2003).  The phases of the Practical Inquiry model capable of being mapped to SQUAD category A are Triggers and Resolution (see Table 5).

The SQUAD category D is the act of delivery or distribution.  This is where students are expected to produce a piece of software at the end of the semester; all the students have to participate in delivering aspects of the artifacts making up the software (Oriogun, 2003).  At this point students may show their appreciation of parts of the group coursework deliverable by responding with comments of real substantive meaning. The phases of the Practical Inquiry model capable of being mapped to SQUAD category D are Integration and Resolution (see Table 5).  Table 6 shows Oriogun’s consolidation of the cognitive elements of the SQUAD approach using the Practical Inquiry model as a framework.

Table 6
Consolidation of Cognitive Elements of the SQUAD Approach using the Practical Inquiry Model’s Cognitive Presence as a Framework (Oriogun, Ravenscroft and Cook, 2006)

Phase         Oriogun’s SQUAD Mapping
Trigger       (S+Q+A)/2
Exploration   (S+Q)/2
Integration   D/2
Resolution    (A+D)/2
Other         U

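To illustrate how the consolidation in Table 6 operates, the short Python sketch below (an illustrative reconstruction; the function names are invented here) applies the Table 6 formulae to raw SQUAD counts and converts the resulting phase weights into percentages. Run on the counts in Table 7, it reproduces, to within rounding, the SQUAD percentages reported in Tables 8 and 9.

    def pi_phases(s, q, u, a, d):
        """Consolidate raw SQUAD counts into Practical Inquiry phase weights (Table 6)."""
        return {
            "Trigger":     (s + q + a) / 2,
            "Exploration": (s + q) / 2,
            "Integration": d / 2,
            "Resolution":  (a + d) / 2,
            "Other":       u,
        }

    def as_percentages(phases):
        total = sum(phases.values())  # the weights sum to the total number of postings
        return {name: round(100 * w / total, 1) for name, w in phases.items()}

    # Raw SQUAD counts (S, Q, U, A, D) taken from Table 7:
    case_studies = {1: (12, 3, 3, 5, 0), 2: (18, 7, 3, 23, 29), 3: (73, 1, 94, 23, 15)}
    for n, counts in case_studies.items():
        print(f"Case Study {n}:", as_percentages(pi_phases(*counts)))

For Case Study 1 (S=12, Q=3, U=3, A=5, D=0), for example, this yields Trigger 43.5%, Exploration 32.6%, Integration 0%, Resolution 10.9% and Other 13.0%, matching the whole-number figures reported in Table 8.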
The author will also compare these results with those of an established researcher on CMC transcripts who likewise used the PI model as a framework at message level (Fahy, 2005), using the three case studies from masters students at London Metropolitan University described above.  The study corpus used in Fahy’s (2005) most recent study was a transcript of 462 message postings, generated by thirteen students and an instructor/moderator engaged in a 13-week distance education graduate credit course delivered totally at a distance.  All of the students were experienced online users, and the instructor was an experienced distance educator who had used online instruction for graduate courses at a distance for over five years.  In Fahy’s study the whole posting was coded into one of the Practical Inquiry (PI) model’s five phases.  Fahy (2005) reported that an overall code-recode reliability of 85% was achieved with the PI model.
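Whether a transcript is coded at message level or at sentence level changes the number of units over which phase proportions and reliability figures are computed; Fahy’s 462 postings, for instance, comprise 3,126 sentences (see the Findings below). The following naive Python sketch (hypothetical messages; neither study’s actual segmentation procedure is reproduced here) contrasts the two unit counts for the same short transcript.

    import re

    def message_units(messages):
        """Message-level coding: one unit per posting."""
        return len(messages)

    def sentence_units(messages):
        """Sentence-level coding via a naive split on terminal punctuation
        (a rough approximation of sentence segmentation)."""
        return sum(len([s for s in re.split(r"[.!?]+", m) if s.strip()])
                   for m in messages)

    msgs = ["Is the incremental model justified? Other models support XP.",
            "Both RAD and XP favour incremental development. That is why I chose it."]
    print(message_units(msgs))   # 2 units at message level
    print(sentence_units(msgs))  # 4 units at sentence level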

Table 7
Total Number of SQUAD Postings by Masters Software Engineering and Computing Students (20th June 2006 – 31st August 2006)

Case Study    S    Q    U    A    D    Total
1            12    3    3    5    0       23
2            18    7    3   23   29       80
3            73    1   94   23   15      206

 

Findings

Tables 8 and 9 show the comparison of the initial pilot study by Garrison et al. (2001) with Oriogun’s current SQUAD study and Fahy’s (2005) latest study, using the Practical Inquiry model as a framework, for the three case studies summarised in Table 7 above.

It is worth noting at this point that the initial pilot study by Garrison et al. (2001) and Oriogun’s current SQUAD case studies are both at message level, whilst Fahy’s (2005) latest study operates at sentence level.  Fahy’s latest study of 462 message postings comprises 3,126 sentences and some 54,000 words.  Both the initial pilot study by Garrison et al. (2001) and Fahy’s latest study required an interrater reliability measure.  In the case of Garrison et al. (2001), a coefficient of reliability of 83.33% with a Cohen (1969) kappa (k) value of 0.74 was achieved in their third transcript analysis, after learning from the possible errors that could have been generated in the first two separate transcript analyses reported in Garrison et al. (2001).  Fahy (2005), on the other hand, adopted the code-recode method, finally generating a CR of 85%.
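For readers who wish to check such figures, the sketch below (illustrative; the two codings shown are hypothetical, since neither study’s raw category assignments are reproduced in this article) computes the two reliability statistics just mentioned: the simple coefficient of reliability (percent agreement between two codings of the same units) and Cohen’s kappa, which corrects percent agreement for chance.

    from collections import Counter

    def coefficient_of_reliability(codes1, codes2):
        """Percent agreement between two codings of the same units."""
        assert len(codes1) == len(codes2)
        agreements = sum(a == b for a, b in zip(codes1, codes2))
        return agreements / len(codes1)

    def cohens_kappa(codes1, codes2):
        """Cohen's kappa: agreement corrected for chance, k = (po - pe) / (1 - pe)."""
        n = len(codes1)
        po = coefficient_of_reliability(codes1, codes2)
        f1, f2 = Counter(codes1), Counter(codes2)
        pe = sum(f1[c] * f2[c] for c in f1) / (n * n)  # chance agreement
        return (po - pe) / (1 - pe)

    # Two hypothetical coders assigning PI phases (T/E/I/R/O) to ten messages:
    c1 = ["T", "E", "E", "I", "O", "E", "R", "E", "I", "O"]
    c2 = ["T", "E", "E", "I", "E", "E", "R", "E", "O", "O"]
    print(coefficient_of_reliability(c1, c2))  # 0.8
    print(round(cohens_kappa(c1, c2), 2))      # 0.72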

It was noted by Fahy (2005) that ‘the iterative nature of the PI model and the conceptual interconnectedness of the model’s phases, provide a promising conceptual guide for researchers studying the “sociocognitive process” (Garrison, et al., 2001, p.11) of interpretation through CMC’.  Furthermore, Oriogun, Ravenscroft and Cook (2005, p. 212) suggested that ‘further testing of the practical inquiry model is required to ascertain its robustness and validity’ and that ‘there is a real need to develop Garrison et al.’s (2001) framework, especially empirically testing it in relation to actual transcripts of online communications’.  The empirical study contained in this article is a way of further testing the PI model in order to ascertain its robustness and validity.

Table 8
Case Study 1 & Case Study 2: Comparison of the Initial Pilot Study (Garrison et al., 2001), Fahy’s (2005) Latest Study and Oriogun’s Current SQUAD Study using the Practical Inquiry Model

Phases of the PI Model   Garrison et al. (2001)   Fahy (2005)   SQUAD Case Study 1   SQUAD Case Study 2
Trigger                   8%                       9.1%         43%                  30%
Exploration              42%                      71.6%         33%                  16%
Integration              13%                      14.1%          0%                  18%
Resolution                4%                       1.7%         11%                  32%
Other                    33%                       3.5%         13%                   4%

Totals:
- Garrison et al. (2001): 24 message postings; 4 students; 1 week (online); Coefficient of Reliability (CR) 83.33%
- Fahy (2005): 462 message postings; 13 students; 13 weeks (online); Code-Recode (CR) 85%
- SQUAD Case Study 1: 23 message postings; 4 students; 10 weeks 3 days (online); categorized by students (no requirement for CR)
- SQUAD Case Study 2: 80 message postings; 5 students; 10 weeks 3 days (online); categorized by students (no requirement for CR)

Course (Module):
- Garrison et al. (2001): graduate-level course in Health Promotions (instructor led)
- Fahy (2005): graduate-level course in Distance Education (instructor led)
- SQUAD Case Studies 1 and 2: postgraduate-level course in Software Project Management (student led)

In Tables 8 and 9, we have compared three different courses for the purpose of this study.  The initial pilot study by Garrison et al. (2001) had 4 students on a graduate-level course in Health Promotions.  They posted 24 messages over a one-week period (the whole of week 9 of the course).  The interrater reliability (coefficient of reliability, CR) was just over 83%.  In Fahy’s (2005) latest study, 13 students on a graduate-level course in Distance Education posted 462 messages over thirteen weeks, with an interrater (code-recode) reliability of 85%.  Both Garrison et al.’s (2001) and Fahy’s (2005) studies were instructor led.

In Oriogun’s current study, the three cases presented are from a Masters course in Software Project Management.  In Case Study 1, four students posted 23 messages online over 10 weeks and 3 days in total.  In Case Study 2, five students posted a total of 80 messages over the same period, and finally, in Case Study 3, five students posted 206 messages in the period in question. There was no need for an interrater reliability measure in the case of the SQUAD approach, as posted messages were categorized by the students themselves at the time of posting. Oriogun’s current study was student led.

Table 9
Case Study 3: Comparison of the Initial Pilot Study (Garrison et al., 2001), Fahy’s (2005) Latest Study and Oriogun’s Current SQUAD Study using the Practical Inquiry Model

Phases of the PI Model   Garrison et al. (2001)   Fahy (2005)   SQUAD Case Study 3
Trigger                   8%                       9.1%         24%
Exploration              42%                      71.6%         18%
Integration              13%                      14.1%          4%
Resolution                4%                       1.7%          9%
Other                    33%                       3.5%         45%

Totals:
- Garrison et al. (2001): 24 message postings; 4 students; 1 week (online); Coefficient of Reliability (CR) 83.33%
- Fahy (2005): 462 message postings; 13 students; 13 weeks (online); Code-Recode (CR) 85%
- SQUAD Case Study 3: 206 message postings; 5 students; 10 weeks 3 days (online); categorized by students (no requirement for CR)

Course (Module):
- Garrison et al. (2001): graduate-level course in Health Promotions (instructor led)
- Fahy (2005): graduate-level course in Distance Education (instructor led)
- SQUAD Case Study 3: postgraduate-level course in Software Project Management (student led)

In Fahy’s (2005) latest study and Garrison et al.’s (2001) initial pilot study, the proportions of postings in the trigger, integration and resolution categories are remarkably similar.  The exploration proportions, however, were affected by the fact that 33% of the postings in Garrison et al.’s (2001) initial pilot study were categorised as other, whilst only 3.5% of Fahy’s (2005) latest study was so categorised.  In Garrison et al.’s study, one of the students acted as a coder of the transcript, and a second coder was hired specifically for the coding task; a Coefficient of Reliability (CR) of 83.33% was achieved.  In the case of Fahy’s latest study, he carried out the initial coding himself and then recoded (code-recode) more than two months later, achieving 85% CR.

Fahy (2005) noted that most triggers originated with the instructor/moderator.  This is in line with Garrison et al.’s (2001) study, where 74% of the initial study postings were made by the instructor/moderator and 26% by students.  In Oriogun’s current SQUAD study, if we discount Case Study 1 because no integration was recorded there, Case Studies 2 and 3 recorded all the categories of the PI model.  However, Case Study 2 appears to give much better results than Case Study 3.  The main reason for having the SQUAD categories is that students will relate more to the first four phases of the PI model, namely trigger, exploration, integration and resolution.  It is expected that the other phase of the PI model will probably be used at the very early stage of students’ online discourse; once students are confident in how to use the SQUAD tool, they will use only the first four phases.  Case Study 2 typifies the appropriate usage of the SQUAD approach, with only 4% of postings categorised as other.

Case Study 3, however, had 45% of its message postings categorised as other, a very large proportion of the 206 message postings overall.  This suggests that a number of the students in Case Study 3 were not thinking critically about the problem they were supposed to be solving collaboratively and collectively online for the group’s common goal.  On the other hand, students in Case Study 2 were able to trigger discussion (30%), explore different ideas and possibilities within the group (16%), and consequently were able to integrate these different ideas and possibilities (18%) in finding solutions or a resolution to the collective problem that they had to solve online (32%).  This also tells us that students in this group must all have participated in the delivery of the various artefacts making up the final deliverable or solution(s) to the software project management problem given to the group.  Table 10 below lists some of the actual messages sent by the five students in Case Study 2; see the Appendix for these messages.

Table 10
Actual Messages Sent by Students in Case Study 2

Student    SQUAD Message Number / Category
S7         37 (Question)
S8         39 (Answer)
S6         44 (Delivery)
S5         48 (Suggestion)
S9         64 (Question)
S7         65 (Answer)
S7         69 (Unclassified)

Conclusion

Garrison et al. (2001) concluded that their findings are encouraging, and that they anticipate the PI model could be a valuable tool for researchers and teachers to assess and confirm the nature of the discourse according to their desired learning outcomes.  They also stated in their conclusion that they ‘remain challenged by the difficulty of measuring latent variables…and by the need to develop tools that effectively deal with large numbers of messages generated during long-term computer conferencing course’. 

Oriogun’s current study is one way of addressing Garrison et al.’s conclusion.  Two of the three SQUAD case studies presented in this article (Case Study 2 and Case Study 3) each consisted of five students posting large numbers of messages, 80 and 206 respectively, over 73 consecutive days (10 weeks and 3 days).  The SQUAD results are very encouraging indeed, especially with the consolidation of the cognitive elements of the SQUAD approach using the Practical Inquiry model’s cognitive presence as a framework (Oriogun, Ravenscroft and Cook, 2006).

It is argued in this article that a semi-structured approach to online discourse such as the SQUAD framework is superior to using an interrater reliability measurement of online transcripts when using the Practical Inquiry (PI) model to assess the critical thinking or cognitive presence of online groups.  It is further argued that there was an insufficient number of postings (24) by the four students over a period of one week in the initial pilot study by Garrison et al. (2001) to draw any concrete conclusion from the study.  The author concurs with Garrison et al.’s (2001) conclusion ‘that the practical inquiry model could serve as a framework for future research in a quest to better understand the cognitive nature of the teaching and learning transaction in an asynchronous text-based conferencing environment’.

References

Barrows, H. (1996). Problem-based learning in medicine and beyond: A brief overview. In L. Wilkerson and W. Gijselaers (Eds.), Bringing problem-based learning to higher education: Theory and practice. New Directions for Teaching and Learning, 68, 3-11. San Francisco: Jossey-Bass.

Bridges, E. M. (1992). Problem-based learning for administrators. ERIC Clearinghouse, University of Oregon.

Bullen, M. (1998). Participation and critical thinking in online university distance education. Journal of Distance Education. 13(2). Available: http://cade.icaap.org/vol13.2/bullen.html

Fahy, P.J. (2002). Assessing critical thinking processes in a computer conference. Centre for Distance Education, Athabasca University, Athabasca, Canada. Unpublished manuscript.

Fahy, P. J. (2005). Two methods for assessing critical thinking in computer-mediated communications (CMC) transcripts. International Journal of Instructional Technology and Distance Learning, 2(3). http://www.itdl.org/Journal/Mar_05/article02.htm

Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7-23.

Garrison, D. R. (1993). A cognitive constructivist view of distance education: An analysis of teaching-learning assumptions. Distance Education, 14, 199-211.

Hara, N., Bonk, C. & Angeli, C. (2000). Content analysis of online discussion in an applied educational psychology course. Instructional Science, 28(2), 115-152.

Henri, F. (1992). Computer conferencing and content analysis. In A. Kaye (Ed), Collaborative learning through computer conferencing: The Najaden papers, pp 117-136. London: Springer-Verlag.

Jones, D. (1996). Critical thinking in an online world, Untangling the Web, Available at: http://www.library.ucsb.edu/untangle/jones.html

Lauzon, A. C. (1992). Integrating computer-based instruction with computer conferencing: An evaluation of a model for designing online education. American Journal of Distance Education, 6(2), 32-46.

Meyers, C. (1986). Teaching students to think critically. San Francisco: Jossey-Bass.

Oriogun, P. K., French, F., & Haynes, R. (2002). Using the enhanced problem-based learning grid: Three multimedia case studies. In A. Williamson, C. Gunn, A. Young & T. Clear (Eds.), Winds of Change in the Sea of Learning: Proceedings of the ASCILITE Conference (pp. 495-504). Auckland, New Zealand: UNITEC Institute of Technology, 8-11 December 2002. http://www.ascilite.org.au/conferences/auckland02/proceedings/papers/040.pdf

Oriogun, P. K. (2003). Towards understanding online learning levels of engagement using the SQUAD approach. Australian Journal of Educational Technology, 19(3), 371-388. http://www.ascilite.org.au/ajet/ajet19/ajet19.html

Oriogun, P. K., Ravenscroft, A., & Cook, J. (2005). Validating an approach to examining cognitive engagement within online groups. The American Journal of Distance Education, 19(4).

Oriogun, P. K., Ravenscroft, A., & Cook, J. (2006). Towards understanding critical thinking processes in a semi-structured approach to computer-mediated communication. In Proceedings of ED-MEDIA 2006 World Conference on Educational Multimedia, Hypermedia and Telecommunications, Orlando, Florida, 26-30 June 2006 (pp. 2390-2397). ISBN 1-880094-59-2.

Oriogun, P. K. (2006). Content analysis of online transcripts: Measuring quality of interaction, participation and cognitive engagement within CMC groups by cleaning of transcripts. International Journal of Instructional Technology & Distance Learning, 3(3). http://www.itdl.org/Journal/Mar_06/article03.htm

Pilkington, R. (2001). Analysing educational dialogue interaction: Towards models that support learning (Introduction to the IJAIED Special Issue on Analysing Educational Dialogue). International Journal of Artificial Intelligence in Education, 12, 1-7.

 

Appendix 1

Messages Sent by Students

 

Student S7 Message 37

Q-Question regarding SDLC? – S7

Wed Jul 19 18:28:31 BST 2006

Hi,

First of all, I would like to say sorry for not keeping in touch. I've got question for S8 regarding the SDLC. According to the project brief, the company specializes in Extreme Programming and RAD methodologies. So I was just wondering if Incremental Model justifies the selection.

Please try to brief us with your justification with respect to the selection of the Incremental Process Model and not any other approach. Because there are various other process models that support XP.

 

Student S8 Message 39

A-Re: Question regarding SDLC? – S8

Wed Jul 19 19:29:24 BST 2006

Hello S7,

Let me explain key points in SDLC. Company is specializes in RAD and XP so we need to use their strong points when selecting a SDLC. And you know both SDLC and XP favours incremental development. When you go through PFD you can see integration are done incremental. That’s why I select an incremental and justify that

S8

 

Student S6 Message 44

D-Excel file for CPM analysis – S6

Wed Jul 19 20:29:57 BST 2006

Hello

I also uploaded the excel file from which I created the tables in the word file for CPM analysis, if you want to do some small changes S8, but please let me know if there is some large changes from what we have at the moment.

Regards

S6

 

Student S5 Message 48

S-Amendments to the Business Case – S5

Thu Jul 20 18:35:58 BST 2006

Hi all

There are some changes that I have noted in the business case. I have done them. I think that some areas in the risk analysis should be adjusted and rewritten and some lines should go under cost analysis and other parts. Have done the necessary adjustments.

Hope you won't mind. I have uploaded the adjusted Business case. Please let me know your feedback on that ASAP. Are the changes ok???????!

Please note the changes let me know ASAP.

regards

S5

 

Student S9 Message 64

Q-Project plan and XP – S9

Sat Aug 05 13:40:33 BST 2006

Dear S7,

I just went through the project plan and budget, and seen that high amount is being spent on requirement analysis. As we are using the XP approach so, is it reasonable to spend such an amount on the specification?

cheerz

S9

 

Student S7 Message 65

A-Regarding Project Budget and XP – S7

Sat Aug 05 13:46:18 BST 2006

Dear S9,

Thanks for making me aware on the fact. You are right. As we are developing the project using the XP approach, it is not appropriate to spend such a huge sum on the requirements analysis. I'll do something and try to minimise the amount of money being spent there.

Thanks for your expert investigation. Meanwhile, I would also like the other members to put their detective minds at work and investigate the other weak areas that can be improved to minimise the budget.

Thanks for informing me,

Regards,

S7

 

Student S7 Message 69

U-Business Case – S7

Wed Aug 09 18:38:19 BST 2006

Hi S8,

I've uploaded the business case so that you can append it to the existing PID after careful examination by others and after receiving comments from them.

I would like everyone to go through it quickly and give their expert comments as soon as possible so that we can finish our coursework quickly.

And yeah, best of luck to everyone giving exam.

Cheers,

S7

 

About the Author

Peter K. Oriogun

 

Dr. Peter K. Oriogun is currently a Senior Lecturer in Software Engineering at London Metropolitan University, United Kingdom.  He is the Course Director of the MSc Computing programme offered by London Metropolitan University. His current research interests are in semi-structured approaches to online learning, CMC transcript analysis, software life cycle process models, problem-based learning in computing, and cognitive engagement in online learning.  He is a chartered member of the British Computer Society.  He has over 20 years’ teaching experience in software engineering, computing and online collaborative learning within Further and Higher Education institutions in the UK, and has published extensively in these areas.  The title of his PhD thesis by prior output is “Towards understanding and improving the process of small group collaborative learning in software engineering education”.

Peter K. Oriogun
London Metropolitan University
Department of Computing, Communications Technology
and Mathematics
Learning Technology Research Institute
166-220 Holloway Road
London N7 8DB

Tel: +44 (0) 20 7133 7065

Email: p.oriogun@londonmet.ac.uk
 
