Assessing Critical Thinking in a New Approach
Reasoning skills | Definitions |
Elementary clarification | Observing or studying a problem, identifying its elements, and observing their linkages in order to come to a basic understanding. |
In-depth clarification | Analysing and understanding a problem to come to an understanding which sheds light on the values, beliefs, and assumptions which underlie the statement of the problem. |
Inferencing | Induction and deduction, admitting or proposing an idea on the basis of its link with propositions already admitted as true. |
Judgement | Making decisions, statements, appreciations, evaluations and criticisms. Sizing up. |
Application of strategies | Proposing coordinated actions for the application of a solution, or following through on a choice or a decision. |
In a more recent article, Oriogun, Ravenscroft and Cook (2006) mapped the cognitive indicators of the SQUAD approach onto the method proposed by Garrison et al. (2001) for detecting triggering events, exploration, integration and resolution, called Cognitive Presence. Cognitive Presence can be summarised as having four phases of critical thinking: the Triggering Event phase deals with starting, inviting or soliciting a particular discussion; the Exploration phase is when information is exchanged between the learning participants; the Integration phase is when participant learners construct meaning and propose possible solutions; and finally, the Resolution phase is when the proposed solution(s) is/are tested out (Garrison et al., 2001, p. 11).
In this modern, technology-driven society, online communication is exceptionally challenging for students and educators. In recent years, we have seen the widespread adoption of computer-mediated communication (CMC) in education, including extensive interest in using online communications to facilitate asynchronous dialogues, e.g. online teamwork. Consequently, recent research, for example on dialogue analysis, has attempted to explore the relationship between online dialogue features (e.g. roles, strategies, form and content) and learning (Pilkington, 2001). Such an analysis can provide useful insights into the nature of the learning processes from the perspective of, for example, a speaker's intention in a transmitted message and what the receiver perceives has been communicated by the message. However, a problem arises if we wish to investigate specific categories or variables of the learning process, e.g. participation, interaction, social, cognitive and metacognitive (Henri, 1992). It is hoped that recent tools developed at the Learning Technology Research Institute at London Metropolitan University will help to address this problem through:
§ A game design that promotes motivation, confidence and engagement
§ Integration of multimedia artefacts
§ Structured interaction through coordinating activities, dialogues, conversations and replies
§ Message openers (e.g. 'I think...', 'I disagree because...', 'Is there any evidence?...') that promote coherent dialogue, thinking and deep learning
§ Reusable and adaptable learning activity and dialogue game templates
Recently, Oriogun (2006) used content analysis of online transcripts to study quality of interaction, participation, and cognitive engagement. New tools developed by the author, such as the SQUAD environment, are intended to:
§ Create an atmosphere that will motivate students to learn in a group setting online;
§ Promote group interactions and participation over the problem to be solved by the group online;
§ Help learners to build a knowledge base of relevant facts about the problem to be solved online;
§ Share newly acquired knowledge with a group online with the aim of solving the given problem collaboratively and collectively;
§ Deliver various artifacts leading to a solution or a number of solutions to the problem to be solved online.
In order to enhance students’ participation, interaction and cognitive engagement online, Oriogun, Ravenscroft and Cook (2005) suggested that “one way of engaging learners in online collaborative learning is to create an environment in which knowledge emerges and is shared. The onus is therefore on the tutor/instructor to (1) create an environment in which knowledge emerges and is shared through the collaborative work within a group of students, and (2) facilitate sharing of information and knowledge among members of a learning team instead of controlling the delivery and pace of course content”. A methodological framework developed by Oriogun (2003), called the SQUAD approach, was used to develop their argument in that article (Oriogun, Ravenscroft and Cook, 2005), in which they validated the cognitive engagement of postgraduate software engineering students at London Metropolitan University during the two academic semesters of 2004-2005.
Existing literature at the time of the study (Oriogun, Ravenscroft and Cook, 2005) revealed that there were no tools for measuring the cognitive engagement of groups of people working on a particular task or problem online, such as a group’s coursework for a module or course. There were, however, tools available for investigating the cognitive elements of individuals working online (Henri, 1992; Hara, Bonk, and Angeli, 2000; Fahy, 2002; Garrison et al., 2001; Oriogun, 2003; Oriogun and Cook, 2003). In that article (Oriogun, Ravenscroft and Cook, 2005), we adopted the theoretical framework of two recently developed tools commonly used for analyzing students’ cognitive elements online at an individual level (Fahy, 2002; Garrison, Anderson, and Archer, 2000, 2001) in order to validate, at group level, the cognitive engagement of groups of students working within the SQUAD approach.
In this article, the author uses the SQUAD statistics gathered from two groups of Masters Software Engineering students and one group of Masters Computing students between 20th June 2006 and 31st August 2006, a total of 73 days, to measure the cognitive engagement of the students according to the mapping of the SQUAD approach to the Cognitive Presence model (Oriogun, Ravenscroft and Cook, 2006). The first group of Masters Software Engineering students was composed of 4 students; they posted a total of 23 messages over the 73 days of the study. The second group, also of Masters Software Engineering students, had 5 members; they posted a total of 80 messages over the 73 days of the study. The third and final group, the Masters Computing students, had 5 members; they posted a total of 206 messages over the same period. Table 2 shows the SQUAD statistics for the Masters Software Engineering students (Group 1).
Student | S | Q | U | A | D | Total |
S1 | 1 | 1 | 3 | 1 | 0 | 6 |
S2 | 6 | 0 | 0 | 4 | 0 | 10 |
S3 | 2 | 0 | 0 | 0 | 0 | 2 |
S4 | 3 | 2 | 0 | 0 | 0 | 5 |
TOTAL | 12 | 3 | 3 | 5 | 0 | 23 |
These students were completing a group assignment in a module called Software Project Management, a designated or optional module on both Masters courses. This component of the module is very practical: students were given a practical project management problem to solve using PRINCE 2 as a methodology, template or vehicle by which to solve the problem. If they pass the module, it counts towards the total of 6 taught modules and a dissertation, the latter being worth the equivalent of 3 core or compulsory modules. Out of the 6 taught modules, 4 are core. The group assignment is worth 50% of the Software Project Management module; the other 50% is an open-book test, which is more theoretical in nature. These students were, at the time of the study, working from London Metropolitan University. Table 3 shows the SQUAD statistics for the second group of Masters Software Engineering students (Group 2).
Student | S | Q | U | A | D | Total |
S5 | 1 | 0 | 1 | 0 | 1 | 3 |
S6 | 7 | 2 | 0 | 7 | 7 | 23 |
S7 | 4 | 2 | 1 | 6 | 0 | 13 |
S8 | 5 | 2 | 1 | 10 | 21 | 39 |
S9 | 1 | 1 | 0 | 0 | 0 | 2 |
TOTAL | 18 | 7 | 3 | 23 | 29 | 80 |
The SQUAD environment was used to facilitate these students’ group coursework online because all of the students were full-time students sharing the same designated or optional module on their Masters programmes. Another reason for getting the students to use the tool was that they had already used the SQUAD environment from September 2005 until January 2006, when they first enrolled on the module, and as such they should know their way around the software tool. The final rationale for getting the students to use the tool was to evaluate their collaborative group effort spent on the assignment, as well as to obtain some qualitative measure of each student’s cognitive engagement when mapped to Garrison et al.’s Cognitive Presence categories. Table 4 shows the SQUAD statistics for the Masters Computing students (Group 3).
Student | S | Q | U | A | D | Total |
S10 | 24 | 0 | 8 | 4 | 7 | 43 |
S11 | 5 | 0 | 1 | 0 | 0 | 6 |
S12 | 5 | 0 | 19 | 6 | 7 | 37 |
S13 | 32 | 0 | 0 | 0 | 0 | 32 |
S14 | 7 | 1 | 66 | 13 | 1 | 88 |
TOTAL | 73 | 1 | 94 | 23 | 15 | 206 |
The SQUAD approach (Oriogun, 2003) to CMC discourse provides a means through which statistics compiled from students’ online discourse can be used to generate objective estimations of their degree of learning engagement. The cognitive indicators of the SQUAD approach are based on Henri’s (1992) cognitive indicators. The following section explains how we have mapped the SQUAD approach onto Garrison et al.’s (2001) framework. Our use of ‘mapping’ in this article means that the two tools are treated as equivalent for measurement purposes.
The SQUAD category S (Suggestion) described above is focused on what the group has to deliver for their group coursework, and does not necessarily deal with significant personal revelation. It also encourages students to initiate, continue or acknowledge interpersonal interaction, and to “warm” and personalize the discussion through scaffolding or engaging comments that connect or agree with, thank or otherwise recognize someone else, and that encourage or recognize the helpfulness, ideas, comments, capabilities and experience of others. The phases of the Practical Inquiry model capable of being mapped to SQUAD category S are Triggers and Exploration (see Table 5).
The SQUAD category Q (Question) is a form of words addressed to a person in order to elicit information or evoke a response. An example of a question within the SQUAD framework is when a student seeks clarification from the tutor or other students in order to make appropriate decisions relating to the group coursework (Oriogun, 2003). The phases of the Practical Inquiry model capable of being mapped to SQUAD category Q are Triggers and Exploration (see Table 5).
The SQUAD category U (Unclassified) covers messages that do not fall within the list of message categories stipulated by the instigator of the task at hand. This tends to happen at the start of the online postings, when students may be unsure of what a message is supposed to convey; in most cases, a message falls within one of the four classified categories (Oriogun, 2003). The phase of the Practical Inquiry model capable of being mapped to SQUAD category U is Other. Analysis of 24 message transcripts by Garrison et al. (2001) showed that one-third (8) of the postings did not relate to any of the four phases of the critical thinking model (p. 19); as such, they categorised this phase as Other (see Table 5).
Phases of the Practical Inquiry Model | Phases of the SQUAD Approach to CMC Discourse | | | | |
 | S - Suggestion | Q - Question | U - Unclassified | A - Answer | D - Delivery |
Triggers | x | x | | x | |
Exploration | x | x | | | |
Integration | | | | | x |
Resolution | | | | x | x |
Other | | | x | | |
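Read column-wise, Table 5 assigns each SQUAD category to one or two Practical Inquiry phases. As a minimal illustrative sketch (it is not part of the SQUAD software; the dictionary name is the author of this sketch's own), the mapping can be restated as a simple lookup table in Python:

    # Illustrative restatement of Table 5: each SQUAD category and the
    # Practical Inquiry (PI) phases marked 'x' against it.
    SQUAD_TO_PI = {
        "S": ("Triggers", "Exploration"),    # Suggestion
        "Q": ("Triggers", "Exploration"),    # Question
        "U": ("Other",),                     # Unclassified
        "A": ("Triggers", "Resolution"),     # Answer
        "D": ("Integration", "Resolution"),  # Delivery
    }

Note that every category except U maps to exactly two PI phases; this observation explains the form of the consolidation formulas in Table 6 below.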
The SQUAD category A (Answer) is a reply, either spoken or written, to a question, request, letter or article; students are expected to respond to this type of message with a range of possible solutions or alternatives. By contrast, the SQUAD category S (Suggestion) is the process whereby the mere presentation of an idea to a receptive individual leads to the acceptance of that idea, and students engage with other students within their coursework groups by offering advice, a viewpoint, or an alternative viewpoint to a current one (Oriogun, 2003). The phases of the Practical Inquiry model capable of being mapped to SQUAD category A are Triggers and Resolution (see Table 5).
The SQUAD category D (Delivery) is the act of distribution of goods, mail etc. This is where students are expected to produce a piece of software at the end of the semester; all the students have to participate in delivering aspects of the artifacts making up the software (Oriogun, 2003). At this point students may show their appreciation of part of the group coursework deliverable by responding with comments of real substantive meaning. The phases of the Practical Inquiry model capable of being mapped to SQUAD category D are Integration and Resolution (Table 5). Table 6 shows Oriogun’s consolidation of the cognitive elements of the SQUAD approach using the Practical Inquiry model as a framework.
Phases of the Practical Inquiry Model | Oriogun’s SQUAD Mapping |
Trigger | (S+Q+A)/2 |
Exploration | (S+Q)/2 |
Integration | D/2 |
Resolution | (A+D)/2 |
Other | U |
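Because every SQUAD category except U maps to exactly two Practical Inquiry phases in Table 5, the consolidation in Table 6 splits each category’s message count equally between its two candidate phases (hence the division by 2), while U passes straight through to Other. As a minimal illustrative sketch, assuming only the Table 6 formulas and the raw counts in Tables 2-4, the PI phase percentages reported later in Tables 8 and 9 can be approximately reproduced (small differences arise from rounding):

    # Illustrative sketch of Oriogun's SQUAD consolidation (Table 6):
    # converts raw SQUAD category counts into Practical Inquiry (PI)
    # phase percentages. Not the actual SQUAD software.

    def pi_phase_percentages(s, q, u, a, d):
        """Apply the Table 6 formulas and express each PI phase as a
        percentage of all postings."""
        total = s + q + u + a + d
        phases = {
            "Trigger": (s + q + a) / 2,
            "Exploration": (s + q) / 2,
            "Integration": d / 2,
            "Resolution": (a + d) / 2,
            "Other": u,
        }
        return {name: round(100 * count / total) for name, count in phases.items()}

    # Raw SQUAD totals from Tables 2-4 (Case Studies 1-3)
    print(pi_phase_percentages(12, 3, 3, 5, 0))    # Trigger 43%, Exploration 33%, ...
    print(pi_phase_percentages(18, 7, 3, 23, 29))  # Trigger 30%, Resolution 32%, ...
    print(pi_phase_percentages(73, 1, 94, 23, 15)) # Trigger 24%, Other ~46% (45% in Table 9)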
The author will also compare the results of an established researcher on CMC transcripts who likewise used the PI model as a framework (Fahy, 2005, working at sentence level) with the three case studies of Masters students at London Metropolitan University described above. Table 7 summarises the SQUAD statistics for the three case studies.
Case Study | S | Q | U | A | D | Total |
1 | 12 | 3 | 3 | 5 | 0 | 23 |
2 | 18 | 7 | 3 | 23 | 29 | 80 |
3 | 73 | 1 | 94 | 23 | 15 | 206 |
Tables 8 and 9 show the comparison of Garrison et al.’s (2001) initial pilot study with Oriogun’s current SQUAD study and Fahy’s (2005) study, using the Practical Inquiry model as a framework, for the three case studies referred to in Table 7 above.
It is worth noting at this point that the initial pilot study by Garrison et al. (2001) and the Practical Inquiry results for Oriogun’s current SQUAD study are both at message level, whilst the Practical Inquiry results for Fahy’s (2005) latest study operate at sentence level. Fahy’s latest study of 462 message postings comprises 3,126 sentences and 54,000 words. Both the initial pilot study by Garrison et al. (2001) and Fahy’s latest study required an interrater reliability measure. In the case of Garrison et al. (2001), a coefficient of reliability of 83.33% with a Cohen (1960) kappa (k) value of 0.74 was achieved in their third transcript analysis, after learning from the possible errors that could have been generated with the first two separate transcript analyses reported in Garrison et al. (2001). Fahy (2005), on the other hand, adopted the code-recode method before finally generating a CR of 85%.
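For readers unfamiliar with these measures, the sketch below illustrates, on hypothetical codings rather than either study’s actual data, how the two reliability figures are typically computed: the coefficient of reliability (CR) as the proportion of items two coders assign to the same category, and Cohen’s kappa as that agreement corrected for chance.

    from collections import Counter

    # Illustrative sketch (hypothetical data, not Garrison et al.'s or
    # Fahy's transcripts): reliability of two coders' phase assignments.

    def coefficient_of_reliability(coder1, coder2):
        """Proportion of items on which the two coders agree."""
        agreements = sum(a == b for a, b in zip(coder1, coder2))
        return agreements / len(coder1)

    def cohens_kappa(coder1, coder2):
        """Observed agreement corrected for chance agreement."""
        n = len(coder1)
        p_observed = coefficient_of_reliability(coder1, coder2)
        counts1, counts2 = Counter(coder1), Counter(coder2)
        # Chance that both coders pick the same category at random
        p_chance = sum(counts1[c] * counts2[c] for c in counts1) / (n * n)
        return (p_observed - p_chance) / (1 - p_chance)

    # Hypothetical PI-phase codings of ten messages by two independent coders
    coder1 = ["Trigger", "Exploration", "Exploration", "Integration", "Other",
              "Exploration", "Resolution", "Trigger", "Exploration", "Integration"]
    coder2 = ["Trigger", "Exploration", "Integration", "Integration", "Other",
              "Exploration", "Resolution", "Trigger", "Exploration", "Other"]
    print(coefficient_of_reliability(coder1, coder2))  # 0.8, i.e. CR of 80%
    print(cohens_kappa(coder1, coder2))                # ~0.74

Fahy’s code-recode variant applies the same agreement calculation to a single coder’s categorisations made at two different points in time.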
It was noted by Fahy (2005) that ‘the iterative nature of the PI model and the conceptual interconnectedness of the model’s phases, provide a promising conceptual guide for researchers studying the “sociocognitive process” (Garrison, et al., 2001, p.11) of interpretation through CMC’. Furthermore, Oriogun, Ravenscroft and Cook (2005, p. 212) suggested that ‘further testing of the practical inquiry model is required to ascertain its robustness and validity’ and that ‘there is a real need to develop Garrison et al.’s (2001) framework, especially empirically testing it in relation to actual transcripts of online communications’. The empirical study contained in this article is a way of further testing the PI model in order to ascertain its robustness and validity.
Table 8
Phases of the Practical Inquiry Model | Initial Pilot Study by Garrison et al. (2001) | Practical Inquiry Results, Fahy (2005) | Practical Inquiry Results, Oriogun’s SQUAD Current Study – Case Study 1 | Practical Inquiry Results, Oriogun’s SQUAD Current Study – Case Study 2 |
Trigger | 8% | 9.1% | 43% | 30% | |
Exploration | 42% | 71.6% | 33% | 16% | |
Integration | 13% | 14.1% | 0% | 18% | |
Resolution | 4% | 1.7% | 11% | 32% | |
Other | 33% | 3.5% | 13% | 4% | |
| Totals | No of message postings = 24; 4 students; 1 week (online); Coefficient of Reliability (CR) 83.33% | No of message postings = 462; 13 students; 13 weeks (online); Code-Recode (CR) 85% | No of message postings = 23; 4 students; 10 weeks 3 days (online); categorized by students (no requirement for CR) | No of message postings = 80; 5 students; 10 weeks 3 days (online); categorized by students (no requirement for CR) |
| Course Module | Graduate-level course in Health Promotions (Instructor led) | Graduate-level course in Distance Education (Instructor led) | Postgraduate-level course in Software Project Management (Student led) | Postgraduate-level course in Software Project Management (Student led) |
In Tables 8 and 9, we have compared three different courses for the purpose of this study. The initial pilot study by Garrison et al. (2001) had 4 students on a graduate-level course in Health Promotions; they posted 24 messages over a one-week duration (the whole of week 9 of the course), with an interrater reliability (coefficient of reliability, CR) of just over 83%. In Fahy’s (2005) latest study, 13 students on a graduate-level course in Distance Education posted 462 messages over thirteen weeks, with a code-recode reliability of 85%. Both Garrison et al.’s (2001) and Fahy’s (2005) studies were instructor led.
In Oriogun’s current study, the three cases presented are from a Masters course in Software Project Management. In Case Study 1, four students posted 23 messages online over 10 weeks and 3 days in total. In Case Study 2, five students posted a total of 80 messages over the same period, and, finally, in Case Study 3, five students posted 206 messages in the period in question. There was no need for an interrater reliability measure in the case of the SQUAD approach, as posted messages were categorized by the students themselves at the time of posting. Oriogun’s current study was student led.
Table 9
Phases of the Practical Inquiry Model | Initial Pilot Study by Garrison et al. (2001) | Practical Inquiry Results, Fahy (2005) | Practical Inquiry Results, Oriogun’s SQUAD Current Study – Case Study 3 |
Trigger | 8% | 9.1% | 24% | |
Exploration | 42% | 71.6% | 18% | |
Integration | 13% | 14.1% | 4% | |
Resolution | 4% | 1.7% | 9% | |
Other | 33% | 3.5% | 45% | |
| Totals | No of message postings = 24; 4 students; 1 week (online); Coefficient of Reliability (CR) 83.33% | No of message postings = 462; 13 students; 13 weeks (online); Code-Recode (CR) 85% | No of message postings = 206; 5 students; 10 weeks 3 days (online); categorized by students (no requirement for CR) |
| Course (Module) | Graduate-level course in Health Promotions (Instructor led) | Graduate-level course in Distance Education (Instructor led) | Postgraduate-level course in Software Project Management (Student led) |
In Fahy’s (2005) latest study and Garrison et al.’s (2001) initial pilot study, the proportions of postings in the trigger, integration and resolution categories are remarkably similar. However, the exploration proportions differ markedly, largely because 33% of the postings in Garrison et al.’s (2001) initial pilot study were categorised as other, whilst only 3.5% of Fahy’s (2005) latest study was categorised as other. In Garrison et al.’s study, one of the students acted as a coder of the transcript, and a second coder was hired specifically for the coding task; a coefficient of reliability (CR) of 83.33% was achieved. In the case of Fahy’s latest study, he carried out the initial coding and then recoded (code-recode) more than two months later, achieving 85% CR.
Fahy (2005) noted that most triggers originated with the instructor/moderator. This is in line with Garrison et al.’s (2001) study, where 74% of the initial study postings were made by the instructor/moderator and 26% by students. In Oriogun’s current SQUAD study, if we discount Case Study 1 because no integration was recorded there, Case Studies 2 and 3 recorded all the categories of the PI model. However, Case Study 2 appears to give much better results compared with Case Study 3. The main reason for having the SQUAD categories is that students will relate more to the first four phases of the PI model, namely trigger, exploration, integration and resolution. It is expected that the other phase of the PI model will probably be used at the very early stage of students’ online discourse, and that once they are confident in how to use the SQUAD tool, they will use only the first four phases. Case Study 2 typifies the appropriate usage of the SQUAD approach, with only 4% of postings categorised as other.
Case Study 3, however, had 45% of its message postings categorised as other, a very large proportion of the 206 message postings overall. This suggests that a number of the students in Case Study 3 were not thinking critically about the problem they were supposed to be solving collaboratively and collectively online for the group’s common goal. Students in Case Study 2, on the other hand, were able to trigger discussion (30%), explore different ideas and possibilities within the group (16%), integrate those ideas and possibilities (18%), and consequently find solution(s) or resolution to the collective problem that they had to solve online (32%). This also tells us that students in this group must all have participated in delivering the various artefacts making up the final deliverable or solution(s) to the software project management problem given to the group to solve in the first place. Table 10 below lists some of the actual messages sent by the five students from Case Study 2; see the Appendix for the text of these messages.
Student | SQUAD Message Number / Category |
S7 | 37 (Question) |
S8 | 39 (Answer) |
S6 | 44 (Delivery) |
S5 | 48 (Suggestion) |
S9 | 64 (Question) |
S7 | 65 (Answer) |
S7 | 69 (Unclassified) |
Garrison et al. (2001) concluded that their findings are encouraging, and that they anticipate the PI model could be a valuable tool for researchers and teachers to assess and confirm the nature of the discourse according to their desired learning outcomes. They also stated in their conclusion that they ‘remain challenged by the difficulty of measuring latent variables…and by the need to develop tools that effectively deal with large numbers of messages generated during long-term computer conferencing course’.
Oriogun’s current study is one way of addressing Garrison et al.’s conclusion. Two of the three SQUAD case studies presented in this article (Case Study 2 and Case Study 3) each consisted of five students posting a large number of messages, 80 and 206 respectively, over 73 consecutive days (10 weeks and 3 days). The SQUAD results are very encouraging indeed, especially with the consolidation of the cognitive elements of the SQUAD approach using the Practical Inquiry model’s cognitive presence as a framework (Oriogun, Ravenscroft and Cook, 2006).
It is argued in this article that a semi-structured approach to online discourse such as the SQUAD framework is superior to using interrater reliability measurement of online transcripts when using the Practical Inquiry (PI) model to assess the critical thinking or cognitive presence of online groups. It is further argued that there was an insufficient number of postings (24) by the four students over a period of one week in the initial pilot study by Garrison et al. (2001) to draw any concrete conclusions from that study. The author concurs with Garrison et al.’s (2001) conclusion ‘that the practical inquiry model could serve as a framework for future research in a quest to better understand the cognitive nature of the teaching and learning transaction in an asynchronous text-based conferencing environment’.
Barrows, H. (1996). Problem-based learning in medicine and beyond: A brief overview. In L. Wilkerson & W. H. Gijselaers (Eds.), Bringing problem-based learning to higher education: Theory and practice (pp. 3-12). San Francisco: Jossey-Bass.
Bridges, E. M. (1992). Problem-based learning for administrators. Eugene, OR: ERIC Clearinghouse on Educational Management.
Bullen, M. (1998). Participation and critical thinking in online university distance education. Journal of Distance Education, 13(2). Available: http://cade.icaap.org/vol13.2/bullen.html
Fahy, P. J. (2002). Assessing critical thinking processes in a computer conference. Centre for Distance Education, Athabasca University.
Fahy, P. J. (2005). Two methods for assessing critical thinking in computer-mediated communications (CMC) transcripts. International Journal of Instructional Technology and Distance Learning, 2(3). http://www.itdl.org/Journal/Mar_05/article02.htm
Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7-23.
Garrison, D. R. (1993). A cognitive constructivist view of distance education: An analysis of teaching-learning assumptions. Distance Education, 14, 199-211.
Hara, N., Bonk, C. & Angeli, C. (2000). Content analysis of online discussion in an applied educational psychology course. Instructional Science, 28(2), 115-152.
Henri, F. (1992). Computer conferencing and content analysis. In A. R. Kaye (Ed.), Collaborative learning through computer conferencing: The Najaden papers (pp. 117-136). Berlin: Springer-Verlag.
Jones, D. (1996). Critical thinking in an online world, Untangling the Web, Available at: http://www.library.ucsb.edu/untangle/jones.html
Lauzon, A. C. (1992). Integrating computer-based instruction with computer conferencing: An evaluation of a model for designing online education. American Journal of Distance Education, 6(2), 32-46.
Meyers, C. (1985). Teaching students to think critically. San Francisco: Jossey-Bass.
Oriogun, P. K., French, F., & Haynes, R. (2002). Using the enhanced Problem-Based Learning Grid: Three multimedia case studies. In A. Williamson, C. Gunn, A. Young & T. Clear (Eds.), Winds of change in the sea of learning: Proceedings of the 19th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE). proceedings/papers/040.pdf
Oriogun, P. K. (2003). Towards understanding online learning levels of engagement using the SQUAD approach. Australian Journal of Educational Technology, 19(3), 371-388. http://www.ascilite.org.au/ajet/ajet19/ajet19.html
Oriogun, P. K., Ravenscroft, A., & Cook, J. (2005). Validating an approach to examining cognitive engagement within online groups. The American Journal of Distance Education, 19(4), December 2005. ISSN 0892-3647.
Oriogun, P. K., Ravenscroft, A., & Cook, J. (2006). Towards understanding critical thinking processes in a semi-structured approach to computer-mediated communication. In Proceedings of ED-MEDIA 2006, World Conference on Educational Multimedia, Hypermedia and Telecommunications, Orlando, Florida, 26th-30th June 2006 (pp. 2390-2397). ISBN 1-880094-59-2.
Oriogun, P. K. (2006). Content analysis of online transcripts: Measuring quality of interaction, participation and cognitive engagement within CMC groups by cleaning of transcripts. International Journal of Instructional Technology & Distance Learning, 3(3). ISSN 1550-6908. http://www.itdl.org/Journal/Mar_06/article03.htm
Pilkington, R. (2001). Analysing educational dialogue interaction: Towards models that support learning (Introduction to the IJAIED Special Issue on Analysing Educational Dialogue). International Journal of Artificial Intelligence in Education, 12, 1-7.
Appendix: Actual messages from Case Study 2 (see Table 10)

Message 37, S7 (Question):
Hi, First of all, I would like to say sorry for not keeping in touch. I've got question for S8 regarding the SDLC. According to the project brief, the company specializes in Extreme Programming and RAD methodologies. So I was just wondering if Incremental Model justifies the selection. Please try to brief us with your justification with respect to the selection of the Incremental Process Model and not any other approach. Because there are various other process models that support XP.

Message 39, S8 (Answer):
Hello S7, Let me explain key points in SDLC. Company is specializes in RAD and XP so we need to use their strong points when selecting a SDLC. And you know both SDLC and XP favours incremental development. When you go through PFD you can see integration are done incremental. That’s why I select an incremental and justify that S8

Message 44, S6 (Delivery):
Hello I also uploaded the excel file from which I created the tables in the word file for CPM analysis, if you want to do some small changes S8, but please let me know if there is some large changes from what we have at the moment. Regards S6

Message 48, S5 (Suggestion):
Hi all There are some changes that I have noted in the business case. I have done them. I think that some areas in the risk analysis should be adjusted and rewritten and some lines should go under cost analysis and other parts. Have done the necessary adjustments. Hope you won't mind. I have uploaded the adjusted Business case. Please let me know your feedback on that ASAP. Are the changes ok???????! Please note the changes let me know ASAP. regards S5

Message 64, S9 (Question):
Dear S7, I just went through the project plan and budget, and seen that high amount is being spent on requirement analysis. As we are using the XP approach so, is it reasonable to spend such an amount on the specification? cheerz S9

Message 65, S7 (Answer):
Dear S9, Thanks for making me aware on the fact. You are right. As we are developing the project using the XP approach, it is not appropriate to spend such a huge sum on the requirements analysis. I'll do something and try to minimise the amount of money being spent there. Thanks for your expert investigation. Meanwhile, I would also like the other members to put their detective minds at work and investigate the other weak areas that can be improved to minimise the budget. Thanks for informing me, Regards, S7

Message 69, S7 (Unclassified):
Hi S8, I've uploaded the business case so that you can append it to the existing PID after careful examination by others and after receiving comments from them. I would like everyone to go through it quickly and give their expert comments as soon as possible so that we can finish our coursework quickly. And yeah, best of luck to everyone giving exam. Cheers, S7
Peter K. Oriogun
Dr. Peter K. Oriogun is currently a Senior Lecturer in Software Engineering at London Metropolitan University.
Tel: +44 (0) 20 7133 7065
Email: p.oriogun@londonmet.ac.uk