Editor’s Note. The research conclusions give cause for reflection. Where is a teacher more effective than a computer? And where is a computer likely to be more successful? Each has its own particular value, and teachers, even students, may have the option to choose. This study suggests we need to reconsider what is the most effective feedback to support learning.
Computer-Based Feedback vs. Instructor-Provided Feedback and Second Language Learners' Reading Comprehension
This study investigated the effects of computer-based feedback and paper-based (teacher-provided) feedback with multiple-choice questions on English reading comprehension scores. The purpose of this study is to assess the potential of computer-based feedback for improving second language reading comprehension. Thus, our goal is to investigate whether computer-based feedback offers an advantage for the second language reading comprehension of elementary Iranian learners. Eighty participants were divided into two groups: a computer-based group and a teacher-based group. A proficiency test confirmed that there was no significant difference between the two groups. The same reading comprehension tests were administered to both groups during the treatment and final tests. Results indicated that students who received the computer-based feedback improved their reading comprehension significantly compared to their peers who received paper-based (teacher-provided) feedback.
One of the challenges for assessment in today's education is that students expect better, more frequent, and quicker feedback. Research on how students perceive feedback, and which aspects of feedback they value most, is providing insight into how best to provide feedback to maximize its usefulness in evaluation and in transforming learning (Orsmond, Merry, & Reiling, 2005; Peat & Franklin, 2002).
For feedback to be most effective, it should be appropriate and timely (Ramsden, 1992). In the context of feedback on assessment tasks, this means within a timeframe that allows students to recall their responses and the understanding that informed their decisions. Shute (2008) defined feedback as the information communicated to the learner to modify his or her thinking or behavior for the purpose of improving learning, and likewise emphasized that providing students with timely feedback is important.
In today's electronic age, much feedback has moved to digital and online environments. Mory (2003) noted that the feedback mechanisms used by students have changed with the advances and growth of web-based learning systems. Although most teachers throughout the world, and especially in our country, still use chalk, blackboard, and paper and pencil, in highly developed countries the computer is used routinely in language instruction to provide supplementary practice in the four skills. There is a growing use of computers for assessment purposes within higher education institutions globally (Sim, Holifield, & Brown, 2004). As Al-Seghayer (2001) says, "in the realm of second language acquisition, the most recent effort to enhance the process of language learning has involved computer technology, which is referred to as CALL (Computer-Assisted Language Learning)."
Compared to a traditional textbook or workbook, a CALL program can provide immediate feedback on the correctness of the learner's response. In web-based learning systems, feedback presented by the computer is usually intended to replace feedback given to the student by the teacher and to improve student performance (Mory, 2003). Although models and guidelines recommending pedagogically sound practices for incorporating Internet-based materials exist (Brandl, 2002; Chun & Plass, 2000, cited in Murphy, 2007), a major concern is that the number of such examples remains limited.
Likewise, guidelines for offering a reading course via the Internet (Caverly & McConald, 1998; Jones & Wolf, 2001; Mikulecky, 1998 cited in Murphy, 2007) are similarly few. However, evidence exists to support the assumption that integrating reading with computer-mediated support improves ESL students' reading skills (Chun & Plass, 1996; Hong, 1997; Stakhnevich, 2002; Williams & Williams, 2000 cited in Murphy, 2007).
Reading comprehension exercises are often neglected by both teachers and learners: teachers think that grammar, speaking, and listening are much more important, and students find reading comprehension tests boring.
There is general agreement that reading is essential to success in our society; the National Research Council (1998, p. 17) states this directly, and the ability to read is highly valued and important for social and economic advancement (Snow, Burns, & Griffin, 1998). Computer-Assisted Instruction (CAI) is among the range of strategies being used to improve student achievement in school subjects, including reading. However, readers and printed texts cannot literally interact: printed text cannot respond to a reader, nor does it invite modification by a reader. Electronic texts, on the other hand, can effect a literal interaction between texts and readers (Daniel & Reinking, 1987).
Computer-Mediated Feedback
Clariana (2000), who has published extensively on the topic of computer-mediated feedback, provides a succinct summary of the traditionally investigated types of feedback in CALL: knowledge of response (KR), which states "right" or "wrong" and replicates traditional paper-based answer sheets; knowledge of correct response (KCR), which states or indicates the correct response; and elaborative feedback, which includes several more complex forms of feedback that explain, direct, or monitor (Smith, 1988, cited in Clariana, 1990). Answer-Until-Correct (AUC; Pressey, 1926) is a common form of elaborative feedback in which the learner is directed to respond until correct.
Answer-until-correct feedback is also known as multiple-try feedback (MTF). MTF requires students to make multiple tries at answering the same item, with the knowledge that their previous or initial response was incorrect.
Using the Answer-Until-Correct Methodology
To gain a picture of readers' understanding of a text, researchers and instructors measure comprehension after the reading is complete. Some of the most widely used comprehension assessment measures are multiple-choice questions, written recalls, cloze tests, sentence completion, and open-ended questions. The most common of these is the multiple-choice question.
While most multiple-choice testing requires test takers to select one answer and move on to the next question, the answer-until-correct method requires learners to select answer choices until the correct answer is chosen. This method can award learners a higher score when they use fewer guesses (for example, full credit for a correct first choice, 75% for a second choice, 50% for a third choice, and so forth).
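The graded scoring scheme described above can be sketched as a small function. This is a minimal illustration, not the study's actual scoring code; the 25%-per-try decrement follows the example in the text, and the function name is ours.

```python
def auc_score(tries_used, max_tries=4, full_credit=1.0):
    """Credit for one answer-until-correct item: full credit on the
    first try, minus 25% of full credit for each additional try.
    Returns 0 if the learner never reached the correct answer
    within the allowed number of tries."""
    if tries_used < 1 or tries_used > max_tries:
        return 0.0
    return full_credit * (1 - 0.25 * (tries_used - 1))

# First try earns 1.0, second 0.75, third 0.5, fourth 0.25.
```

An item score like this can simply be summed across items to produce the test score.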
Previous Studies
Many studies have found significant differences between computer-administered testing and traditional paper-and-pencil testing, and have attributed achievement differences to several factors. Russell and Haney (1996) found significant differences in the performance of students on the National Assessment of Educational Progress computerized tests when compared to traditional paper-and-pencil tests; they compared 42 students tested on a computer-administered test with 47 students tested on a traditional paper-and-pencil test. Examinations of learning or comprehension, measured in terms of correct answers, have tended not to find differences between materials presented in the two forms (e.g., Mason et al., 2001; Mayes et al., 2001; Noyes & Garland, 2003; Bodmann & Robinson, 2004; Garland & Noyes, 2004).
The Present Study
The present study is an attempt to investigate the effects of computer-based vs. paper-based (instructor-provided) feedback on the reading performance of second language students. The results of this study will be of crucial importance in EFL teaching by equipping teachers and students with computer-based feedback knowledge to promote the learning process.
The research was designed to answer the following question:
RQ: What are the effects of computer-based feedback vs. paper-based (instructor-provided) feedback on the reading performance of second language learners?
The participants were 80 third-year high school students in Ardabil. All of the students in this study were female. Students were selected for participation in the program based on the recommendations of their school counselors and teachers; they had the option not to participate. These participants were then randomly assigned either to the computer-based group or to the traditional paper-based group. The high school was located in an inner-city region of Ardabil and was equipped with 30 computers. The majority of students were from middle-class families.
In order to accomplish the purpose of the research, the following tests were administered:
The test focuses primarily on grammar as the clearest indicator of a student’s ability in the language.
Reading Comprehension Tests
On the basis of a readability index, 8 reading passages, each with 7 to 10 questions at the elementary level, were retrieved from www.mrnussbaum.com (see Appendix B).
These reading comprehension tests were chosen because of the quality of the questions and related text passages, the quality of their distracters, the familiarity of the tests to the subjects and the researchers, the motivational potential of the reading text for subjects, the availability of the materials, and the assumed appropriate reading level of the materials.
To investigate the effects of computer-based and paper-based feedback on reading comprehension, the final tests were administered.
Readability of Reading Passage
Microsoft Word was used to display information about the reading level of each passage. A readability analysis was run in order to determine whether each passage was appropriate for the elementary level. Readability scores within the range of 60–70 were considered appropriate for the participants, on the basis of the readability level of their English textbook.
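Microsoft Word's readability statistics report the Flesch Reading Ease score, which is presumably the 60–70 index used here (scores in that band correspond roughly to plain English). As a sketch, the formula can be computed directly; in this illustration the word, sentence, and syllable counts are supplied by the caller rather than parsed from text.

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease score: higher values mean easier text.
    Combines average sentence length (words per sentence) and
    average word length (syllables per word)."""
    return (206.835
            - 1.015 * (words / sentences)
            - 84.6 * (syllables / words))

# A 100-word passage with 5 sentences and 130 syllables scores
# about 76.6, i.e., slightly easier than the 60-70 target band.
```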
Twenty computers were used in this study to administer the online tests.
At the beginning of the study, since the students could not be assumed to be at the same proficiency level in English, the subjects were required to take the Longman Placement Test (Dawson, 2005). The treatment sessions took 30 minutes and were held twice a week. To avoid the possibility of environmental differences in testing conditions, the same room was used to administer both the computer-based and paper-based tests.
Paper-Based (Teacher-Provided) Test
In traditional paper-based tests of reading comprehension, students typically answer multiple-choice questions after reading a passage, recording their answers with a pencil on an answer sheet. In this study, students read each question and then wrote the letter (A, B, C, or D) of their answer choice on a separate sheet. The allocated time for completing the test and receiving feedback was 30 minutes. Feedback was given promptly: immediately after completing the examination, students checked their answers against answer keys provided by the teacher. The treatment spanned seven sessions.
Computer-Based (On Line) Test
A thirty-minute training session was held prior to the main research to familiarize this group with the process of taking computer-based tests. Since the number of participants was twice the number of computers, the computer-based group was divided into two subgroups; each session, 20 participants gathered in the computer laboratory to take a computerized reading comprehension test. The format of this test was the same as that of the paper-based, multiple-choice test. The difference was that as students clicked on an option, feedback was provided automatically, item by item, during comprehension rather than after the test. The computer-based test used answer-until-correct feedback: an incorrect answer produced "No, try again" and a correct answer "That's correct", displayed at the bottom of the screen. After the third try, the learner was told "Right" if correct or "Wrong" if incorrect, and the correct answer was then indicated by means of an arrow. Students were often correct on the first or second try. This method can give learners more credit when they use fewer guesses: for example, full credit for a correct first choice, 75% for a second choice, 50% for a third choice, and so forth.
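The answer-until-correct feedback logic described above can be sketched as follows. The message strings follow the text; the function name, input format, and the way the revealed answer is attached to the final message are our assumptions, not the study's actual software.

```python
def auc_feedback(correct_choice, responses, max_tries=3):
    """Return the sequence of feedback messages shown for one item,
    given the learner's successive answer choices.
    Tries 1-2: "That's correct" / "No, try again".
    Try 3: "Right" or "Wrong", with the correct answer revealed."""
    messages = []
    for i, choice in enumerate(responses[:max_tries], start=1):
        if choice == correct_choice:
            messages.append("That's correct" if i < max_tries else "Right")
            return messages
        if i < max_tries:
            messages.append("No, try again")
        else:
            messages.append(f"Wrong (correct answer: {correct_choice})")
    return messages
```

For example, a learner who picks B, then A (the key) would see "No, try again" followed by "That's correct".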
After 7 sessions, to investigate the effects of computer-based and paper-based feedback on comprehension of the texts used during the treatment, all students were given 30 minutes to complete a final comprehension test. This time, all students were simply shown which of their answers were correct and which were wrong.
All of the 80 participants were at the elementary level, based on the results of the Longman Placement Test administered by the researcher (see Appendix A). In addition, the homogeneity of the participants was checked by comparing the means of the two groups (see Table 4.1).
Table 4.1. Means and Standard Deviations Obtained in the Proficiency Test
Figure 4.1. Comparison of means obtained in proficiency test by two groups.
Results obtained by participants in the final test were compared for the traditional paper-based and computer-based feedback conditions in order to determine the effect of each on reading comprehension outcomes. A t-test was run to test the alternative hypothesis. The data were the scores of the two groups after the two types of feedback (computer-based and paper-based).
Table 4.2. Means and Standard Deviations Obtained in the Final Tests
Table 4.2 shows the group statistics, from which we can see a mean of 8.02 and a standard deviation of 1.64.
Table 4.3. Independent Samples t-Test in the Final Tests
Table 4.3 indicates the result of the t-test: the observed t value is 7.84 with df = 78. Since the two-tailed significance value of .00 is less than alpha = .05, we can support the alternative hypothesis; that is, there is a significant difference between the two groups.
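An independent-samples t-test of this kind can be reproduced with a short routine. This is a pooled-variance sketch using only the standard library; the sample data in the comment are illustrative, not the study's actual scores.

```python
import math
from statistics import mean, variance

def independent_t_test(group1, group2):
    """Pooled-variance independent-samples t-test.
    Returns (t statistic, degrees of freedom)."""
    n1, n2 = len(group1), len(group2)
    # Pooled variance: weighted average of the two sample variances.
    sp2 = ((n1 - 1) * variance(group1)
           + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    t = (mean(group1) - mean(group2)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# With 40 scores per group, df = 40 + 40 - 2 = 78, matching Table 4.3.
```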
Figure 4.2. Comparison of means obtained in final tests by two groups
The mean differences indicate the magnitude of the difference between the two groups.
Discussion and Results
In contrast to Mory's (1992) research, but consistent with findings by Bangert-Drowns, et al. (1991) and Nagata (1996), a quantitative analysis of the results in this research shows that the main effect of feedback type was statistically significant.
It is clear from these results, therefore, that simply providing students with correct answers to questions in all situations may not necessarily be the most effective way to promote reading comprehension. In this study, the researcher found that feedback can be a valuable tool for supporting student learning when used properly. Research stresses the need to provide timely and appropriate feedback that can help a student improve reading comprehension. A computer, which allows instructors to provide immediate feedback in a variety of ways, may be used to further enhance instructors' ability to provide useful and timely feedback to students.
The answer-until-correct examination format allows students to re-work or re-think their mistakes, potentially resulting in deeper learning. Students tend to enjoy this examination format, although some students experience anxiety.
The question is how we can fully utilize the results of this research. Since the spread of computers into our classrooms is an important future trend, we must try to accept and prepare for it. CALL programs present the learner with novelty; they teach the language in different, more interesting, and more attractive ways. As a result, even tedious drills become more engaging.
Many students need additional time and individualized practice to meet learning objectives. The computer acts as a tutor and guides each learner towards the correct answer while adapting the material to the student’s performance. It is clear that detailed, constructive and individualized feedback is an important aspect of good teaching and effective learning. However, providing feedback to students in the traditional form, that is by reading the students' answers, evaluating them, and writing comments can be very time consuming, especially with language classes.
It is impractical for a teacher to write comments for each student. Often with more than thirty pupils in one class, many with different ability levels, this presents the teacher with a constant challenge. Learners receive maximum benefit from feedback only when it is supplied immediately.
The use of computers in the English reading classroom enhanced learning by providing more opportunities for exposure to and use of a variety of learning materials and tasks.
The computer encourages such students to try and become active. Because no fixed time is allotted for all students to read the text, those who need more time can take their time and work at their own pace.
Limitations of the Study
One of the limitations of this study was the small sample size and the short duration of the experiment; the impact of feedback may not be visible in an eight-day period. Furthermore, 20 computers were not adequate. In order to obtain simple and concise results, the researcher chose to concentrate on the impact of only one variable on L2 reading comprehension (i.e., type of feedback); however, other relevant factors, such as motivation, personality type, and/or previous English experience, might have influenced the results.
Suggestions for Further Research
Certain questions remain unanswered by this research. The impact of computer-based feedback on participants and teachers should be explored. In addition, to maximize the benefits of computer-based feedback, further research needs to be conducted with another group of participants: larger in size, mixed in gender, and at a different study level. Also, a very important question remains to be answered: what effect does computer-based feedback have on reading comprehension for special populations, e.g., disadvantaged students?
Al-Seghayer, K. (2001). The effect of multimedia modes on L2 vocabulary acquisition: A comparative study. Language Learning & Technology, 5(1), 202–232. Retrieved from http://llt.msu.edu/vol5num1/alseghayer/default.html
Bodmann, S. M., & Robinson, D. H. (2004). Speed and performance differences among computer-based and paper-pencil tests. Journal of Educational Computing Research, 31, 51–60.
Clariana, R. B. (1990). A comparison of answer-until-correct feedback and knowledge-of-correct-response feedback under two conditions of contextualization. Journal of Computer-Based Instruction, 17(4), 125–129.
Clariana, R. B. (2000). Feedback in computer-assisted learning. NETg University of Limerick Lecture Series. Retrieved September 28, 2007, from
Daniel, D. B., & Reinking, D. (1987). The construct of legibility in electronic reading environments. In D. Reinking (Ed.), Reading and computers: Issues for theory and practice (pp. 24–39). New York: Teachers College Press.
Dawson, N. (2005). New opportunities placement tests. London: Longman.
Garland, K. J., & Noyes, J. M. (2004). CRT monitors: Do they interfere with learning? Behaviour & Information Technology, 23, 43–52.
Kulhavy, R. W., & Stock, W. A. (1989). Feedback in written instruction: The place of response certitude. Educational Psychology Review, 1(4), 279 –308.
Mason, B. J., Patry, M., & Bernstein, D. J. (2001). An examination of the equivalence between non-adaptive computer-based and traditional testing. Journal of Educational Computing Research, 24, 29–39.
Mayes, D. K., Sims, V. K., & Koonce, J. M. (2001). Comprehension and workload differences for VDT and paper-based reading. International Journal of Industrial Ergonomics, 28, 367–378.
Mory, E. H. (2003). Feedback research revisited. In D. H. Jonassen (Ed.), Handbook of research on educational communications and technology (pp. 745–783). New York: Macmillan Library Reference.
Murphy, P. (2007). Reading comprehension exercises online: The effects of feedback, proficiency and interaction. Language Learning & Technology, 11(3), 107–129.
National Research Council. (1998). Preventing reading difficulties in young children. Washington, DC: National Academy Press.
Noyes, J. M., & Garland, K. J. (2003). VDT versus paper-based text: Reply to Mayes, Sims and Koonce. International Journal of Industrial Ergonomics, 31, 411–423.
Orsmond, P., Merry, S., & Reiling, K. (2005). Biology Students' Utilization of Tutors' Formative Feedback: A Qualitative Interview Study. Assessment and Evaluation in Higher Education, 30(4), 369–386.
Peat, M., & Franklin, S. (2002). Supporting Student Learning: The Use of Computer-based Formative Assessment Modules. British Journal of Educational-Technology, 33(5), 515-523.
Pressey, S.L. (1926). A simple apparatus which gives tests and scores- and teaches. School and Society, 23, 373–376.
Ramsden, P. (1992). Learning to teach in higher education. London: Routledge.
Reinking, D. (1987). Computers, reading, and a new technology of print. In D. Reinking (Ed.), Reading and computers: Issues for theory and practice (pp. 3–23). New York: Teachers College Press.
Russell, M., & Haney, W. (1996). Testing writing on computers: Results of a pilot study student writing test performance via computer or via paper and pencil. Paper presented at the Mid-Atlantic Alliance for Computers and Writing Conference, Chestnut Hill, MA.
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research,
Sim, G., Holifield, P., & Brown, M. (2004). Implementation of computer assisted assessment: lessons from the literature. ALT-J, 12(3), 215–229.
Snow, C. E., Burns, M. S., & Griffin, P. (1998). Preventing reading difficulties in young children. Washington, DC: National Research Council.
About the Author
Malahat Yousefzadeh is at the Islamic Azad University (Ardabil Branch) in Iran.
Longman Placement Test
Name: ___________________________ Class: _________________________
Placement 1: Beginner to Elementary
Choose the best option and underline A, B, C or D as in the example 0.
[Total 80 marks]
Reading Comprehension Tests
Butterflies are some of the most interesting insects on the planet Earth. There are more than seventeen thousand different kinds of butterflies! Butterflies come in all shapes and sizes.
Butterflies go through four main stages of life. The first stage is the egg stage, followed by the larva stage. As a larva, or caterpillar, the future butterfly eats as much as possible. As it grows, it sheds its outer skin, or exoskeleton. This may happen four or five times. After a few weeks, the caterpillar enters the next stage of its life, the chrysalis stage. In the chrysalis, the caterpillar will liquefy into a soup of living cells. Then, it will reorganize into a butterfly and the metamorphosis is complete. In later parts of the chrysalis stage, you can see the forming butterfly through the chrysalis.
When the butterfly emerges from the chrysalis, it pumps its wings to send blood through them so that it can fly. Most butterflies only live a couple of weeks, just enough time to drink flower nectar and to mate. Some, like the Monarch Butterfly, however, may live many months.
1. Which is true?
A. There are less than a thousand different kinds of butterflies in the world.
2. What is the second stage of life for a butterfly?
3. What is the third stage of life for a butterfly?
4. In what stage does the metamorphosis happen?
5. Which of the following is NOT true?
A. Caterpillars turn into a liquid in the chrysalis
6. Select ALL of the things that a butterfly does.
A. Go through metamorphosis
7. Why does the butterfly shed its skin?
A. It is growing