Dr. Liu provides additional building blocks that support both areas of significant difference and areas of no significant difference. The latter is as important as the former, because it gives assurance that little or nothing is lost when learning programs are implemented online with appropriate pedagogy and technology. The flip side of the coin is the ability to serve many learners who could not otherwise participate, knowing that losses are not significant and that significant gains can be achieved.

Table 1
Effects of Online Instruction vs. Traditional Instruction
| Measure | Group | N | Mean | Standard Deviation | Standard Error Mean |
| --- | --- | --- | --- | --- | --- |
| ch1 quiz | experimental group | 22 | 96.82 | 5.68 | 1.21 |
| | control group | 21 | 83.10 | 10.18 | 2.22 |
| ch2 quiz | experimental group | 22 | 92.73 | 4.81 | 1.03 |
| | control group | 21 | 88.33 | 8.56 | 1.87 |
| ch3 quiz | experimental group | 22 | 91.36 | 7.27 | 1.55 |
| | control group | 21 | 86.67 | 5.99 | 1.31 |
| ch4 quiz | experimental group | 22 | 90.23 | 7.48 | 1.59 |
| | control group | 21 | 83.10 | 7.33 | 1.60 |
| ch5 quiz | experimental group | 22 | 85.23 | 9.19 | 1.96 |
| | control group | 21 | 81.19 | 8.79 | 1.92 |
| ch6 quiz | experimental group | 22 | 89.77 | 8.66 | 1.85 |
| | control group | 21 | 86.90 | 4.60 | 1.01 |
| ch13 quiz | experimental group | 22 | 90.23 | 8.38 | 1.79 |
| | control group | 21 | 84.76 | 8.87 | 1.94 |
| Pre-assessment | experimental group | 22 | 41.45 | 12.07 | 2.57 |
| | control group | 21 | 44.95 | 8.66 | 1.89 |
| Final test | experimental group | 22 | 87.64 | 7.24 | 1.54 |
| | control group | 21 | 77.71 | 9.68 | 2.11 |
| Final grade | experimental group | 22 | 4.00 | 0.00 | 0.00 |
| | control group | 21 | 3.81 | 0.40 | 0.09 |
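The Standard Error Mean column in Table 1 is simply SD/√n. As an illustrative check (not part of the original analysis; the function name `sem` is mine), the ch1 quiz row can be reproduced in a few lines of Python:

```python
import math

def sem(sd, n):
    """Standard error of the mean: SD / sqrt(n)."""
    return sd / math.sqrt(n)

# ch1 quiz row of Table 1
print(round(sem(5.68, 22), 2))   # 1.21 (experimental group)
print(round(sem(10.18, 21), 2))  # 2.22 (control group)
```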
Table 2
t-Test for Equality of Means

| Measure | Variances | t | df | Sig. | Mean Difference | Std. Error Difference | 95% CI Lower | 95% CI Upper |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ch1 quiz | Equal variances assumed | 5.491 | 41 | .000 | 13.72 | 2.50 | 8.68 | 18.77 |
| | Equal variances not assumed | 5.423 | 31.033 | .000 | 13.72 | 2.53 | 8.56 | 18.88 |
| ch2 quiz | Equal variances assumed | 2.087 | 41 | .043 | 4.39 | 2.11 | 0.14 | 8.65 |
| | Equal variances not assumed | 2.061 | 31.178 | .048 | 4.39 | 2.13 | 0.05 | 8.74 |
| ch3 quiz | Equal variances assumed | 2.307 | 41 | .026 | 4.70 | 2.04 | 0.59 | 8.81 |
| | Equal variances not assumed | 2.318 | 40.159 | .026 | 4.70 | 2.03 | 0.60 | 8.79 |
| ch4 quiz | Equal variances assumed | 3.157 | 41 | .003 | 7.13 | 2.26 | 2.57 | 11.69 |
| | Equal variances not assumed | 3.159 | 40.969 | .003 | 7.13 | 2.26 | 2.57 | 11.69 |
| ch5 quiz | Equal variances assumed | 1.471 | 41 | .149 | 4.04 | 2.75 | -1.51 | 9.58 |
| | Equal variances not assumed | 1.472 | 41.000 | .149 | 4.04 | 2.74 | -1.50 | 9.57 |
| ch6 quiz | Equal variances assumed | 1.347 | 41 | .185 | 2.87 | 2.13 | -1.43 | 7.17 |
| | Equal variances not assumed | 1.365 | 32.307 | .182 | 2.87 | 2.10 | -1.41 | 7.15 |
| ch13 quiz | Equal variances assumed | 2.078 | 41 | .044 | 5.47 | 2.63 | 0.15 | 10.78 |
| | Equal variances not assumed | 2.075 | 40.555 | .044 | 5.47 | 2.63 | 0.14 | 10.79 |
| Pre-assessment | Equal variances assumed | -1.087 | 41 | .283 | -3.50 | 3.22 | -9.99 | 3.00 |
| | Equal variances not assumed | -1.096 | 38.129 | .280 | -3.50 | 3.19 | -9.96 | 2.96 |
| Final test | Equal variances assumed | 3.818 | 41 | .000 | 9.92 | 2.60 | 4.67 | 15.17 |
| | Equal variances not assumed | 3.792 | 37.013 | .001 | 9.92 | 2.62 | 4.62 | 15.22 |
| Final grade | Equal variances assumed | 2.222 | 41 | .032 | 0.19 | 0.09 | 0.02 | 0.36 |
| | Equal variances not assumed | 2.169 | 20.000 | .042 | 0.19 | 0.09 | 0.01 | 0.37 |
Results in Table 2 revealed that, between the online and traditional sections, no significant differences were found on the chapter 5 quiz (t(41) = 1.47, p = .15) or the chapter 6 quiz (t(41) = 1.35, p = .18). However, significant differences between the sections were found on the other five quizzes (chapters 1, 2, 3, 4, and 13) and on the final test. Specifically, for the chapter 1 quiz, t(41) = 5.49, p < .001; for the chapter 2 quiz, t(41) = 2.09, p = .04; for the chapter 3 quiz, t(41) = 2.31, p = .03; for the chapter 4 quiz, t(41) = 3.16, p = .003; for the chapter 13 quiz, t(41) = 2.08, p = .04; and for the final test, t(41) = 3.82, p < .001. For learners' final grades, t(41) = 2.22, p = .03. Thus, overall, the null hypothesis described previously in this study was not supported.
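The "Equal variances assumed" rows in Table 2 follow the standard pooled-variance independent-samples t-test, and the "not assumed" rows follow Welch's t-test with Welch-Satterthwaite degrees of freedom. As a check, a minimal pure-Python sketch (no SPSS or scipy; the function names are mine) reproduces the chapter 1 quiz row from the summary statistics in Table 1:

```python
import math

def pooled_t(m1, s1, n1, m2, s2, n2):
    """Independent-samples t-test assuming equal variances (pooled SD)."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df  # pooled variance
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))           # std. error of the difference
    return (m1 - m2) / se, df, se

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t-test with Welch-Satterthwaite degrees of freedom."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    se = math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return (m1 - m2) / se, df, se

# ch1 quiz summary statistics from Table 1
t, df, se = pooled_t(96.82, 5.68, 22, 83.10, 10.18, 21)
print(round(t, 2), df)                    # 5.49 41
# 95% CI for the mean difference; t_crit(.975, df=41) ~ 2.0195 (table value)
print(round(t * se - 2.0195 * se, 2),
      round(t * se + 2.0195 * se, 2))     # 8.67 18.77
tw, dfw, _ = welch_t(96.82, 5.68, 22, 83.10, 10.18, 21)
print(round(tw, 2), round(dfw, 1))        # 5.42 31.0
```

This reproduces t = 5.49 on df = 41 for the equal-variance row and t = 5.42 on df ≈ 31.0 for the Welch row; the tiny discrepancies from the tabled values (e.g., a CI lower bound of 8.67 vs. 8.68) come from SPSS carrying more decimal places internally than the two-decimal summary statistics in Table 1.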
In addition, regarding the students' perceptions of and satisfaction with the course, the same 18-item student evaluation form was used by the lead investigator's department in both sections at the end of the course. The quantitative evaluation results revealed that the averages in the two sections were about the same (4.5 on a 5-point scale). However, students' qualitative comments indicated that students in the online section were more motivated than those in the traditional section. For instance, a few students in the traditional section complained about the content and frequency of the chapter quizzes, while those in the online section did not. In addition, students in the online section expressed greater satisfaction with the effectiveness of their learning in the course; a majority thought they had learned more than they would have in a traditional section. These qualitative comments were consistent with the quantitative findings described previously.
The results of this study indicate that there is a significant difference in learning outcomes between online and traditional learners; they do not support the "no significant difference phenomenon" described by Russell (1999). This finding surprised the lead investigator for several reasons. As described previously, an attempt was made to keep the instructional requirements, activities, and content the same in the online and traditional sections. In addition, the teacher in the traditional section also used various technologies, such as presenting course content in class with PowerPoint and allowing students to access and print the teacher's chapter notes in Acrobat (.pdf) format from WebCT before class. However, the results are consistent with the line of research Russell (1999) labeled the "significant difference phenomenon." That is, this study supports that prior line of research and indicates that online instruction can be a viable alternative for higher education, since students can learn at least as well as, and sometimes better than, with traditional instruction.
Results of this study are inconsistent with some prior research. Several factors may account for this:
First, the samples varied. Most studies in this area, including this one, used convenience samples; participants were not randomly selected. Some studies involved undergraduate students, while this study involved graduate students.
Second, such studies have involved a variety of subjects, including accounting, nursing, and construction; this study involved a graduate educational research course.
Third, a variety of online instructional strategies were used. Some studies used only online writing assignments, while this study used a combination of assessment techniques, such as online quizzes/tests, writing, peer critiques, and group projects.
Fourth, a variety of online technologies were used. Some studies used a standard course web site, while others used specialized course management and delivery systems such as Blackboard and WebCT. This study primarily used WebCT for online course delivery. Care should be taken in generalizing these results to other environments without further investigation.
This study supports previous research findings that (a) there can be a significant difference in learning outcomes between online and traditional learners and (b) online instruction can be a viable alternative in higher education. The study has significant practical implications, since many institutions are offering more online courses and programs, and it contributes to the current literature on online instruction and e-learning. If online instruction is found to enhance student learning, more online courses and programs can be proposed; for example, embedded online courses may be used in place of lengthier, more costly traditional courses.
Due to various limitations of the study, care should be taken in generalizing the results to other environments.
__________
* An earlier version of this paper was presented at the International Congress of Psychology in Beijing, China, in August 2004.
**Acknowledgement: This project was partially sponsored by the Illinois Century Content Development Grant from the Illinois Board of Higher Education from April 2002 to June 2003.
Al-Jarf, R. S. (2002). Effect of online learning on struggling ESL college writers. San Antonio, TX: National Educational Computing Conference Proceedings. (ERIC Document Reproduction Service No. ED 475 920).
CEO Forum (2000). The CEO Forum school technology and readiness report [Online]. Washington, DC: CEO Forum. Available: http://www.ceoforum.org/.
Clark, R.E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53, 445-459.
Clark, R. E. (1994). Media will never influence learning. Educational Technology, Research and Development, 42(2), 21-29.
Day, T., Raven, M. R., & Newman, M. E. (1998). The effects of World Wide Web instruction and traditional instruction and learning styles on achievement and changes in student attitudes in a technical writing in agricommunication course. Journal of Agricultural Education, 39(4), 65-75.
Dick, W., Carey, L., & Carey, J. O. (2001). The systematic design of instruction (5th Edition). New York: Addison-Wesley Educational Publishers, Inc.
Gagne, M. & Shepherd, M. (2001). Distance learning in accounting. T. H. E. Journal, 29(9), 58-62.
Johnson, M. (2002). Introductory biology online: Assessing outcomes of two student populations. Journal of College Science Teaching, 31(5), 312-317.
Johnson, S. D., Aragon, S. R., Shaik, N., & Palma-Rivas, N. (2000). Comparative analysis of learner satisfaction and learning outcomes in online and face-to-face learning environments. Journal of Interactive Learning Research, 11(1), 29-49.
Jones, E. (1999). A comparison of an all Web-based class to a traditional class. Texas, USA. (ERIC Document Reproduction Service No. ED 432 286).
Kearsley, G. (2000). Online education: Learning and teaching in cyberspace. Belmont, CA: Wadsworth.
Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42, 7-19.
Liu, Y. (2003a). Improving online interactivity and learning: A constructivist approach. Academic Exchange Quarterly, 7(1), 174-178.
Liu, Y. (2003b). Taking educational research online: Developing an online educational research course. Journal of Interactive Instruction Development, 16(1), 12-20.
McCollum, K. (1997). A professor divides his class in two to test value of online instruction. Chronicle of Higher Education, 43, 23.
Navarro, P., & Shoemaker, J. (1999). The power of cyberlearning: An empirical test. Journal of Computing in Higher Education, 11(1), 33.
Nesler, M. S., Hanner, M. B., Melburg, V., & McGowan, S. (2001). Professional socialization of baccalaureate nursing students: Can students in distance nursing programs become socialized? Journal of Nursing Education, 40(7), 293-302.
Phipps, R., & Merisotis, J. (1999). What's the difference? A review of contemporary research on the effectiveness of distance learning in higher education. Washington, DC, USA: The Institute for Higher Education Policy.
Russell, T. L. (1999). The no significant difference phenomenon. Office of Instructional Telecommunications, North Carolina State University, USA.
Ryan, R. C. (2000). Student assessment comparison of lecture and online construction equipment and methods classes. T. H. E. Journal, 27(6), 78-83.
Schulman, A. H., & Sims, R. L. (1999). Learning in an online format vs. an in-class format: An experimental study. T. H. E. Journal, 26(11), 54-56.
The Institute for Higher Education Policy (2000). Quality on the line: Benchmarks for success in Internet-based distance education. Washington, DC, USA.
Wade, W. (1999). Assessment in distance learning: What do students know and how do we know that they know it? T.H.E. Journal, 27(3), 94-100.
Waits, T., & Lewis L. (2003). Distance education at degree-granting postsecondary institutions: 2000-2001. U.S. Department of Education. Washington, DC, USA: National Center for Education Statistics (NCES Pub 2003-017).
Dr. Yuliang Liu is an assistant professor of instructional technology in the Department of Educational Leadership at Southern Illinois University Edwardsville. His major research interests are distance education, online instruction, and research methodology.
Contact Data:
Yuliang Liu, Ph.D.
Department of Educational Leadership
Southern Illinois University Edwardsville
Edwardsville, Illinois 62026-1125 USA
Phone: (618) 650-3293 Fax: (618) 650-3808 E-mail: yliu@siue.edu