Table 1. Group Statistics, Summer 2003

| Item | Group | N | Mean | Std. Deviation | Std. Error Mean |
| Item 1 | experimental | 22 | 4.50 | .598 | .127 |
| | control | 21 | 4.33 | .577 | .126 |
| Item 2 | experimental | 22 | 4.50 | .598 | .127 |
| | control | 21 | 4.33 | 1.017 | .222 |
| Item 3 | experimental | 22 | 4.32 | .568 | .121 |
| | control | 21 | 4.10 | 1.179 | .257 |
| Item 4 | experimental | 22 | 4.23 | .869 | .185 |
| | control | 21 | 3.71 | 1.146 | .250 |
| Item 5 | experimental | 22 | 4.91 | .294 | .063 |
| | control | 21 | 4.95 | .218 | .048 |
| Item 6 | experimental | 22 | 4.73 | .456 | .097 |
| | control | 21 | 4.81 | .402 | .088 |
| Item 7 | experimental | 22 | 4.55 | .596 | .127 |
| | control | 21 | 4.38 | 1.024 | .223 |
| Item 8 | experimental | 22 | 4.73 | .456 | .097 |
| | control | 21 | 4.48 | .814 | .178 |
| Item 9 | experimental | 22 | 4.50 | .598 | .127 |
| | control | 21 | 3.90 | 1.261 | .275 |
| Item 10 | experimental | 22 | 4.64 | .581 | .124 |
| | control | 21 | 4.33 | 1.065 | .232 |
| Item 11 | experimental | 22 | 4.59 | .503 | .107 |
| | control | 21 | 4.67 | .483 | .105 |
| Item 12 | experimental | 22 | 4.05 | .844 | .180 |
| | control | 21 | 3.90 | .700 | .153 |
| Item 13 | experimental | 22 | 4.55 | .596 | .127 |
| | control | 21 | 4.05 | 1.071 | .234 |
| Item 14 | experimental | 22 | 4.14 | .774 | .165 |
| | control | 21 | 4.38 | 1.024 | .223 |
| Item 15 | experimental | 22 | 4.77 | .429 | .091 |
| | control | 21 | 4.29 | 1.007 | .220 |
| Item 16 | experimental | 22 | 4.55 | .596 | .127 |
| | control | 21 | 4.14 | 1.153 | .252 |
| Item 17 | experimental | 22 | 4.55 | .510 | .109 |
| | control | 21 | 4.33 | .658 | .144 |
Table 2. Independent Samples t-Test Results, Summer 2003. The F and Sig. columns report Levene's test for equality of variances; the remaining columns report the t-test for equality of means.

| Item | Variance assumption | F | Sig. | t | df | Sig. (2-tailed) | Mean Difference | Std. Error Difference |
| Item 1 | Equal variances assumed | .281 | .599 | .929 | 41 | .358 | .167 | .179 |
| | Equal variances not assumed | | | .930 | 40.993 | .358 | .167 | .179 |
| Item 2 | Equal variances assumed | 2.188 | .147 | .659 | 41 | .514 | .167 | .253 |
| | Equal variances not assumed | | | .652 | 32.051 | .519 | .167 | .256 |
| Item 3 | Equal variances assumed | 4.341 | .043 | .796 | 41 | .431 | .223 | .280 |
| | Equal variances not assumed | | | .784 | 28.506 | .440 | .223 | .284 |
| Item 4 | Equal variances assumed | 1.640 | .207 | 1.658 | 41 | .105 | .513 | .309 |
| | Equal variances not assumed | | | 1.648 | 37.279 | .108 | .513 | .311 |
| Item 5 | Equal variances assumed | 1.227 | .274 | -.546 | 41 | .588 | -.043 | .079 |
| | Equal variances not assumed | | | -.550 | 38.686 | .586 | -.043 | .079 |
| Item 6 | Equal variances assumed | 1.603 | .213 | -.626 | 41 | .535 | -.082 | .131 |
| | Equal variances not assumed | | | -.628 | 40.760 | .534 | -.082 | .131 |
| Item 7 | Equal variances assumed | 2.381 | .130 | .648 | 41 | .521 | .165 | .254 |
| | Equal variances not assumed | | | .640 | 31.856 | .527 | .165 | .257 |
| Item 8 | Equal variances assumed | 5.264 | .027 | 1.256 | 41 | .216 | .251 | .200 |
| | Equal variances not assumed | | | 1.241 | 31.121 | .224 | .251 | .202 |
| Item 9 | Equal variances assumed | 6.822 | .013 | 1.993 | 41 | .053 | .595 | .299 |
| | Equal variances not assumed | | | 1.963 | 28.256 | .060 | .595 | .303 |
| Item 10 | Equal variances assumed | 4.764 | .035 | 1.166 | 41 | .250 | .303 | .260 |
| | Equal variances not assumed | | | 1.151 | 30.634 | .259 | .303 | .263 |
| Item 11 | Equal variances assumed | .966 | .331 | -.503 | 41 | .618 | -.076 | .151 |
| | Equal variances not assumed | | | -.504 | 40.998 | .617 | -.076 | .150 |
| Item 12 | Equal variances assumed | 2.821 | .101 | .593 | 41 | .556 | .141 | .237 |
| | Equal variances not assumed | | | .596 | 40.240 | .555 | .141 | .236 |
| Item 13 | Equal variances assumed | 1.203 | .279 | 1.895 | 41 | .065 | .498 | .263 |
| | Equal variances not assumed | | | 1.871 | 30.982 | .071 | .498 | .266 |
| Item 14 | Equal variances assumed | .673 | .417 | -.886 | 41 | .381 | -.245 | .276 |
| | Equal variances not assumed | | | -.881 | 37.237 | .384 | -.245 | .278 |
| Item 15 | Equal variances assumed | 12.805 | .001 | 2.080 | 41 | .044 | .487 | .234 |
| | Equal variances not assumed | | | 2.046 | 26.761 | .051 | .487 | .238 |
| Item 16 | Equal variances assumed | 5.336 | .026 | 1.449 | 41 | .155 | .403 | .278 |
| | Equal variances not assumed | | | 1.429 | 29.665 | .164 | .403 | .282 |
| Item 17 | Equal variances assumed | 1.356 | .251 | 1.185 | 41 | .243 | .212 | .179 |
| | Equal variances not assumed | | | 1.178 | 37.684 | .246 | .212 | .180 |
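As a cross-check on the t-test results above, both the pooled (equal-variances) and Welch (unequal-variances) statistics can be recomputed from the summary statistics alone. The following Python sketch is illustrative only and is not from the original study; it recomputes the Item 1 row of the Summer 2003 results from the rounded means and standard deviations in Table 1, so the recomputed values agree with the reported ones only approximately.

```python
import math

def pooled_t(m1, s1, n1, m2, s2, n2):
    """Equal-variances t statistic and its degrees of freedom."""
    # Pooled variance weights each group's variance by its df.
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, n1 + n2 - 2

def welch_t(m1, s1, n1, m2, s2, n2):
    """Unequal-variances t statistic with Welch-Satterthwaite df."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Item 1, Summer 2003, using the rounded table values
# (4.333 approximates the rounded control mean of 4.33).
t_eq, df_eq = pooled_t(4.500, 0.598, 22, 4.333, 0.577, 21)
t_uneq, df_uneq = welch_t(4.500, 0.598, 22, 4.333, 0.577, 21)
print(round(t_eq, 2), df_eq)                # close to the reported .929, 41
print(round(t_uneq, 2), round(df_uneq, 1))  # close to the reported .930, 40.993
```

Because the table reports means and standard deviations to only two or three decimal places, recomputed t values land within about .01 of the published figures, while the Welch degrees of freedom reproduce almost exactly.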
In addition, the last item from both sets of data described previously showed that a majority of students in both the online and traditional sections rated this course as either “Moderately Difficult” or “Very Difficult” among the five options. The numeric averages in the two online sections and the two traditional sections were all between 4 (Moderately Difficult) and 5 (Very Difficult) (see the last item in Tables 1 and 3). This result was not surprising to the author, since this is a graduate course that requires rigorous instruction; indeed, it seems to be one of the most difficult and challenging courses in educational graduate programs. Results of this study showed that research hypothesis 1 was supported. This is consistent with findings of other studies: recent research has consistently found no systematic differences between online and traditional paper-based student evaluations of instruction (e.g., Carini et al., 2003; Hardy, 2003; Thorpe, 2002), even when incentives such as grades were offered to students for completing online evaluations (Dommeyer et al., 2004).
Table 3. Group Statistics, Fall 2004

| Item | Group | N | Mean | Std. Deviation | Std. Error Mean |
| Item 1 | experimental | 19 | 4.16 | 1.302 | .299 |
| | control | 21 | 4.48 | .873 | .190 |
| Item 2 | experimental | 19 | 4.58 | 1.261 | .289 |
| | control | 21 | 4.67 | .796 | .174 |
| Item 3 | experimental | 19 | 4.53 | 1.264 | .290 |
| | control | 21 | 4.67 | .966 | .211 |
| Item 4 | experimental | 19 | 4.42 | .961 | .221 |
| | control | 21 | 4.19 | 1.030 | .225 |
| Item 5 | experimental | 19 | 4.00 | 1.247 | .286 |
| | control | 21 | 4.19 | 1.078 | .235 |
| Item 6 | experimental | 19 | 4.42 | 1.017 | .233 |
| | control | 21 | 4.19 | 1.123 | .245 |
| Item 7 | experimental | 19 | 4.42 | 1.170 | .268 |
| | control | 21 | 4.57 | .926 | .202 |
| Item 8 | experimental | 19 | 4.74 | .933 | .214 |
| | control | 21 | 4.67 | .966 | .211 |
| Item 9 | experimental | 19 | 4.58 | .961 | .221 |
| | control | 21 | 4.57 | .978 | .213 |
| Item 10 | experimental | 19 | 4.58 | .961 | .221 |
| | control | 21 | 4.62 | .921 | .201 |
| Item 11 | experimental | 19 | 4.26 | .991 | .227 |
| | control | 21 | 4.52 | .981 | .214 |
| Item 12 | experimental | 19 | 4.47 | .964 | .221 |
| | control | 21 | 4.43 | 1.028 | .224 |
| Item 13 | experimental | 19 | 4.47 | 1.264 | .290 |
| | control | 21 | 4.48 | 1.030 | .225 |
| Item 14 | experimental | 19 | 4.37 | .955 | .219 |
| | control | 21 | 4.29 | 1.102 | .240 |
| Item 15 | experimental | 19 | 4.37 | 1.012 | .232 |
| | control | 21 | 4.33 | .966 | .211 |
| Item 16 | experimental | 19 | 4.42 | .961 | .221 |
| | control | 21 | 3.95 | 1.203 | .263 |
| Item 17 | experimental | 19 | 4.68 | .946 | .217 |
| | control | 21 | 4.43 | .978 | .213 |
| Item 18 | experimental | 19 | 4.37 | 1.012 | .232 |
| | control | 21 | 4.57 | .870 | .190 |
| Item 19 | experimental | 19 | 4.26 | .933 | .214 |
| | control | 21 | 4.14 | .655 | .143 |
Table 4. Independent Samples t-Test Results, Fall 2004. The F and Sig. columns report Levene's test for equality of variances; the remaining columns report the t-test for equality of means.

| Item | Variance assumption | F | Sig. | t | df | Sig. (2-tailed) | Mean Difference | Std. Error Difference |
| Item 1 | Equal variances assumed | 1.662 | .205 | -.916 | 38 | .365 | -.318 | .347 |
| | Equal variances not assumed | | | -.898 | 30.998 | .376 | -.318 | .354 |
| Item 2 | Equal variances assumed | .710 | .405 | -.266 | 38 | .792 | -.088 | .330 |
| | Equal variances not assumed | | | -.260 | 29.822 | .797 | -.088 | .337 |
| Item 3 | Equal variances assumed | .683 | .414 | -.397 | 38 | .694 | -.140 | .354 |
| | Equal variances not assumed | | | -.392 | 33.614 | .698 | -.140 | .358 |
| Item 4 | Equal variances assumed | .228 | .636 | .729 | 38 | .470 | .231 | .316 |
| | Equal variances not assumed | | | .732 | 37.958 | .469 | .231 | .315 |
| Item 5 | Equal variances assumed | .076 | .784 | -.518 | 38 | .607 | -.190 | .368 |
| | Equal variances not assumed | | | -.514 | 35.824 | .610 | -.190 | .370 |
| Item 6 | Equal variances assumed | .278 | .601 | .678 | 38 | .502 | .231 | .340 |
| | Equal variances not assumed | | | .681 | 37.999 | .500 | .231 | .338 |
| Item 7 | Equal variances assumed | .800 | .377 | -.453 | 38 | .653 | -.150 | .332 |
| | Equal variances not assumed | | | -.448 | 34.276 | .657 | -.150 | .336 |
| Item 8 | Equal variances assumed | .164 | .687 | .233 | 38 | .817 | .070 | .301 |
| | Equal variances not assumed | | | .234 | 37.823 | .817 | .070 | .301 |
| Item 9 | Equal variances assumed | .021 | .887 | .024 | 38 | .981 | .008 | .307 |
| | Equal variances not assumed | | | .024 | 37.726 | .981 | .008 | .307 |
| Item 10 | Equal variances assumed | .032 | .860 | -.135 | 38 | .894 | -.040 | .298 |
| | Equal variances not assumed | | | -.134 | 37.210 | .894 | -.040 | .298 |
| Item 11 | Equal variances assumed | .007 | .935 | -.835 | 38 | .409 | -.261 | .312 |
| | Equal variances not assumed | | | -.835 | 37.518 | .409 | -.261 | .312 |
| Item 12 | Equal variances assumed | .207 | .652 | .143 | 38 | .887 | .045 | .316 |
| | Equal variances not assumed | | | .143 | 37.944 | .887 | .045 | .315 |
| Item 13 | Equal variances assumed | .103 | .750 | -.007 | 38 | .995 | -.003 | .363 |
| | Equal variances not assumed | | | -.007 | 34.831 | .995 | -.003 | .367 |
| Item 14 | Equal variances assumed | .475 | .495 | .252 | 38 | .802 | .083 | .328 |
| | Equal variances not assumed | | | .254 | 37.939 | .801 | .083 | .325 |
| Item 15 | Equal variances assumed | .024 | .877 | .112 | 38 | .911 | .035 | .313 |
| | Equal variances not assumed | | | .112 | 37.179 | .911 | .035 | .314 |
| Item 16 | Equal variances assumed | 2.943 | .094 | 1.351 | 38 | .185 | .469 | .347 |
| | Equal variances not assumed | | | 1.367 | 37.458 | .180 | .469 | .343 |
| Item 17 | Equal variances assumed | 1.134 | .294 | .838 | 38 | .407 | .256 | .305 |
| | Equal variances not assumed | | | .840 | 37.820 | .406 | .256 | .304 |
| Item 18 | Equal variances assumed | .161 | .691 | -.682 | 38 | .499 | -.203 | .298 |
| | Equal variances not assumed | | | -.677 | 35.747 | .503 | -.203 | .300 |
| Item 19 | Equal variances assumed | .544 | .465 | .476 | 38 | .637 | .120 | .253 |
| | Equal variances not assumed | | | .467 | 31.899 | .643 | .120 | .257 |
In addition, the high means in the author’s student evaluations for both the online and traditional sections may be attributable to several factors. First, the author used constructivist learning theory as the major foundation for instructional strategy in both the online and traditional sections (see Liu, 2003a, 2003b). Second, the author took students’ learning styles and needs into account during the instructional process: he conducted a student background survey during the first week and a course feedback survey at midterm, and the results of those surveys helped the instructor adapt to students’ learning needs. This finding is consistent with those reported by other researchers; Spencer and Schmelkin (2002) found that responding to students about instructional adaptations made as a result of midterm feedback has positive effects.
Results also indicated that no statistically significant differences in response rates existed between the online section and the traditional section in the summer semester of 2003 or in the fall semester of 2004: all students in both the online and traditional sections participated in the study. Thus, research hypothesis 2 was not supported. This result was surprising to the author, since no incentives were offered for completing the student evaluation of instruction in either section; students were simply requested to complete the evaluations during the last two weeks of the semester. This finding is not consistent with findings of other recent studies. According to Dommeyer et al. (2004), students’ response rates to online student evaluations are generally lower than those of traditional paper-based surveys. To increase student response rates in online evaluations, various approaches have been used in recent research, including grade incentives (e.g., Dommeyer et al., 2004) and the sweepstakes approach (e.g., Bosnjak & Tuten, 2003; Cobanoglu & Cobanoglu, 2003).
The numeric results in Table 5 indicate significant differences in both the number and the length of students’ qualitative comments. In the summer semester of 2003, there were significant differences between the online and traditional sections in the number of qualitative comments (χ² = 4.17, p = .04) and in the number of words in those comments (χ² = 433.95, p = .00). In the online section, 20 students (91%) wrote qualitative comments totaling 1233 words, while in the traditional section only 9 students (43%) wrote qualitative comments, totaling 393 words. Similarly, in the fall semester of 2004, there were significant differences in the number of qualitative comments (χ² = 6.00, p = .01) and in the number of words in those comments (χ² = 835.28, p = .00). In the online section, 18 students (95%) wrote qualitative comments totaling 1192 words, while in the traditional section only 6 students (29%) wrote qualitative comments, totaling 138 words.
Table 5. Students’ Qualitative Comments by Section and Semester

| Semester | Section | Number of students | Number of qualitative comments | Number of words in all qualitative comments | Percentage of students who wrote comments |
| Summer 2003 | Online section | 22 | 20 | 1233 | 91% |
| | Traditional section | 21 | 9 | 393 | 43% |
| | χ² | | 4.17 | 433.95 | |
| | p | | .04 | .00 | |
| Fall 2004 | Online section | 19 | 18 | 1192 | 95% |
| | Traditional section | 21 | 6 | 138 | 29% |
| | χ² | | 6.00 | 835.28 | |
| | p | | .01 | .00 | |
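The χ² values in Table 5 are consistent with one-way (goodness-of-fit) chi-square tests that compare the observed online and traditional counts against equal expected frequencies. That interpretation is an assumption, since the study does not spell out the form of the test, but a minimal Python sketch under that assumption reproduces the reported statistics exactly:

```python
def chisq_equal(counts):
    """One-way chi-square statistic against equal expected frequencies."""
    expected = sum(counts) / len(counts)  # each cell expected to hold the mean
    return sum((o - expected) ** 2 / expected for o in counts)

# Summer 2003: comment counts (online vs. traditional), then word counts
print(round(chisq_equal([20, 9]), 2))      # 4.17, as reported in Table 5
print(round(chisq_equal([1233, 393]), 2))  # 433.95, as reported
# Fall 2004
print(round(chisq_equal([18, 6]), 2))      # 6.0, as reported
print(round(chisq_equal([1192, 138]), 2))  # 835.28, as reported
```

One caveat on the design: applying a chi-square test to total word counts treats every word as an independent observation, which overstates the evidence; the sketch only shows how the published numbers were apparently derived.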
In addition, students’ qualitative comments in the summer semester of 2003 and the fall semester of 2004 indicated that students in the online section were more motivated than those in the traditional section. For instance, a few students in the traditional section complained about the content and frequency of the chapter quizzes, while those in the online section did not. Moreover, students in the online section wrote more detailed comments and expressed greater satisfaction with the effectiveness of their learning in this course; a majority of them thought they had learned more in this course than they would have in a traditional section. These qualitative comments were consistent with the research findings described previously.
These results support previous findings that online students write more detailed qualitative comments than their counterparts in traditional sections; thus, research hypothesis 3 in this study was supported. Hardy (2003) found that, despite the lower response rate of the online evaluation approach, the students who do respond write more detailed comments online. These comments provide a valuable resource for the instructor to improve teaching and learning in future online course offerings. In addition, two of the six courses Hardy studied had a higher percentage of positive comments than the classes evaluated on paper. McGhee and Lowell (2003) found that online students reported more effort in online courses and gave overall evaluations similar to those of their counterparts in traditional courses.
This study supports previous research findings that (a) there is no significant difference in student evaluations of instruction between online and traditional learners and (b) online students write more detailed qualitative comments than their counterparts in traditional sections. However, this study found no statistically significant difference in response rate between the online section and the traditional section. Based on results from this study, it can be concluded that online instruction can be a viable alternative for higher education. This study has significant practical international implications for higher education, and it contributes to the current literature on online instruction and e-learning. However, the results of the present study are limited to one educational research course taught by one instructor in two different semesters; thus, care should be taken in generalizing the results to other environments, such as courses in different subjects.
References
Ballantyne, C. (2003). Online evaluations of teaching: An examination of current practice and considerations for the future. New Directions for Teaching & Learning, 96, 103-113.
Bosnjak, M. & Tuten, T.L. (2003). Prepaid and promised incentives in web surveys - An experiment. Social Science Computer Review, 21(2), 208-217.
Bullock, C. D. (2003). Online collection of midterm student feedback. New Directions for Teaching & Learning, 96, 95-103.
Cantera, L. (2002). Y puts teacher evaluations online. NewsNet by Brigham Young University. Retrieved March 10, 2005 from http://newsnet.byu.edu/story.cfm/41005.
Carini, R. M., Hayek, J. C., Kuh, G. D., & Ouimet, J. A. (2003). College student responses to web and paper surveys: does mode matter? Research in Higher Education, 44(1), 1–19.
Cobanoglu, C., & Cobanoglu, N. (2003). The effect of incentives in web surveys: Application and ethical considerations. International Journal of Market Research, 45(4), 1-14.
Cooper, L. (1999). Anatomy of an Online Course. T. H. E. Journal, 26(7), 45-51.
Dick, W., Carey, L., & Carey, J. O. (2001). The systematic design of instruction (5th Edition). New York: Addison-Wesley Educational Publishers, Inc.
Dommeyer, C. J., Baum, P., Hanna, R. W., & Chapman, K. S. (2004). Gathering faculty teaching evaluations by in-class and online surveys: their effects on response rates and evaluations. Assessment & Evaluation in Higher Education, 29(5), 611-624.
Hardy, N. (2003). Online ratings: Fact and fiction. New Directions for Teaching & Learning, 96, 31-39.
Harrington, C. F. & Reasons, S.G. (2005). Online student evaluation of teaching for distance education: A perfect match? The Journal of Educators Online, 2, 1. Retrieved March 10, 2005 from http://www.thejeo.com/ReasonsFinal.pdf.
Hmieleski, K. (2000). Barriers to online evaluation: Surveying the nation’s top 200 most wired colleges. Troy, N.Y.: Interactive and Distance Education Assessment Laboratory, Rensselaer Polytechnic Institute (Unpublished Report).
Hoffman, K. M. (2003). Online course evaluation and reporting in higher education. New Directions for Teaching & Learning, 96, 25-30.
Institute for Higher Education Policy (2000). Quality on the line: Benchmarks for success in internet-based distance education. Washington, DC, USA.
Kearsley, G. (2000). Online education: Learning and teaching in cyberspace. Belmont, CA: Wadsworth.
Liu, Y. (2003a). Improving online interactivity and learning: A constructivist approach. Academic Exchange Quarterly, 7(1), 174-178.
Liu, Y. (2003b). Taking educational research online: Developing an online educational research course. Journal of Interactive Instruction Development, 16(1), 12-20.
Liu, Y. (2005a). Effects of online instruction vs. traditional instruction on students’ learning. International Journal of Instructional Technology and Distance Learning, 2(3), Article 006. Retrieved March 21, 2005, from http://www.itdl.org/Journal/Mar_05/article06.htm.
Liu, Y. (2005b). Impact of online instruction on teachers’ learning and attitudes toward technology integration. The Turkish Online Journal of Distance Education, 6(4), Article 007. Retrieved October 27, 2005, from http://tojde.anadolu.edu.tr/.
Mayer, J., & George, A. (2003). The University of Idaho’s online course evaluation system: Going forward! Paper presented at the 43rd Annual Forum of the Association for Institutional Research, Tampa, Fla., May 2003.
McGhee, D. E., & Lowell, N. (2003). Psychometric properties of student ratings of instruction in online and on-campus courses. New Directions for Teaching & Learning, 96, 39-48.
McGourty, J., Scoles, K. & Thorpe, S. (2002). Web-based student evaluation of instruction: promises and pitfalls. Paper presented at the 42nd Annual Forum of the Association for Institutional Research, Toronto, Ontario, June 2002.
Sorenson, L., & Reiner, C. (2003). Charting the unchartered seas of online student ratings of instruction. New Directions for Teaching & Learning, 96, 1-25.
Spencer, K. J., & Schmelkin, L. P. (2002). Student perspectives on teaching and its evaluation. Assessment and Evaluation in Higher Education, 27(5), 397–409.
Thorpe, S.W. (2002). Online student evaluation of instruction: An investigation of non-response bias. Paper presented at the 42nd Annual Forum of the Association for Institutional Research in Toronto Canada. Retrieved June 26, 2003, from http://www.airweb.org/forum02/550.pdf
Thurmond, V. A., Wambach, K., Connors, H. R., & Frey, B. B. (2002). Evaluation of student satisfaction: Determining the impact of a Web-based environment by controlling for student characteristics. The American Journal of Distance Education, 16, 169-189.
Waits, T., & Lewis L. (2003). Distance education at degree-granting postsecondary institutions: 2000-2001. U.S. Department of Education. Washington, DC, USA: National Center for Education Statistics (NCES Pub 2003-017).
Acknowledgement
This study was supported by the Funded University Research (FUR) internal grants from Southern Illinois University, Edwardsville, Illinois, USA in 2003-2004.
Yuliang Liu is Assistant Professor and graduate program director of Instructional Design and Learning Technologies at Southern Illinois University, Edwardsville, Illinois 62026, USA.
Phone: (618) 650-3293; Fax: (618) 650-3808; E-mail: yliu@siue.edu.