Editor’s Note: Dr. Liu regularly reports his latest research through this Journal. In this instance he validates the results of online evaluations as compared to pencil-and-paper evaluations for classroom and online courses.

A Comparison Study of
Online versus Traditional Student Evaluation
of Instruction

Yuliang Liu

Abstract

This comparative study investigated whether there were significant differences in student evaluation of instruction between an online WebCT section and a traditional section of a graduate educational research course taught simultaneously by the same instructor. The study used a quasi-experimental design to collect two sets of data. First, a student evaluation of instruction survey was administered online for the online WebCT section and on paper for the traditional section of an identical course at a Midwestern public university in the USA during the summer semester of 2003. Second, a student evaluation of instruction survey was administered in the same way for the two sections of an identical course in the fall semester of 2004. Results from both sets of data revealed no significant differences in student evaluation of instruction between the online and traditional sections of the identical course. Implications of this study are discussed.

Keywords: student evaluation, online instruction, traditional instruction, quasi-experimental design, no significant difference.

Introduction

Student evaluation of instruction is widely used in most colleges and universities in the USA. It serves two major groups for different purposes: (a) administrators use it to evaluate faculty teaching effectiveness and to make personnel decisions (e.g., tenure and promotion), and (b) faculty use it to improve teaching. Typically, students are asked to complete the evaluation of instruction on a paper form, using No. 2 pencils, in the final week of each semester. This format has begun to change. With the rapid development of computing and Web technologies, many institutions now offer online courses. According to Waits and Lewis (2003), distance education has grown quickly in recent years. In the 2000-2001 academic year, 56% of all 2-year and 4-year institutions offered distance education courses, and an additional 12% of all institutions planned to offer such courses in the next 3 years. Recent studies (Cooper, 1999; Thurmond, Wambach, Connors, & Frey, 2002) have indicated that students are satisfied with online courses, which suggests that Web-based technology is an acceptable platform for learning and instruction.

Online student evaluation of instruction has received increasing attention in recent years. In 2000, Hmieleski surveyed the 200 most wired institutions in the USA and found that 98% of responding institutions still used the paper-based method as the major approach to student evaluation, indicating that online student evaluation was extremely limited outside distance education programs. Since Hmieleski’s study, however, some universities have explored other methods of collecting and reporting student evaluation data (Bullock, 2003; Hardy, 2003; Hoffman, 2003). Hardy reported that Northwestern University implemented a campus-wide system for online student evaluation in 1999 and found that the average numerical scores for the online evaluations were approximately the same as those for the paper-based evaluations. Some case studies, however, indicated that students in online courses had a higher overall level of satisfaction. For instance, Cooper (1999) found that all students in her online anatomy course were very satisfied with the course and that the class met their expectations.

Online student evaluation has been noted as a viable alternative to the traditional paper-based method and has numerous advantages such as time efficiency, flexibility, detailed written comments, and low expense (Ballantyne, 2003; Dommeyer, Baum, Hanna, & Chapman, 2004; Sorenson & Reiner, 2003). Thus in recent years, the number of studies on online student evaluation in the literature has grown (Cantera, 2002; Hoffman, 2003; Mayer & George, 2003; McGourty, Scoles, & Thorpe, 2002). According to Hoffman (2003), although paper-based evaluation remains the major method for student evaluation data collection in traditional face-to-face courses, the use of the Internet as a primary method for collecting student evaluation data has increased approximately 8% since 2000. In addition, some institutions use both paper-based and online methods for collecting student evaluation data. At these institutions, online courses are increasingly being evaluated online and the number of face-to-face courses evaluated online has also increased.

The question of whether there are significant differences in student evaluation of instruction between online and paper-based methods is important because previous research indicates that the response rate for online evaluations in an online section is likely to be lower than that for paper-based evaluations in a traditional section. Dommeyer et al. (2004) conducted a study involving 16 instructors and undergraduate business majors. They found that students’ response rate to the online student evaluation was generally lower than that for the traditional paper-based survey. However, according to Dommeyer et al., the difference between the online and paper-based evaluations was minimal if a grade incentive was used to encourage online responses. Other recent research has indicated that the response rate to online surveys can also be increased with other approaches, such as a sweepstakes incentive (Bosnjak & Tuten, 2003; Cobanoglu & Cobanoglu, 2003).

Recent studies have indicated that the most important factor affecting student evaluation of instruction is the online learning environment. Thurmond et al. (2002) conducted a study to determine the impact of a Web-based class while controlling for student characteristics. They found that the virtual environment in the online course had a greater impact on student satisfaction than student characteristics did. According to Thurmond et al., the online instructor has complete control of the virtual environment. In addition, principles of good practice (e.g., active learning and timely feedback) in traditional classrooms also apply to the virtual classroom. This implies that an instructor with good teaching practices in the traditional classroom will be able to transfer those practices to the virtual classroom as well. Thurmond et al.’s perspective has been supported by later studies. McGhee and Lowell (2003) compared online student evaluations in an online course with paper-based evaluations in a traditional course and found that any differences in student evaluations were likely related to differences in the instructional environment.

Other comparison studies show that students in online courses give similar evaluations of instruction to their counterparts in traditional courses. Hardy (2003) compared six courses evaluated both online and on paper and found little or no overall difference in the average numerical scores or in the numbers of positive, negative, and mixed comments between online and paper-based student evaluations. Hardy also found that the students who did respond wrote more detailed comments online, in spite of the lower response rate. These comments provided a valuable resource for the instructor to improve teaching and learning in future course offerings. In accordance with previous studies, more recent studies have found no systematic differences between online and traditional paper-based student evaluations of instruction (e.g., Carini, Hayek, Kuh, & Ouimet, 2003; Thorpe, 2002), even when different incentives were offered to students for completing online evaluations (Dommeyer et al., 2004).

The above literature review indicates that (a) student evaluation of instruction in online courses is generally similar to that in traditional courses if both are taught by the same instructor, (b) the response rate for online student evaluation in the online section is likely to be lower than that in the traditional section, and (c) students in the online section tend to write more detailed comments related to the course than their counterparts in the traditional section. Thus, the purpose of this study was to investigate possible significant differences related to these three issues in student evaluation of instruction in a master’s-level educational research course, based on the course delivery method: online evaluation for the online section and paper-based evaluation for the traditional section.

The specific research hypotheses in this study are stated as follows:

Hypothesis 1: There was no statistically significant difference in student evaluation in a graduate educational research course between the online section and the traditional section if both sections were taught by the same instructor in the same semester.

Hypothesis 2: The response rate in the online student evaluation in the online section was lower than that in the traditional section.

Hypothesis 3: Students in the online section wrote more detailed comments related to the course than their counterparts in the traditional section.
 

Method

Participants

The participants in this study were recruited through convenience sampling on two occasions. The first set of data was collected in the summer semester of 2003. In that semester, the author was assigned to teach one online section and one traditional section of the educational research course simultaneously. All students who self-selected to enroll in this 10-week course during the summer semester of 2003 were solicited in the first week of the semester for participation in this study. The educational research course is a required core course at the master’s level in education at a Midwestern state university in the United States. Students in this course came from different graduate programs in education. Twenty-four students enrolled in the online section, but two of them withdrew within the first two weeks due to time commitments and unexpected family issues, leaving twenty-two students in the online section for the final analysis; twenty-one students enrolled in the traditional section. Thus, a total of 43 participants across both sections were recruited for the study. Participants in both sections completed consent forms and demographic surveys in the first week. A pretest of course content was administered in both sections. A preliminary analysis of the pretest revealed that although the traditional section scored slightly higher than the online section, no significant difference was detected between the two sections.

The second set of data was collected in the fall semester of 2004. In that semester, the author was again assigned to teach one online section and one traditional section of the same course simultaneously. Similarly, all students who self-selected to enroll in this 16-week course during that semester were solicited in the first week for participation in this study. Students in this course came from different graduate programs in education. Nineteen students enrolled in the online section and twenty-one in the traditional section, for a total of 40 participants across both sections. Participants in both sections completed consent forms and demographic surveys in the first week. A pretest of course content was administered in both sections, and a preliminary analysis revealed no significant difference between the two sections.

Instruments

Although many researchers (e.g., Harrington & Reasons, 2005) have recently proposed developing online student evaluations of instruction designed specifically for distance education courses, this study used the existing student evaluation of instruction forms from the author’s department. The department decided to use the same student evaluation of instruction for both traditional and online courses: online courses were evaluated online via WebCT, and traditional courses were evaluated with the paper-based approach in the traditional classroom during the last two weeks of each semester. Because both types of courses used the same student evaluation survey, the survey had the same validity and reliability in both settings. Administration of the student evaluation was anonymous and confidential. During administration in the traditional sections, the instructor was required to leave the classroom, and a student volunteer sealed the completed evaluation surveys in an envelope and returned it to the department secretary. In the online section on WebCT, students’ evaluation results were not linked to identifying information such as their names.

The student evaluation survey used in the summer semester of 2003 had 16 five-point Likert scale items: 1‑Poor, 2‑Fair, 3‑Average, 4‑Very Good, and 5‑Superior. One additional item asked students to evaluate the level of difficulty of the course, from Very Easy (1) to Very Difficult (5). Students circled the appropriate number for each item. In addition to these numeric items, students could also write comments.

In the fall semester of 2003, the author’s department revised the student evaluation of instruction survey and approved a new version. The new survey, used in the fall semester of 2004, had 18 five-point Likert scale items, from 1‑Strongly Disagree to 5‑Strongly Agree. Similarly, one additional item asked students to evaluate the level of difficulty of the course, from Very Easy (1) to Very Difficult (5). Students circled the appropriate number for each item. In addition to these numeric items, students were provided with space to write open comments.

After the administration of each student evaluation survey, the department secretary compiled each student’s evaluation results, calculated the average score for each item, and calculated the average score across all 16 items (in 2003) or 18 items (in 2004) for each section. Students’ qualitative comments were typed and added to the report as well. The results were then released to the administrators and the faculty.
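As an illustration of the tabulation described above, the following minimal sketch (Python, using hypothetical Likert ratings rather than the study’s data) computes the per-item averages and the combined average across items for one section.

    # A minimal sketch of the tabulation described above; the ratings are
    # hypothetical and this is not the department's actual procedure.
    import statistics

    # Each inner list holds one student's 1-5 ratings for every survey item.
    responses = [
        [5, 4, 5, 4],   # student 1
        [4, 4, 5, 5],   # student 2
        [5, 5, 4, 4],   # student 3
    ]

    n_items = len(responses[0])
    item_means = [statistics.mean(student[i] for student in responses)
                  for i in range(n_items)]
    overall_mean = statistics.mean(item_means)

    for i, m in enumerate(item_means, start=1):
        print(f"Item {i}: mean = {m:.2f}")
    print(f"All items combined: mean = {overall_mean:.2f}")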
 

Research Design

This study used a non-equivalent control group design. In both the experimental group (online via WebCT) and the control group (traditional classroom), the dependent variables of learning performance were pretested and posttested. The dependent variable of student evaluation of instruction was measured in the final two weeks of the semester, online for the online section and on paper for the traditional section. The independent variable was the mode of instruction (online vs. traditional) in a graduate course. Based on recommendations from the Institute for Higher Education Policy (2000) and Kearsley (2000), a hybrid of instructional techniques was employed in the online section. Specifically, several major features of WebCT were used throughout the semester, such as weekly online writing, peer critiquing, bulletin board discussion, online testing, and e-mail. Constructivist learning theory was the major theoretical foundation for online instruction in this course. Instructional design was based on the ADDIE model (Analysis, Design, Development, Implementation, and Evaluation) as described by Dick, Carey, and Carey (2001). For additional information on the design, development, and instructional strategies used in this course, see other recent publications by the author (Liu, 2003a, 2003b).

To reduce learner anxiety and maximize learning, one face-to-face (FtF) orientation was conducted during the first week for the online section. The traditional section met once a week for 3 hours and was taught primarily FtF throughout the semester. Both sections were taught simultaneously by the lead investigator in each semester of data collection. To make the sections as equivalent as possible, the instructional objectives, content, requirements, assignments, and assessments were the same in both sections.

Procedure

The pretest was administered in paper-and-pencil format to both sections during the first week of the semester to determine initial learning and performance. The participants in the online section worked in the online WebCT environment from the second week through the final week. Ongoing posttests (chapter quizzes and a final test) and the student evaluation of instruction were administered online for the online section and in paper-and-pencil format for the traditional section.
 

Results and Discussion

Pretests and posttests of learning performance, as well as student evaluation data, for both sections in the summer semester of 2003 and the fall semester of 2004 were coded and analyzed using SPSS 12.0. Regarding students’ learning outcomes in the summer semester of 2003, there were significant differences on most chapter quizzes and on the final test between the online and traditional sections; specifically, online learners outperformed their counterparts in the traditional section (Liu, 2005a). Regarding students’ learning outcomes in fall 2004, there was no significant difference between the online and traditional sections; that is, online learners performed as well as their counterparts in the traditional section (Liu, 2005b).

Research Hypothesis 1

Regarding students’ perceptions of and satisfaction with the course, the student evaluation survey used in the summer semester of 2003 showed that the average scores for all 16 items combined were 4.5 (SD = .23) for the online section and 4.3 (SD = .34) for the traditional section. The descriptive statistics for all items in this survey are presented in Table 1. Results from the independent samples t tests in Table 2 revealed a significant difference in student evaluation of instruction between the online and traditional sections only for item 15 (t = 2.08, p = .044); no significant differences were found for the other 15 items (p > .05). Item 15 asked students to give an overall rating of the instructor’s general teaching effectiveness (related to the course objectives and new understanding). For item 15, the online section gave a higher rating (mean = 4.77, SD = .43) than the traditional section (mean = 4.29, SD = 1.01).

The student evaluation survey used in the fall semester of 2004 showed that the average scores for all 18 items were 4.4 (SD = 1.1) for the online section and 4.4 (SD = 1.0) for the traditional section. The descriptive statistics for all items in this second survey are presented in Table 3. Results from the independent samples t tests in Table 4 revealed no significant differences for any of the 18 items (p > .05). The major reason for the large SD in both sections in the fall semester of 2004 is that one student (an outlier) in each section misunderstood the survey instructions and chose “1” for all 18 items; this can be verified from those students’ very positive qualitative comments.
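Although the analyses above were run in SPSS 12.0, the same structure (Levene’s test for equality of variances followed by independent samples t tests with and without the equal-variance assumption, as in Tables 2 and 4) can be reproduced with open-source tools. The following minimal sketch uses Python’s SciPy library with hypothetical rating vectors, not the study’s data.

    # A minimal sketch of the Levene's test / independent samples t test
    # procedure; the two rating vectors are hypothetical.
    from scipy import stats

    online_ratings = [5, 5, 4, 5, 4, 5, 5, 4, 5, 5]
    traditional_ratings = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]

    # Levene's test for equality of variances
    levene_f, levene_p = stats.levene(online_ratings, traditional_ratings)

    # Independent samples t tests: equal variances assumed / not assumed
    t_eq, p_eq = stats.ttest_ind(online_ratings, traditional_ratings, equal_var=True)
    t_uneq, p_uneq = stats.ttest_ind(online_ratings, traditional_ratings, equal_var=False)

    print(f"Levene's test: F = {levene_f:.3f}, p = {levene_p:.3f}")
    print(f"Equal variances assumed:     t = {t_eq:.3f}, p = {p_eq:.3f}")
    print(f"Equal variances not assumed: t = {t_uneq:.3f}, p = {p_uneq:.3f}")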

Table 1
Descriptive Statistics for Summer 2003 Student Evaluation Items

 

Item      Group          N    Mean   Std. Deviation   Std. Error Mean
Item 1    experimental   22   4.50   .598             .127
          control        21   4.33   .577             .126
Item 2    experimental   22   4.50   .598             .127
          control        21   4.33   1.017            .222
Item 3    experimental   22   4.32   .568             .121
          control        21   4.10   1.179            .257
Item 4    experimental   22   4.23   .869             .185
          control        21   3.71   1.146            .250
Item 5    experimental   22   4.91   .294             .063
          control        21   4.95   .218             .048
Item 6    experimental   22   4.73   .456             .097
          control        21   4.81   .402             .088
Item 7    experimental   22   4.55   .596             .127
          control        21   4.38   1.024            .223
Item 8    experimental   22   4.73   .456             .097
          control        21   4.48   .814             .178
Item 9    experimental   22   4.50   .598             .127
          control        21   3.90   1.261            .275
Item 10   experimental   22   4.64   .581             .124
          control        21   4.33   1.065            .232
Item 11   experimental   22   4.59   .503             .107
          control        21   4.67   .483             .105
Item 12   experimental   22   4.05   .844             .180
          control        21   3.90   .700             .153
Item 13   experimental   22   4.55   .596             .127
          control        21   4.05   1.071            .234
Item 14   experimental   22   4.14   .774             .165
          control        21   4.38   1.024            .223
Item 15   experimental   22   4.77   .429             .091
          control        21   4.29   1.007            .220
Item 16   experimental   22   4.55   .596             .127
          control        21   4.14   1.153            .252
Item 17   experimental   22   4.55   .510             .109
          control        21   4.33   .658             .144

 

Table 2
Independent Samples t Test Results for Summer 2003 Student Evaluation Items

 

 

Item      Variance assumption           Levene's F   Levene's Sig.   t       df       Sig. (2-tailed)   Mean Difference   Std. Error Difference
Item 1    Equal variances assumed       .281         .599            .929    41       .358              .167              .179
          Equal variances not assumed                                .930    40.993   .358              .167              .179
Item 2    Equal variances assumed       2.188        .147            .659    41       .514              .167              .253
          Equal variances not assumed                                .652    32.051   .519              .167              .256
Item 3    Equal variances assumed       4.341        .043            .796    41       .431              .223              .280
          Equal variances not assumed                                .784    28.506   .440              .223              .284
Item 4    Equal variances assumed       1.640        .207            1.658   41       .105              .513              .309
          Equal variances not assumed                                1.648   37.279   .108              .513              .311
Item 5    Equal variances assumed       1.227        .274            -.546   41       .588              -.043             .079
          Equal variances not assumed                                -.550   38.686   .586              -.043             .079
Item 6    Equal variances assumed       1.603        .213            -.626   41       .535              -.082             .131
          Equal variances not assumed                                -.628   40.760   .534              -.082             .131
Item 7    Equal variances assumed       2.381        .130            .648    41       .521              .165              .254
          Equal variances not assumed                                .640    31.856   .527              .165              .257
Item 8    Equal variances assumed       5.264        .027            1.256   41       .216              .251              .200
          Equal variances not assumed                                1.241   31.121   .224              .251              .202
Item 9    Equal variances assumed       6.822        .013            1.993   41       .053              .595              .299
          Equal variances not assumed                                1.963   28.256   .060              .595              .303
Item 10   Equal variances assumed       4.764        .035            1.166   41       .250              .303              .260
          Equal variances not assumed                                1.151   30.634   .259              .303              .263
Item 11   Equal variances assumed       .966         .331            -.503   41       .618              -.076             .151
          Equal variances not assumed                                -.504   40.998   .617              -.076             .150
Item 12   Equal variances assumed       2.821        .101            .593    41       .556              .141              .237
          Equal variances not assumed                                .596    40.240   .555              .141              .236
Item 13   Equal variances assumed       1.203        .279            1.895   41       .065              .498              .263
          Equal variances not assumed                                1.871   30.982   .071              .498              .266
Item 14   Equal variances assumed       .673         .417            -.886   41       .381              -.245             .276
          Equal variances not assumed                                -.881   37.237   .384              -.245             .278
Item 15   Equal variances assumed       12.805       .001            2.080   41       .044              .487              .234
          Equal variances not assumed                                2.046   26.761   .051              .487              .238
Item 16   Equal variances assumed       5.336        .026            1.449   41       .155              .403              .278
          Equal variances not assumed                                1.429   29.665   .164              .403              .282
Item 17   Equal variances assumed       1.356        .251            1.185   41       .243              .212              .179
          Equal variances not assumed                                1.178   37.684   .246              .212              .180

Note: Levene's F and Levene's Sig. refer to Levene's Test for Equality of Variances.

In addition, on the last item in both sets of data described previously, a majority of students in both the online and traditional sections rated this course as either “Moderately Difficult” or “Very Difficult” among the five options. The numeric averages in the two online sections and the two traditional sections were all between 4 (Moderately Difficult) and 5 (Very Difficult) (see the last item in Tables 1 and 3). This result was not surprising to the author, since this is a graduate course that requires rigorous instruction and seems to be one of the most difficult and challenging courses in graduate programs in education. Results of this study showed that research hypothesis 1 was supported. This is consistent with findings in other studies: recent research has consistently found no systematic differences between online and traditional paper-based student evaluations of instruction (e.g., Carini et al., 2003; Hardy, 2003; Thorpe, 2002), even when incentives such as grades were offered to students for completing online evaluations (Dommeyer et al., 2004).

Table 3
Descriptive Statistics for the Fall 2004 Student Evaluation Items

 

Item      Group          N    Mean     Std. Deviation   Std. Error Mean
Item 1    experimental   19   4.16     1.302            .299
          control        21   4.48     .873             .190
Item 2    experimental   19   4.58     1.261            .289
          control        21   4.67     .796             .174
Item 3    experimental   19   4.53     1.264            .290
          control        21   4.67     .966             .211
Item 4    experimental   19   4.42     .961             .221
          control        21   4.19     1.030            .225
Item 5    experimental   19   4.00     1.247            .286
          control        21   4.19     1.078            .235
Item 6    experimental   19   4.42     1.017            .233
          control        21   4.19     1.123            .245
Item 7    experimental   19   4.42     1.170            .268
          control        21   4.57     .926             .202
Item 8    experimental   19   4.74     .933             .214
          control        21   4.67     .966             .211
Item 9    experimental   19   4.58     .961             .221
          control        21   4.57     .978             .213
Item 10   experimental   19   4.58     .961             .221
          control        21   4.62     .921             .201
Item 11   experimental   19   4.26     .991             .227
          control        21   4.52     .981             .214
Item 12   experimental   19   4.47     .964             .221
          control        21   4.43     1.028            .224
Item 13   experimental   19   4.47     1.264            .290
          control        21   4.48     1.030            .225
Item 14   experimental   19   4.37     .955             .219
          control        21   4.29     1.102            .240
Item 15   experimental   19   4.37     1.012            .232
          control        21   4.33     .966             .211
Item 16   experimental   19   4.42     .961             .221
          control        21   3.95     1.203            .263
Item 17   experimental   19   4.68     .946             .217
          control        21   4.43     .978             .213
Item 18   experimental   19   4.37     1.012            .232
          control        21   4.57     .870             .190
Item 19   experimental   19   4.2632   .93346           .21415
          control        21   4.1429   .65465           .14286

 
Table 4
Independent Samples t Test Results for Fall 2004 Student Evaluation Items

 

 

Item      Variance assumption           Levene's F   Levene's Sig.   t       df       Sig. (2-tailed)   Mean Difference   Std. Error Difference
Item 1    Equal variances assumed       1.662        .205            -.916   38       .365              -.318             .347
          Equal variances not assumed                                -.898   30.998   .376              -.318             .354
Item 2    Equal variances assumed       .710         .405            -.266   38       .792              -.088             .330
          Equal variances not assumed                                -.260   29.822   .797              -.088             .337
Item 3    Equal variances assumed       .683         .414            -.397   38       .694              -.140             .354
          Equal variances not assumed                                -.392   33.614   .698              -.140             .358
Item 4    Equal variances assumed       .228         .636            .729    38       .470              .231              .316
          Equal variances not assumed                                .732    37.958   .469              .231              .315
Item 5    Equal variances assumed       .076         .784            -.518   38       .607              -.190             .368
          Equal variances not assumed                                -.514   35.824   .610              -.190             .370
Item 6    Equal variances assumed       .278         .601            .678    38       .502              .231              .340
          Equal variances not assumed                                .681    37.999   .500              .231              .338
Item 7    Equal variances assumed       .800         .377            -.453   38       .653              -.150             .332
          Equal variances not assumed                                -.448   34.276   .657              -.150             .336
Item 8    Equal variances assumed       .164         .687            .233    38       .817              .070              .301
          Equal variances not assumed                                .234    37.823   .817              .070              .301
Item 9    Equal variances assumed       .021         .887            .024    38       .981              .008              .307
          Equal variances not assumed                                .024    37.726   .981              .008              .307
Item 10   Equal variances assumed       .032         .860            -.135   38       .894              -.040             .298
          Equal variances not assumed                                -.134   37.210   .894              -.040             .298
Item 11   Equal variances assumed       .007         .935            -.835   38       .409              -.261             .312
          Equal variances not assumed                                -.835   37.518   .409              -.261             .312
Item 12   Equal variances assumed       .207         .652            .143    38       .887              .045              .316
          Equal variances not assumed                                .143    37.944   .887              .045              .315
Item 13   Equal variances assumed       .103         .750            -.007   38       .995              -.003             .363
          Equal variances not assumed                                -.007   34.831   .995              -.003             .367
Item 14   Equal variances assumed       .475         .495            .252    38       .802              .083              .328
          Equal variances not assumed                                .254    37.939   .801              .083              .325
Item 15   Equal variances assumed       .024         .877            .112    38       .911              .035              .313
          Equal variances not assumed                                .112    37.179   .911              .035              .314
Item 16   Equal variances assumed       2.943        .094            1.351   38       .185              .469              .347
          Equal variances not assumed                                1.367   37.458   .180              .469              .343
Item 17   Equal variances assumed       1.134        .294            .838    38       .407              .256              .305
          Equal variances not assumed                                .840    37.820   .406              .256              .304
Item 18   Equal variances assumed       .161         .691            -.682   38       .499              -.203             .298
          Equal variances not assumed                                -.677   35.747   .503              -.203             .300
Item 19   Equal variances assumed       .544         .465            .476    38       .637              .12030            .25296
          Equal variances not assumed                                .467    31.899   .643              .12030            .25743

Note: Levene's F and Levene's Sig. refer to Levene's Test for Equality of Variances.

In addition, the high means in the student evaluations for both the online and traditional sections may be related to several factors. First, the author used constructivist learning theory as the major foundation for the instructional strategies in both the online and traditional sections (see Liu, 2003a, 2003b). Second, the author took students’ learning styles and needs into account during the instructional process: he conducted a student background survey during the first week and a midterm course feedback survey at midterm, and the results of those surveys helped the instructor adapt to students’ learning needs. This is consistent with findings reported by other researchers; Spencer and Schmelkin (2002) found that responding to students with instructional adaptations as a result of midterm feedback has positive effects.
 

Research Hypothesis 2

Results also indicated that no statistically significant difference in response rate existed between the online section and the traditional section in either the summer semester of 2003 or the fall semester of 2004; all students in both the online and traditional sections participated. Thus, research hypothesis 2 was not supported. This result was surprising to the author, since no incentives were offered for completing the student evaluation of instruction in either section; students in both the online and traditional sections were simply requested to complete the evaluations during the last two weeks. This finding is not consistent with other recent studies. According to Dommeyer et al. (2004), students’ response rate to online student evaluations was generally lower than that for the traditional paper-based survey. To increase the student response rate in online evaluations, various approaches have been used in recent research, including grade incentives (e.g., Dommeyer et al., 2004) and sweepstakes incentives (e.g., Bosnjak & Tuten, 2003; Cobanoglu & Cobanoglu, 2003).
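Because every student in both sections responded, no formal comparison of response rates was needed in this study. Had the rates differed, a 2 x 2 chi-square test of independence is one common way to compare them; the minimal sketch below (Python with SciPy) illustrates this with hypothetical counts, not data from this study.

    # A minimal sketch comparing response rates between two sections with a
    # 2 x 2 chi-square test of independence; the counts are hypothetical.
    from scipy.stats import chi2_contingency

    # Rows: online section, traditional section
    # Columns: responded, did not respond
    counts = [
        [20, 2],   # e.g., 20 of 22 online students responded
        [19, 2],   # e.g., 19 of 21 traditional students responded
    ]

    chi2, p, dof, expected = chi2_contingency(counts)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")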
 

Research Hypothesis 3

The numeric results in Table 5 indicate that there were significant differences in both the number and the length of students’ qualitative comments. In the summer semester of 2003, there were significant differences between the online and traditional sections in the number of qualitative comments (X2 = 4.17, p = .04) and in the number of words in those comments (X2 = 433.95, p = .00). In the online section, 20 students (91%) wrote qualitative comments totaling 1,233 words, while in the traditional section only 9 students (43%) wrote qualitative comments totaling 393 words. Similarly, in the fall semester of 2004, there were significant differences between the online and traditional sections in the number of qualitative comments (X2 = 6.00, p = .01) and in the number of words in those comments (X2 = 835.28, p = .00). In the online section, 18 students (95%) wrote qualitative comments totaling 1,192 words, while in the traditional section only 6 students (29%) wrote qualitative comments totaling 138 words.
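The X2 values in Table 5 are consistent with a goodness-of-fit chi-square test that assumes equal expected counts in the two sections; the exact procedure used for the departmental report is not stated, so the following minimal sketch (Python with SciPy) should be read as an illustration that reproduces the Summer 2003 figures under that assumption.

    # A minimal sketch reproducing the Summer 2003 chi-square values in
    # Table 5 under the assumption of equal expected counts per section.
    from scipy.stats import chisquare

    comments = [20, 9]      # qualitative comments: online vs. traditional
    words = [1233, 393]     # words in all comments: online vs. traditional

    chi2_c, p_c = chisquare(comments)   # expected counts default to equal
    chi2_w, p_w = chisquare(words)

    print(f"Comments: X2 = {chi2_c:.2f}, p = {p_c:.3f}")   # about 4.17, p = .04
    print(f"Words:    X2 = {chi2_w:.2f}, p = {p_w:.3f}")   # about 433.95, p < .001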

Table 5
Chi-Square, Numbers, and Words Results for Students’ Qualitative Comments between Online and Traditional Sections in Summer 2003 and Fall 2004

 

 

Semester      Section               Number of   Number of qualitative   Number of words in all   Percentage of students
                                    students    comments                qualitative comments     who wrote comments
Summer 2003   Online section        22          20                      1233                     91%
              Traditional section   21          9                       393                      43%
              X2                                4.17                    433.95
              p                                 .04                     .00
Fall 2004     Online section        19          18                      1192                     95%
              Traditional section   21          6                       138                      29%
              X2                                6.00                    835.28
              p                                 .01                     .00

In addition, students’ qualitative comments in the summer semester of 2003 and the fall semester of 2004 indicated that students in the online section were more motivated than those in the traditional section. For instance, a few students in the traditional section complained about the content and frequency of the chapter quizzes, while those in the online section did not. Students in the online section also wrote more detailed comments and expressed greater satisfaction with the effectiveness of their learning in the course, and a majority of them thought they had learned more in this course than they would have in a traditional section. These qualitative comments were consistent with the research findings described previously.

These results support previous findings that online students write more detailed qualitative comments than their counterparts in traditional sections; thus, research hypothesis 3 in this study was supported. Hardy (2003) found that the students who do respond write more detailed comments online, in spite of the lower response rate of the online evaluation approach. These comments provide a valuable resource for the instructor to improve teaching and learning in future online course offerings. In addition, two of the six courses Hardy studied had a higher percentage of positive comments online than the classes evaluated on paper. McGhee and Lowell (2003) found that online students reported greater effort in online courses and gave overall evaluations similar to those of their counterparts in traditional courses.
 

Conclusion

This study supports previous research findings that (a) there is no significant difference in student evaluation of instruction between online and traditional learners and (b) online students write more detailed qualitative comments than their counterparts in the traditional section. However, this study found no statistically significant difference in response rate between the online section and the traditional section. Based on the results from this study, it can be concluded that online instruction can be a viable alternative for higher education. This study has practical international implications for higher education and contributes to the current literature on online instruction and e-learning. However, the results of the present study are limited to one educational research course taught by one instructor in two different semesters. Thus, care should be taken in generalizing the results to other environments, such as courses in different subjects.

References

Ballantyne, C. (2003). Online evaluations of teaching: An examination of current practice and considerations for the future. New Directions for Teaching & Learning, 96, 103-113.

Bosnjak, M. & Tuten, T.L. (2003). Prepaid and promised incentives in web surveys - An experiment. Social Science Computer Review, 21(2), 208-217.

Bullock, C. D. (2003). Online collection of midterm student feedback. New Directions for Teaching & Learning, 96, 95-103.

Cantera, L. (2002). Y puts teacher evaluations online. NewsNet by Brigham Young University. Retrieved March 10, 2005 from http://newsnet.byu.edu/story.cfm/41005.

Carini, R. M., Hayek, J. C., Kuh, G. D., & Ouimet, J. A. (2003). College student responses to web and paper surveys: does mode matter? Research in Higher Education, 44(1), 1–19.

Cobanoglu, C. & Cobanoglu, N. (2003). The effect of incentives in web surveys: Application and ethical considerations. International Journal of Market Research, 45(4), 1-14.

Cooper, L. (1999). Anatomy of an Online Course. T. H. E. Journal, 26(7), 45-51.

Dick, W., Carey, L., & Carey, J. O. (2001). The systematic design of instruction (5th Edition). New York: Addison-Wesley Educational Publishers, Inc.

Dommeyer, C. J., Baum, P., Hanna, R. W., & Chapman, K. S. (2004). Gathering faculty teaching evaluations by in-class and online surveys: their effects on response rates and evaluations. Assessment & Evaluation in Higher Education, 29(5), 611-624.

Hardy, N. (2003). Online ratings: Fact and fiction. New Directions for Teaching & Learning, 96, 31-39.

Harrington, C. F. & Reasons, S. G. (2005). Online student evaluation of teaching for distance education: A perfect match? The Journal of Educators Online, 2(1). Retrieved March 10, 2005 from http://www.thejeo.com/ReasonsFinal.pdf.

Hmieleski, K. (2000). Barriers to online evaluation: Surveying the nation’s top 200 most wired colleges. Troy, N.Y.: Interactive and Distance Education Assessment Laboratory, Rensselaer Polytechnic Institute (Unpublished Report).

Hoffman, K. M. (2003). Online course evaluation and reporting in higher education. New Directions for Teaching & Learning, 96, 25-30.

Institute for Higher Education Policy (2000). Quality on the line: Benchmarks for success in internet-based distance education. Washington, DC, USA.

Kearsley, G. (2000). Online education: Learning and teaching in cyberspace. Belmont, CA: Wadsworth.

Liu, Y. (2003a). Improving online interactivity and learning: A constructivist approach. Academic Exchange Quarterly, 7(1), 174-178.

Liu, Y. (2003b). Taking educational research online: Developing an online educational research course. Journal of Interactive Instruction Development, 16(1), 12-20.

Liu, Y. (2005a). Effects of online instruction vs. traditional instruction on students’ learning. International Journal of Instructional Technology and Distance Learning, 2(3), Article 006. Retrieved March 21, 2005, from http://www.itdl.org/Journal/Mar_05/article06.htm.

Liu, Y. (2005b). Impact of online instruction on teachers’ learning and attitudes toward technology integration. The Turkish Online Journal of Distance Education, 6(4), Article 007. Retrieved October 27, 2005, from http://tojde.anadolu.edu.tr/.

Mayer, J., & George, A. (2003). The University of Idaho’s online course evaluation system: Going forward! Paper presented at the 43rd Annual Forum of the Association for Institutional Research, Tampa, Fla., May 2003.

McGhee, D. E., & Lowell, N. (2003). Psychometric properties of student ratings of instruction in online and on-campus courses. New Directions for Teaching & Learning, 96, 39-48.

McGourty, J., Scoles, K. & Thorpe, S. (2002). Web-based student evaluation of instruction: promises and pitfalls. Paper presented at the 42nd Annual Forum of the Association for Institutional Research, Toronto, Ontario, June 2002.

Sorenson, L., & Reiner, C. (2003). Charting the uncharted seas of online student ratings of instruction. New Directions for Teaching & Learning, 96, 1-25.

Spencer, K. J., & Schmelkin, L. P. (2002). Student perspectives on teaching and its evaluation. Assessment and Evaluation in Higher Education, 27(5), 397-409.

Thorpe, S.W. (2002). Online student evaluation of instruction: An investigation of non-response bias. Paper presented at the 42nd Annual Forum of the Association for Institutional Research in Toronto Canada. Retrieved June 26, 2003, from http://www.airweb.org/forum02/550.pdf

Thurmond, V. A., Wambach, K., Connors, H. R., & Frey, B. B. (2002). Evaluation of student satisfaction: Determining the impact of a Web-based environment by controlling for student characteristics. The American Journal of Distance Education, 16, 169-189.

Waits, T., & Lewis L. (2003). Distance education at degree-granting postsecondary institutions: 2000-2001. U.S. Department of Education. Washington, DC, USA: National Center for Education Statistics (NCES Pub 2003-017).

Acknowledgement

This study was supported by the Funded University Research (FUR) internal grants
from Southern Illinois University, Edwardsville, Illinois, USA in 2003-2004.

About the Author

Yuliang Liu is Assistant Professor and graduate program director of Instructional Design and Learning Technologies at Southern Illinois University, Edwardsville, Illinois 62026, USA

Phone: (618) 650-3293; Fax: (618) 650-3808; E-mail: yliu@siue.edu.
 
