Table 1
Comparison of Student Ratings of Instruction

| Domain | DE Off-Campus M | SD | DE On-Campus M | SD | Traditional On-Campus M | SD | F | η² |
|---|---|---|---|---|---|---|---|---|
| Course Effectiveness Rating | 4.13 | .50 | 4.36 | .33 | 4.56 | .15 | 5.61* | .33 |
| Instructor Rating | 4.13 | .59 | 4.47 | .30 | 4.63 | .20 | 4.77* | .40 |
| Overall Course Rating | 3.85 | .69 | 4.28 | .39 | 4.43 | .29 | 4.79* | .41 |

* p<.05
There was a statistically significant difference among the modes of course delivery for all three domains. The mode of course delivery accounted for a large proportion of the explained variance (η²), ranging from .33 to .41. Follow-up analyses (dependent t-tests) indicated statistically significant differences between the DE off-campus courses and the traditional on-campus courses for course effectiveness (t=3.00, p<.05), instructor effectiveness (t=3.03, p<.05), and overall effectiveness (t=3.38, p<.05); large effect sizes (Hedges, 1981) were found for (a) course effectiveness (g=1.16), (b) instructor rating (g=1.14), and (c) overall course effectiveness (g=1.10). There were no statistically significant differences between the DE off-campus courses and the DE on-campus courses, and no differences were detected between the DE on-campus and the traditional on-campus domain scores.
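Effect sizes of this kind can be recomputed directly from the summary statistics in Table 1. The Python sketch below is illustrative only: group sizes are not reported in this section, so the per-group n of 15 is a hypothetical placeholder. With equal groups, the pooled standard deviation reduces to the root mean square of the two SDs, and the uncorrected d reproduces the reported values (1.16, 1.14, 1.10) to within rounding; Hedges' (1981) small-sample correction shrinks these slightly, by an amount that depends on the actual group sizes.

```python
import math

def standardized_difference(m1, sd1, n1, m2, sd2, n2):
    """Pooled-SD standardized mean difference (Cohen's d), plus
    Hedges' (1981) small-sample bias correction J = 1 - 3/(4*df - 1)."""
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / s_pooled
    g = (1 - 3 / (4 * df - 1)) * d
    return d, g

# Domain means and SDs from Table 1 (traditional on-campus vs. DE
# off-campus). n = 15 per group is a hypothetical placeholder; the
# actual group sizes are not reported in this section.
domains = [
    ("Course effectiveness", 4.56, .15, 4.13, .50),
    ("Instructor rating",    4.63, .20, 4.13, .59),
    ("Overall course",       4.43, .29, 3.85, .69),
]
for label, m_trad, sd_trad, m_de, sd_de in domains:
    d, g = standardized_difference(m_trad, sd_trad, 15, m_de, sd_de, 15)
    print(f"{label}: d = {d:.2f}, g = {g:.2f}")
```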
To better understand the differences among the methods of delivery, responses to each of the 23 items on the course evaluation questionnaire were examined. Comparing the 11 course rating items (see Table 2), there were statistically significant differences for Items 3, 4, 5, 8, and 9. Follow-up analyses indicated that the mean differences were between the DE off-campus and the traditional on-campus courses. The magnitude of the differences between the means was large, ranging from .97 to 1.34. There were no differences between the DE off-campus and DE on-campus means or between the DE on-campus and the traditional on-campus course means. Examining the 7 instructor effectiveness items (see Table 3), there were statistically significant differences for all items except Item 13. Follow-up analyses indicated that the differences were between the DE off-campus and the traditional on-campus courses. The magnitudes of the differences for all items were large, ranging from .83 to 1.42. Examining the 5 overall course effectiveness items (see Table 4), there were statistically significant differences for all items. Again, follow-up analyses indicated that the differences were between the DE off-campus and the traditional on-campus courses. The differences were large, ranging from .83 to 1.20.
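The partial η² values reported in Tables 2-4 can likewise be related to the tabled F statistics through the identity partial η² = (F · df1)/(F · df1 + df2). A minimal sketch follows, assuming df1 = 2 for the three delivery modes and a hypothetical error term of df2 = 14; the exact degrees of freedom are not reported in this section, but these values are consistent with most of the tabled statistics.

```python
def partial_eta_squared(f_stat, df_effect, df_error):
    """Partial eta-squared recovered from an F statistic:
    SS_effect / (SS_effect + SS_error) = F*df1 / (F*df1 + df2)."""
    return f_stat * df_effect / (f_stat * df_effect + df_error)

# F values from Tables 2-4. df1 = 2 (three delivery modes) and
# df2 = 14 are assumptions, not figures reported in the text.
for item, f in [("Item 3", 4.37), ("Item 12", 3.73), ("Item 19", 5.07)]:
    print(f"{item}: partial eta^2 = {partial_eta_squared(f, 2, 14):.2f}")
```

Under these assumptions the computed values match the tabled entries (.38, .35, and .42), suggesting the reported F statistics and effect sizes are internally consistent.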
Table 2
Descriptive Statistics, Repeated Measures ANOVAs, and Effect Sizes for Course Ratings

| Item | DE Off-Campus M | SD | DE On-Campus M | SD | Traditional On-Campus M | SD | F | Partial η² |
|---|---|---|---|---|---|---|---|---|
| 1. This course had clearly stated objectives. | 4.34 | .52 | 4.54 | .24 | 4.68 | .11 | 2.02 | .22 |
| 2. The stated goals of this course were consistently pursued. | 4.25 | .45 | 4.41 | .31 | 4.60 | .15 | 2.92 | .30 |
| 3. I always felt challenged and motivated to learn. | 3.90 | .60 | 4.32 | .39 | 4.49 | .16 | 4.37* | .38 |
| 4. The class meetings helped me see other points of view. | 4.15 | .40 | 4.37 | .37 | 4.56 | .27 | 4.12* | .37 |
| 5. This course built understanding of concepts and principles. | 4.17 | .51 | 4.45 | .32 | 4.62 | .17 | 4.20* | .38 |
| 6. The practical application of subject matter was apparent. | 4.13 | .62 | 4.43 | .43 | 4.60 | .22 | 2.46 | .26 |
| 7. The climate of this class was conducive to learning. | 4.12 | .59 | 4.15 | .39 | 4.55 | .22 | 2.71 | .28 |
| 8. When I had a question/comment I knew it would be respected. | 4.20 | .59 | 4.62 | .24 | 4.69 | .13 | 4.48* | .39 |
| 9. This course contributes significantly to my professional growth. | 4.00 | .58 | 4.27 | .44 | 4.53 | .15 | 3.93* | .36 |
| 10. Assignments were of definite instructional value. | 4.08 | .52 | 4.26 | .44 | 4.54 | .16 | 3.03 | .30 |
| 11. Assigned readings significantly contributed to this course. | 4.03 | .45 | 4.20 | .47 | 4.34 | .25 | 1.32 | .16 |

* p<.05
Table 3
Descriptive Statistics, Repeated Measures ANOVAs, and Effect Sizes for Instructor Ratings

| Item | DE Off-Campus M | SD | DE On-Campus M | SD | Traditional On-Campus M | SD | F | Partial η² |
|---|---|---|---|---|---|---|---|---|
| 12. Instructor displayed clear understanding of course topics. | 4.45 | .49 | 4.75 | .26 | 4.76 | .19 | 3.73* | .35 |
| 13. Instructor was able to simplify difficult materials. | 4.06 | .72 | 4.44 | .42 | 4.59 | .29 | 3.00 | .30 |
| 14. Instructor seemed well-prepared for class. | 4.33 | .58 | 4.63 | .36 | 4.69 | .19 | 4.57* | .39 |
| 15. Instructor stimulated interest in the course. | 4.09 | .66 | 4.46 | .38 | 4.59 | .30 | 5.16* | .42 |
| 16. Instructor helped me apply theory to solve problems. | 3.95 | .56 | 4.36 | .39 | 4.52 | .24 | 4.77* | .41 |
| 17. Instructor evaluated often and provided help when needed. | 4.02 | .60 | 4.31 | .37 | 4.65 | .18 | 5.29* | .43 |
| 18. Instructor adjusted to fit individual abilities and interests. | 4.04 | .62 | 4.36 | .32 | 4.58 | .26 | 4.28* | .38 |

* p<.05
Table 4
Descriptive Statistics, Repeated Measures ANOVAs, and Effect Sizes for Overall Course Ratings

| Item | DE Off-Campus M | SD | DE On-Campus M | SD | Traditional On-Campus M | SD | F | Partial η² |
|---|---|---|---|---|---|---|---|---|
| 19. Instructor had an effective presentation style. | 4.06 | .66 | 4.50 | .35 | 4.54 | .33 | 5.07* | .42 |
| 20. Instructional methods used in this course were effective. | 3.97 | .65 | 4.34 | .39 | 4.52 | .28 | 3.96* | .36 |
| 21. Evaluation methods were fair and effective. | 4.09 | .56 | 4.50 | .26 | 4.56 | .25 | 4.22* | .38 |
| 22. This course is among the best I have ever taken. | 3.40 | .81 | 3.79 | .64 | 4.18 | .43 | 4.61* | .40 |
| 23. This instructor is among the best teachers I have known. | 3.70 | .80 | 4.25 | .47 | 4.36 | .30 | 5.14* | .42 |

* p<.05
Comfort and convenience have been repeatedly cited as positive elements of the distance condition. Additionally, students have reported that the more experience they have had with distance education technology and conditions, the more comfortable they have become with the course and mode of interaction (Jones, 1992). Moore and Kearsley (1996) identified the following "variables that determine the effectiveness of distance education courses":
Number of students at learning site (individuals, small groups, large groups)
Length of class/course (hours, days, weeks, months)
Reasons for student taking class/course (required, personal development, certification)
Prior educational background of student (especially experience with self-study or distance education)
Nature of instructional strategies used (lecture, discussion/debate, problem-solving activities)
Kind of learning involved (concepts, skills, attitudes)
Type of pacing (student determined, teacher defined, completion dates)
Amount and type of interaction/learner feedback provided
Role of tutors/site facilitators (low to high course involvement)
Preparation and experience of instructors and administrators (minimal to extensive)
Extent of learner support provided (minimal to extensive). (p. 76)
Spooner, Spooner, Algozzine, and Jordan (1998) assert that learning, attending classes, and obtaining information should be enhanced via distance learning.
In this research, on-campus students in a graduate preparation program for teachers of students with learning disabilities perceived their courses and instructors as more effective than did the off-campus DE students. Students in the off-campus sections consistently rated the course and instructor lower than both on-campus groups did. The students in the DE off-campus courses reported (a) less challenge and motivation to learn, (b) lower opinions about the extent to which the class meetings helped them see other points of view, (c) lower opinions about the course building understanding of concepts and principles, (d) less feeling of respect, and (e) lower opinions of the contribution of the course to their professional growth. In addition, the DE off-campus students rated the instructor lower in (a) displaying clear understanding of topics, (b) being prepared for class, (c) stimulating interest in the course, (d) applying theory to solve problems, (e) evaluating often and providing help when needed, and (f) adjusting to fit individuals' abilities and interests.
This research addresses important concerns identified in recent reports questioning the effectiveness of distance education, which argue that much of the literature is not as useful as it could be because very little of it involves original research and much of it is based on studies of questionable quality that render many of the findings inconclusive (cf. Blumenstyk & McCollum, 1999; Carnevale, 2000; The Institute for Higher Education Policy, 1999). Further, the outcomes differ from the "no significant difference phenomenon" observed in many other studies of attitudes (Young, 2000, p. A55). Of course, there are a number of reasons why these program courses were viewed less favorably, and each should be considered in future efforts to evaluate distance education programs. First, class sizes differed on and off campus, and the characteristics of students enrolled in different sections of the same course might have influenced the outcomes. While this is difficult to control, it should be considered when comparing courses taught using different methods. The vagaries of method are also a possible explanation for the findings: organization, instructional strategies, and other methodological differences may have affected a distance education course differently than an on-campus course. Similarly, the placement of the course within the program (e.g., beginning vs. end) and its content (e.g., introductory vs. advanced, theory vs. methods) may create conditions to consider in evaluating instruction provided on and off campus. The novelty of taking courses at a distance should also be considered when evaluating programs (i.e., outcomes for earlier courses may be very different from those for courses taken later). Finally, the complex interaction of learner characteristics and learning style with instructional method and content should not be underestimated:
The primary assumption, which is flawed, is that the instructional effectiveness of each medium studied is constant across all content and all students. You're lumping all the students together, and you're ignoring their qualities and attributes as well as the qualities and attributes of the content. So by treating students, content, and instructional content as homogenous, we are ignoring some very important variables that we know for a fact do impact learning. (Barbara B. Lockee, quoted in Carnevale, 2001)
Faculty members and administrators at many universities and colleges remain skeptical about the quality and effectiveness of online research and teaching (Kiernan, 2000). Their skepticism, as well as other factors (e.g., the time required for preparing and delivering distance education courses), can discourage young faculty members from embracing distance education. Institutions of higher education that base judgments of instructors' performance on student evaluations should be aware that teaching DE courses might present important issues to overcome. What can be done to address the potential hazards? Spooner, Algozzine, Flowers, Gretes, and Jordan (1998) suggest seven strategies that can be used to facilitate faculty/student interaction at a distance, so that students at the remote sites believe they are connected to their peers and the instructor in the studio classroom on campus. These techniques include: (a) establishing a weekly agenda that goes beyond the syllabus, (b) facilitating a weekly student share to encourage class participation, (c) establishing off-line small-group discussion with reporting, (d) tapping sites and individuals at remote sites for questions, (e) encouraging across-site questioning by students, (f) traveling to remote sites for broadcast (one broadcast per site per semester), and (g) playing off of the local audience.
Other variables, beyond altering presentation style, are likely to affect the instructor's ability to reach students at remote sites. One is the overall size of the class: the instructor will likely have to work harder at making ALL students feel included as part of the group when the collective number approaches 50 rather than a smaller number of students. A second important variable, and one that could potentially affect the evaluation outcomes, is the number of times the instructor has delivered a course at a distance; the more practice the instructor has and the more times he or she is "on the air," the more effective that individual is likely to be at reaching students at remote sites. The type of presentation equipment the instructor uses to deliver the content (e.g., "on the fly" whiteboard writing, prepared overhead material, or material developed with electronic presentation software with appropriate images to illustrate content) is another variable that could affect the outcome of student evaluations of instruction. Regardless of the approach taken to address potential problems and difficulties when teaching at a distance, there is a clear need for additional research evaluating implementations of improvement strategies and their effects in distance education courses.
Although the intended purpose of this research was to evaluate a distance education program, the results support the position that technology (or method) is only one factor contributing to opinions about the quality of a course (cf. Carnevale, 2001). For example, although learning tasks and instructors were the same for the courses evaluated in this study, learner characteristics (e.g., motivation, experience) were potentially very different and, most certainly, contributed to the outcomes. Similarly, the results point to the value of a few good practices as supporting the art of good teaching. In 1996, the American Association for Higher Education (AAHE) proposed the following "Seven Principles for Good Practice in Undergraduate Education" to assist those using new communication and information technologies to improve teaching and learning processes (The Institute for Higher Education Policy, 1999, p. 32):
encourage contact between students and faculty;
develop reciprocity and cooperation among students;
use active learning techniques;
give prompt feedback;
emphasize time-on-task;
communicate high expectations; and
respect diverse talents and ways of learning.
The principles have been included in a variety of publications on best practice and represent potential explanations for differences that result when distance education courses are compared to traditional on-campus courses (Carnevale, 2001; Chickering & Ehrmann, 1996). They also form the foundation for factors to be considered in future research focused on improving ways to teach students in higher education using distance as well as traditional methods.
References

Blumenstyk, G., & McCollum, K. (1999, April 16). Two reports question utility and accessibility in distance education. The Chronicle of Higher Education, p. A31.
Carnevale, D. (2001, February 21). Logging in with Barbara B. Lockee: What matters in judging distance teaching? Not how much it’s like a classroom course. The Chronicle of Higher Education. [Internet Archive: http://chronicle.com]
Carnevale, D. (2000, January 7). Survey finds 72% rise in number of distance education programs. The Chronicle of Higher Education, p. A57.
Chickering, A. W., & Ehrmann, S. C. (1996). Implementing the seven principles. AAHE Bulletin, 49(2), 2-4.
Hedges, L.V. (1981). Distributional theory for Glass’s estimator of effect size and related estimators. Journal of Educational Statistics, 6, 107-128.
Jones, T. (1992). IITS students' evaluation questionnaire for fall semester of 1991: A summary and report. (ERIC Document Reproduction Service No. ED 311 890)
Keegan, D. (1988). Problems in defining the field of distance education. American Journal of Distance Education, 2, 4-11.
Kiernan, V. (2000, April 28). Rewards remain dim for professors who pursue digital scholarship. The Chronicle of Higher Education, p. A45.
Moore, M. G., & Kearsley, G. (1996). Distance education: A systems view. Belmont, CA: Wadsworth.
Moore, M. G., & Thompson, M. M. (1997). The effects of distance learning (rev. ed.). (ACSDE Research Monograph No. 15). University Park, PA: The Pennsylvania State University, American Center for the Study of Distance Education.
National Center for Education Statistics. (1997). Statistical analysis report: Distance education in higher education institutions (Report NCES 98-062). Washington, DC: Author.
Spooner, F., Algozzine, B., Flowers, C., Gretes, J. A., & Jordan, L. (1998, March). Facilitating communication in distance education classes. Electronic poster presented at the fifteenth annual International Conference on Technology and Education, Santa Fe, NM.
Spooner, F., Jordan, L., Algozzine, B., & Spooner, M. (1999). Student ratings of instruction in distance learning and on-campus classes. Journal of Educational Research, 92, 132-140.
Spooner, F., Spooner, M., Algozzine, B., & Jordan, L. (1998). Distance learning: Promises, practices, and potential pitfalls. Teacher Education and Special Education, 21, 121-131.
The Institute for Higher Education Policy. (1999). What’s the difference: A review of contemporary research on the effectiveness of distance learning in higher education. Washington, DC: Author.
Turner, J. A. (2000, September 27). 'Distance learning' courses get high marks from students and enrollments are rising. The Chronicle of Higher Education. [Internet Archive: http://chronicle.com]
Woodward, J., & Reith, H. (1997). A historical review of technology research in special education. Review of Educational Research, 67, 503-536.
Young, J. R. (2000, February 18). Distance and classroom education seen as equally effective. The Chronicle of Higher Education, p. A55.
LuAnn Jordan (Ph.D., University of Florida) is an Assistant Professor in the Department of Counseling and Special Education at the University of North Carolina at Charlotte. Her current research interests include learning disabilities, attention deficit disorders, and improving distance education programs.
Email: lujordan@email.uncc.edu.
Claudia Flowers (Ph.D., Georgia State University) is an Associate Professor in the Department of Educational Leadership. Her current research interests include assessment issues, alternative assessment, applied statistics, and technology in education.
Email: cpflower@email.uncc.edu.
Bob Algozzine (Ph.D., Penn State University) is a Professor in the Department of Educational Leadership and Co-Director of the Behavior and Reading Improvement Center at the University of North Carolina at Charlotte. His current research interests include school-wide discipline, effective teaching, block scheduling, self-determination, alternative assessment, and improving distance education programs.
Email: balgozzine@carolina.rr.com
Fred Spooner (Ph.D., University of Florida) is a Professor in the Department of Counseling, Special Education, and Child Development and Principal Investigator on a Personnel Preparation Project involving distance delivery technologies at the University of North Carolina at Charlotte. His research interests include instructional procedures for students with severe disabilities, alternate assessment, and improving distance education programs.
Ashlee Fisher (M.A., University of North Carolina at Charlotte) is a Mental Health Therapist at Expeditions Day treatment program. Her responsibilities include providing mental health services that target emotional and behavioral problems with adolescents and their families through individual, group, and family therapy.