Editor’s Note: Web and video tools for learning at a distance have their own distinctive advantages. It could be predicted that replacement of a lecture with a recording of that lecture could not be better than the live performance, and might well be inferior because feedback, visual cues, and peer associations developed in the face-to-face situation were lacking in the recorded version. This study tests the validity of the recorded lecture without other means of student support.
Mode of Instructional Delivery and Student Performance in a Research Methods Class
Thomas K. Ross
This study examines the performance of on-line and classroom students in an undergraduate, senior-level research methods course where the only difference between the two groups was the mode of instructional delivery. Classroom students viewed live lectures, and on-line students had access to a recorded version of the same lectures via the internet. Classroom performance was typically 5 to 10 points higher than the scores earned in the on-line section. The lower performance of on-line students was related to lower utilization of course content and the lower effectiveness of on-line recordings relative to live, classroom lectures.
Keywords: Asynchronous web based learning, asynchronous web (AW), attendance, face-to-face learning, GPA, instructional delivery modes, multiple regression, no significant difference, t-test.
The drive to expand distance education is unstoppable. This drive is based on the desire to increase access to education for groups that cannot or do not want to utilize traditional education settings and to maximize the revenues of educational institutions. As educators, our task, regardless of whether knowledge or commerce is the primary driver, is to ensure the quality of the education delivered. The growth of asynchronous web based learning (AW) has led to a proliferation of studies examining whether AW is comparable to traditional classroom education.
Many hold there is “no significant difference” between AW and classroom delivery of content, and an extensive website, www.nosignificantdifference.org, supports this contention. Advocates of this view have taken a bold position:
The fact is that the findings of comparative studies are absolutely conclusive, one can bank on them. No matter how it is produced, how it is delivered, whether it is interactive, low tech or high tech, students learn equally well with each technology and learn as well as their on-campus, face-to-face counterparts (Russell, 2001, xviii).
Phipps and Merisotis (1999) reviewed a large number of studies that support this view and identified a host of methodological problems, leading them to conclude the results are inconclusive and should be used with caution. The idea that the mode of instructional delivery has no impact on learning runs counter to Marshall McLuhan’s famous dictum: the medium is the message. McLuhan noted the medium “shapes and controls the scale and form of human association and action” and has an impact independent of its content (McLuhan, 1964, p. 9).
Previous studies have both supported and rejected the “no significant difference” conclusion. A meta-analysis of 86 studies concluded that in two-thirds of the cases AW students outperformed F2F students (Shachar & Neumann, 2003). Multiple studies have documented lower retention rates (Phipps & Merisotis, 1999; Carr, 2000; McLaren, 2004) and lower scores in AW classes (Durden & Ellis, 1995; Brown & Liedholm, 2002). Ross and Bell (2007) analyzed course scores in a quality management course and found F2F students’ scores were statistically better than those of their AW counterparts and that student performance was related to the number of on-line lectures viewed. That paper used the number of lectures viewed on-line to gauge students’ investment in mastering course content and did not control for the amount of time spent on each lecture or for classroom attendance.
This study attempts to peer deeper into the education process and determine the cause(s) of the higher F2F performance. Two measures of student investment in mastering course content are used: the percent of total content viewed on-line and the percent viewed in-class. It is expected that students who are exposed to greater amounts of content will perform better than students with more limited exposure. Using two measures provides the opportunity to determine whether student performance differs by instructional delivery mode, i.e., on-line or classroom.
A research methods course delivered in 2007 provides an appropriate case study because the only difference between the AW and F2F sections was that the latter group was exposed to course content primarily in the classroom while the former viewed the recorded classroom lectures via the internet. The class was built on the Moodle course management system, and all class lectures were recorded using Mediasite. PowerPoint slides were provided in Moodle to accompany the lectures. All students had access to these slides prior to the lectures, and a common text was used. Both groups of students had complete discretion in how they obtained course content: they could attend class, view the class lectures on-line, or do both. Class attendance was not mandatory and was not used in evaluating student performance.
F2F and AW students had access to the same information and completed identical homework assignments and exams. Both sections completed exams on-line under the same constraints. The only difference between the two sections was how content was obtained; thus the course provides an opportunity to examine how the mode of instructional delivery impacts student performance. The course began with 45 AW students and 31 F2F students. At the end of the semester, 69 students had completed all the assigned work.
Table 1. Student Characteristics by Section. Standard deviations shown in parentheses.
Table 1 shows substantial differences in the students enrolled in each section. The AW section has a marginally higher GPA, is older, and has a smaller percentage of females. The differences in standard deviations are substantially larger than the differences in means; the greater variation in the AW group is expected given that the goal of AW is to expand access to education to non-traditional students.
Figure 1 shows the grade distribution for overall course scores by instructional delivery mode. The distribution of overall scores has three distinct ranges: 85-94.9 (superior), 70-84.9 (average), and below 70 (substandard). At the highest end of the range, 13.4% of AW students scored above 85 versus 25.8% of F2F students. Between 70.0 and 84.9, performance is comparable: 57.8% AW versus 61.3% F2F. The most striking difference occurs at the lower end of the distribution, 69.9 and below: almost three in ten AW students (28.9%) earned less than 70% versus one in eight F2F students (12.9%).
Figure 1. Grade Distribution by Content Delivery Mode.
Focusing on a single, end-of-semester course score introduces an attrition problem: performance narrowed as the course progressed because poorly performing students dropped the class and/or did not submit work. Subsequent analyses of overall course scores include all students enrolled in the class; scores for students who did not complete one or more assignments are based on points earned and points attempted. The analyses of the individual assignments do not include students who did not submit the assignment, so sample sizes vary.
One-tailed t-tests (Table 2) were run to determine whether F2F performance is statistically higher than AW performance. F2F scores were higher on every exercise and statistically higher on three of the five assignments as well as on the overall course score. Table 2 also shows performance was more consistent in the F2F section: variances in the F2F group are lower for each assignment and significantly lower than in the AW group in four of six cases.
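Comparisons like those in Table 2 can be sketched as a one-tailed Welch t-test (unequal variances). The paper does not publish its raw scores or name the exact test variant, so the data below are made up and the Welch form is an assumption; a minimal sketch:

```python
# Hedged sketch of a one-tailed (F2F > AW) Welch t-test.
# The score lists are illustrative, NOT the study's actual data.
from statistics import mean, variance

def welch_t(a, b):
    """t statistic for H1: mean(a) > mean(b), allowing unequal variances."""
    na, nb = len(a), len(b)
    se = (variance(a) / na + variance(b) / nb) ** 0.5  # standard error of the difference
    return (mean(a) - mean(b)) / se

f2f = [82, 78, 90, 85, 74, 88, 80]   # hypothetical F2F scores
aw  = [70, 85, 62, 77, 58, 80, 66]   # hypothetical AW scores (lower, more spread out)

t = welch_t(f2f, aw)
print(round(t, 2))   # positive t favors the F2F group
```

A positive t statistic exceeding the one-tailed critical value would support the "F2F higher" hypothesis; the p value would come from the t distribution with Welch-Satterthwaite degrees of freedom.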
A primary difference between F2F and AW students is their utilization of course lectures. Figure 2 shows class attendance for F2F students and the percent of lectures viewed on-line by AW students. This figure incorporates content viewed through the student’s primary instructional delivery mode; it does not include content received in-class by AW students or viewed on-line by F2F students. In-class attendance tracking was limited to a sign-in sheet: students who arrived late were not counted, and students who signed in were considered present for the entire lecture. Mediasite calculates the percent of each lecture viewed on-line; it was unable to calculate viewing time in ten episodes (out of 1,144).
Table 2. Scores by Assignment. Variances shown in parentheses.
Figure 2. Content viewed by primary view setting.
Determinants of Performance
Previous studies used a multitude of variables to explain student performance, encompassing academic measures (GPA, attendance, SAT scores, previous quantitative coursework, and credit hours completed) and personal characteristics (age, race, gender, and income) (Eskew & Faley, 1988; Durden & Ellis, 1995; Wojciechowski & Bierlein Palmer, 2005). Part of the problem with analyzing academic performance is that many of the independent variables are collinear, making it impossible to determine individual impacts in a multiple regression (Johnson, Johnson, & Buse, 1987). Attempting to isolate the independent impacts of choice of instructional delivery mode, amount of content viewed, homework performance, and GPA on student scores is impossible given the interrelationships between these variables. Students with high GPAs are assumed to devote more time to their courses and to be more diligent in completing homework assignments, thus higher course performance should be expected. In addition, choice of delivery mode was correlated with content viewed.
Table 3 reports the multiple regression results when GPA, percent of content viewed on-line, and percent of content obtained in-class are used to predict student performance. When on-line viewing times could not be calculated, the unknowns were replaced with the student’s average viewing percent for the other lectures viewed.
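The imputation rule just described, filling each unmeasurable episode with that student's own average over the lectures Mediasite did measure, can be sketched as follows; the per-lecture percentages are hypothetical:

```python
# Sketch of the missing-data rule: Mediasite could not compute viewing
# percentages for a few episodes (10 of 1,144), so gaps (None) are filled
# with the student's mean over the lectures that were measured.
def impute_viewing(percents):
    """Replace None entries with the student's own mean viewing percent."""
    known = [p for p in percents if p is not None]
    fill = sum(known) / len(known) if known else 0.0  # no data at all -> 0%
    return [fill if p is None else p for p in percents]

student = [80.0, None, 60.0, 100.0]   # hypothetical per-lecture percents
print(impute_viewing(student))        # None -> mean of 80, 60, 100 = 80.0
```

Using the student's own mean (rather than the class mean) preserves between-student differences in content utilization, which is the quantity the regression relies on.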
Table 3. Determinants of Student Performance. P values shown in parentheses.
Overall Course Score
GPA, on-line viewing, and in-class exposure to content are all positively and significantly related to the overall course score. The intercept is insignificant, lessening the probability of an omitted variable problem, and the adjusted R2 shows that 42% of the variation in student performance is explained by the independent variables. A 1.0 change in GPA translates into a 17.34-point change in the predicted course score.
The results for GPA are similar to Brown and Liedholm (2002), student scores were predicted to increase by 12.72 to 15.93 for each one point increase in GPA. They used six independent variables and their explanatory power varied from 36% to 50%. Other studies of academic performance consistently find positive and statistically significant coefficients for GPA (Eskew & Faley, 1988; Durden & Ellis, 1995).
On-line viewing of content was positive and significant and predicts that a student viewing 100% of the on-line lectures would expect a course score 13 points higher than a student who did not access the lectures. In-class exposure to content was positive and significant and predicts that for every 1.0% of content viewed in-class, the overall course score will increase by 0.18 points; a student attending all 26 lectures would expect to score 18 points more than a student who attended no classes. Comparing the predicted scores based on content viewed for the average on-line student (48.1% on-line and 1.6% in-class) and the average in-class student (67.0% in-class and 7.7% on-line) shows the AW student is expected to earn 6.5 points less than a F2F student based on lower exposure to course content and the lower effectiveness of on-line lectures.
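The 6.5-point comparison can be reproduced from the coefficients the text implies: roughly 0.13 points per percent viewed on-line (13 points for 100%) and 0.18 points per percent attended in-class. This is back-of-envelope arithmetic on the content terms only, not the fitted model itself:

```python
# Content-only portion of the predicted score, using coefficients implied
# by the text (assumptions: 0.13 per percent on-line, 0.18 per percent in-class).
B_ONLINE, B_INCLASS = 0.13, 0.18

def content_points(pct_online, pct_inclass):
    """Predicted score points attributable to content exposure alone."""
    return B_ONLINE * pct_online + B_INCLASS * pct_inclass

aw  = content_points(48.1, 1.6)    # average AW student: 48.1% on-line, 1.6% in-class
f2f = content_points(7.7, 67.0)    # average F2F student: 7.7% on-line, 67.0% in-class
print(round(f2f - aw, 1))          # expected gap of about 6.5 points
```

The gap reflects both the lower total exposure of the average AW student and the smaller per-percent payoff of on-line viewing relative to classroom attendance.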
Brown and Liedholm (2002) found similar tendencies in hours devoted to study and grades: 51% of their on-line group reported devoting less than three hours per week to study, while in-class attendance averaged over 80%. Overall course grades were 4.40 points lower in the on-line section. Durden and Ellis (1995) found attendance, measured as the number of absences, to be significant (p = 0.01) but with a small impact on grade. When attendance was measured as a dichotomous variable, no effect was identified until five or more classes were missed. Measuring attendance as a dichotomous variable speaks to the effect of excessive absenteeism on student performance.
The homework intercept is insignificant yet suggests more than 20% of the potential points on homework are due to non-captured variables. The intercept is the second largest calculated and may reflect the grading criteria of the instructor: grades on homework assignments were based on effort and timely submission of work, while test scores were primarily driven by arriving at the correct answers. The grading criteria on homework may explain why the coefficient on GPA is the lowest of the six tests, i.e., prior academic achievement is less important in determining homework scores. Both on-line viewing of content and in-class exposure to content were positively correlated with homework scores and statistically significant. The coefficient for on-line viewing, 0.20, is smaller than that for in-class exposure, 0.27, indicating that on-line students did not earn the same scores for the same level of exposure to content. The higher coefficient for in-class exposure indicates student performance varies based on how course content is received.
Test 1, Test 2 and Cumulative Final Exam
Given the consistency of the testing coefficients, the exam coefficients are discussed as a group. The intercept coefficients are small (12.58-20.38) and none are significant. The coefficients on GPA are large, statistically significant, and stable, ranging from 16.25 to 16.96: each 1.0 increase in GPA predicts a 16-point increase in a student’s exam score. On-line viewing of content was statistically insignificant for all tests, indicating on-line viewing had no impact on test scores.
In-class exposure to content was positive and statistically significant for two of three exams. A 12.0-point increase in score is predicted on test 1 for a student who attended the ten classes prior to the exam versus a student who attended no classes. Similarly, a student who attended all classes prior to the final exam would be predicted to earn 18 points more than a student who attended no classes. The lack of significance for on-line viewing is interesting given that the content delivered on-line and in-class was identical and only the mode of delivery changed. This finding suggests that the mode of instructional delivery has a significant impact on exam performance.
The coefficients for the research paper parallel those calculated for homework: the intercept is positive, large, and significant, and GPA is positive and significant but smaller than the coefficients calculated for exams. The research paper and homework were different exercises than the exams in that time limits on performance were relaxed. F- and t-tests showed there was no difference in the average scores or variances on the research paper. Multiple regression shows on-line viewing is unrelated to the research paper score, while in-class exposure to content is correlated with higher scores. The lack of difference found in the t-test between the two modes of delivery may be due to other non-captured variables; the intercept was significant and may account for one or more variables that counterbalance the positive impact of in-class exposure to content. Another explanation for the lack of significance in the t-test is that this assignment came late in the semester and compares those students most likely to successfully complete the course rather than those initially registered. The sample size for the research paper was 70; by this point in the semester, four AW students and two F2F students had withdrawn.
Smeaton and Keogh (1999) conducted a similar study on the effectiveness of on-line and in-class lectures and concluded there was no significant difference in end-of-course exam scores between a virtual class in 1997 and a traditional class in 1995. The authors also concluded “the amount of course taken does not correlate at all with exam performance” based on scores in the 1997 class. This conclusion raises the question: what value are the lectures if viewing or not viewing content does not impact grades?
This study finds a lecture delivered in the classroom is positively correlated with performance and the same lecture when recorded and viewed on-line is positive but insignificant. In-class students can expect to earn higher grades if they increase class attendance but AW students cannot expect to earn higher scores on tests or research papers if they view more content on-line. McLuhan (1964) held the medium could amplify or diminish the message; in this case the medium may be negatively affecting student behavior and producing lower performance.
Why should we expect differences in the quantity and quality of on-line study and/or instruction?
Comparing a live lecture to its recorded counterpart makes clear why AW instruction is not comparable to classroom instruction. The live performance is shared and immediate; it involves multi-channel, two-way communication that places the audience in a vulnerable position; and the speaker controls the amount of material delivered and the environment.
One of the greatest obstacles for recorded lectures may be the audience’s prior experience with and expectations for recorded material. Students in industrialized countries have been raised on high-production-value television programming; the low production value of academic lectures may not be able to capture and hold the audience’s attention. Academic lectures lack most of the features that television and movies employ to hold an audience’s attention: they generally have no musical cues, laugh tracks, or zoom capability and are built on poor visuals, a single fixed camera position, and slow pacing. The low production values of academic lectures may produce the same outcome as poor television: the audience changes the channel or goes to sleep.
The “sage on the stage” approach may work in classrooms because of students’ different expectations for live performance. The classroom creates unity of purpose among the audience, regulates the amount of content delivered, and reduces distractions. Classrooms offer students the ability to observe and model the behavior of other students, whereas on-line learning is mainly a solitary task. Students can observe their classmates to determine if others are “getting” the material and use this information to seek immediate (during class) or postponed (after class) clarification. When one’s attention begins to wane, the knowledge that others are simultaneously experiencing the same event may make the experience more bearable.
On-line recordings require students to self-direct their learning. On the positive side, the student can choose the time and place that are most convenient; on the negative side, those choices may involve listening to lectures while attempting to multi-task. The on-line learner is responsible for regulating the amount of content covered at a particular time and for managing distractions. Mediasite statistics demonstrate on-line users often do not listen to entire lectures and consume the parts they do listen to in small increments. The advantage of the classroom is that the instructor regulates how much time will be spent and the lecture is consumed at a single sitting. Social convention works against students walking out of class, so one of the prime differences between distance and traditional education revolves around the desirability of piecemeal and often incomplete consumption of content versus all-at-once classroom consumption.
The live lecture is immediate and has a “capture it or lose it” quality that should enhance attention; the AW lecture, on the other hand, can be rewound and replayed, fostering a mindset of “if I don’t get it the first time, I can play it again.” Communication in the classroom is immediate, multi-channel, and bi-directional. The classroom offers students the opportunity to ask questions and receive immediate clarification from the instructor; on-line students can submit questions, but this often entails a less-than-prompt response. The classroom instructor also observes the facial expressions and body movements of students to determine if they understand the material and can alter the pace of the lecture or repeat material as they see fit.
The audience in a live performance is more at risk and may be inclined to pay greater attention. The classroom offers the threat that inattentive students will miss information and may be called upon to answer a question. Common courtesy encourages most students to at least pretend to be interested in a live speaker as well as to stay for the duration of the remarks. AW statistics again show that on-line students do not view entire lectures, and the amount of attention they give to the lecture cannot be assessed.
The differences between on-line and classroom delivery of content (the ability to start, stop, or repeat a lecture; e-mail an instructor a question without exposing oneself to the ridicule of fellow students; learn in a solitary environment; and self-determine the amount of content covered at a particular time) may be advantages for some students. Grow (1991) categorizes students on a scale from dependent, those who should be in a classroom, to self-directed, those who thrive in on-line environments. Self-directed students have the ability to set their own goals, possess time and project management skills, and are capable of self-evaluation.
This study has demonstrated that the difference in performance between AW and F2F students is partly based on self-direction, choosing to bypass course material, and the lower effectiveness of on-line lectures. The negative impact of these factors can be overcome by self-directed students. In this case, the disadvantages of AW can be overcome by high incoming GPAs, more diligent use of on-line lectures, and/or attending in-class lectures. In this class, if AW students had increased the amount of content viewed to the F2F average, the overall course grade gap between AW and F2F students would have shrunk from 5.6 to 3.1 points.
One drawback of a natural experiment is self-selection. This study does not meet the standards of a randomized controlled study, where the only difference between groups is the intervention because subjects have no control over the group to which they are assigned. While random assignment increases confidence that the intervention was the cause of any change observed (versus some underlying difference in the groups), the natural experiment may increase confidence in the generalizability of the findings. This study shows that AW is a less effective mode of instruction given the type of student that chooses on-line education. Another weakness of the study is omitted variables: multiple regression analyses were run using only three independent variables, whereas other studies of academic performance used more predictors, including standardized test scores, credit hours completed, age, race, gender, and income, among others.
F2F students performed better than their AW counterparts in a research methods class when the primary mode of instructional delivery was classroom lectures for the in-class students and the same lectures recorded and viewable over the internet for the on-line section. F2F performance was typically 5 to 10 points higher than the scores earned in the AW section. The lower performance of AW students was related to lower utilization of course content and the lower effectiveness of on-line recordings relative to classroom lectures. This study demonstrated that, for this subject and given the mode of instructional delivery and the types of students who select F2F and AW instruction, F2F students outperformed their AW counterparts.
The results are not generalizable to all forms of AW, but they disprove the contention of “no significant difference.” An exhaustive study of the difference between AW and the classroom would have to control for a multitude of variables: the subject, the method of AW delivery, how performance is measured, and student characteristics, among others. The question of the effectiveness of AW versus the classroom will never be settled as long as unmeasured variables can be identified; in this case, did the mode of delivery or unmeasured student factors produce the lower AW scores? The study showed that AW students devoted less time to acquiring course content than their F2F counterparts; was this due to how content was delivered or to the AW student? It seems reasonable to conclude that how content is delivered matters and that the degree to which it matters varies by individual.
In this course the majority of students produced similar scores whether they were on-line or in-class, but AW presents additional challenges not faced by F2F students, as evidenced by poorer performance and higher drop-out rates in the AW section. AW may present large obstacles for students with poor academic skills. As educators we must do a better job of “selling” AW programs so at-risk students understand the challenges they will face. We must provide advising services that guide students into the academic setting where they are more likely to succeed, and when guidance fails, we must provide support services to overcome the additional challenges of AW learning.
Brown, B. W., & Liedholm, C. E. (2002). Can Web Courses Replace the Classroom in Principles of Microeconomics? American Economic Review, 92(2), 444-448.
Carr, S. (2000). As Distance Education Comes of Age, the Challenge Is Keeping the Students. Chronicle of Higher Education, 46(23), A39-A41.
Durden, G. C., & Ellis, L. V. (1995). The Effects of Attendance on Student Learning in Principles of Economics. American Economic Review, 85(2), 343-346.
Eskew, R. K., & Faley, R. H. (1988). Some Determinants of Student Performance in the First College-Level Financial Accounting Course, The Accounting Review, LXIII(1): 137-147.
Grow, G. O. (1991). Teaching Learners To Be Self-Directed. Adult Education Quarterly, 41, 125-149.
Johnson, A. C., Johnson, M. B., and Buse, R. C. (1987). Econometrics. New York: Macmillan Publishing Co.
McLaren C. H. (2004). A Comparison of Student Persistence and Performance in Online and Classroom Business Statistics Experiences. Decision Sciences Journal of Innovative Education, 2(1), 1–10.
McLuhan, M. (1964). Understanding Media, New York: McGraw-Hill.
Phipps, R., and Merisotis, J. (1999). What’s the Difference? A Review of Contemporary Research on the Effectiveness of Distance Learning in Higher Education, Washington DC: Institute for Higher Education Policy.
Ross, T. K., and Bell, P. D. (2007). “No Significant Difference” Only on the Surface, International Journal of Instructional Technology and Distance Learning, 4(7). Retrieved August 18, 2008 from http://www.itdl.org/Journal/Jul_07/article01.htm.
Russell, T. L. (2001). The No Significant Difference Phenomenon.
Shachar, M., and Neumann, Y. (2003). Differences Between Traditional and Distance Education Academic Performances: A Meta-Analytic Approach. The International Review of Research in Open and Distance Learning, 4(2). Retrieved August 18, 2008 from http://www.irrodl.org/content/v4.2/shachar-neumann.html
Smeaton, A., and Keogh, G. (1999). An Analysis of the Use of Virtual Delivery of Undergraduate Lectures. Computers and Education, 32, 83-94.
Wojciechowski, A., and Bierlein Palmer, L. (2005). Individual Student Characteristics: Can Any Be Predictors of Success in Online Classes? Online Journal of Distance Learning Administration, 8(2). Retrieved August 18, 2008 from http://www.westga.edu/~distance/ojdla/.
About the Author
Thomas K. Ross, Ph.D., is a member of the faculty at East Carolina University in the Department of Health Services and Information Management and received his Ph.D. in economics from St. Louis University. He teaches courses on Applied Health Care Research, Quality Management in Health Care, Health Care Financial Management, and Health Care Strategic Planning and Management. Dr. Ross has worked in health care finance as a director of patient accounts and as a financial analyst.