Editor's Note: This study resolves a number of issues about test delivery and test performance, and poses questions for future research. Some subtle way to "observe" unproctored students might help to explain differences in results. The age factor also deserves additional investigation.
The Relationship Between Performance Levels
and Test Delivery Methods
Patricia Royal, Paul Bell
The purpose of this research study was to determine whether a relationship exists between test performance and test delivery method, particularly for students taking proctored versus un-proctored online exams. Participants in the study were a cohort of undergraduate students enrolled in two on-campus sections and one distance education section of an undergraduate applied medical sciences course. Students in all three sections had the same instructor, read the same textbook, had access to the same course content via the same web-based learning platform, and had access to video recordings of weekly on-campus learning sessions. Students from all three sections were randomly divided into two groups. One group completed their exams via web-based delivery with supervision, while the other completed exams via web-based delivery without supervision. Scores were compared to assess whether a difference in performance level exists. Although the average test score across the four exams was consistently higher for the un-proctored group than for the proctored group, the difference was statistically significant only for the first two exams.
Keywords: proctored versus un-proctored testing, test delivery methods, web-based testing, supervised testing, unsupervised testing, asynchronous web-based learning, online testing, online learning, online assessment, on-campus learning, distance learning.
Test Delivery Methods
The current research was conducted to determine whether there is a relationship between student test performance and method of test delivery. With the growth of distance education, many studies have examined the influence of learning media on student performance, with mixed results (Steinweg, Davis, & Thomson, 2005; Ragan & Kleoppel, 2004; Thirunarayanan & Perez-Prado, 2001-2002). However, little research has investigated the influence of testing format on student performance, specifically the effect of proctored versus un-proctored exams.
With the rise in distance education courses, computer-aided exams have become commonplace for most faculty and students. Students who enroll in distance education courses do so with the understanding that they will spend most of their study time in front of the computer and that exams will usually be taken via computer. Additionally, many faculty members who teach campus courses also use computers for testing. The course material may be covered in class via lectures; however, the testing may be completed using computer-aided technology. There are various reasons for using online testing, including automated grading, less time spent administering the exam, instant feedback for students, and flexibility for students to take the exam when prepared (Turner, 2005; Warren & Holloman, 2005; Wellman & Marcinkiewicz, 2004; Greenberg, 1998). However, along with the advantages come some concerns regarding academic honesty, accessibility, student learning styles, limited computer skills and use of supporting software, and student motivation (Turner, 2005; Summers, Waigandt, & Whittaker, 2005; Lorenzetti, 2006).
A large number of research studies have compared distance education courses with campus courses to establish whether there are differences in student learning outcomes. Research conducted by Reasons, Valadares, and Slavkin (2005) compared three delivery formats: traditional campus courses, hybrid courses, and internet-based courses. Student outcomes were measured by course participation, final course grade, and interaction with the course website. The courses researched were Educational Psychology and Health Care Delivery Systems, and the software used to support the internet-based course was Blackboard. The results indicated no significant differences among the courses in the rate of student participation. However, students in the internet-based course performed better than their counterparts in the hybrid and campus-based settings. The researchers suggest this may have occurred because the internet-based students interacted more frequently with their learning medium than learners in the other formats.
Ragan and Kleoppel (2004) compared distance-based students with in-residence students using academic outcomes as the form of measurement. Both groups of students were in the Pharm.D. program in the Kansas School of Pharmacy, and Blackboard was again the software supporting the distance-based program. Consistent with the previously mentioned study, the results indicated that the distance-based students slightly outscored the in-residence students on exam scores. Ragan and Kleoppel do note that a limitation of the study was that no measurement of incoming skills and knowledge was taken prior to the study.
Other studies comparing distance education and traditional courses have found results contradicting the previous studies. Warren and Holloman (2005) conducted a research study comparing student outcomes where one course section was a traditional face-to-face course and the other section was taught online. The 52 students were randomly assigned to either the campus course or the internet course. Students completed pre- and post-assessments in a self-evaluation format that addressed their level of expertise regarding the competencies and objectives of the course. The results of the study indicated that there was no significant difference between the two sections. These results include both the pre- and post-assessments, as well as the final grades.
Another study finding no significant difference was conducted by Summers, Waigandt, and Whittaker (2005) with undergraduate nursing students. The required statistics course consisted of thirty-eight students who were allowed to choose either the web-based format or the traditional face-to-face format. The delivery system used to present the course was WebCT. The instructor was the same for both courses and the content was equivalent. Both course formats had the same exams and the same amount of time allocated for testing, and both formats had a proctor present during exams. Although the distance education students were less satisfied with the overall course, there was no significant difference in grades.
With the surge of distance education courses, there has been an increase in research investigating advantages as well as disadvantages of this learning format compared to other learning media. Such research has included studies of the overall academic performance of learners, student and faculty satisfaction, and the technology used to support distance education programs. However, as mentioned earlier, research studies focusing on proctored versus un-proctored exams have been rare. One recent study conducted by Wellman and Marcinkiewicz (2004) did focus on the impact of proctored versus un-proctored quizzes on student learning. In this study of 120 students, pre- and post-tests were completed along with quizzes based on specific assigned chapters of the textbook. Wellman and Marcinkiewicz defined learning as the change between pre-test and post-test scores. Although no difference in test scores was found between the two groups, students in the un-proctored online group did outscore their proctored counterparts on the quizzes.
Following a design similar to that of Wellman and Marcinkiewicz, the current research used test scores to compare proctored and un-proctored testing. The researchers were interested in whether student performance was influenced by testing format, specifically proctored versus un-proctored delivery.
Method
Participants: Undergraduate students enrolled in an Applied Medical Science course at East Carolina University. The study began with 80 students, including both campus and distance education students. Before the cohort was divided into two groups (proctored versus un-proctored), two students withdrew from the course, leaving a total of 78 students: 19 males and 59 females. Three students did not complete the consent form for the study, leaving a total of 75 students. The students were randomly divided into two groups. Group 1 (un-proctored) had a total of 38 students, comprising 23 campus and 15 distance education students. Group 2 (proctored) had a total of 37 students, comprising 23 campus and 14 distance education students.
Course: The Applied Medical Science course is a required course for students seeking a degree in either Health Information Management or Health Services Management. All students had taken the same prerequisites and were admitted to the program prior to enrolling in this course. The course is divided into three sections due to the large number of students in the program. One section is considered distance education, while the other two sections are counted as campus courses and are taught on two different days due to inadequate classroom space. There is one instructor for all three sections, and the required textbooks are the same. Although two sections of the course are considered campus format, none of the students are required to attend class because the instructor uses video recordings for the lectures. Using Mediasite technology, these recordings are placed on WebCT, the distance education delivery platform. Therefore all students can view the same lectures and PowerPoint slides used in the teaching, and all content information can be found on WebCT. The only requirement for attending class applied to those students who took the proctored examinations. Computer-aided testing was used for all students.
Procedure: The students were given a consent form at a program orientation at the beginning of the semester. Students who were not able to attend the orientation received a consent form in the mail to be signed and sent back to the university prior to the start of the semester. The entire cohort was divided into two groups, randomly assigned to either the proctored or the un-proctored exams. All students were told they could not use textbooks or notes, or talk with other students, when taking the exam. During the semester, the students were given four multiple-choice exams of 35 questions each. Group 1, the un-proctored students, were free to take their exams at their leisure, but within a specified time frame. Group 2, the proctored students, were assigned to come to class on a specific day and time to take the exam, or assigned to be proctored at a local community college. The students who came to the university were proctored by a faculty member in the Health Services and Information Management Department, while the students at the community college were given a specific time to report to the college. All students took their exams through the WebCT interface, and the time allocated for the exams was the same for all students. The only difference between the groups was the time of day and the day of the week on which they were allowed to take the exam.
Results
The current study was designed to determine whether a relationship exists between method of test delivery and student performance. Students were given four exams during the semester.
Exam 1: There were 37 un-proctored students who took exam 1, and 34 students who took the exam with a proctor. Three students withdrew from the study prior to exam time, and one student attempted to complete the exam at a local community college but found the prearranged proctor unavailable to proctor the exam.
Exam 2: There were 36 un-proctored students who took exam 2; of the original 37, one student did not take the test. There were 32 students who took the exam with a proctor; of the 35 remaining students, one withdrew from the study and two did not take the exam.
Exam 3: There were 36 un-proctored students who took exam 3; of the 37 students, one did not take the exam. There were 33 students who took the exam with a proctor; of the 35 remaining students, one had a medical withdrawal and one did not take the test.
Exam 4: There were 35 un-proctored students who took exam 4; of the 37 remaining students, two did not take the exam. There were 31 students who took the test with a proctor; of the 34 remaining students, three did not take the exam.
Exam 1: The mean score for the un-proctored students was 81.3, while the mean for the proctored students was 73.8.
Exam 2: The mean score for the un-proctored students was 90.2, while the mean for the proctored students was 82.8.
Exam 3: The mean score for the un-proctored students was 92.2, while the mean for the proctored students was 88.2.
Exam 4: The mean score for the un-proctored students was 84.2, while the mean for the proctored students was 80.8 (see Table 1).
Relationship between exam scores: To determine whether the difference between the mean test scores of the two groups was statistically significant, a t-test assuming equal variances was computed (see Table 2).
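The analysis above can be sketched in code. The following minimal Python example computes the two-sample t statistic under the equal-variances (pooled) assumption that the study's analysis used; the score lists are hypothetical illustrations, not the study's data, and the function name is ours.

```python
import math
import statistics

def pooled_t_statistic(a, b):
    """Two-sample t statistic assuming equal variances.

    Uses the pooled variance, which weights each sample's
    variance by its degrees of freedom (n - 1).
    """
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    # Pooled estimate of the common variance.
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    # Standard error of the difference between the two means.
    se = math.sqrt(sp2 * (1 / na + 1 / nb))
    return (statistics.mean(a) - statistics.mean(b)) / se

# Hypothetical exam scores for illustration only.
unproctored = [85, 90, 78, 92, 88, 81]
proctored = [70, 75, 80, 72, 78, 74]
t = pooled_t_statistic(unproctored, proctored)
```

A positive t here indicates the first group's mean exceeds the second's; the statistic would then be compared against the t distribution with n1 + n2 - 2 degrees of freedom at the 0.05 level, as in the study's Table 2.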
To further characterize the relationship between exam scores and test delivery methods, Grade Point Averages (GPA) were calculated. The mean GPA for group 1 was 3.08, while group 2 had a mean GPA of 3.06 (see Table 3). In addition to comparing the GPAs, the ages of both groups of students were compared to reduce the possibility that age was a contributing factor in the scores. The mean age for the un-proctored students was 27.8, while the mean age for the proctored students was 23.8 (see Table 4).
Summary Statistics for Scores on Exam 1
Summary Statistics for Scores on Exam 2
Summary Statistics for Scores on Exam 3
Summary Statistics for Scores on Exam 4
T-Test Analysis of Exam Scores
*Difference is significant at the 0.05 level.
Summary Statistics for Grade Point Averages
Summary Statistics for Student Ages
Summary and Discussion
The purpose of this study was to determine whether a relationship exists between test performance and method of test delivery among undergraduate students in a medical science course offered at East Carolina University.
The sampling frame was the students enrolled in the medical science course. The students were randomly divided into two groups. Group 1, the un-proctored students, completed the course exams at their leisure without a proctor, while Group 2, the proctored students, completed the same exams while being supervised either at the university or by an independent proctor. Version 15.0 of the Statistical Package for the Social Sciences (SPSS) was used for the statistical analyses. Frequencies and summary statistics were computed for the exam scores of both groups. Means and standard deviations were computed for each group's scores and for each group's grade point average. A t-test was computed on the groups' test scores to determine the relationship between the two variables.
The students in the sample were those who had already enrolled in the course. All of these students were in the same program and had the required prerequisites for the course. Since there could be a difference in performance between the distance education and campus students, each cohort was randomly assigned to be either un-proctored or proctored. In order to control for the possible influence of previous academic achievement on student performance, GPAs were calculated for both cohorts; there was no significant difference in average GPA between the two testing cohorts. The only noticeable difference between the groups was the ratio of males to females. All students were given the same study materials and access to lectures via the Mediasite network.
Although the average test score across the four exams was consistently higher for the un-proctored group than for the proctored group, the difference was statistically significant only for the first two exams. One possible explanation is that proctored students began to study more diligently, recognizing that no assistance would be available during the exam. It is also possible that proctored students were more nervous or uncomfortable during the first two exams and felt more at ease by the last two. Finally, un-proctored students' scores may have been higher because of the advantage afforded by being un-proctored: they could have accessed the text and other resources during the exam.
The findings of this study show a relationship between student performance and test delivery method. However, the overall difference was not significant: the hypothesis of a significant difference between test performance and test delivery method was substantiated on only half of the exams. Since little research has addressed this issue, it is difficult to make comparisons with other studies. However, the results were consistent with those of Wellman and Marcinkiewicz (2004) with respect to test scores.
Recommendations for Future Research
This study provides information on student performance and test delivery methods. A larger study including more students and more frequent testing would further knowledge of the relationship between performance and test delivery methods. A larger study would also be more generalizable to other students, courses, and colleges and universities.
This research addressed a growing concern: the relationship between student performance and test delivery methods. To test the hypothesis that un-proctored students tend to outscore proctored students because they can access test-taking resources, it would be useful to design a study in which all learners are subject to exactly the same test-taking conditions; that is, their internet access would be blocked during the test, and they would be electronically proctored via webcam and surveillance software.
Greenberg, R. (1998). Online testing. Techniques, Making Education & Career Connections. 73(3), 26-29.
Lorenzetti, J. (2006). Proctoring assessment: Benefits and challenges. Distance Education Report. 10(8), 5-6.
Ragan, R., & Kleoppel, J. (2004). Comparison of outcomes on like exams administered to in-residence and asynchronous distance-based Pharm.D. students. Journal of Asynchronous Learning Networks, 8(4).
Reasons, S., Valadares, K., & Slavkin, M. (2005). Questioning the hybrid model: Student outcomes in different course formats. Journal of Asynchronous Learning Networks, 9(1).
Steinweg, S.B., Davis, M.L., & Thomson, W.S. (2005). A comparison of traditional and online instruction in an introduction to special education course. Teacher Education and Special Education, 28(1), 62-73.
Summers, J., Waigandt, A., & Whittaker, T. (2005). A comparison of student achievement and satisfaction in an online versus a traditional face-to-face statistics class. Innovative Higher Education, 29(3), 233-250.
Thirunarayanan, M.O., & Perez-Prado, A. (2001-2002). Comparing web-based and classroom-based learning: A quantitative study. Journal of Research on Technology in Education, 34(2), 131-137.
Turner, C. (2005). A new honesty for a new game: Distinguishing cheating from learning in a web-based testing environment. Journal of Political Science Education, 1, 163-174.
Warren, L., & Holloman, Jr., H. (2005). On-line instruction: Are the outcomes the same? Journal of Instructional Psychology, 32(2), 148-151.
Wellman, G., & Marcinkiewicz, H. (2004). Online learning and time-on-task: Impact of proctored vs. un-proctored testing. Journal of Asynchronous Learning Networks, 8(4).
About the Authors
Patricia Royal, Ed.D., is an assistant professor in the Health Services and Information Management Department in the College of Allied Health at East Carolina University in Greenville, NC. She holds a master's degree in social work and a doctoral degree in higher education, and completed postgraduate work in public health. Email: email@example.com
Paul D. Bell, PhD, RHIA, CTR, is Associate Professor of Health Services and Information Management in the College of Allied Health Sciences at East Carolina University in Greenville, NC. Email: firstname.lastname@example.org
Address for both Drs. Royal and Bell:
Health Services and Information Management
Health Sciences Drive
College of Allied Health
East Carolina University
Greenville, NC 27858