December 2010

Editor’s Note: New media are often criticized for perceived weaknesses; when those shortcomings are overcome, the result is a superior learning tool. The virtual classroom was initially criticized for isolating students and professors; tools for community building and interactivity now favor distance learning for many subjects. The virtual classroom and asynchronous activities enable mid-career professionals to integrate meaningful learning experiences into their busy schedules, and they open up college programs to people anywhere who are unable to attend on-campus programs. Instructional designers, teachers, and administrators are eager to optimize their contributions to this new learning environment, and they have great interest in what research can tell us from the point of view of instructors and students.

Do you teach in a Virtual Classroom?
Measuring students’ perceptions of the features and characteristics

Michele A. Parker, Emily R. Grace, and Florence Martin
USA

Abstract

Student learning is a key element of instructional technology, yet little is known about the Virtual Classroom, including students’ perceptions of its features and characteristics. Reliable and valid instruments are therefore needed to measure these attributes. The researchers assessed the reliability and validity of a survey designed to address this need. The sample consisted of 57 students from three classes at a Southeastern university in the United States. Face and content validity were established by a panel of experts. After data collection, internal consistency reliability was determined for the features (α = .92) and the characteristics: interactivity = .70, synchrony = .70, usefulness and ease of use = .76, and sense of community = .77. Although the four characteristics were reliable, some items within each construct need further investigation due to low item-to-total correlations (< .30). The results are discussed along with implications for future Virtual Classroom research.

Keywords: e-learning, online education, Virtual Classroom, simulated classroom, synchronous, real-time instruction, validity, reliability, survey development, instrument construction

Introduction

In the fall of 2008, over 4.6 million college students in the United States were enrolled in an online course, a 17% growth in online enrollment in one year (Allen & Seaman, 2009). This growth can be attributed to the flexibility and convenience that online courses offer students and instructors. With a variety of digital learning environments and platforms, classrooms may be physical, digital, or a combination of the two, and instruction can be synchronous or asynchronous (Clark & Gibb, 2006). Although online instruction is rapidly gaining acceptance as an alternative and supplement to traditional instruction (Arbaugh, 2000), one of the major challenges that educators face is designing effective online courses (Muirhead, 2004; Keefe, 2003).

According to Liaw, Huang, and Chen (2007), there are three main considerations for online instruction: autonomous learning, multimedia, and teacher-led learning. An online environment that captures these principles, and which is rapidly growing in popularity, is the Virtual Classroom (Flatley, 2007; Gilmore & Warren, 2007; Arbaugh, 2000). The Virtual Classroom (VC) enables students and instructors to interact in real time as if they were face to face in a classroom. To use the VC, an instructor and his or her students must have internet access and log in to the software at the designated time(s). Many virtual classrooms (e.g., Horizon Wimba, Dim Dim) include audio, video, text chat, application sharing, content display, polling, and other features. These interactive features allow users to simulate traditional instruction while maintaining the flexibility of online environments.

In 2010, Author and Author examined instructional technology students’ perceptions of the features and characteristics of the VC in order to learn more about this instructional tool. In that study, the characteristics of the VC were grouped into four main categories: usefulness/ease of use, synchrony, interaction, and sense of community. A similar grouping developed by Arbaugh (2000) was used in prior studies of the VC with MBA students. With the explosion of online enrollment (Allen & Seaman, 2009) and the increasing use of the VC, it is important to understand this learning environment. In this study, the researchers discuss the reliability and validity of the Virtual Classroom Instrument, which can be adapted and used in subsequent studies.

Literature Review

VCs have been known to promote synchrony, interaction, and sense of community in different contexts (Clark, 2005; Constantinos & Papadakis, 2009; Author & Author, 2010; Rovai, 2005). Scholars have also discussed the ease of use and usefulness of many technologies in relation to adoption and student learning (Arbaugh, 2000; van Raaij & Schepers, 2008). The following literature covers these characteristics as they relate to the VC.

Usefulness and Ease of Use

Usefulness and ease of use are significant factors in the acceptance and use of new technologies. Perceived usefulness increases as users gain experience, while computer anxiety decreases and personal innovativeness increases (van Raaij & Schepers, 2008). DuFrene, Lehman, Kellermanns, and Pearson (2009) found that perceived usefulness and perceived ease of use are positively associated with perceived learning outcomes. Arbaugh (2000) found that perceived usefulness is associated with student satisfaction, while perceived ease of use is not. This aligns with research by van Raaij and Schepers (2008), who found that perceptions of ease of use decrease after initial training, but perceptions of usefulness increase as users gain experience with online course platforms. Student perceptions of the usefulness and ease of use of technologies can provide guidance for instructors to develop and carry out online instruction more effectively.

Interactivity

Interaction is an important aspect of learning and is essential in virtual learning (Ng & Murphy, 2005). A variety of interaction types are available in VCs, including text chat, vocal exchanges, and real-time video streaming. This interactivity provides increased opportunities for collaborative learning. There is less competition for attention from the instructor, less wait time to participate, increased participation from introverted students, and decreased anxiety during interactions (Arbaugh, 2000; Gilmore & Warren, 2007). The use of student teams, which is feasible in the VC, can also influence the quality of online interaction (Dineen, 2005).

Online interactions are influenced by the structure of the course and the role of the instructor (Arbaugh, 2000; Dineen, 2005; Lee & Rha, 2009; Rhode, 2009). Ng and Murphy (2005) found that faculty used online interaction more for clarification than for promoting students’ higher-order thinking. In contrast, student-to-student interaction can be an effective way to communicate learning in online courses, and it is not necessary for the teacher to participate in all interactions (Lee & Rha, 2009). Rhode (2009) found that informal interactions were as important as formal interactions in online learning. Enhanced interaction includes compensating for the lack of visual and non-verbal cues (Bielman, Putney, & Strudler, 2003), for example, by using emoticons to show facial expressions, all capital letters, acronyms, exaggerated spelling, splitting messages as if pausing for a breath, or other tools available on the course platform.

Sense of Community

Virtual communities are composed of individuals who share information, knowledge, and common interests (Ardichvili, 2008). They are social constructions created through the interaction and activity of the members of the group (Vygotsky, 1997, as cited in Bielman, Putney, & Strudler, 2003). The formation of a sense of community can be facilitated by the instructor's awareness of student needs and vulnerabilities in an online course and through social tasks using message boards, chat rooms, internet lectures, and personal web pages (Falvo & Solloway, 2004). Even a low degree of moderation by an instructor can enable online groups to form a community with an element of camaraderie, support, and warmth (Winograd, 2000).

Strategies to increase a sense of online community include building rapport, decreasing feelings of isolation, and enhancing interaction (Bielman, Putney, & Strudler, 2003). Building rapport may involve providing choices for student assignments and using student names. To decrease feelings of isolation, the instructor can use collaborative activities and encourage students to share experiences. Study groups also strengthen the development of a community of learners (Knupfer, Gram, & Larsen, 1997, as cited in Tallent-Runnels et al., 2006). Group work can increase the sense of community in online classrooms so long as it includes instructor support, a structured format, and the development of social tasks (Cameron, Morgan, Williams, & Kostelecky, 2009).

Synchrony

Synchronous technology connects users at the same time. The real-time communication in synchronous courses enables students to receive direct, immediate feedback (Mikulecky, 1998; Tallent-Runnels et al., 2006). Additionally, shy students are more likely to participate in discussions and express opinions in synchronous courses than in face-to-face classes (McBrien, Jones, & Cheng, 2009). However, using too many simultaneous modes of communication may overstimulate students and cause confusion (McBrien, Jones, & Cheng, 2009). Chat options in synchronous online courses are actually quasi-synchronous, as there is lag time from typing, reading, and server delays. Pauses in synchronous communication can signal emphasis, compliance, or confusion, which may differ from face-to-face communication (Markman, 2009). Real-time communication in synchronous courses can also be threatened by technology issues (McBrien, Jones, & Cheng, 2009). Yet the challenges involved in synchronous technologies are outweighed by the many ways they support student learning.

Purpose of the Study

Two characteristics integral to measurement are reliability and validity (American Educational Research Association, 2004). Studies of the reliability and validity of instruments are prevalent in the literature (e.g., Clapper & Harris, 2008; Ioannou, 2009; Ludlow, Enterline, & Cochran-Smith, 2008; Suhonen, Schmidt, & Radwin, 2007; Yukay-Yuksel, 2009). The purpose of this study was to assess the reliability and validity of the Virtual Classroom Instrument (VCI), which the researchers designed to understand students’ perceptions of the VC. As previous research has shown, students’ attitudes are related to course satisfaction and learning (Liaw, Huang, & Chen, 2007; Arbaugh, 2000). The VCI contains questions about VC features and four characteristics: interactivity, synchrony, usefulness and ease of use, and sense of community.

Methodology

In the fall of 2008, the VCI was developed by researchers at a Southeastern university. A group of technology specialists determined the face and content validity of the instrument. During a cross-sectional study, the VCI was administered electronically to 101 students enrolled in an undergraduate instructional technology course. Fifty-seven students participated in the study, resulting in a 56% response rate.

Context and Procedure

The respondents were enrolled in three sections of an instructional technology course designed to teach students how to integrate technology effectively in K-12 education. The course material is divided into eight topics, including integrating educational technology into the curriculum; communication and networks; application software and productivity tools; and hardware for educators. Each section of the course required the same textbook and assignments. The instructors of the course had taught using the Horizon Wimba virtual classroom prior to the study. The students were introduced to the VC during one class session and used it on three other occasions for similar content across sections. Toward the end of the semester, students received an email with a brief message about the purpose of the study and a hyperlink to the VCI (hosted by SurveyMonkey). The instrument was available for a three-week period, and weekly email reminders were sent to each student.

Instrument Construction and Description

The Horizon Wimba software guide was used to identify each of the features available within the VC (Wimba, 2009). A thorough search of the e-learning literature revealed the four characteristics and their respective dimensions. This information was used to construct items that reflect the meaning of each characteristic. We paid close attention to the number of items attributed to each feature and characteristic to ensure representativeness (Carmines & Zeller, 1979). Items were positively and negatively worded. The positively worded items were written to address the advantageous aspects of the VC; the researchers expected students to agree with these items and to disagree with the negatively worded items, which were conceptualized and written to represent less favorable aspects of VC instruction. Another rationale for the negatively worded items was to reduce social desirability bias by requiring students to contemplate the meaning of each item carefully before responding (Ludlow, Enterline, & Cochran-Smith, 2008).

Establishing Validity

Four experts were asked to provide feedback regarding the validity of the VCI. Two were instructional technology faculty with an average of eight years of experience in the field; two were experts on instrument construction, each having designed questionnaires and published survey research. They were asked to make suggestions regarding the clarity of the instrument and its ability to ascertain students’ perceptions of the features and characteristics of the VC. The experts were also asked to comment on the overall presentation of the electronic survey, which was deemed appropriate and easy to navigate. The first version of the survey included 15 items in section 1 (features of the VC) and 34 items in section 2 (characteristics of the VC). The feedback from the four experts resulted in one amendment: eliminating the item “viewing the video streamed by my instructor,” as this was considered a process rather than a feature of the VC.

Content validity was established through feedback from two instructional technologists employed at other universities. They were asked to provide comments regarding the format and comprehension of each item. The amendments involved changing words or revising sentence structure to increase clarity and understanding. In the characteristics section, several items overlapped categories; items were either removed or shifted to mutually exclusive categories. The second part of the survey was reduced from 34 to 23 items.

The VCI consisted of two sections that used a 4-point Likert scale (4=strongly agree, 3=agree, 2=disagree, and 1=strongly disagree).  The first section of the VCI asked students to respond to 14 statements about the features (e.g., the use of emoticons, the e-board) of the VC. The second section consisted of 23 statements wherein students were asked to rate their VC experience. The items pertained to interactivity, synchrony, usefulness and ease of use, and sense of community within the VC.

Data Analysis

For the statistical analyses the researchers used SPSS 16. Negatively worded items were reverse scored (R) so that higher ratings corresponded with disagreement with the negative statement. The ratings for each item were summed to create a score for each characteristic, which was used in the analyses. Descriptive statistics (means, standard deviations, and 95% confidence intervals) were used to summarize the data. Cronbach’s alpha was computed for internal consistency reliability, with the criterion set a priori at .70 for the features (as a whole) and the four characteristics (Santos, 1999; Nunnally, 1978). Each item was also analyzed with an item-to-total correlation, which was regarded as acceptable if the correlation was above .30 (Clapper & Harris, 2008; Ferketich, 1991).
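Although the analyses were run in SPSS, the same computations are straightforward to reproduce elsewhere; the following is a minimal sketch in Python with NumPy (function names and the synthetic usage below are illustrative, not the study's code):

```python
import numpy as np

def reverse_score(ratings, scale_min=1, scale_max=4):
    """Reverse-score negatively worded items on a 1-4 Likert scale."""
    return (scale_min + scale_max) - np.asarray(ratings)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def corrected_item_total(items):
    """Corrected item-to-total correlations: each item is correlated
    with the sum of the remaining items (flagged if below .30)."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])
```

Dropping an item whose corrected correlation falls below .30 and recomputing alpha reproduces the "alpha if item deleted" comparison reported in Table 2.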

Results

Participants

Ninety-one percent of the students were female and 8% were male. One percent of the participants were 18 or younger, 73.7% were between 19 and 24 years old, 14% were 25 to 31 years old, and 10.5% were 32 or older. Seventy-four percent of the students were using the VC for the first time, 19.3% had used the virtual classroom for 2-4 semesters, 5.3% had never used it before, and 1.8% had used it for 5 or more semesters.

Virtual Classroom Features

Descriptive statistics for students’ perceptions of VC features are listed in Table 1.

Table 1
Descriptive statistics for students’ perceptions of Virtual Classroom features

Feature                                                  M (SD)        95% CI
View slide presentations posted by instructor            3.09 (.71)    2.90-3.28
Using the whiteboard tools in class                      2.74 (.64)    2.57-2.91
Reading messages from members in text-based chat         2.84 (.68)    2.66-3.02
Posting or replying to a message in a text-based chat    2.82 (.69)    2.64-3.01
Interacting privately using text-based chat              2.49 (.63)    2.32-2.66
Talking to the others using the audio chat option        2.60 (.88)    2.36-2.83
Asking the moderator questions by raising my hand        2.86 (.88)    2.63-3.09
Using the polling feature to respond to questions        3.02 (.72)    2.83-3.21
Using emoticons and other activity indicators            2.56 (.78)    2.35-2.77
Viewing archived virtual classroom sessions              2.88 (.89)    2.64-3.11
Viewing the desktop shared by other participants         3.09 (.76)    2.89-3.29
Using the breakout room in a virtual class session       2.51 (.63)    2.34-2.68
Viewing websites loaded within a session                 2.84 (.65)    2.67-3.01
Being able to moderate a virtual class session           2.68 (.71)    2.50-2.87

The respondents rated the features using a four-point Likert scale (4=strongly agree, 3=agree, 2=disagree, and 1=strongly disagree). Their average responses ranged from 2.49 to 3.09, which indicates a fairly positive view of this learning environment. The ability to view the instructor’s slide presentations (M=3.09) and to view a shared desktop (M=3.09) were rated as more beneficial than the other features. In contrast, one-to-one private chats (M=2.49) and using the breakout rooms during VC sessions (M=2.51) were rated the least beneficial. The 14 items pertaining to the features of the Virtual Classroom had a Cronbach’s alpha of .92.
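As a rough check, the reported 95% confidence intervals in Table 1 are consistent with a standard t-based interval around each mean. A small sketch (the critical value 2.003 for df = 56 is hard-coded here as an approximation; the function name is illustrative):

```python
import math

def ci95(mean, sd, n, t_crit=2.003):
    """Two-sided 95% CI for a mean: mean ± t * sd / sqrt(n).
    t_crit ≈ 2.003 is the t critical value for df = n - 1 = 56."""
    half = t_crit * sd / math.sqrt(n)
    return (round(mean - half, 2), round(mean + half, 2))

# First row of Table 1: M = 3.09, SD = .71, n = 57
print(ci95(3.09, 0.71, 57))  # consistent with the reported 2.90-3.28
```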

Virtual Classroom Characteristics

It was expected that each item-total correlation would meet the set criterion (r > .30) (Ferketich, 1991). Item-total correlations for the interactivity scale revealed two items under the .30 threshold: “my typing hindered me” (r = -.06) and “I could not talk freely because I could not see my classmates face to face” (r = -.41). Similarly, one item on the synchrony scale, “the class was monotonous,” had a correlation below .30 (r = .03). The usefulness and ease of use and sense of community scales each had one item-total correlation below the criterion, r = .21 and r = .28 respectively.

Table 2
Item Analysis for Virtual Classroom Characteristics

Item                                                     Mean (SD)     Scale mean    Alpha if        Item-total
                                                                       if deleted    item deleted    correlation
Interactivity
  Facilitated instructor-to-student interaction          3.09 (.81)    20.09         .61             .64
  Facilitated student-to-student interaction             2.82 (.68)    20.35         .61             .68
  The quality of class discussions was high              2.75 (.91)    20.42         .58             .74
  I learned from my fellow students in this class        2.70 (.82)    20.47         .63             .58
  Instructor frequently attempted to elicit
    student interaction                                  2.70 (.82)    19.81         .65             .53
  My typing hindered me (R)                              3.21 (.62)    19.96         .75             -.06
  I could not talk freely because I could not
    see my classmates face to face                       2.23 (.82)    20.95         .83             -.41
  It was easy to follow class discussions                3.00 (.87)    20.18         .59             .72
Synchrony
  It reduced my travel time to the campus to
    attend face-to-face class                            1.83 (1.01)   12.4          .579            .63
  It reduced my travel cost                              2.86 (.99)    12.39         .565            .66
  It helped me collaborate with peers without
    having to be in the same location                    3.00 (.87)    12.25         .588            .63
  I had bandwidth limitations                            2.02 (.72)    13.23         .684            .33
  I had technical problems                               2.21 (.90)    13.04         .698            .30
  The class was monotonous                               2.32 (.71)    12.93         .753            .03
Usefulness and Ease of Use
  It enhanced my effectiveness                           2.63 (.84)    11.42         .693            .61
  It improved my performance                             2.65 (.83)    11.40         .686            .63
  It was easy for me to become skillful in using VC      2.93 (.65)    11.12         .697            .62
  I found it easy to get the virtual classroom
    to do what I want it to do                           2.82 (.68)    11.23         .677            .68
  I was not confident using the VC (R)                   3.02 (.77)    11.04         .827            .21
Sense of Community
  I felt isolated                                        2.21 (.796)   7.49          .594            .78
  There were not many collaborative activities           2.18 (.71)    7.40          .719            .58
  I did not feel a sense of belonging
    in the classroom                                     2.30 (.68)    7.53          .638            .73
  I worked on my own for most of the projects            3.02 (.767)   6.68          .867            .28

While item analysis focuses on the individual items in a composite instrument (Ferketich, 1991), it is equally important to consider the composite scores. Scales were computed by summing the ratings for each subset of items representing a characteristic. Since the number of items varies for each characteristic, the original values were converted to 0-100 scales to facilitate comparing the means and standard deviations of the scales with one another. Usefulness and ease of use had the highest average rating (M=70.5), followed by sense of community (M=67.5), interactivity (M=65.3), and synchrony (M=63.3). Responses for all the scales generated Cronbach’s alphas between .70 and .77. See Table 3.

Table 3
Descriptive Statistics and Reliability of Virtual Classroom Characteristics

Scales            Range    Mean (SD)      Mean (SD),      95% CI       No. items    Alpha
                                          0-100 scale
Interactivity     1-32     20.9 (3.81)    65.31 (11.9)    19.9-22.0    8            .70
Synchrony         1-24     15.2 (3.31)    63.3 (13.79)    14.4-16.1    6            .70
Community         1-16     10.8 (1.04)    67.5 (6.5)      10.5-11.0    4            .77
Useful and Ease   1-20     14.1 (2.72)    70.5 (13.6)     13.3-14.8    5            .76

SD, standard deviation; CI, confidence interval; Alpha, Cronbach’s alpha coefficient; r, correlation coefficient
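The text does not state the 0-100 conversion formula explicitly, but the reported values are consistent with dividing each summed score by its maximum possible value (e.g., interactivity: 20.9/32 = 65.3). A one-line sketch of that inference (the function name is illustrative):

```python
def to_0_100(raw_sum, n_items, scale_max=4):
    """Convert a summed Likert score to 0-100 by dividing by the
    maximum possible score (an inference from the reported means)."""
    return 100.0 * raw_sum / (n_items * scale_max)

print(round(to_0_100(20.9, 8), 1))   # interactivity: 65.3
print(round(to_0_100(15.2, 6), 1))   # synchrony: 63.3
```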

Discussion

Although the VC is growing in popularity (Arbaugh, 2000; Flatley, 2007), there are few studies of the Virtual Classroom with different populations and in various contexts (Arbaugh, 2000; Author & Author, 2010). The data from this study provide information on the validity and reliability of the Virtual Classroom Instrument, which was designed to measure students’ perceptions of the features and characteristics of this e-learning environment.

Additional improvements may strengthen the VCI further. For instance, eliminating certain items, using the criteria set for item analysis, will increase the reliability estimates of the respective scales. Two examples illustrate this: for the interactivity scale, deleting the item “I could not talk freely because I could not see my classmates face to face” increases the reliability coefficient from .70 to .83. This item may be problematic for conceptual reasons as well as for how it is worded. Conceptually, the item may be less about interaction and more about introversion/extroversion or individual preferences for instruction. In terms of wording, the item combines two negative statements. The reliability of the synchrony scale (α=.70) increases to .75 by removing the last item, “the class was monotonous.” A class that is boring or mundane is different from instruction that is delivered simultaneously to a group of individuals.

The sense of community scale contained one item that fell outside the criterion (r > .30) recommended by Ferketich (1991). We suggest reconsidering the item “I worked on my own for most of the projects” (r=.28). On the surface this item seems related to community; upon closer scrutiny, it could also be about individual work habits or preferences rather than the formation of community. Another plausible rationale for the low correlation pertains to the class in which this study took place: in the instructional technology class, students were required to submit projects individually, which runs counter to developing a sense of community. Future studies should consider the nature of the class as it relates to this domain. Regardless of the explanation, items within the sense of community scale may need to be replaced or modified prior to subsequent use of the instrument and then retested for reliability and validity.

Newly crafted items will need to adhere to instrument construction guidelines. As such, they should avoid double negatives like the item suggested for deletion from the interactivity scale; statements like this often confuse respondents and can increase measurement error (Dillman, 2000). Other researchers may want to add questions relevant to their studies, such as prior online course enrollment, type of delivery method (fully online, hybrid, etc.), frequency of VC use, and students’ familiarity with other forms of technology, to see how these variables correspond with students’ perceptions of the VC.

Both face validity and content validity can be limited ways of ascertaining whether a measurement tool is valid. Other types of validity may provide stronger evidence. For example, instruments that deal with the same constructs can be administered to respondents and used to determine convergent validity. Alternatively, construct validity can be determined by conducting factor analyses. While this study does not have an adequate sample size to accomplish this, promoting the use of this instrument in future studies can lead to its use with larger samples; once an appropriate sample size is obtained, this manner of validation can occur. Burns and Grove (2001) suggest a minimum of 10-15 participants for each item. Based on the items that represent the VC characteristics in this study (n=23), the sample should consist of a minimum of 230 respondents (10 x 23).
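The sample-size arithmetic behind that recommendation is simple to make explicit; a tiny sketch (the function name is illustrative):

```python
def min_sample(n_items, per_item=10):
    """Burns & Grove (2001) rule of thumb: at least 10-15
    respondents per instrument item for factor analysis."""
    return n_items * per_item

print(min_sample(23))       # 230 respondents at the 10-per-item minimum
print(min_sample(23, 15))   # 345 at the upper bound
```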

Limitations

The researchers acknowledge the lack of generalizability of this study due to the nature and size of the sample. However, due to the novelty of the Virtual Classroom, obtaining larger samples is difficult. Ferketich (1991) acknowledges the difficulty of finding 200-300 subjects for item analysis when an instrument is designed for rare populations; in clinical populations, item analysis is usually conducted with far fewer subjects. While one-source and social desirability response bias may be present, these are inherent in survey research used in this capacity (Boardman & Sundquist, 2009). While this study does not have an adequate sample size for factor analysis, which would validate the constructs on the VCI, the authors sought experts to help determine face and content validity. Although the experts judged the items valid, a few items appear problematic; these have been identified and can be addressed in future iterations of the instrument.

Conclusion and Future Research

Despite these limitations, the data can be used by instructors, researchers, and practitioners who are interested in students’ perceptions of the Virtual Classroom. This study provides evidence of validity and acceptable reliability for the VCI as a measure of students’ perceptions of the VC. However, several items within the characteristics need more testing to further establish the reliability and validity of the VCI. Making improvements to the existing instrument will strengthen the quality of data on the VC. In the future, researchers who collect data from larger samples may elect to examine whether demographic characteristics such as sex, age, and previous online course enrollment reveal significant perceptual differences in the features, characteristics, or other aspects of the VC. Other research directions include the need for more cross-disciplinary studies of the VC and studies that investigate course outcomes.

References

Allen, I. E., & Seaman, J. (2009). Learning on demand: Online education in the United States. Retrieved from http://sloanconsortium.org/publications/survey/pdf/learningondemand.pdf

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (2004). Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association.

Arbaugh, J. B. (2000). Virtual classroom characteristics and student satisfaction with online MBA courses. Journal of Management Education, 24(1), 32-54. doi:10.1177/105256290002400104

Ardichvili, A. (2008). Learning and knowledge sharing in virtual communities of practice: Motivators, barriers, and enablers. Advances in Developing Human Resources, 10(4), 541-554. doi:10.1177/1523422308319536

Bielman, V. A., Putney, L. G., & Strudler, N. (2003). Constructing community in a postsecondary virtual classroom. Journal of Educational Computing Research, 29(1), 119-144.

Boardman, C. & Sundquist, E. (2009). Toward understanding work motivation: Worker attitudes and the perception of effective public service. The American Review of Public Administration, 39(5), 519-535.

Burns, N. & Grove, S. K. (2001). The practice of nursing research conduct, critique, and utilisation (4th ed). Philadelphia, PA: W. B. Saunders Co.

Cameron, B. A., Morgan, K., Williams, K. C., & Kostelecky, K. L. (2009). Group projects: Student perceptions of the relationship between social tasks and a sense of community in online group work. American Journal of Distance Education, 23(1), 20-33. doi:10.1080/08923640802664466

Carmines, E. G. & Zeller, R. A. (1979). Reliability and validity assessment. Thousand Oaks, CA: Sage.

Clapper, D. C., & Harris, L. L. (2008). Reliability and validity of an instrument to describe burnout among collegiate athletic trainers. Journal of Athletic Training, 43(1), 62-59.

Clark, D. N., & Gibb, J. L. (2006). Virtual team learning: An introductory study team exercise. Journal of Management Education, 30(6), 765-787. doi:10.1177/1052562906287969

Clark, R. (2005, May). Four steps to effective virtual classroom teaching. Learning Solutions Magazine. Retrieved from http://www.learningsolutionsmag.com/articles/266/four-steps-to-effective-virtual-classroom-training

Constantinos, E. R. & Papadakis, S. (2009). Using LAMS to facilitate an effective synchronous virtual classroom in the teaching of algorithms to undergraduate students. Presented at 2009 European LAMS and Learning Design Conference.

Dillman, D. A. (2000). Mail and internet surveys: The tailored design method. New York: John Wiley & Sons, Inc.

Dineen, B. R. (2005). Teamxchange: A team project experience involving virtual teams and fluid team membership. Journal of Management Education, 29(4), 593-616. doi:10.1177/1052562905276275

DuFrene, D. D., Lehman, C. M., Kellermanns, F. W., & Pearson, R. A. (2009). Do business communication technology tools meet learner needs? Business Communication Quarterly, 72(2), 146-162.

Falvo, D. A., & Solloway, S. (2004). Constructing community in a graduate course about teaching with technology. TechTrends: Linking Research & Practice to Improve Learning, 48(5), 56-85.

Ferketich, S. (1991). Focus on psychometrics. Aspects of item analysis. Research in Nursing & Health, 14, 165-168.

Flatley, M. E. (2007). Teaching the virtual presentation. Business Communication Quarterly, 70(3), 301-305. doi:10.1177/1080569907305305

Gilmore, S., & Warren, S. (2007). Themed article: Emotion online: Experiences of teaching in a virtual learning environment. Human Relations, 60(4), 581-608. doi:10.1177/0018726707078351

Ioannou, A. (2008). Development and initial validation of a satisfaction scale on diversity. Paper presented at the annual meeting of the American Educational Research Association, New York, NY.

Keefe, T. J. (2003). Using technology to enhance a course: The importance of interaction. EDUCAUSE Quarterly, 1, 24–34.

Kirkpatrick, G. (2005). Online 'chat' facilities as pedagogic tools: A case study. Active Learning in Higher Education, 6(2), 145-159. doi:10.1177/1469787405054239

Knupfer, N. N., Gram, T. E., & Larsen, E. Z. (1997). Participant analysis of a multiclass, multi-state, on-line, discussion list. Retrieved from ERIC Database (ED 409845)

Lee, D., & Kang, S. (2005). Perceived usefulness and outcomes of intranet-based learning (IBL): Developing asynchronous knowledge management systems in organizational settings. Journal of Instructional Psychology, 32(1), 68-73.

Lee, H., & Rha, I. (2009). Influence of structure and interaction on student achievement and satisfaction in web-based distance learning. Educational Technology & Society, 12(4), 372-382.

Liaw, S., Huang, H., & Chen, G. (2007). Surveying instructor and learner attitudes toward e-learning. Computers & Education, 49(4), 1066-1080.

Ludlow, L. H., Enterline, S. E., & Cochran-Smith, M. (2008). Learning to teach for Social Justice Beliefs Scale: An application of Rasch measurement principles. Measurement and Evaluation in Counseling and Development, 40(4), 194-214.

Markman, K. M. (2009). So what shall we talk about: Openings and closings in chat-based virtual meetings. Journal of Business Communication, 46(1), 150-170. doi: 10.1177/0021943608325751

McBrien, J. L., Jones, P., & Cheng, R. (2009). Virtual spaces: Employing a synchronous online classroom to facilitate student engagement in online learning. International Review of Research in Open and Distance Learning, 10(3). Retrieved from http://0-search.ebscohost.com.uncclc.coast.uncwil.edu/login.aspx?direct=true&db=eric&AN=EJ847763&site=ehost-live

McLure Wasko, M., & Faraj, S. (2005). Why should I share? Examining social capital and knowledge contribution in electronic networks of practice. MIS Quarterly, 29(1), 35-57.

Mikulecky, L. (1998). Diversity, discussion, and participation: Comparing Web-based and campus-based adolescent literature classes. Journal of Adolescent & Adult Literacy, 42(2), 84–97.

Muirhead, B. (2004). Encouraging interactivity in online classes. International Journal of Instructional Technology and Distance Learning. Retrieved from http://itdl.org/Journal/Jun_04/article07.htm

Ng, K. C., & Murphy, D. (2005). Evaluating interactivity and learning in computer conferencing using content analysis techniques. Distance Education, 26(1), 89-109.

Nunnally, J. C. (1978). Psychometric theory. New York: McGraw-Hill.

Parker, M. A., & Martin, F. (2010). Using virtual classrooms: Student perceptions of features and characteristics in an online and blended course. Journal of Online Learning and Teaching, 6(1), 135-147.

van Raaij, E. M., & Schepers, J. J. L. (2008). The acceptance and use of a virtual learning environment in China. Computers & Education, 50(3), 838-852.

Rhode, J. F. (2009). Interaction equivalency in self-paced online learning environments: An exploration of learner preferences. International Review of Research in Open and Distance Learning, 10(1), 1-23. Retrieved from http://0-search.ebscohost.com.uncclc.coast.uncwil.edu/login.aspx?direct=true&db=eric&AN=EJ831712&site=ehost-live

Rovai, A. P., & Wighting, M. J. (2005). Feelings of alienation and community among higher education students in a virtual classroom. Internet & Higher Education, 8(2), 97-110. doi:10.1016/j.iheduc.2005.03.001

Santos, J. A. (1999). Cronbach’s alpha: A tool for assessing the reliability of scales. Journal of Extension, 37(2). Retrieved from http://www.joe.org/joe/1999april/tt3.php

Suhonen, R., Schmidt, L. A., & Radwin, L. (2007). Measuring individualized nursing care: Assessment of reliability and validity of three scales. Journal of Advanced Nursing, 59(1), 77-85. doi:10.1111/j.1365-2648.2007.04282.x

Tallent-Runnels, M. K., Thomas, J. A., Lan, W. Y., Cooper, S., Ahern, T. C., Shaw, S. M., & Liu, X. (2006). Teaching courses online: A review of the research. Review of Educational Research, 76(1), 93-135. doi:10.3102/00346543076001093

Wimba. (2009). Wimba for Higher Education. Retrieved from http://www.wimba.com/solutions/highereducation/wimba_classroom_for_higher_education

Winograd, D. (2000, October). The effects of trained moderation in online asynchronous distance learning. Paper presented at the annual meeting of the Association for Educational Communications and Technology, Denver, CO.

Yukay-Yuksel, M. (2009). A Turkish version of the School Social Behavior Scales (SSBS). Educational Sciences: Theory & Practice, 9(3), 1633-1650.

About the Authors


Michele A. Parker is an Assistant Professor of Educational Leadership at the University of North Carolina at Wilmington. Her doctorate is in Research, Statistics, and Evaluation from the University of Virginia. She teaches instructional technology and research courses. Her scholarship includes the use of technology in education and in research.

Email: parkerma@uncw.edu

 


Emily R. Grace is a doctoral student in Educational Leadership at the University of North Carolina at Wilmington. She works as the Project Coordinator for the Hill Center Regional Educational Model at UNC Wilmington. She holds master’s degrees in school administration and elementary education. Prior to her current position, she worked as a school administrator and teacher in grades K-12.

Email: gracee@uncw.edu

 


Florence Martin is an Assistant Professor in the Instructional Technology program at the University of North Carolina at Wilmington. She received her doctorate and master’s in Educational Technology from Arizona State University. She has a bachelor’s degree in Electronics and Communication Engineering from Bharathiyar University, India. Previously, she worked on instructional design projects for Maricopa Community College, University of Phoenix, Intel, Cisco Learning Institute, and Arizona State University. She was a co-principal investigator on the Digital Visual Literacy NSF grant, working with the Maricopa Community College District in Arizona. She researches technology tools that improve learning and performance (e.g., learning management systems, virtual classrooms, web 2.0 tools).

Email: martinf@uncw.edu
