Academic Exchange Quarterly     Spring 2010     ISSN 1096-1453     Volume 14, Issue 1

To cite, use the print source rather than this on-line version, which may not reflect print copy format requirements or text layout and pagination.

This article should not be reprinted for inclusion in any publication for sale without the author's explicit permission. Anyone may view, reproduce or store a copy of this article for personal, non-commercial use as allowed by the "Fair Use" limitations (sections 107 and 108) of the U.S. Copyright law. For any other use and for reprints, contact the article's author(s), who may impose a usage fee. See also electronic version copyright clearance. CURRENT VERSION COPYRIGHT © MMX AUTHOR & ACADEMIC EXCHANGE QUARTERLY

 

The Effect of Clickers on Student Learning

Kimo Ah Yun, California State University, Sacramento

Maureen Lojo, California State University, Sacramento

 

Ah Yun, Ph.D., is Director of the Center for Teaching and Learning and Professor of Communication Studies, and Lojo, Ph.D., is Professor of Operations Management.

Abstract

The use of “clickers,” or student handheld response devices, in college classrooms is on the rise. The study reported here examined student attitudes toward the use of these devices as well as student learning (N = 942). Data reveal that students like these devices and that students in the experimental condition (those using clickers) outperformed students in the control condition (those not using clickers) on exams by six percentage points.

 

Introduction

The use of “clickers” in higher education, along with research efforts to document their effectiveness and guide their use, is on the rise. Vendors offer promises of improved learning to encourage professors to adopt these systems in their courses, but some researchers question the value they deliver as compared to cheaper, lower-technology alternatives. At present, there are a growing number of published case studies describing the implementation of clicker systems in college classrooms (Judson & Sawada, 2002; Koenig, 2010; Perkins & Turpen, 2009; Weerts, Miller, & Altice, 2009). Most of these studies survey students to measure their perceptions of the impact of clickers on their motivation (Gauci, Dantas, Williams, & Kemm, 2009), engagement (Weerts, Miller, & Altice, 2009), and/or learning (Perkins & Turpen, 2009). Student response to clickers is almost universally positive (Beckert, Fauth, & Olsen, 2009). However, there are few studies that measure clicker effectiveness using objective learning outcomes, such as student test scores (Berry, 2009; Mayer, Stull, DeLeeuw, Almeroth, Bimber et al., 2009; Shaffer & Collura, 2009).

 

Clickers (known variously as electronic response systems, audience-paced feedback, and personal response devices) are not particularly new, but their use has shifted over the past four decades as the technology has developed. Early systems provided tallies of student responses to multiple-choice questions for instructor use. This enabled professors to gauge student understanding and respond accordingly. Results could be shared with students as quick feedback or used to pace the delivery of material. The focus has gradually shifted to promoting student discussion of key conceptual points (Caldwell, 2007; Fies & Marshall, 2006; Judson & Sawada, 2002). Current clicker software provides histograms of the aggregated responses, and classroom practice typically includes time for students to compare their viewpoints and possibly revise their answers. This “interactive engagement” is now seen as the most important benefit of clickers (Beatty, Gerace, Leonard, & Dufresne, 2006; Weerts, Miller, & Altice, 2009).

 

Given the push for more evidence-based learning at the university level, a surprising limitation of the current clicker research is that it mostly fails to assess the impact of clicker use on student learning. Understanding whether clickers do indeed affect student learning is important given that clickers add class costs for students. To warrant those additional costs, faculty members should be able to point to demonstrated advantages of these devices.

 

The purpose of this article is to (1) test the effect of clicker use on student learning and (2) ascertain student perceptions about how clickers impact their overall satisfaction with the class, perceived interest in the material being covered, cognitive effort they are willing to exert in learning the material, general understanding of the material, and motivation to engage in the learning process.

 

Hypothesis

As described above, the use of clickers in the classroom provides a host of pedagogical opportunities. For students, it can help them engage with the material, provide opportunities to practice test questions, and generally help to get them excited about the learning process. For faculty, it provides immediate feedback on areas of student learning that require additional attention. Given the proposed benefits of clicker use in engaging students and in providing immediate feedback on student understanding of classroom material, it can be expected that student performance as measured by examination scores will improve when clickers are used. As such, the following hypothesis is proposed.

 

Hypothesis 1: Students who use clickers will perform better on examinations than those who do not.

 

Research Question

Research has generally shown that students like and enjoy using clickers in the classroom (Beckert, Fauth, & Olsen, 2009). However, one might argue that attitudes toward clicker use differ across majors. For example, physics and electrical engineering students might like clickers, but philosophy and theatre students might not. Since prior research has not examined Communication Studies and Operations Management students, replication within these populations can address this issue and allow for a broader understanding of student attitudes toward the use of clickers. While a long list of perceived student outcomes could be studied, a review of the current research identifies student satisfaction, along with perceptions of increased interest, cognitive effort, understanding of the material, and motivation to learn the class material, as important variables. As such, the following research question is posed.

 

Research Question 1: Are students satisfied with the use of clickers and do they perceive that clickers increased their interest, cognitive effort, understanding of the material, and motivation to learn the class material?

 

Method

Participants

Participants included 942 students from a large western university enrolled in research courses in the areas of Communication Studies and Operations Management. The mean age of participants was 23.26 years (SD = 3.80); 480 were female (51%), 462 were male (49%). Included in the sample were 565 Caucasian (60%), 76 African American (8%), 84 Hispanic (9%), 64 Asian (8%), and 153 individuals who listed their race as other (15%).

 

Procedures

Participants enrolling in the courses were assigned to either the experimental condition (use of an e-Instruction™ clicker) or the control condition (no clicker use). Data were collected on student grades based on common test questions on the midterm and final examinations for both sections. Additionally, for the experimental condition, data were collected on perceptions about clickers, using multi-item measures that asked students to report their satisfaction with the clickers, the degree to which the clickers increased their interest in learning the course content, the cognitive effort they exerted in the class as a result of the clickers, the extent to which the clickers improved their understanding of the material, and their motivation as a result of clicker use in the classroom.

 

Measures

Satisfaction with clickers. A five-item, seven-point Likert-type scale that included items such as “I am satisfied with the use of clickers in this class” and “I am pleased with the use of clickers in this class” was used to measure clicker satisfaction. A confirmatory factor analysis was performed and revealed the items were consistent with a unidimensional factor structure. Further, the reliability for these items was quite high (α = .97). Given the CFA and reliability findings, these five items were summed to form the final clicker satisfaction measure (M = 5.25, SD = 1.60).
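The reliability coefficient reported for each scale is Cronbach's alpha. As a minimal illustrative sketch (not the authors' analysis code, and using made-up toy data), alpha can be computed from a respondents-by-items score matrix as follows:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]                           # number of items (five per scale here)
    item_vars = x.var(axis=0, ddof=1)        # variance of each individual item
    total_var = x.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

When every respondent answers all items identically (perfect inter-item consistency), the function returns 1.0; noisier item sets yield lower values.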

 

Interest in class material. A five-item, seven-point Likert-type scale that included items such as “Using clickers made the material we covered in class more interesting” and “I felt more engaged in the class material because clickers were used” was used to measure clicker interest. A confirmatory factor analysis was performed and revealed the items were consistent with a unidimensional factor structure. Further, the reliability for these items was quite high (α = .94). Given the CFA and reliability findings, these five items were summed to form the final clicker interest in the class material measure (M = 5.04, SD = 1.59).

 

Cognitive effort due to clickers. A five-item, seven-point Likert-type scale that included items such as “The clickers caused me to put a lot of mental effort into this class” was used to measure cognitive effort due to clickers. A confirmatory factor analysis was performed and revealed the items were consistent with a unidimensional factor structure. Further, the reliability for these items was quite high (α = .97). Given the CFA and reliability findings, these five items were summed to form the final cognitive effort due to clickers measure (M = 4.67, SD = 1.53).

 

Understanding due to clickers. A five-item, seven-point Likert-type scale that included items such as “Clickers helped me to understand the class material” and “Clickers assisted my ability to grasp the material covered in this class” was used to measure understanding due to clicker use. A confirmatory factor analysis was performed and revealed the items were consistent with a unidimensional factor structure. Further, the reliability for these items was quite high (α = .96). Given the CFA and reliability findings, these five items were summed to form the final understanding due to clickers measure (M = 4.86, SD = 1.47).

 

Motivation as a function of clickers. A five-item, seven-point Likert-type scale that included items such as “Clickers helped to motivate me in this class” and “Clickers inspired me to learn the material” was used to measure motivation as a function of clicker use. A confirmatory factor analysis was performed and revealed the items were consistent with a unidimensional factor structure. Further, the reliability for these items was quite high (α = .91). Given the CFA and reliability findings, these five items were summed to form the final motivation as a function of clickers measure (M = 4.66, SD = 1.52).

 

Results

H1 predicted that participants in the clicker condition would perform better on examinations than those in the non-clicker condition. To test this hypothesis, an independent-groups t-test was performed. For these data, participants in the clicker condition (M = .77, SD = .10) outperformed participants in the non-clicker condition (M = .71, SD = .12), t(940) = 5.35, p < .001, r = .17.
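The H1 analysis can be reproduced in outline. The sketch below is illustrative, not the authors' analysis code: a pooled-variance independent-groups t statistic, plus the standard conversion r = √(t²/(t² + df)) from a t statistic to the correlation effect size, which recovers the reported r = .17 from t(940) = 5.35.

```python
import math

def pooled_t(a, b):
    """Pooled-variance independent-groups t statistic and degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variance, group a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)   # sample variance, group b
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

def effect_size_r(t, df):
    """Standard conversion from a t statistic to the correlation effect size r."""
    return math.sqrt(t * t / (t * t + df))

# The reported t(940) = 5.35 corresponds to the reported effect size:
print(round(effect_size_r(5.35, 940), 2))  # → 0.17
```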

 

RQ 1 asked about the degree to which participants were satisfied with the use of clickers and perceived that the clickers increased their interest, cognitive effort, understanding of the material, and motivation to learn the class material. A one-sample t-test was performed on each of the outcome variables for the participants in the experimental condition. Because a seven-point scale was used for each of the outcome variables, the comparison point was set at the midpoint of the scale to test for potential differences (scale midpoint = 4). For these data, significant positive differences were found with respect to satisfaction, t(484) = 17.23, p < .001; interest, t(482) = 14.30, p < .001; cognitive effort, t(475) = 9.57, p < .001; understanding, t(482) = 12.83, p < .001; and motivation, t(482) = 9.55, p < .001.

Table 1. Student Attitudes Toward Clicker Use

Measure                                 Mean    SD      t       df     p
Satisfaction with clickers              5.25    1.60    17.23   484    <.001
Interest in class material              5.04    1.59    14.30   482    <.001
Cognitive effort due to clickers        4.67    1.53     9.57   475    <.001
Understanding due to clickers           4.86    1.47    12.83   482    <.001
Motivation as a function of clickers    4.66    1.52     9.55   482    <.001
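Each t statistic in Table 1 is a one-sample test against the scale midpoint of 4, which can be recovered directly from the reported summary statistics via t = (M − μ₀)/(SD/√n). The sketch below is illustrative, not the authors' analysis code; small discrepancies from the tabled values arise because the printed means and SDs are rounded.

```python
import math

def one_sample_t(mean, sd, n, mu0=4.0):
    """One-sample t statistic from summary statistics, tested against mu0."""
    return (mean - mu0) / (sd / math.sqrt(n))

# Satisfaction row of Table 1: M = 5.25, SD = 1.60, df = 484 (so n = 485)
print(round(one_sample_t(5.25, 1.60, 485), 2))  # → 17.21, vs. the tabled 17.23 (rounding)
```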

 

Discussion

Findings of this study revealed that the use of clickers had a positive impact on overall student learning. Moreover, students reported that they were satisfied with the clickers and that the clickers increased their interest, cognitive effort, understanding, and motivation. These findings are consistent with other research showing that clickers can serve as a useful pedagogical device (see, for example, Berry, 2009; Mayer, Stull, DeLeeuw, Almeroth, Bimber et al., 2009; Shaffer & Collura, 2009). Overall, these results are promising in suggesting that the spread of clicker use across all levels of education may be having a positive impact on student learning.

 

The authors, however, caution against wholesale generalization of the results found in this study. As with most pedagogical tools and strategies, variation among users can be expected. Both instructors studied here are familiar with theories of teaching and learning and used the clicker devices out of curiosity about how students would respond and a desire to explore ways to further improve student learning in their courses. As such, the effects found here might be enhanced by the particular users of this technology. Quite simply, the technology can have positive effects, but only when it is applied with considerable forethought about how it supports overall learning goals. As with other technology-based teaching devices and strategies, there is no reason to expect that clickers uniquely impact student learning; rather, clickers coupled with teachers who use them with an understanding of their affordances can yield successful outcomes.

 

Conclusion

The purpose of this study was to (1) test the effect of clicker use on student learning and (2) ascertain student perceptions about how clickers impact their overall satisfaction with the class, perceived interest in the material being covered, cognitive effort they are willing to exert in learning the material, general understanding of the material, and motivation to engage in the learning process. The findings of this study, as illustrated above, show support for the use of clickers in the classroom. Further research should seek to explore other factors that might affect clicker use in the classroom. For example, additional research might explore the conditions under which the benefits of clicker use in the classroom are maximized. Considering factors such as course size, course level (introductory versus advanced), and student composition (entry-level versus advanced) might be a fruitful endeavor toward fully understanding the limits and capabilities of clickers as a learning tool.
 
References

Beatty, I.D., Gerace, W.J., Leonard, W.J., & Dufresne, R.J. (2006). Designing Effective Questions for Classroom Response System Teaching. American Journal of Physics, 74, 31-39.

Beckert, T., Fauth, E., & Olsen, K. (2009). Clicker Satisfaction for Students in Human Development: Differences for Class Type, Prior Exposure, and Student Talkativity. North American Journal of Psychology, 11, 599-611.

Berry, J. (2009). Technology Support in Nursing Education: Clickers in the Classroom. Nursing Education Perspectives, 30, 295-298.

Caldwell, J.E. (2007). Clickers in the Large Classroom: Current Research and Best-Practice Tips. CBE—Life Sciences Education, 6, 9-20.

Fies, C., & Marshall, J. (2006). Classroom Response Systems: A Review of the Literature. Journal of Science Education and Technology, 15, 101-109.

Gauci, S., Dantas, A., Williams, D., & Kemm, R. (2009). Promoting Student-Centered Active Learning in Lectures with a Personal Response System. Advances in Physiology Education, 33, 60-71.

Judson, E., & Sawada, D. (2002). Learning from Past and Present: Electronic Response Systems in College Lecture Halls. Journal of Computers in Mathematics and Science Teaching, 21, 167-181.

Koenig, K. (2010). Building Acceptance for Pedagogical Reform through Wide-Scale Implementation of Clickers. Journal of College Science Teaching, 39, 46-50.

Mayer, R., Stull, A., DeLeeuw, K., Almeroth, K., Bimber, B., et al. (2009). Clickers in College Classrooms: Fostering Learning with Questioning Methods in Large Lecture Classes. Contemporary Educational Psychology, 34, 51-57.

Perkins, K., & Turpen, C. (2009). Student Perspectives on Using Clickers in Upper-Division Physics Courses. AIP Conference Proceedings, 1179, 225-228.

Shaffer, D., & Collura, M. (2009). Evaluating the Effectiveness of a Personal Response System in the Classroom. Teaching of Psychology, 36, 273-277.

Weerts, S., Miller, D., & Altice, A. (2009). “Clicker” Technology Promotes Interactivity in an Undergraduate Course. Journal of Nutrition Education & Behavior, 41, 227-228.