Academic Exchange Quarterly     Fall  2009    ISSN 1096-1453    Volume  13, Issue  3

To cite, use the print source rather than this on-line version, which may not reflect print copy format requirements or text layout and pagination.

This article should not be reprinted for inclusion in any publication for sale without the author's explicit permission. Anyone may view, reproduce, or store a copy of this article for personal, non-commercial use as allowed by the "Fair Use" limitations (sections 107 and 108) of U.S. copyright law. For any other use and for reprints, contact the article's author(s), who may impose a usage fee.




Can Faculty Predict Student Perceptions?


Michael S. Goodstone, Jennifer Nieman Gonder, Farmingdale State College

Jennifer Strangio, Adelphi University


Goodstone, Ph.D., is Associate Professor and Nieman Gonder, Ph.D., is Assistant Professor, Psychology Department. Strangio, B.A., is a graduate intern in school psychology.



There is a lack of research investigating instructors’ ability to predict student interest and learning in the classroom. However, instructors make modifications to daily classroom activities based on their beliefs about students’ interest and learning. The present study investigated the ability of two instructors to predict their students’ perceptions of interest and learning during each class session. The relationship between instructor and student perceptions was examined in both technology-based presentations and traditional classroom activities.



Student evaluations have become the primary method used to assess instructors’ classroom effectiveness (Berk, 2005). These evaluations are typically completed at the end of a course and provide feedback regarding how an instructor has been perceived over the entire semester. While this allows instructors to make revisions before the next semester, we must make immediate decisions throughout each class session regarding our students’ understanding of material and degree of interest. Throughout any class, instructors modify daily activities including pace, depth, and breadth of material based on subtle cues such as student yawning or head nodding. While we make these decisions based on our perception of student interest and comprehension, we often do so with no assurance that we are perceiving students accurately. The current study was undertaken to investigate the ability of two instructors to predict their students’ in-class perceptions of interest and learning in both technology-based presentations and traditional classroom settings. Before discussing the methodology and findings of the current study, this paper will review prior research on the congruence between faculty and student perceptions as well as the effect of technology on such perceptions.


An instructor’s ability to accurately perceive student interest and learning is critical to effectiveness in the classroom and may be used as an alternative measure of teaching effectiveness. Berk (2005) proposed that faculty complete student rating scales based on the anticipated ratings students would provide. Discrepancies between actual student ratings and an instructor’s perception of these ratings could serve to enhance self-awareness and provide insights into classroom practices. While some institutions have incorporated such practices into assessment efforts (Broder & Kalivoda, 2004), there is a lack of empirical research investigating whether faculty are able to predict their students’ perceptions of interest or learning in the classroom. There is, however, an extensive literature examining the congruence between faculty and student perceptions of instructional effectiveness. While this research does not provide direct evidence of an instructor’s predictive ability, it does offer some understanding of the relationship between faculty and student perceptions.


Research on Faculty and Student Perceptions

In an effort to validate student ratings of instructional effectiveness, Marsh, Overall, and Kesler (1979) assessed the agreement between faculty and student ratings of instructor effectiveness via end-of-course evaluations. Students were asked to evaluate faculty members on six factors including instructor enthusiasm and learning. Faculty completed the same survey and were asked to rate their own effectiveness rather than how they expected students would rate them. Mean differences between faculty self-evaluations and student ratings were infrequent, reaching significance on only five of the 24 survey items. Consistent with earlier studies (Centra, 1974), Marsh et al.’s (1979) overall conclusion was that faculty and students generally agree in their assessment of instructional effectiveness. In a later meta-analysis, Feldman (1989) found inconsistent correlations between faculty and student ratings of overall effectiveness yet identified a high degree of congruence regarding specific strengths and weaknesses of instructors.


While past research suggests a general congruence between faculty and student perceptions of instructional effectiveness, a number of other factors including characteristics of the student and the classroom may have a significant effect on learning. Lammers and Smith (2008) assert that to enhance learning, one must first have a comprehensive understanding of all of these variables from both a faculty and student perspective. To accomplish this, faculty and students completed a questionnaire measuring 110 variables potentially related to student learning. The variables were separated into three categories (instructor, student, and physical environment), each of which consisted of several predetermined factors. As expected, both rating sources agreed that instructor variables are somewhat more important than student or environmental variables. The authors conclude that faculty and students are in general agreement regarding the factors that are important to student learning.


Much of the research examining faculty and student perceptions of instructional effectiveness was conducted prior to the 1980s. Recent research has instead focused on the congruence between faculty and student perceptions of variables including student effort, grading policies, and technology. In contrast to earlier research, several discrepancies were found. For example, Jaskyte, Taylor, and Smariga (2009) identified significant differences in faculty and student perceptions of the characteristics of innovative teaching techniques. Adams (2005) found a significant disparity between faculty and student perceptions regarding the factors that should be considered in grading: students believed that effort should account for a significantly larger portion of the final grade than faculty did. Wyatt, Saunders, and Zelmer (2005) found that although faculty correctly predicted students’ study time, they disagreed with students regarding overall level of preparedness.


Impact of Technology on Perceptions

Another important area of recent research focuses on the potential impact of pedagogical technology on the congruence of faculty and student perceptions. Modes of classroom instruction have changed significantly since much of the literature on perceptions of instructional effectiveness was published. The technology prevalent today is likely to have an impact on faculty and student perceptions of effectiveness as well as student learning. While the use of advanced technology is steadily rising, there is debate over whether this technology increases students’ learning or actually hinders understanding. Studies suggest that students’ reactions to technology are variable (D’Angelo & Woosley, 2007) and may be dependent upon the way the instructor uses technology (Young, 2004).


To investigate this claim, Hardin (2007) tested the independent and interacting effects of instructor and the use of presentation software on student learning, satisfaction, and engagement. Four instructors were assigned to teach two sections of an Introduction to Psychology course. Each instructor used PowerPoint in one section and not the other. Pretests revealed no significant differences on dependent measures. At the end of the semester, each student again completed a survey which measured their learning, satisfaction, and engagement. Results of a MANOVA revealed no main effect of PowerPoint. There was, however, a significant main effect for instructor on students’ liking of the course, perceived learning, and actual learning. There was also a significant instructor by PowerPoint interaction. Exploration of the interaction provided inconsistent results: one instructor’s students believed they learned significantly less when the instructor used PowerPoint, while the use of PowerPoint had no effect on students’ perceived learning for the other three instructors. Similarly, one instructor’s students provided higher ratings of their interest in psychology when that instructor used PowerPoint but, again, this effect was absent for the other three instructors. It is interesting to note that PowerPoint appears to have no direct effects but may interact with instructor to idiosyncratically impact student perception. Hardin (2007) summarizes his research by emphasizing that it is the instructor, not the technology, which is most critical to student learning.


The Present Study

Prior research indicates general congruence between faculty and student perceptions of overall instructional effectiveness and of the variables essential for student learning. Further, the impact of technological instruction techniques on these perceptions was found to be dependent on instructor characteristics. Yet there is a lack of research investigating an instructor’s ability to predict student perceptions of interest and learning during a class session. This is a critical skill that would allow faculty to modify instruction in real time, rather than learning only at the end of the semester what students thought of the class. The present study investigated the ability of two faculty members to predict their students’ perceptions of interest and learning during each class session in both technology-based presentations and traditional classroom activities.



Participants were students attending two sections of an instructor’s Introduction to Psychology class (Instructor A) and one section of a second instructor’s Abnormal Psychology class (Instructor B). Each section contained approximately 40 students. Instructor A used presentation software (PowerPoint) in one section and not the other. Instructor B alternated the use of PowerPoint and traditional lecture within the single section. At the end of each class session, students were asked to anonymously complete two Likert-type ratings. Two global ratings were selected instead of a full assessment instrument due to time constraints, as ratings were collected at the end of each class. There is support in the literature for the use of global student ratings (Cashin & Downey, 1992). One question asked the student to rate the overall amount learned in the class on a scale ranging from 1 = Very Little to 7 = A Great Deal, and the second asked for a rating of student interest on a scale ranging from 1 = Not at All to 7 = Very Interested. Instructors completed the same two items after each class session but were asked to predict their students’ responses rather than to assess their own performance. Instructors and students were blind to each other’s ratings.


In the interest of anonymous responding, no demographic data were collected. Instructors’ ratings for each class were compared with the mean of the students’ ratings for each class. Correlation analyses were performed to assess concordance between instructor and mean student ratings. Further, classes were coded with regard to whether the presentation included presentation technology (PowerPoint) or traditional classroom methods. Technology was entered as a variable to determine whether it altered the relationship between instructor and student perceptions.



Preliminary Analyses

As previous research (Hardin, 2007) suggests that instructor can be an important moderating and/or mediating variable in pedagogy research, we first examined whether it made sense to combine the data from both instructors. We followed the procedures described by Preacher (2003) for using multiple linear regression (MLR) to test the interaction effect of a dichotomous variable (instructor) on the relationship between two continuous variables (instructor ratings and student ratings). MLR was used to test the effects of instructor on the relationships between instructor and student perceptions of amount learned and interest. Results suggested that the regression relationship between instructor ratings of learning and mean student perceptions of amount learned differed by instructor (beta = -.21, p = .02). Given this finding, the literature indicating the potential moderating and mediating impact of instructor, and the differences in methodology employed by the two instructors, all further analyses were conducted separately for each instructor.
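The product-term regression test described above can be sketched as follows. This is an illustrative sketch only: the data are simulated rather than the study's ratings, and the variable names and true coefficients are assumptions chosen for demonstration. A nonzero coefficient on the product of the dichotomous moderator and the continuous predictor indicates a different regression slope for each instructor.

```python
import numpy as np

# Simulated (not the study's) data: 60 class sessions split across two instructors
rng = np.random.default_rng(0)
n = 60
instructor = np.repeat([0.0, 1.0], n // 2)    # dichotomous moderator (dummy code)
instr_rating = rng.normal(5.0, 1.0, n)        # continuous predictor (instructor's prediction)

# Build in a true interaction of -0.2 for the instructor coded 1
student_rating = (0.5 + 0.6 * instr_rating
                  + 0.1 * instructor
                  - 0.2 * instructor * instr_rating
                  + rng.normal(0.0, 0.1, n))

# Design matrix: intercept, predictor, moderator, and their product term
X = np.column_stack([np.ones(n), instr_rating, instructor,
                     instructor * instr_rating])
beta, *_ = np.linalg.lstsq(X, student_rating, rcond=None)

# beta[3] estimates the interaction; here it should recover roughly -0.2
print(f"interaction coefficient: {beta[3]:.3f}")
```

In the study's terms, a significant interaction coefficient (as found for ratings of learning, beta = -.21) is the evidence that the instructor-student regression relationship differs between the two instructors, justifying separate analyses.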


Instructor A

Correlational results for Instructor A suggest that this instructor was able to predict her students’ perceptions of learning (r = .43, p = .02, n = 30) as well as interest (r = .51, p = .004, n = 30). Separating these Instructor A correlations for classes where PowerPoint was used produced r = .67, p = .004, n = 16 for ratings of learning and r = .57, p = .02, n = 16 for interest. The corresponding correlations for Instructor A classes that did not use PowerPoint were r = .56, p = .04, n = 14 for learning and r = .36, p = .20, n = 14 for interest. Z score comparisons of the correlations for the PowerPoint versus traditional classes revealed nonsignificant results (z = .42, p = .67 for learning and z = .65, p = .52 for interest). These results suggest that Instructor A was able to predict her students’ ratings of amount learned and interest whether using PowerPoint presentations or traditional lecture methods.
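The z score comparisons above are the standard Fisher r-to-z test for two independent correlations; a minimal sketch, applied here to Instructor A's learning correlations (small differences from the reported z = .42 and p = .67 reflect rounding of the correlations):

```python
import math

def compare_correlations(r1, n1, r2, n2):
    """Fisher r-to-z test for the difference between two independent correlations."""
    z1, z2 = math.atanh(r1), math.atanh(r2)          # Fisher transform of each r
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # standard error of the difference
    z = (z1 - z2) / se
    # Two-tailed p from the standard normal CDF
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# Instructor A's learning correlations: PowerPoint (r = .67, 16 classes)
# versus traditional (r = .56, 14 classes)
z, p = compare_correlations(0.67, 16, 0.56, 14)
print(f"z = {z:.2f}, p = {p:.2f}")
```

Because neither z approaches significance, the correlations in the two class formats cannot be distinguished, supporting the conclusion that PowerPoint did not alter Instructor A's predictive accuracy.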


The impact of technology on student and instructor ratings of amount learned and interest was examined through ANOVA. Instructor A’s mean student rating of learning in PowerPoint based classes (M = 5.2, n = 16) was significantly lower than in her traditional classes (M = 5.5, n = 14). See Table 1. While Instructor A’s students believed they learned more in her traditional lectures, no other significant mean differences were observed for mean student ratings or instructor ratings in Instructor A’s PowerPoint versus traditional classes.


Instructor B

Correlational results for Instructor B suggest that this instructor was able to predict his students’ average perception of interest (r = .76, p = .001, n = 15) though not the average ratings for learning (r = -.29, p = .30, n = 15). Separating Instructor B’s correlations by class type was somewhat problematic, as there were only six classes where this instructor used PowerPoint and nine traditional classes. In the six PowerPoint classes, Instructor B’s ratings of student interest were strongly, though not significantly, correlated with mean student ratings (r = .76, p = .08), as were ratings of interest in the nine traditional classes (r = .65, p = .06). The learning rating correlation for the six PowerPoint classes was r = -.14, p = .80, and the corresponding correlation for the nine traditional classes was r = -.31, p = .41. Z score comparisons of the correlations for the PowerPoint versus traditional classes revealed nonsignificant results (z = .26, p = .80 for learning and z = .29, p = .77 for interest). While statistical conclusions are obviously limited by the small sample sizes, the observed correlations are similar in magnitude, and it appears safe to conclude that PowerPoint did not have any meaningful impact on Instructor B’s ability to predict his students’ ratings of interest or learning.


ANOVA was used to determine the impact of PowerPoint on Instructor B’s mean student and instructor ratings of amount learned and interest. Instructor B’s mean rating of student interest in his PowerPoint-based classes (M = 4.2, n = 6) was significantly lower than his interest rating in the traditional classes (M = 6.1, n = 9). See Table 1. Instructor B believed his students were more interested in traditional lectures, though no other significant differences were observed for either mean student ratings or instructor ratings in Instructor B’s PowerPoint versus traditional classes.
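With only two groups (PowerPoint versus traditional classes), the ANOVAs above reduce to comparing between-group and within-group variance. The following is a minimal sketch of the F computation on illustrative ratings (not the study's data):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of groups of ratings."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-group sum of squares: size-weighted squared deviations of group means
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations around each group's own mean
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Illustrative per-class interest ratings (hypothetical values, not Table 1's)
powerpoint = [4, 5, 6]
traditional = [8, 9, 10]
print(one_way_anova_f([powerpoint, traditional]))  # 24.0
```

A large F relative to its null distribution corresponds to the significant mean differences reported here, such as Instructor B's lower interest ratings in PowerPoint classes.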



Table 1. Mean differences in instructor and student ratings between PowerPoint and traditional classes

                            PowerPoint      Traditional
                            M (SD)          M (SD)

Instructor A
    Instructor Learning     5.8 (.75)       5.6 (.93)
    Instructor Interest     4.9 (1.1)       5.4 (.85)
    Student Learning        5.2 (.25)**     5.5 (.26)
    Student Interest        5.0 (.39)       5.2 (.38)

Instructor B
    Instructor Learning     5.3 (.82)       5.8 (1.2)
    Instructor Interest     4.2 (1.6)**     6.1 (.78)
    Student Learning        6.0 (.15)       6.0 (.19)
    Student Interest        5.9 (.27)       6.1 (.20)

** p < .01




Discussion and Conclusion

Results suggest these two instructors were able to predict their students’ perceptions of interest in the class. One instructor was able to predict her students’ ratings of learning, but the other instructor was not. The use of PowerPoint did not appear to impact either instructor’s ability or inability to predict students’ ratings. Instructor A’s students in the PowerPoint section rated amount learned lower than those in her traditional class, though pre-treatment differences and other influences are entirely possible. Instructor B rated his students as more interested in traditional lectures than in PowerPoint-based presentations. Overall, these results are consistent with prior research suggesting that, with regard to pedagogy, what is true for one instructor may not be true for another.


The ability to predict student perceptions is a critical skill that allows faculty to modify instruction in real time. The current study sought to answer two questions: 1) whether the modifications instructors make during each class are based on accurate perceptions of student interest and learning, and 2) whether an instructor’s ability to perceive students is influenced by the use of technology. The answer to the first question appears to depend partly on the instructor, although both instructors accurately perceived student interest. Second, the use of presentation software did not impact the ability to perceive students. Consistent with past research, the effect of presentation software on student and faculty perceptions differed by instructor. Thus, perhaps the most important conclusion from this study is not that both instructors were accurate regarding their students’ interest, or that presentation software had no impact on the relationship between instructor and student perceptions, but that each instructor should attempt to answer these questions in his or her own classroom. While it is tempting to ask questions such as “Are instructors accurate in their perceptions of their students?” or “Are students more interested when we use PowerPoint?” the answer to such questions may very well be quite different for each instructor.




References

Adams, J.B. (2005). What makes the grade? Faculty and student perceptions. Teaching of Psychology, 32(1), 21-24.


Berk, R.A. (2005). Survey of 12 strategies to measure teaching effectiveness. International Journal of Teaching and Learning in Higher Education, 17(1), 48-62.


Broder, J.M., & Kalivoda, P.L. (2004). The quest for excellence in teaching and learning at UGA. Paper presented at the 19th International Conference on Improving University Teaching, Bern, Switzerland.


Cashin, W., & Downey, R. (1992). Using global student rating items for summative evaluation. Journal of Educational Psychology, 84, 563-572.


Centra, J. (1974). Self-ratings of college teachers: A comparison with student ratings. Journal of Educational Measurement, 4, 287-295.


D’Angelo, J.M., & Woosley, S.A. (2007). Technology in the classroom: Friend or foe? Education, 127(4), 462-471.


Feldman, K.A. (1989). Instructional effectiveness of college teachers as judged by teachers themselves, current and former students, colleagues, administrators, and external observers. Research in Higher Education, 30(2), 137-194.


Hardin, E. (2007). Presentation software in the college classroom: Don’t forget the instructor. Teaching of Psychology, 34, 53-57.


Jaskyte, K., Taylor, H., & Smariga, R. (2009). Student and faculty perceptions of innovative teaching. Creativity Research Journal, 21(1), 111-116.


Lammers, W., & Smith, S. (2008). Learning factors in the university classroom: Faculty and student perspectives. Teaching of Psychology, 35, 61-70.


Marsh, H., Overall, J., & Kesler, S. (1979). Validity of student evaluations of instructional effectiveness: A comparison of faculty self-evaluations and evaluations by their students. Journal of Educational Psychology, 71, 149-160.


Preacher, K.J. (2003). A primer on interaction effects in multiple linear regression. Retrieved March 24, 2008.


Wyatt, G., Saunders, D., & Zelmer, D. (2005). Academic preparation, effort, and success: A comparison of student and faculty perceptions. Educational Research Quarterly, 29(2), 29-37.


Young, J.R. (2004). Students say technology has little impact on teaching. Chronicle of Higher Education, 50, A31.