Academic Exchange Quarterly
Summer 2006 ISSN 1096-1453
Volume 10, Issue 2
To cite, use print source rather than this on-line
version which may not reflect print copy format requirements or
text lay-out and pagination.
Perceptions
of Web-mediated Peer Assessment
Lan Li
Allen L. Steckelberg
Li is an instructor and doctoral student at the
Abstract
Previous studies have revealed that peer pressure is a factor in negative student perceptions of peer assessment. In this study, a web-mediated system was utilized to facilitate peer assessment and provide anonymity, minimizing the impact of peer pressure. Post-assessment survey results indicated that students generally accepted this method and recognized its value in promoting critical learning. The merits of anonymity and instant feedback were acknowledged in student responses.
Introduction
Over the past two decades, peer assessment has become one of the most common strategies in higher education for shifting students’ roles in learning from passive observation to active participation.
Peer
assessment is a process in which students evaluate the achievement or
performance of others of similar status (Topping, Smith, Swanson, & Elliot,
2000). Peer assessment has been “viewed as having significant pedagogic value”
(Patri, 2002). Peer assessment’s benefits in
promoting higher order thinking and supporting cooperative learning have been
established. Pope (2001) suggested that peer assessment stimulates student
motivation and encourages deeper learning. Freeman (1995) noted that studying
the marking criteria and evaluating peers’ work could improve students’
awareness of their own work and encourage deeper understanding. Topping (1998),
after reviewing 109 articles focusing on peer assessment, confirmed that peer
assessment yields cognitive benefits for both assessors and assessees
in multiple ways. Those “benefits might accrue before, during and after” the
process. He further concluded that feedback yielded from this process has a
positive impact on students’ grades and subjective perceptions. Researchers
have generally agreed that peer assessment promotes student autonomy and
facilitates meaningful learning (Freeman, 1995; Pope, 2001). However, despite the widespread acceptance of this process, only a limited number of publications in the literature have explored how students view this method (Hanrahan & Isaacs, 2001).
In general, the literature reveals that student perceptions of peer assessment are twofold: on the one hand, students acknowledge and recognize the merits of peer assessment; on the other hand, a variety of factors have led to negative or uncertain feelings among students.
Gatfield (1999) utilized peer assessment in a compulsory
international marketing management course. After peer assessment, students were
asked to respond to a survey regarding their attitudes towards peer assessment.
The analysis of the survey was divided into three parts. The first part
considered students’ perceptions of the suitability of the peer assessment method in that course. The second
part dealt with the degree of student satisfaction. The third part solicited
student suggestions for improvement of the process. Data analysis indicated that students in general agreed with and accepted the method of peer assessment.
Data also revealed that overall there was a high level of student satisfaction.
Students’ suggestions for improvement were for tutors to offer more
consultation time and to allocate more time in tutorials to assist group work.
This finding of positive student attitudes was also supported by Stefani’s (1994) study. Almost all the participants in that study indicated that peer assessment made them think more, and 85% of students said that it made them learn more than traditional assessments of their work. Hanrahan and Isaacs (2001) presented an analysis of the views of a large number of students (233) who had just experienced self- and peer-feedback as part of one of their subjects. The data indicated that students felt that they benefited from the intervention.
Although most students enjoy peer assessment and understand its value, not all the experiences associated with peer assessment were favorable. Besides positive themes such as “gained better understanding”, “productive” (including learning benefits and improved work) and “motivation” (to impress peers), data from Hanrahan and Isaacs’s (2001) study also revealed other themes in student perceptions, such as discomfort caused by peer pressure (associated with having peers rate one’s own paper and with critiquing others) and problems with implementation (such as “time-consuming” and “process not taken seriously/doesn’t count for marks”). This picture was confirmed by a number of other studies. Falchikov (1986) reported the features students liked least in her study: “difficulty of task”, “possibility of marking down/failing a peer” and “system was too rigid/clinical”. Chen and Warren (1997) conducted a study focusing on students who changed their attitudes before and after peer assessment and the reasons given for these shifts. The reasons that caused students to switch from being positive or unsure about peer assessment to being negative included students’ ability to assess peers, students’ seriousness, the distribution of peer marks in students’ grades, limited training, and peer pressure/objectivity (students “felt compelled to award a higher score to those with whom they were more friendly”; some students felt that this process was “unfair and risky”).
Some of the problems causing negative student perceptions of peer assessment can be overcome by improving the peer assessment process. For example, instructors may use scaffolding to reduce the difficulty of the task or provide more effective training to help students gain a deeper understanding of the subject and acquire basic assessment skills. Others, such as peer pressure, are harder to control. One assumption underlying the credibility of this process is that students usually provide fair and unbiased feedback to their peers. However, students find it difficult to rate their peers: they don’t want to be too harsh, and they are uncomfortable critiquing others’ work. When assessment is conducted in an open environment, potential biases related to friendship, gender or race could cause students to rate good performance down or poor performance up. Instructors need to design and maintain a distribution system that keeps both reviewers’ and reviewees’ information confidential and anonymous and, at the same time, traceable so that instructors can keep the process running smoothly. The importance of maintaining anonymity in peer assessment has been recognized by researchers (e.g., Davies, 2002).
However, most current peer assessment methods are conducted through paper-based systems. In such systems, it is extremely hard and time-consuming to maintain an anonymous environment. Falchikov and Magin (1997) reviewed studies demonstrating that students’ gender can be revealed by their handwriting style, and Hanrahan and Isaacs (2001) reported more than 40 person-hours of documentation work to manage anonymous peer feedback distribution in classes with 244 students.
Web-mediated peer assessment has been proposed as a solution for providing anonymity. In such a system, data can be automatically collected and summarized, and students and instructors have instant access to data as soon as they are generated. Moreover, the whole process can be conducted anonymously via the Internet: reviewers and reviewees are not aware of each other’s identities. Ideally, the anonymity provided in web-mediated peer assessment should diminish peer pressure substantially, and student discomfort caused by peer pressure should therefore be reduced. This paper explores student perceptions following the use of an anonymous web-mediated peer assessment system.
A database-driven website was built to provide anonymity and facilitate the peer assessment process (Li & Steckelberg, 2005). This
system contained separate interfaces for instructors and students. In the
student interface, once students logged in, each student was randomly assigned
two peers’ projects. Students performed two roles in this system – reviewers
and reviewees. As reviewers, they rated and commented
upon peers’ projects confidentially according to the marking criteria. The data
were summarized for the author of each project; as reviewees,
they had access to the feedback for their own projects. The instructor interface was designed to enable instructors to keep track of the peer review process. For each student, the instructor had access to the two reviews created
by the student as well as the feedback this student’s project received from two
peers.
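To make the assignment mechanism concrete, the sketch below shows one way a system like this could randomly assign each student two peers’ projects while guaranteeing that no one reviews their own work and every project receives exactly two reviews. It is a minimal illustration in Python, not the authors’ actual implementation; the function name and the circular-shift scheme are assumptions.

    import random

    def assign_reviewers(student_ids, seed=None):
        """Assign each student two peers' projects to review.

        A shuffled roster is treated as a circle; each student reviews the
        next two students' projects, so there is no self-review and every
        project is reviewed exactly twice.
        """
        rng = random.Random(seed)
        roster = list(student_ids)
        if len(roster) < 3:
            raise ValueError("Double peer review needs at least three students.")
        rng.shuffle(roster)
        n = len(roster)
        return {reviewer: [roster[(i + 1) % n], roster[(i + 2) % n]]
                for i, reviewer in enumerate(roster)}

    # Example: 49 students whose identities are coded as numbers 1-49.
    assignments = assign_reviewers(range(1, 50), seed=2006)
    print(assignments[1])   # the two project codes student 1 will review

Any assignment rule that balances the review load would serve the same purpose; the essential point is that the pairing is generated by the system rather than chosen by the students.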
This system has the following major
merits:
1. Anonymity was assured. This system ensured anonymity in two ways. First, students’ identities were coded as numbers, and students were instructed to remove any personal information from their projects; no personal information, such as their initials, could be associated with their work. Second, students’ projects were typed and delivered over the Internet, so no handwriting could reveal their identities or characteristics such as gender.
2. Management workload was reduced. All the data collected were automatically summarized and transmitted from users’ computers to the database, reducing the management workload substantially (see the sketch after this list).
3. Students’ interaction was stimulated. Students and instructors had immediate access to feedback as it was submitted, which encouraged students’ engagement and promoted their interaction.
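A rough sketch of how the first two merits could work together is given below: identities are reduced to numeric codes before any feedback is shown, and submitted ratings are summarized automatically for each reviewee. The record structure and field names are assumptions for illustration, not the system’s actual database schema.

    from statistics import mean

    # Hypothetical review records: only numeric codes identify people,
    # so no personal information travels with the feedback.
    reviews = [
        {"reviewer": 12, "reviewee": 7,
         "scores": {"Introduction": 4, "Task": 5, "Process": 3},
         "comment": "Clear task; the process section could give more detail."},
        {"reviewer": 31, "reviewee": 7,
         "scores": {"Introduction": 5, "Task": 4, "Process": 4},
         "comment": "Good resources and layout."},
    ]

    def summarize_for_reviewee(reviews, reviewee):
        """Average the rubric scores and collect comments for one project,
        deliberately omitting the reviewer codes from the output."""
        own = [r for r in reviews if r["reviewee"] == reviewee]
        criteria = own[0]["scores"] if own else {}
        return {
            "mean_scores": {c: mean(r["scores"][c] for r in own) for c in criteria},
            "comments": [r["comment"] for r in own],
        }

    print(summarize_for_reviewee(reviews, 7))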
Methods
Subjects
Forty-nine
students from an undergraduate course at a central
Preparation
for peer assessment
Since peer assessment was a new concept for most students, the advantages and disadvantages of peer assessment were discussed in class. Students were also introduced to the web-mediated peer assessment site, with special attention given to explaining its anonymity features. After the introduction, students had the opportunity to familiarize themselves with the site and its use.
Procedure
In this study, students were asked to build a WebQuest project and upload it to the Internet. A WebQuest is “an inquiry-oriented activity in which most or all of the information used by learners is drawn from the Web” (Dodge & March, 1995). This model, developed by Bernie Dodge and Tom March in early 1995, is designed to involve users in a learning process of analysis, synthesis and evaluation, which promotes their critical thinking and scaffolding skills. The peer assessment process was used to help students improve the quality of their WebQuest projects.
Step 1: Studying the
content area and discussing assessment criteria
After thoroughly studying the content area, students were presented with a rubric and asked to study it. The assessment rubric was studied at two levels in a student-centered atmosphere: first, students formed groups and discussed the rubric; then they were encouraged to share their understanding in class.
Step 2:
Applying assessment skills
The goal of this step was to make sure that students had the basic assessment skills needed to assess their peers. One example project was provided, and students were asked to use the rubric to rate it. Student grading and instructor grading were then compared and discussed in class.
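As a simple illustration of this comparison step (the ratings below are invented, and the class discussion in the study was not necessarily this formal), the gap between student and instructor ratings can be computed per rubric criterion to show where interpretations of the rubric diverge.

    # Hypothetical ratings of the example project on a five-point rubric.
    instructor = {"Introduction": 4, "Task": 3, "Process": 4, "Evaluation": 5}
    student    = {"Introduction": 5, "Task": 3, "Process": 3, "Evaluation": 5}

    # Per-criterion difference and overall mean absolute difference: useful
    # discussion prompts for criteria where student and instructor disagree.
    gaps = {c: student[c] - instructor[c] for c in instructor}
    mad = sum(abs(g) for g in gaps.values()) / len(gaps)
    print(gaps)
    print(f"mean absolute difference = {mad:.2f}")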
Step 3: Developing the project
Students
were requested to construct a WebQuest project and
make it available on the Internet.
Step 4: Judging the performance of peers and providing
feedback
Once students
logged onto the peer feedback website, they had access to two randomly assigned
peers’ WebQuest projects. Students were asked to rate
the projects according to the rubric and provide detailed comments and
suggestions.
Step 5: Reviewing peer feedback and improving their own
projects
Feedback from
peers was automatically summarized and made available to the creator of each
project. After viewing the peer rating scores and comments, students were asked
to go back to improve their own projects.
Step 6: Completing the post-assessment survey
After students
completed their final projects, they were asked to fill in a survey. Forty-one
students responded to this survey. The survey was adapted from a previous study
(Lin, Liu, & Yuan, 2002) and consisted of 11 five-point Likert-scale items (ranging from 1 = strongly disagree to 5 = strongly agree) dealing with students’ general perceptions of the
process, as well as two open-ended questions related to their likes and
dislikes: “Please specify what you like most about this
peer assessment procedure.” “How would you change this peer assessment
procedure? And why?”
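The Likert items lend themselves to the simple descriptive statistics reported in Table 1 and in the Conclusion (item means and standard deviations). The sketch below shows the computation; the response values are invented for illustration and do not reproduce the study’s data.

    from statistics import mean, stdev

    # Hypothetical responses (1 = strongly disagree ... 5 = strongly agree)
    # to one survey item from 41 respondents; the real data appear in Table 1.
    responses = [5, 4, 4, 3, 5, 4, 2, 5, 4, 4, 3, 5, 4, 4, 5, 3, 4, 5, 4, 4,
                 5, 3, 4, 4, 5, 2, 4, 5, 4, 3, 4, 5, 4, 4, 3, 5, 4, 4, 5, 4, 3]

    print(f"n = {len(responses)}, "
          f"mean = {mean(responses):.2f}, "
          f"SD = {stdev(responses):.2f}")    # sample standard deviation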
Results
This survey (Table 1) provides a positive picture of students’ perceptions of peer assessment through the web-mediated system. For most items, students’ responses indicated more than a general level of satisfaction.
*************
Insert Table
One
***************
For the first open-ended question (“Please specify what you like most about this peer assessment procedure.”), four major themes were identified. First, the feedback that students received from peers helped them reconsider and improve their projects. Students indicated that it was really beneficial to “look at what others are doing”, and some students were “inspired” by peers’ work. Second, the opportunity to review and grade peers’ performance spurred students to greater effort in studying the content area and the marking criteria: “I spent more time (studying the project and rubric)”; “It was quite a responsibility to grade others”. The third theme was the comfort brought by anonymous marking. The anonymity of this web-mediated peer assessment system gave students a rather “relaxing” environment and less pressure from peers. The last theme was student appreciation for instant feedback.
For the second open-ended question (“How would you change this peer assessment procedure? And why?”), three themes emerged. First, several students stressed their satisfaction with this web-mediated process and stated that they wouldn’t suggest any changes. Second, some students would have liked more time for this project; some specifically noted that they wished they had had more time to rethink how to modify or revise their projects after receiving peer comments. Finally, some students asked for more critical and constructive feedback: “I got (a) good score and nice comments (for my project). (But) I know my WebQuest is not perfect”; “I’d like them to tell what they really think”.
Conclusion
This study utilized a web-mediated system for peer assessment in an undergraduate course. Students’ perceptions of the peer assessment method in this system were explored, and the results indicated that, in general, students accepted peer assessment and recognized the merits of this web-mediated system. This result replicated, on a larger scale, the findings of a previous study that utilized this web-mediated system with a smaller student group (Li & Steckelberg, 2005).
Anonymity is one of the major concerns when conducting peer assessment in paper-based systems. One of the special features of this system is that it provides student anonymity to minimize the impact of peer pressure, thus improving the accuracy of peer assessment. The literature suggests that peer pressure contributes directly to negative student feelings regarding peer assessment, including not feeling comfortable rating or critiquing peers’ work and feeling obligated to assign friends a higher score. Since anonymity was provided in this study, peer pressure should have been substantially minimized. This was confirmed by student survey responses. For the last item of the survey, “I felt that I was critical of others when marking peers’ work”, the mean score was 4.00 with a standard deviation of .92, suggesting that most students felt quite comfortable rating peers’ work and being critical. This was further confirmed by student responses to the first open-ended question about the most-liked feature of this peer assessment method, such as “less pressure from peers”.
Overall, the authors felt that the peer assessment process in this study was a worthwhile activity. During this process, students were fully engaged as they moved between the roles of reviewer and reviewee; their interaction was stimulated and their critical thinking skills were fostered. At the same time, anonymity was provided and the administrative workload was substantially reduced. Compared to paper-based systems, a web-mediated system is certainly promising. However, the authors do realize that, unlike face-to-face peer assessment, where students exchange thoughts and opinions, the interaction facilitated in this web-mediated system is only one-way (from reviewers to reviewees). There was no opportunity for reviewees to respond to reviewers’ feedback, share their reactions to reviewers’ comments, or discuss particular aspects of a review in greater detail. This is a limitation of the current web-mediated peer assessment approach. Providing two-way, multilayered interaction in this anonymous environment presents an intriguing opportunity for further investigation. This kind of “feedback on feedback” is likely to be beneficial in promoting both parties’ understanding of the subject matter and fostering their critical thinking skills.
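As a purely illustrative sketch of this proposed extension (not a feature of the system described above; the class and field names are assumptions), “feedback on feedback” could be supported by letting a reviewee attach an anonymous, threaded reply to a specific review, which the original reviewer can then see and answer.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ReviewThread:
        """An anonymous review plus the replies exchanged about it.

        Only numeric codes identify the parties, preserving anonymity while
        allowing a two-way exchange about a specific review."""
        review_id: int
        reviewer_code: int
        reviewee_code: int
        review_text: str
        replies: List[str] = field(default_factory=list)

        def add_reply(self, author_code: int, text: str) -> None:
            # Label replies by role rather than by identity.
            role = "reviewee" if author_code == self.reviewee_code else "reviewer"
            self.replies.append(f"{role}: {text}")

    thread = ReviewThread(1, reviewer_code=12, reviewee_code=7,
                          review_text="The process section needs more detail.")
    thread.add_reply(7, "Could you point to a step that was unclear?")
    thread.add_reply(12, "Step 3: explain how learners should split the task.")
    print(thread.replies)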
References
Chen, W., & Warren. (1997).
Davies, P. (2002). Using student reflective self-assessment for awarding degree classifications. Innovations in Education and Teaching International, 39(4), 307-319.
Falchikov, N. (1986). Product comparisons and process benefits of collaborative peer and self-assessments. Assessment and Evaluation in Higher Education, 11, 146-166.
Falchikov, N., & Magin, D. (1997). Detecting gender bias in peer marking of students' group process work. Assessment & Evaluation in Higher Education, 22(4), 385-396.
Freeman, M. (1995). Peer assessment by groups of group work. Assessment & Evaluation in Higher Education, 20(3), 289-300.
Gatfield, T. (1999). Examining student satisfaction with group projects and peer assessment. Assessment & Evaluation in Higher Education, 24(4).
Hanrahan, S. J., & Isaacs, G. (2001). Assessing self- and peer-assessment: the students' views. Higher Education Research & Development, 20(1), 53-70.
Li, L., & Steckelberg, A. L. (2005). Peer assessment support system (PASS). TechTrends, 49(4), 80-84.
Lin, S. S. J., Liu, E. Z. F., & Yuan, S. M. (2002). Student attitudes toward networked peer assessment: Case studies of undergraduate students and senior high school students. International Journal of Instructional Media, 29(2), 241-254.
Patri, M. (2002). The influence of peer feedback on self- and peer-assessment of oral skills. Language Testing, 19(2), 109-131.
Pope, N. (2001). An examination of the use of peer rating for formative assessment in the context of the theory of consumption values. Assessment & Evaluation in Higher Education, 26(3), 235-246.
Stefani, L. A. J. (1994). Peer, self and tutor assessment: relative reliabilities. Studies in Higher Education, 19(1), 69-75.
Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68(3), 249-276.
Topping, K. J., Smith, E. F., Swanson, & Elliot. (2000).