Academic Exchange Quarterly
Winter 2003: Volume 7, Issue 4
Editors' Choice
To cite, use the print source rather than this on-line version.
Tapping Multiple Voices in Writing Center Assessment
Emily Donnelli, University of Kansas
Kristen Garrison, University of Kansas
Donnelli is Assistant Director of the Writing Center. She earned her MA in English from
the University of Kansas and is currently completing her PhD there. donnelli@ku.edu
Garrison, former Assistant Director, holds MA degrees in English and Special Education
from the University of Kansas.
Abstract
In an effort to enhance the quality and quantity of writing center assessments, we
turn in particular to Cindy Johanek, whose contextualist research paradigm provides
us with a specific framework to guide our efforts to measure the impact of our work
on students, faculty, and institutions. This article presents the enactment of
Johanek’s paradigm in one writing center, the University of Kansas Writing Center,
describing this center’s evaluation designs and findings. The rich results of
Johanek’s research approach suggest the usefulness of her paradigm to the writing
center community, enabling it to challenge dichotomies between quantitative and
qualitative methods and to produce unique, multi-voiced assessments—assessments
that most accurately capture the complex work of a writing center.
* * *
Numbers alone won’t reveal everything we need to know. Stories alone can’t do it,
either. But when researchers stop defining their work by method only…then the full
power of any data, be it story or number, will truly blossom into the knowledge our
field seeks and the discipline we hope to become (Johanek, 2000, p. 209).
Although Cindy Johanek advances a paradigm to direct research in the general field of
composition studies, its specific and immediate relevance to writing center work is
obvious. Perhaps for writing center folks, the relationship between stories and
numbers is especially immediate: after all, the terrain of our daily work shifts from
attention to the narrative, the composed text, and higher order concerns to “The
Twenty Most Common Errors” of grammar and usage delineated in composition handbooks
like the Everyday Writer. Against the backdrop of this landscape, we must also
negotiate performance indicators, demographics, funding proposals, space constraints
and the like. Johanek’s Contextualist Research Paradigm provides us with the
framework for balancing and making sense of the stories and numbers that represent
our work; she invites us to use a mixed bag to assess the rich context of writing
centers.
This essay describes one writing center’s efforts to enact Johanek’s contextualist
research paradigm—one that looks to the unique context of a writing center to
determine methodologies and desired outcomes. Johanek’s thesis asserts that the
research problem and method should grow out of the particular site; thus, we briefly
describe the research projects that arose from applying Johanek to our evaluation
process only to show the ways that contextualist research can be fruitfully worked
out in a writing center. As our experience researching the ways that gender
influenced our writing center work suggests, Johanek can help the writing center
community expand our conceptions of writing center assessment in general and our
sources for data in particular—moving us beyond methodologies that merely count every
body to ones that make everybody involved in our work count as a participant in its
evaluation.
What follows, then, is not a traditional research article; we do not advance any of
the research projects we describe as necessarily replicable, an assertion that would
surely undermine Johanek’s emphasis on the contextual, historically-situated nature
of research. Instead, we simply wish to narrate the story of our assessment to
demonstrate Johanek's paradigm in action—in short, we describe a process rather than
prescribe a method. Though such an approach does not follow the conventional format
of a research article, it does reflect the spirit of Johanek's work by emphasizing
the context that gave rise to our assessment. We discuss the implications of our
application of Johanek after presenting synopses of selected research projects.
Before describing our contextualist exploration, though, we will review some of the
influential work in writing center assessment in order to suggest some of the ways
Johanek’s paradigm complements, both in theory and practice, current methods of
writing center evaluation.
The uniqueness of each writing center necessarily prevents a monolithic definition of
our work; likewise, we cannot identify generic objectives that every writing center
strives to meet. However, despite our varied institutional settings and their
distinct contexts, virtually every writing center in our diverse community is somehow
driven, even if in resistance, by Stephen North’s (1984) now famous axiom: “our job
is to produce better writers, not better writing” (p. 76). His words voice our
concern for the affective, and such outcomes can only be measured, if at all, through
multiple perspectives, with stories and numbers coming together to accomplish what
neither can do alone.
Our familiarity with North’s idealist outcome goes hand in hand with an understanding
of the difficulty of defining and measuring what any of us might mean by “better
writers.” In “Counting Beans and Making Beans Count,” Neal Lerner (1997) asks:
“How can we assess this improvement? Should we? Isn’t the Writing Center only one
among the many influences that shape student learning, some of which might undermine
the help we offer?” (p. 1). Lerner wrestles with these questions as he tries to
satisfy institutional expectations for quantitative research and his own comfort
with and trust in the data generated by such approaches. Perhaps quantitative
methods can help us explore the extent to which a writing center has served as an
academic support organization, but measuring students’ use of the writing center
in terms of number of visits and grades in composition classes falls short of the
holistic conceptualization of a writer’s development suggested by North. These
quantitative measures can prove a worthy start, but they do not tell the whole
story, and what seems to drive us is not just the number of students we see but
also our desire to influence students’ visions of themselves as thinkers and
writers—in other words, a quantitative method alone cannot fully support a rigorous
investigation of the affective element of the work we do.
In an email post to the WCenter listserv regarding writing center outcomes and
assessment, Lerner (2000) acknowledged the importance of the affective domain:
“[we should] broaden our search for impact not just on students’ writing but on
their larger sense of ‘fit’ with a college or university.” He also expressed hope
that writing centers “contribute to students’ academic and social integration”
into college, an outcome that, like “making better writers,” more truly matches
the higher goals of many, if not all, writing centers. However, measuring the
impact of writing centers on students’ assimilation into post-secondary environments
requires more than numbers. So, the question persists: how do we measure the
impact of writing centers on student writers?
Enter Jim Bell (2000). In “When Hard Questions Are Asked: Evaluating Writing
Centers,” Bell identifies key impediments to evaluation: “some say evaluation is a
good idea but they never get around to it, [and] others explain they are [too] busy”
(p. 7). Lack of resources and funding further saps the ability and motivation of
writing center directors to assess what they do. In response, Bell advances the
concept of a “small-scale evaluation [which] focuses on one aspect of the program
at a time” (p. 16). Focusing the scope of evaluation complements limited resources
and time while still providing meaningful and reliable information; according to
Bell, “writing centers should conduct a series of carefully limited evaluations
which, pieced together after a few years, create a fairly comprehensive picture”
(p. 16). He compares six evaluation orientations—Consumer-Oriented,
Adversary-Oriented, Management-Oriented, Naturalistic and Participant-Oriented,
Expertise-Oriented, and Objectives-Oriented—that writing center directors can adopt
and identifies the Objectives-Oriented approach as the most appropriate: “whether
trying to improve writing processes, increase self-confidence, foster critical
thinking, or place writing at the center of higher education, writing centers are
aiming to alter behavior, and objectives-oriented evaluations specialize in
documenting behavior change” (p. 15). However, before we can begin to measure
what Bell identifies as the primary goal of a writing center—altering behavior—we
must first explore the context in which such behaviors take place, turning the
lens of assessment on ourselves to examine the dialectic between the student writer
and the writing center. In short, we must expand the scope of our assessment beyond
the measurable behavior of the student to survey the entire context—the stories and
numbers—in which the student thinks, reads, and writes.
While Bell certainly challenges the writing center community to move beyond more
traditional forms of evaluation, such as “counting clients, post-conference surveys,
and end-of-semester surveys” (p. 9), and provides us with a useful system for
expanding and categorizing our assessment efforts, an Objectives-Oriented approach
still relies on the presence of a measurable outcome, which may or may not facilitate
attention to the affective. For Bell, the Objectives-Oriented approach is
distinguished by performance indicators—that is, highlighted goals and accompanying
timeframes—that guide assessment (p. 11). A limitation of this approach is that,
depending on incentives, it may lead us to rely exclusively on outcomes-based
assessment. Considering our desire to assess the affective elements of our writing
center and to let the unique context and accompanying needs of our center and
individual tutors direct that assessment, Bell left some of our questions unanswered:
can we use the idea of a small-scale evaluation to achieve a multi-faceted large-scale
evaluation within a limited time frame, one academic year? Can multiple small-scale
evaluations of a single question result in a more holistic understanding of that
question? Are the boundaries between Bell’s evaluation approaches permeable? If
so, can we use all six approaches to achieve a large-scale writing center assessment
that maximizes the benefits of both qualitative and quantitative research methods?
Drawing upon Johanek’s (2000) Composing Research: A Contextualist Paradigm for
Rhetoric and Composition, we found answers to these questions. Johanek challenges
the false dichotomy between quantitative and qualitative research:
A contextualist approach to research does not (cannot, should not) value one
set of research methods over another…[C]ontexts and questions…should guide our
methodological decisions, whatever they might be. But in no context should we
choose our method first, allowing it to narrow what kinds of questions we can
ask, for to do so is to ignore context itself. (p. 27)
To collapse this traditional dichotomy, she posits the Contextualist Research
Paradigm, “one that focuses our attention not on form or politics, but on the
processes of research that naturally produce varied forms in the varied research
contexts we encounter in our work” (p. 27). Johanek helped us reconceive Bell’s
paradigm of evaluation approaches and his small-scale evaluation method to develop
an assessment approach that fit our unique context; that is, we took Bell’s original
matrix, which identified the general purposes, distinguishing characteristics,
benefits, and limitations of each approach, and tailored it to fit the needs and
context of our writing center (Fig. 1). Thus, our method used multiple small-scale
evaluations, from all six evaluative approaches, to achieve a large-scale picture
informed by diverse perspectives. More specifically, to detail the impact of the
writing center on our university and our student writers, we began with the idea
that “every body counts”—that is, everyone involved in the writing center, from the
intern writing consultant to the writing center director to the first-time visitor
is not only counted, but should actively participate in evaluation; their voices
must be tapped and represented, not just their visits tallied, if we are to craft
a richly detailed portrait of our writing center. Johanek enabled us to utilize
Bell’s approaches without sacrificing the type of context-driven organic assessment
that we felt was necessary for us to investigate the goals of our writing center—both
those that lend themselves to quantitative measurement and those that do not.
Fig. 1. Michele Eodice’s writing center-specific adaptation of Bell’s evaluation approaches.
Below we describe our assessment process and findings for the 2000-2001 academic year
with the understanding that neither our procedures nor our findings are generalizable
to other writing centers. Like Johanek, we insist that our particular approach
served our particular needs for the given academic year and allowed us to answer
specific questions that grew out of the context of our writing center. In particular,
we narrate below how we used a contextualist approach to investigate the issue of
gender in our writing center. The descriptions of our research studies are meant
only to provide a snapshot of the myriad assessment perspectives that result when
we expand our conceptions of researcher and research methodology; combined, they
suggest the more holistic assessment enabled by a contextualist paradigm. We hope
our experience may
illustrate how context-driven assessment can draw upon both quantitative and
qualitative methods to satisfy the needs and interests of a particular center.
Starting with context necessitates an active survey of the myriad voices that can
contribute to assessment—these voices, of course, come from the usual suspects
(tutors, directors, student writers) but can and should include those that echo from
the far reaches of campus (other academic departments, various administrative arms)
and even from other writing centers and their associated resources. Once these voices,
each representing a small-scale evaluation, are tapped, they blend to construct a
holistic and detailed account. Drawing upon different members of our academic
community allowed us to overcome the barriers that often impede large-scale assessment;
instead of taking several years to collect data, as Bell suggests, we were able,
through the kinds of collaborative relationships advanced by the contextualist
paradigm, to maximize our resources, tapping the voices of those who had something
to tell us about our work, and accomplish in a year’s time what might have taken
several.
Within our context—a free-standing, Research I university-wide resource, based on a
peer-tutoring model and serving undergraduate and graduate students, as well as
faculty and staff—we identified our questions and potential resources for answering
them, first interpreting and adapting Bell’s taxonomy to represent our small-scale
evaluations (Fig. 1). We found, however, that our evaluations did not fit neatly
within his system, some overlapping several categories, which confirmed our suspicion
that the demarcations between Bell’s evaluation orientations are indeed permeable.
Despite the messiness that necessarily arose from all the overlapping and line-crossing,
we will try to clearly illustrate our evaluation method with the following description
of the process we used to answer one of our research questions: how does gender
affect writing center work? Again, we offer the following descriptions of our
research process as exemplary of how a writing center can enact a contextualist
research paradigm. Each project—and its motivating question about gendered
interactions in the writing center—could easily warrant a separate examination.
However, our purpose in presenting these projects here is only to suggest how the
contextualist research model advocated by Johanek can enable us to not only count
every body but to make everybody a participant in writing center assessment.
To explore our research question about gender following the contextualist model, we
drew upon data collected from five small-scale evaluations, each reflecting one or
more of Bell’s evaluation orientations; we tapped the voices of undergraduate and
graduate students, undergraduate and graduate writing center consultants, and faculty,
who had used a variety of quantitative and qualitative methodologies to help us
assess how gender affects writing center work. Although we identified this interest
early in the academic year, we did not assign each of the following investigations;
instead, we searched our university and writing center communities for individuals—for
voices—who were already engaged in exploring the relationship between gender and
writing. Figure 2 provides a visual representation of how the individual findings
of each small-scale evaluation combine to create a holistic representation of our
research question.
Small Scale Evaluation 1 (Management-Oriented; Naturalistic and
Participant-Oriented; Expertise-Oriented): This study began with graduate writing
consultant AUTHOR's interest in gender dynamics and tutoring. She videotaped a
practicum session during which consultants discussed their reactions to Meg
Woolbright’s article, “The Politics of Tutoring: Feminism within the Patriarchy.”
AUTHOR then analyzed the videotape and edited it to highlight segments that suggested
consultants’ resistance to the idea that gender affects writing center work; she
then showed this edited tape to the same group of consultants and prompted a
discussion regarding their resistance. She found that consultants doubted the
impact of gender on their work, preferring, instead, to identify other factors,
such as race, ethnicity, and personality as more influential, despite the
relationship between gender and these factors. Based on her analysis of the
videotaped practicum discussions, AUTHOR reported that our writing consultants
resisted seeing themselves as highly “gendered” and redirected the discussion on
gender dynamics in an attempt, perhaps, to preserve their self-representations as
politically correct, educated adults. Her research, then, which was enhanced by
feedback she received at the National Conference on Peer Tutoring in Writing,
prompted us to reflect on the ways that gender might complicate issues of race,
ethnicity and personality, as well as on the roots of our resistance to the belief
that gender significantly influences our practice.
Small Scale Evaluation 2 (Consumer-Oriented; Expertise-Oriented;
Objectives-Oriented): Erika Dvorske, administrative assistant of the
Freshman-Sophomore English program, and Michele Eodice, Writing Center director,
conducted a collaborative study to gain information regarding the relationship
between students’ gender and their performance in composition courses. Specifically,
this project sought to determine if a correlation existed between final grades for
English 101 and 102 and demonstrated efforts to seek writing support from the Writing
Center or instructor; we then sorted these findings by gender to determine whether
male students earned lower grades and sought support less frequently than females.
These quantitative findings indicated that the percentages of males and females
enrolled in English 101 and 102 matched those of the University, and we also found
that the gender breakdown of students visiting the Writer’s Roosts matched University
statistics. Teachers added a qualitative dimension, reporting that male students
more frequently missed required conferences and initiated fewer appointments with
instructors than female students. Since more females also visited the Writing Center
than males, Dvorske and Eodice’s study suggested that female students tend to seek
support more frequently and consistently than males. This pattern made us more
aware of the potential resistance or discomfort male students might feel, especially
during an initial visit to the Writing Center, and prompted us to reflect on
different ways that we might work to make them more comfortable in seeking support
and accepting feedback.
Small-Scale Evaluation 3 (Naturalistic and Participant-Oriented): Mackenzie
Roberts, a student enrolled in our Tutoring and Teaching Writing course, explored
how gender influences communication and interaction between a consultant and student
writer during a session to determine how the patriarchal and hierarchical aspects of
the institution contribute to the gender dynamic of consulting sessions. She
conducted on-site observations of sessions as well as interviews with consultants
in order to analyze the styles of three different consultants. Roberts found that
consultants’ styles reflected the cooperative ethos of our writing center more than
the hierarchical and gendered one of the University, and she concluded that,
although traditional gender behaviors did surface during sessions, consultants more
frequently adopted a style that reflected the ideology of the writing center.
Roberts’ investigation helped us look at gender as a continuum of behaviors that
each consultant, regardless of sex, draws upon to meet the needs of a given session.
Small-Scale Evaluation 4 (Consumer-Oriented; Expertise-Oriented): Melissa
Nicolas, then Ph.D. candidate at Ohio State University, asked us to distribute a
survey she designed to collect data for her dissertation study on gender and writing
centers; she collected over 320 surveys representing 16 writing centers across the
country. Nicolas’ research focused on understanding “who uses the
writing center and for what reasons” in order to unveil the “similarities and
differences between women and men” who both visit and work in writing centers. We
collected 30 surveys for her; of those 30, 60% were completed by female participants
and 40% by males. Despite the small size of the sample, the gender breakdown was an
accurate match of all sessions for the 2000-2001 academic year. The survey items
focused primarily on affective aspects of consulting sessions (“I felt comfortable
expressing my ideas”; “I felt my tutor cared about me and my work”; and “My tutor
and I worked through my paper together”). More specifically, three items concerning
gender prompted the participants to rate the extent to which they prefer working with
a male or female tutor. Exactly 80% of both males and females indicated a neutral
position with regard to consultant gender preference; moreover, 80% also “strongly
agreed” that “the gender of [the] tutor does not matter.” Generally, then, students
who visited the writing center did not identify the gender of the consultant as a
concern. The following evaluation, however, suggested that gender might have more
impact on the students than they acknowledge or recognize.
Small-Scale Evaluation 5 (Consumer-Oriented; Expertise-Oriented): We also
administered surveys for Scott Eidelman, Ph.D. candidate in psychology at the
University of Kansas, who was interested in determining how students’ race
influenced both their self-perceptions and their perceptions of how others perceive
them. We distributed his survey after consulting sessions to 97 different
participants and paid special attention to correlations between gender and session
satisfaction. Approximately 55% of the participants were female, 40% were male,
with 5% not disclosing their gender; again, this sample mirrors the population of
students who visited our writing center during the 2000-2001 academic year.
Eidelman found that students felt significantly better about a session with a
consultant of the same gender when working on aspects of the writing process that
he defined as “subjective”: he categorized prewriting and revision as subjective
to emphasize the contextual nature of these particular stages in the writing process.
Conversely, working on editing, mechanics, and documentation represented “objective”
concerns that usually had a right or wrong answer. These findings suggested that,
contrary to the general responses gathered from Nicolas’ study, gender does, in
fact, contribute to the quality of a consulting session. This information not
only reinforced our efforts to remain sensitive to the ways in which gender
impacts students’ acceptance of and response to a consultant’s feedback but also
raised our awareness about the relationship between the focus of a session, the
gender of the participants, and the level of session satisfaction.
Fig. 2. Multiple small-scale projects combine to provide a holistic view.
From multiple small-scale assessment projects, we developed a more holistic picture
of the intersection between gender and writing center work than any of the individual
projects could have created independently. Overall, we found that our consultants
resist the notion that gender has a significant influence, a response validated by
observations of consultants during sessions: a consultant’s style reflects the
collaborative ethos of the writing center and draws upon both traditionally masculine
and feminine traits in order to work effectively with students. Consultants, then,
tweak their styles in response to the individual student. Our findings also
suggested that female students are more likely to seek out and respond positively
to the generally collaborative environment of the writing center; we must be more
sensitive, then, to the ways in which the collaborative environment may be unfamiliar
to or uncomfortable for some male students. Finally, although students reported
that the gender of the tutor does not matter, we observed a significantly higher
level of session satisfaction from those students who worked with a consultant of
the same gender. We concluded, therefore, that gender does influence our work,
affecting our male students perhaps more than our female ones, and that we must
self-consciously address the ways that gendered behavior and language operate in
a session.
Our 2000-2001 assessment, however, did not end with our exploration of gender;
we investigated several other questions using the same
process of piecing together information from small-scale assessment projects to
achieve a holistic picture of a research question. We recruited writing center
consultants, interns, and other members of our academic community to help us
collect information; they conducted quantitative assessment of the demographics
of the students who visited and distributed surveys to faculty to determine their
attitudes toward our writing center; additionally, we investigated the effectiveness
of our public relations strategies around campus, and administered post-session,
as well as end-of-semester, surveys to measure student satisfaction. The
culmination of investigating these multiple questions resulted in a large-scale
evaluation of our writing center that we have posted on our website
www.writing.ku.edu and that we have used to set objectives for performance.
Making our final report accessible to the public met the criteria for Bell’s
Adversary-Oriented approach; likewise, we met the aims of Bell’s Objectives-Oriented
approach by drawing upon our findings to set goals.
Apart from what these multiple research projects told us about gender in our writing
center, though, their importance here is in showing us the benefits of building on
the work of Lerner, Bell, and others with Johanek’s contextualist research model.
Using multiple approaches, as categorized by Bell, and drawing upon multiple methods
of assessment, including focus group discussion, observation, post-session surveys,
narrative, and quantitative data, we investigated, from many perspectives, the
extent to which gender influenced both consultants and students who visited our
writing center. As Johanek advocates, our research question grew out of the
exigencies and specific context of our own center. Our assessment capitalized on
the types of inquiry already being undertaken by the various members of our writing
center community, our university community, and even the larger writing center
community. Of course, the reality of writing center work, as all academic work,
is that a slice of assessment pie must feed administrative expectations, which often
requires us to produce quantitative data; as Lerner and others recognize, at some
point, we must all “count beans.” Johanek helps us see, though, that we can have
our pie and eat it too—that is, we can meet administrative demands without
compromising our own professional interests.
Johanek motivates us to rethink the ways we view assessment by dissolving the rigid
boundaries often imposed between quantitative and qualitative methods and the
ideologies that support them. Viewed as a rhetorical process driven by the unique
context of a center, assessment becomes a mining of all available research resources
and approaches; we do not limit our evaluation to those methods that we feel most
comfortable with, or those methods that are most accepted in our university
environment. The unique and specific needs of our individual centers direct our
inquiries, and we proactively coordinate assessment of our own work rather than
defensively react to demands, however reasonable, from administrators. Our task,
then, is to recognize ways in which we already collect information about our writing
centers, to find and create local collaborative relationships, to generate
context-driven questions, and to identify the stories and numbers that will help us
answer them.
Notes
1. For a fuller description of Bell’s evaluation approaches, including his original
matrix, see his article in Writing Center Journal 21.1 (2000): 7-28.
2. We would like to thank Neal Lerner for allowing us to quote from his post to the
WCenter list.
3. Thanks to Michele Eodice, director of the KU Writing Center, for creating a
writing center-specific version of Bell’s comparison of evaluation
approaches.
4. With permission of the researchers, we include their names and institutional
affiliations.
5. The institutional response to our evaluation method has been positive.
Administrators report that the holistic assessment facilitated by multiple
researchers from different disciplines and within and outside the university
gave them a 360 degree view of the writing center.
References
Bell, J. (2000). When hard questions are asked: Evaluating writing centers.
Writing Center Journal 21(1): 7-28.
Johanek, C. (2000). Composing research: A contextualist paradigm for rhetoric
and composition. Logan: Utah State UP.
Lerner, N. (1997). Counting beans and making beans count. Writing Lab Newsletter
22(1): 8-9.
Lerner, N. (2000, February 15). The dreaded o-word. Message posted to Wcenter
electronic mailing list, archived at
http://lyris.acs.ttu.edu/cgibin/lyris.pl?sub=6453&id=169262330
Lunsford, A. (2001). The everyday writer (2nd ed.). Boston, MA: Bedford/St.
Martin’s.
North, S. (1984). The idea of a writing center. In C. Murphy and J. Law (Eds.),
Landmark essays on writing centers (pp. 71-85). Davis, CA: Hermagoras.
(Reprinted from College English 46.5: 433-446).
Woolbright, M. (1992). The politics of tutoring: Feminism within the patriarchy.
In C. Murphy and J. Law (Eds.), Landmark essays on writing centers (pp. 227-239).
Davis, CA: Hermagoras. (Reprinted from Writing Center Journal 13.1: 16-30).