Academic Exchange Quarterly Winter 2011 ISSN 1096-1453 Volume 15, Issue 4

To cite, use the print source rather than this on-line version, which may not reflect print copy format requirements or text layout and pagination.

 

This article should not be reprinted for inclusion in any publication for sale without the author's explicit permission. Anyone may view, reproduce, or store a copy of this article for personal, non-commercial use as allowed by the "Fair Use" limitations (sections 107 and 108) of the U.S. Copyright law. For any other use and for reprints, contact the article's author(s), who may impose a usage fee. See also electronic version copyright clearance. CURRENT VERSION COPYRIGHT © MMXI AUTHOR & ACADEMIC EXCHANGE QUARTERLY

 

Differentiating Maximum Values in Writing Centers

Janet Boyd, Fairleigh Dickinson University

Mutiara Mohamad, Fairleigh Dickinson University

 

Boyd, Ph.D., is Assistant Professor of English and Mohamad, Ed.D., is Director of the Programs in Language, Culture and Professional Advancement.

 

Abstract

Writing centers can increase the possibility of collecting meaningful assessment data about the impact they have on students’ writing when they partner with academic departments or programs to track the outcomes of specific cohorts of students. The focus of this paper is to document how, in tracking international students’ use of the writing center at our institution, we established maximum value thresholds of writing support for these students at three different proficiency levels and demonstrated improved outcomes.

 

Introduction

With the increasing focus in higher education on learning outcomes goals and assessment that measure educational effectiveness (Kuh & Ewell, 2010), administrators rely more than ever upon the data academic departments collect, not only to inform programmatic decisions but to allocate funds across the institution, including to writing centers (Simpson, 2008). Accordingly, writing centers also need to determine which modes of assessment, whether qualitative and/or quantitative (Bromley, Northway & Schonberg, 2010; Salem & Denny, 2009; Thompson, 2006), serve to establish their institutional worth in dollars and “sense.” However, the possibility of collecting concrete data often eludes writing centers for various reasons; even so, as Jessica Williams (2006) contends, “difficulties notwithstanding, it is both possible and essential to establish whether what happens in [writing center] sessions makes a difference” (p. 118). Jason Mayland prudently advises writing center directors to “look at specialized populations,” such as returning (mature) students or non-native English speakers (NNES), in the context of writing centers so as to “differentiate maximum value” of this support in quantifiable ways (Lerner & Mayland, 2008). In other words, it is more feasible and meaningful, and perhaps more responsible, for centers to begin to substantiate their worth by collecting data on specific cohorts of students than it is to try to generalize more broadly.

 

This paper presents how, at our institution, the Metro Writing Studio collaborates with the Programs in Language, Culture, and Professional Advancement (hereafter the Programs in Language), which tracks the learning outcomes of its international students, to determine what effect the Studio’s support has on the writing proficiency of this cohort. We describe the type of information each unit collects and how the respective data are then layered and assessed. Our intention here is to document our methods and to share our findings regarding the maximum value thresholds of writing support we have established for these students, which were validated by the students’ progress within three proficiency levels.

 

Tracking Usage

Like most writing centers, the Studio provides individualized sessions during which tutors review a student’s writing with the student to discuss higher and lower order writing concerns; papers are not edited or proofread for students. Because almost as many graduate student writers are tutored as undergraduates, the staff is composed of individuals who hold or are pursuing Master’s degrees in a writing-related field and who have experience as adjunct instructors of college writing, as writing tutors, or both; there are no undergraduate/peer tutors. All students who wish to be tutored fill out a paper form that records the student’s name and identification number; the class and professor for which the writing is being done; and the date, arrival time, and start and stop time of the tutoring session, all of which document usage. Students are also asked to communicate their goals for the session, and tutors record what in their view was addressed, which reveals as much about students’ perceptions regarding their needs as it does about what they believe writing centers can do for them. The tutor’s report indicates whether the student-identified needs were attended to and/or whether the session took a different turn. Much qualitative and some quantitative information can be culled from these statements, information that reveals just what kind of advice tutors are imparting to students, or what of value students might be taking away from the session that could affect their writing.
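For readers who keep similar records, the following is a minimal sketch (in Python, with hypothetical field names) of the session record implied by this paper form; it is an illustration only, not the Studio’s actual database schema.

    from dataclasses import dataclass
    from datetime import date, time

    @dataclass
    class TutoringSessionRecord:
        # Fields mirror the paper form described above; the names are hypothetical.
        student_name: str
        student_id: str
        course: str            # the class for which the writing is being done
        professor: str
        session_date: date
        arrival_time: time
        start_time: time
        stop_time: time
        student_goals: str     # what the student hopes the session will address
        tutor_report: str      # what the tutor reports was actually addressed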

 

This information allows Boyd, the Coordinator, to improve her management of the daily operations, which makes the Studio more user-friendly, and to make the case that the Studio is being utilized efficiently—in other words, cost-effectively. These data are entered into a dedicated, web-based database created so that the Studio and the Programs in Language can generate customized reports and have restricted access to each other’s information. The development of this specialized database is an indication that our institution values the data we track, which are reported to the Campus Provost and the Dean of the College, both of whom provide portions of the Studio’s funding. Clearly, counting students is not assessing them, as Neal Lerner (1997) articulates, but determining which students are shared can lead not only to more productive collaboration but to more creative and robust ways to undertake assessment and to document the writing center’s effectiveness. In other words, even utilitarian record keeping can become significant when it is layered with other data, whether collected in the writing center or elsewhere in the institution.

 

The majority of the international, NNES students who make use of the Studio’s services are enrolled in one of the classes offered by the Programs in Language, courses designed to teach English for specific purposes while also acculturating students academically. These undergraduate and graduate students[1] work towards proficiency in English by taking, during their first semester, a three-credit English class specific to the discipline they have enrolled to study, with a co-requisite lab component that focuses on academic writing. All of the international, NNES students take a written essay placement test upon their arrival on campus, which determines whether they must complete the course or are exempt.

 

Because the Programs in Language welcomed its inaugural class in the fall of 2006, Mohamad was, in the absence of benchmarks, particularly careful about monitoring NNES student progress. Since students who score fifteen to the maximum eighteen points on the placement pre-test place out of the course, the remaining students came to be distinguished by their proficiency levels according to three score ranges for purposes of tracking improvement and support: those who score between 3 (the lowest score) and 9.4, between 9.5 and 12.4, and between 12.5 and 14.4. These ranges will be referred to as Clusters One, Two, and Three, respectively.
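For those tracking similar cohorts, the grouping above amounts to a simple binning of placement scores. The following is a minimal sketch in Python using the ranges given here; how scores falling between the stated ranges (e.g., 14.5 to 14.9) are handled is an assumption of the sketch, not something the placement rubric specifies.

    def assign_cluster(score: float) -> str:
        """Bin a placement pre-test score (3 to 18 points) into the groupings above."""
        if score >= 15.0:
            return "exempt"          # 15 to 18: places out of the course
        if score >= 12.5:
            return "Cluster Three"   # 12.5 to 14.4
        if score >= 9.5:
            return "Cluster Two"     # 9.5 to 12.4
        return "Cluster One"         # 3 (the lowest score) to 9.4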

 

The Data

At the end of the first academic year, in spring 2007, Mohamad noticed that improvement across all three clusters was smaller than it had been for the students in the fall 2006 semester, and she feared that the weakest students might not achieve adequate English proficiency with just the one semester of single-level instruction. Informed by the emerging data, Mohamad decided in fall 2007 to require (see Gordon, 2008) that only the least proficient NNES students seek Studio support by receiving tutoring and/or attending workshops. These Cluster One students would now have to complete fifteen hours of support each semester, the idea being to add an average of one additional instructional hour to the four existing contact hours per week without having to add additional levels and/or semesters of instruction. The expectation was that these required hours of writing support would ultimately contribute to improved post-test scores for the weakest students.

 

Since the essay formats of the Programs in Language pre- and post-tests are identical, they provide a good measure of actual writing skill improvement that the Studio can tap into. Examining single pieces of writing that these students had reviewed with tutors at the Studio for evidence of improvement would ascertain whether or not each piece had improved, but not whether the writers had acquired skills (e.g., idea development, essay coherence, and mechanical accuracy) that could be retained and applied to future writing. Furthermore, the consistent pre- and post-test scores, designed to isolate and evaluate writing ability, provide a direct measure that avoids the variables that factor into course grades (e.g., attendance, extra credit, and/or late submissions), which make grades a less precise reflection of actual writing ability (Lerner, 2003).

 

At the end of fall 2007, the students in Cluster One, all of whom were mandated to seek fifteen hours of support, did see an increase from pre- to post-test, with a 4.18 point average gain compared with the 3.88 point average gain achieved by their counterparts in spring 2007 with course instruction only. Not all of the Cluster One students completed the requirement under the new mandate, however. In isolating the data further to factor out the students who did not pursue the support, Mohamad discovered that the average improvement of those who completed or exceeded the fifteen hour requirement and completed the course was even higher, at 4.46 points. Because the majority of these students—76%—sought their academic support at the Studio, the data suggest that writing center support contributes positively to writing outcomes for the least proficient students, doing much to put them on par with the initially more proficient students.

 

Considering that the maximum score for the test is eighteen, the 4.46 point average gain of the students who did complete the mandated tutoring amounts to 24.8% of the total possible points. Conversely, these data suggest that the weakest students who did not receive fifteen hours of academic support did not fare as well as those who did. Notably, that same semester the post-test results in the other two clusters, those with higher initial proficiencies who took the Programs in Language course but were not mandated to receive academic support, dropped even lower than their counterparts’ in the previous semester. Based on the positive trend Mohamad had just observed with the Cluster One students in fall 2007, in spring 2008 she aimed to close the loop by mandating ten hours of Studio writing support for Cluster Two students and five hours for Cluster Three students. While the pre- to post-test scores also improved for these clusters, their gains were not as dramatic.
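The percentage above follows directly from the figures already reported; a minimal sketch of the arithmetic (in Python, reproducing only numbers given in the text):

    MAX_SCORE = 18.0               # maximum possible score on the placement test
    avg_gain_completed = 4.46      # fall 2007 Cluster One students who met the 15-hour requirement
    avg_gain_course_only = 3.88    # spring 2007 Cluster One students, course instruction only

    share_of_max = avg_gain_completed / MAX_SCORE
    print(f"Gain as a share of total possible points: {share_of_max:.1%}")  # -> 24.8%
    print(f"Additional average gain over course-only peers: "
          f"{avg_gain_completed - avg_gain_course_only:.2f} points")        # -> 0.58 points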

 

What we have confirmed over the ensuing semesters is that the least proficient NNES students benefit the most from mandated writing support, that such support is beneficial to NNES students at all levels, and that the Studio contributes in demonstrable ways to successful student learning outcomes in the Programs in Language, which in turn affects these students’ ability to succeed at the institution. We can only speculate as to why there is a point of diminishing returns for the more proficient students: 1) the course is designed to bring all students to a certain proficiency level and no further, and the Cluster Two and Three students have less of a score gap to close to achieve this proficiency; and 2) arguably, students’ performance on these written exams (if not other writing tasks) can plateau at a certain level of proficiency before they can advance to the next level.

 

Even so, what is significant is that maximum value thresholds were established for these three clusters. The two clusters with higher proficiency, those mandated to receive five and ten hours of support, achieved maximum benefit when they sought (see Williams & Takaku, 2011) one additional hour, or six and eleven hours respectively, but not necessarily from hours beyond these thresholds. For the least proficient group, fifteen hours emerged as the maximum value threshold, validating that support level. Beyond the Studio’s collaboration with the Programs in Language, these maximum value thresholds demonstrate to the University the relative value the Studio holds for this cohort of students.

 

While these data convinced us that the Studio has a positive effect on the ability of the students in the Programs in Language to write academic English, we next designed a survey to ask the students themselves whether the Studio had an impact on their learning. The surveys were distributed in Programs in Language classes in December 2009 and May 2010, and of the 111 students who completed the survey, 104 or 94% indicated that “the Studio contributed to their success in the course,” while six students believed it did not (one did not respond). When asked whether “the Studio contributed to their success more generally” at our institution, 94 or 85% of the students indicated that it did, while eleven students felt it did not (eleven did not respond). Since seeking academic support is compulsory for students while they are enrolled in the Programs in Language course, we also inquired whether these students would be likely to return of their own volition to the Studio in the future; 94 students or 85% said they would return for tutoring. The responses to this survey reveal that the students value the Studio as support that helps to sustain their academic success.

 

Conclusion

Mohamad’s controlled tracking of students’ pre- and post-test results, combined with Boyd’s consistent tracking of these students’ attendance and tutoring session activity at the Studio, allows for a sustained inquiry into learning outcomes, which reveals that the Studio has positive effects not just on student writing but on the student writers. These findings have important implications for the Studio and for writing centers more generally. While the Studio does measure what we call our utilization rate—how many of the hours tutors are available (and “on the clock”) are actually spent tutoring—this rate measures effectiveness only in terms of whether payroll dollars are being maximized. It does not assess student satisfaction, nor does it assess how tutoring affects students’ writing ability, which should be the real measure of a writing center’s effectiveness, which all directors would like to know, and which administrators would presume could be demonstrated. In other words, when the Campus Provost and the Dean provide the Studio payroll and operating budgets—and expect no revenue in return—the University is investing in the expectation that the Studio will, in fact, produce better writers.

 

In the case of the thresholds established through our collaboration, we can demonstrate that the Studio contributes to improved Programs in Language student writing outcomes to the degrees described, as well as helps to make the single-level course feasible, so the University can be assured it is getting a tangible return on its investment in the Studio, both in dollars and “sense.” While the Programs in Language students represent only a portion of the students the Studio serves, and the Studio has yet to find a way to demonstrate its effectiveness for other cohorts, it is significant for our units and our institution that we have been able to document effectiveness for a specific population by means of differentiating maximum value. Equally satisfying is that, through their survey responses, the NNES students confirm that they, too, value the Studio’s contribution to improving their written English proficiency.

 

 

Endnotes

[1] Each year, our campuses enroll roughly 500 new international students, with a total enrollment of about 1,200 international students, of which 85% are graduate students.

 

References

Bromley, P., Northway, K., & Schonberg, E. (2010). Bridging institutions to cross the quantitative/qualitative divide.  Praxis: A Writing Center Journal 8(1). Retrieved from http://projects.uwc.utexas.edu/praxis/

Gordon, B. L. (2008). Requiring first-year writing classes to visit the writing center: Bad attitudes or positive results? Teaching English in the Two Year College 36(2), 154-163.

Kuh, G. D., & Ewell, P. T. (2010). The state of learning outcomes assessment in the United States. Higher Education Management and Policy 22(1), 9-28.

Lerner, N. (1997). Counting beans and making beans count. The Writing Lab Newsletter 22(1), 1-4.

Lerner, N. (2003). Writing center assessment: Searching for proof of our effectiveness. In M. Pemberton & J. Kincaid (Eds.), The center will hold: Critical perspectives on writing center scholarship (pp. 58-73). Logan, UT: Utah State University Press.

Lerner, N., & Mayland, J. (2008, Oct). Two experts talk writing center assessment [Audio podcast]. In Writing Center Podcasts. Retrieved from http://writing.wisc.edu/podcasts/index.html

Mohamad, M., & Boyd, J. (2010). Realizing distributed gains: How collaboration with support services transformed a basic writing program for international students.  Journal of Basic Writing 29(1), 78-98.

Salem, L., & Denny, H. (2009, July). Assessing what we really value in writing centers: A conversation with Lori Salem and Harry Denny [Audio podcast]. In Writing Center Podcasts. Retrieved from http://writing.wisc.edu/podcasts/index.html

Simpson, J. (2008). Perceptions, realities, and possibilities: Central administration and writing centers. In R. W. Barnett & J. Blumner (Eds.), The Longman guide to writing center theory and practice (pp. 189-193). New York, NY: Pearson/Longman.

Thompson, I. (2006). Writing center assessment: Why and a little how. Writing Center Journal 26(1), 33-61.

Williams, J. (2006). The role(s) of writing centers in second language writing instruction. In P. K. Matsuda, C. Ortmeier-Hooper, & X. You (Eds.), The politics of second language writing: In search of the promised land (pp. 109-126). West Lafayette, IN: Parlor Press.

Williams, J. D., & Takaku, S. (2011). Help seeking, self-efficacy, and writing performance among college students. Journal of Writing Research 3(1), 1-18.