Academic Exchange Quarterly, Spring 2001: Volume 5, Issue 1

 

 

Developing a Tool for Assessing Writing in a General Psychology Course

 

Thomas Reynolds, University of Minnesota

Thomas Brothen, University of Minnesota

Cathrine Wambach, University of Minnesota

 

Reynolds is an Assistant Professor at General College <reyno004@umn.edu>. Brothen is a Professor at General College <broth001@umn.edu>. Wambach is an Associate Professor at General College <wamba001@umn.edu>.

 

Abstract

This article describes and discusses the development of a writing assessment tool applied to student writing produced in an introductory psychology course. A performative scoring tool is discussed as appropriate for the study given the location of students in a developmental education curriculum, the purpose of measuring improvement in student writing products, and the current state of scholarship in writing assessment. The process of arriving at this tool is also discussed as beneficial for a faculty committed to teaching writing across the curriculum.

 

Introduction

Faculty in the University of Minnesota's General College have been working toward making students' experiences with writing more consistent and coherent across the curriculum. It is not that we all wish to give the same assignments or to have students practice the same kind of writing in our classes, but rather that we want students to see the different kinds of writing they do in our classes as having cross-over value. If students cannot "invent the university," to repeat a much-used term coined by composition theorist David Bartholomae (1985), then teachers who direct students into the particular forms and discourses of composition and content-area courses can nevertheless offer common strategies and build recognizable cross-curricular writing expectations.

 

In this piece, we describe how we arrived at a particular assessment tool that would address our concerns about writing completed in an introductory psychology course. We developed the tool to test whether the process students used to write for this course produced effective writing. In carrying out the project, we kept in mind that our tool would have to be sensitive to the context in which the writing was done. Assessment of student writing is often performed on writing produced in artificial settings, removed from immediate teaching situations. Since we were interested in the connection of the writing product to the particular course situation in which it was produced, we consciously chose to work with graded, course-based writing.

 

Our awareness of the high value placed on products of writing in our university, and of the need for students to learn to approximate the kinds of texts that will lead to success in future courses, further convinced us of the need to examine student papers. We wanted to know, in short, whether students were learning to write effective academic prose in a course that paid some attention to student writing. To this end, we set out to examine student writing completed over a single ten-week term to determine whether writing about a limited number of course concepts would help students improve their writing in several specific ways. We also recognized that assessing the products in this way would steer our analysis in a direction that would obscure other important aspects of student writing performance. It would not tell us, for example, about students' attitudes toward writing, a significant factor for developing writers (Herrington & Curtis, 2000).

 

As we developed our assessment tool, the process also led to a deeper discussion about writing and how it is perceived by teachers in our different courses and fields. Below, we describe the process of arriving at our tool and discuss how its development helped us communicate to each other our previously tacit assumptions and expectations about writing.

 

Background

Before we elaborate on the assessment tool that we developed, it helps to see our students as writers in a developmental curriculum. Admitted through the University of Minnesota's General College, they entered the University designated as "underprepared." GC 1281, General Psychology, is part of a credit-bearing developmental curriculum that students take as preparation for transfer to another degree-granting college within the University. As part of their preparation in writing, they take two composition courses and a number of writing-intensive courses, as well as courses like GC 1281, which use writing more incidentally but still purposefully. In all these courses, we work with our students, many of whom are first-generation college students or from groups traditionally under-served in the university, to help them make sense of the university as a place where writing is a central activity for creating and receiving knowledge.

 

Writing assessment has often focused on writing as an end result measured against standardized views of effectiveness (White, 1996). More recently, researchers have studied writing done in content-area courses using both single-course (Herrington, 1985) and cross-curricular, longitudinal approaches (Sternglass, 1997). Kathleen Blake Yancey and Brian Huot (1997) collected studies of the various practices and underlying philosophies of assessing writing across the curriculum.

 

Attempting to move away from the gatekeeping role of placement testing (Wambach & Brothen, in press), we focus our assessment on a number of pieces of writing completed by developmental students in a single course enrolling large numbers of lower-division students. We treated the assessment project as a joint process of determining the effectiveness of the students' writing when the occasion for writing was a first- or second-year "content" class that could reasonably expect students to exhibit certain features of writing, which we explain below. Although students earned most of the possible points allotted to these assignments for the course, we also knew it was important to obtain an independent assessment to determine whether the writing would be judged effective by outside readers knowledgeable about the context in which the pieces were written.

 

Marilyn Sternglass (1997) has shown that developmental writers succeed on the college level when they are given challenging writing tasks with instructional support as they carry out those tasks. Although much has been written about how students learn to write in first‑year basic writing and composition classes, where challenging tasks and support are often at the core of the course, less has been published about writing performance by developmental students in what traditionally have been called content courses. We set out to study how such students perform on a set of given writing tasks in a psychology course that demands mastery of a large number of introductory concepts. The course instructors, aware of other successful writing projects in undergraduate psychology courses that made use of various written forms (Blevins‑Knabe, 1987; Hettlich, 1976; Polyson, 1985), assigned four formal essays, each taking up a specific concept of psychology within the context of individual students' experiences.

 

One assignment, for example, asked students to choose a trait from a Personality Inventory that differed the most from the scale average and to analyze its role in their own lives using two of the overriding "Big Issues" (explained in the course text) in the field of psychology. An assumption built into the question was that a combination of already-studied course material and student life experience would form the basis for their responses. The questions asked for analysis with supporting evidence for claims.

 

Having designed challenging writing tasks, the instructors also addressed the kind of support that would be offered to students in carrying them out. In keeping with Sternglass' observations, as well as other current developmental education approaches (Wambach et al., 2000; Higbee & Dwinell, 1996), students were given support that aided them as thinkers and writers approaching a particular writing situation. In class, experienced graduate teaching assistants for the course acted as coaches for the process of writing the assignments. As in our composition classes, the writers had the opportunity to discuss what they wrote at a drafting stage and were asked to rewrite their essays when the teaching assistants or the instructors judged particular pieces not yet in a form appropriate for the assignment. Students also had the opportunity to use classroom computers equipped with word-processing software for writing their essays. In all the support given to students, a crucial principle was to help with, but not take over, the students' papers.

 

Because students were offered the kind of support that writing theory suggests is reasonable for any class that uses writing, we did not fear that offering such help would confound the results of the study. Since any study of writing examines writers in a particular situation, and ours was the situation of an actual classroom and course, we attempted to devise an assessment instrument that would acknowledge that students were working in the messy conditions of actual assignments.

 

Arriving at a Performative Scoring Tool

Although the wide range of assessments of student writing done in content courses can only be suggested here, Yancey (1999) has described how, for the last fifty years, the writing assessment techniques most often employed have been objective tests, holistic scoring of single pieces of writing, and, most recently, portfolio assessment. In our situation, discussions around a number of issues led us to adopt what Faigley has called a "performative" assessment tool (Faigley et al., 1985, p. 101). Related to holistic scoring, but different in important ways, performative assessment shifts the standards by which the writing is judged. Where holistic scoring generally assigns a single score to a single piece of writing and is norm-referenced, performative assessment allows for the scoring of multiple textual features and is criterion-referenced (Faigley et al., 1985).

 

Chief among our concerns when considering the kind of tool we would adopt for the assessment was that our purpose was simply to gain knowledge of our students as developing writers within a particular course. Since writing always takes place as an activity within a specific context, we were cautious in attempting to devise an instrument aimed at telling us anything beyond the particular context of the psychology course. In considering a form related to holistic scoring, we knew that, traditionally, scores assigned to student writing through this method have been the basis for judging overall writing ability, a practice that has been seriously challenged in recent scholarship (Elbow, 1996; Huot, 1996). Decisions about future placement of students or exit criteria from a course or program have often been based on such scores. In our case, we had no such purpose in mind. We considered the study to be one that made use of writing‑across‑the‑curriculum principles in order to help students learn psychology and develop their college‑level writing skills within a single course. We hoped that results would help other teachers think through their approaches to writing assignments in content courses.

 

Another concern for the instructors was that critical thinking skills and accurate knowledge of the course content would have to be evident in students' responses in order for a response to be judged favorably for the assignment. Although this was not a testing situation, especially considering that the same material was covered in course exams, students would nevertheless have to use course-based knowledge to write an effectively argued response. Performative scoring would take into account this particular concern, which we judged to be important for writers at this stage of their college careers.

 

We were also convinced that, for our needs, a quantitative measure would best represent students' display of certain definite textual features. Whereas a single-score method of holistic scoring would group a number of traits under one score, a modified instrument would be able to assign scores to a number of different traits. We decided on a modified scoring system (Faigley et al., 1985) that would allow us to measure a number of traits we deemed important for student writers performing this kind of task at this level in our program. As suggested above, we also acknowledged that a more textured view of these writers working in individual contexts might be gained through other methods not available to us.

 

Since one of the questions asked by the study was about improvement, the scoring method had to be modified further to account for students' performance over the entire term. Instead of a more traditional, decontextualized pre- and post-test method, we gathered and measured student writing from the actual site of instruction. This was possible only by collecting and rating a number of pieces of writing, written in response to similar analytical tasks, produced for the class. A "mini-portfolio" of student work was thus compiled, without the reflection that usually accompanies writing portfolios, and treated as a single item to be read by raters. Improvement on the studied writing traits became another item in the scoring system. Scores were assigned after reading the entire series of papers for each student.

 

A relatively recent insight in writing assessment studies has been to emphasize the need for assessors to be relative "insiders" who understand the context in which the writing is completed (Huot, 1996). Since we operate under a particular mission devoted to developmental education, this was especially important for us. Readers of our student essays would have to be able to judge the writing with an experienced eye for this level of developmental writing instruction. They would also need to understand the assessment as a task to be completed within the goals and expectations of the course and college. Two experienced graduate instructors from the General College writing program were hired and trained to carry out a performative scoring of the student work. Having worked in the college for at least two years, each scorer was thoroughly familiar with the curriculum and with the course for which the writing was completed.

 

Our decisions were also shaped by the material and political realities within which we worked. In addition to the cost of hiring scorers, we also had to take into account copying costs and personnel. A performative tool was also something that was recognizable and viable to administrators, an important consideration in getting the study funded and completed.

 

Our Scoring Tool

We based our scoring instrument on selected standards governing instruction in the General College writing program, as they could reasonably be applied to an introductory content-area course in our college. Since only textual features of writing were considered, and only those that pertained to analytic writing, the instrument necessarily focused more narrowly than typical instruction in a composition course would. The scoring was geared toward whether students had produced responses that met the standards expected at the end of our program's basic writing course, a credit-bearing introduction to college-level writing that, together with a second writing course, meets the University of Minnesota's first-year writing requirement. We felt this was a reasonable standard to impose because writing courses in the college, like all other courses, work toward the same goal of preparing students for transfer to other University of Minnesota colleges.

 

A limited number of features of writing figured into the scoring sheet developed with the raters:

o         Completion. Basic writers learn to complete assignments and meet deadlines.

o         Length. An early goal for basic writers in college is to get them to write essays of a length that meets the demands of particular assignments and tasks.

o         Addressing the question adequately. Students learn to tailor their writing in early college writing assignments to specific prompts. Here, the measure also reflects an appropriate engagement with the course material and their own experience, as asked for in the questions.

o         Making points in an appropriate organizational scheme. Students learn to group thoughts under overriding points and to organize those thoughts into a logical progression for the assignment.

o         Providing supporting material for points. Developmental writers learn to provide adequate support for their points, here in terms of explanation of important concepts and analysis of life experiences that help to illustrate main points.

o         Surface issues. Writers learn to present their writing in a form that is relatively free of the errors that college‑level writing has traditionally held as important.

 

Additionally, we included a more global concern:

o         Improvement over time. Basic writers improve their writing over time, given ample opportunity and support.

 

We used a Likert scale to judge the level of effectiveness within categories that required judgment from the raters (a sketch of one possible representation of the instrument follows the list below).

o         Exceptional scores indicated achievement that not only mastered the item but did so throughout all essays, to a degree that communicated conscious manipulation of the item to achieve a particular effect.

o         Good scores indicated achievement of the item throughout most of the writing.

o         Fair scores indicated uneven achievement of the item, either from essay to essay or within essays.

o         Poor scores indicated little or no evidence of achievement of the item within or across essays.
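For readers who find it useful to see the instrument laid out schematically, the sketch below shows one way the scoring sheet could be represented as a simple data structure. It is only an illustration: the item names paraphrase the rubric above, and the numeric mapping of the Likert labels and the averaging of two raters' scores are assumptions added for the example rather than features of our actual procedure.

```python
# A minimal sketch (not part of the study's procedure) of how the scoring
# sheet might be represented. The numeric mapping of the Likert labels
# (poor=1 ... exceptional=4) is an assumption for illustration.

from dataclasses import dataclass
from statistics import mean

LIKERT = {"poor": 1, "fair": 2, "good": 3, "exceptional": 4}

RUBRIC_ITEMS = [
    "completion",
    "length",
    "addresses_question",
    "points_and_organization",
    "support",
    "surface_issues",
    "improvement_over_time",  # the global judgment across the mini-portfolio
]


@dataclass
class PortfolioScore:
    """One rater's Likert labels for one student's mini-portfolio of essays."""
    rater: str
    ratings: dict  # rubric item name -> Likert label

    def numeric(self) -> dict:
        return {item: LIKERT[self.ratings[item]] for item in RUBRIC_ITEMS}


def item_means(scores: list) -> dict:
    """Average the raters' numeric scores item by item."""
    return {item: mean(s.numeric()[item] for s in scores) for item in RUBRIC_ITEMS}


# Hypothetical usage: one student's portfolio read by two raters.
rater_a = PortfolioScore("A", {item: "good" for item in RUBRIC_ITEMS})
rater_b = PortfolioScore("B", {**{item: "good" for item in RUBRIC_ITEMS},
                               "surface_issues": "fair"})
print(item_means([rater_a, rater_b]))
```

Laying the instrument out this way also makes visible a design choice noted above: "improvement over time" is a single global judgment made after reading the whole series of papers, not a score assigned essay by essay.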

 

Raters were also trained and tested for reliability by the composition faculty member before each rated all essays for the study. Standards were developed in sample readings of actual papers from the course. Ongoing conversation and negotiation among the raters and us during the rating period clarified the few questions that the rating process raised.
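The reliability check itself can be imagined as a small calculation. Because we do not specify here which statistic was used, the sketch below is a hypothetical illustration only, assuming unweighted Cohen's kappa computed over two raters' Likert labels for a shared set of training portfolios.

```python
# Hypothetical illustration of checking rater agreement during training.
# The choice of unweighted Cohen's kappa is an assumption for this sketch.

from collections import Counter


def cohens_kappa(rater1, rater2):
    """Unweighted Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)

    # Observed proportion of exact agreement.
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n

    # Expected agreement by chance, from each rater's marginal frequencies.
    freq1, freq2 = Counter(rater1), Counter(rater2)
    expected = sum((freq1[c] / n) * (freq2[c] / n) for c in set(freq1) | set(freq2))

    return (observed - expected) / (1 - expected)


# Hypothetical Likert labels from a shared training set of portfolios.
r1 = ["good", "fair", "good", "exceptional", "poor", "good"]
r2 = ["good", "fair", "fair", "exceptional", "poor", "good"]
print(round(cohens_kappa(r1, r2), 2))
```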

 

Assessing our Assessment Tool

In order to ascertain the effectiveness of our tool, we asked ourselves what this assessment could tell us about student writing that a more general predictive measure, namely the American College Testing (ACT) college entrance exam, could not. We checked our students' ACT scores against the measures in our study. Our calculations showed a high correlation between the ACT and just one of the distinct factors we were attempting to measure. Not surprisingly, the ACT correlated highly (+0.366) with our question on editing concerns. Weaker but significant (p < 0.05) correlations were found for addressing the assigned question (+0.173), making effective points within an organizational scheme (+0.175), and providing support (+0.162). The assessment found no significant correlation between the ACT and either appropriate essay length (+0.108) or demonstrated improvement (+0.064). Apart from editing concerns, then, the tool proved useful in giving us information that predictive measures could not.
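For readers interested in the mechanics, the sketch below shows how correlations of this kind might be computed. The use of Pearson's r, the sample size, and all data shown are placeholders assumed for illustration; the study itself is represented only by the coefficients reported above.

```python
# A rough sketch of the kind of computation behind the ACT comparison.
# The data below are simulated placeholders, not the study's data.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_students = 120  # hypothetical sample size

act = rng.integers(15, 31, n_students).astype(float)  # hypothetical ACT scores

# Hypothetical rubric measures on a 1-4 Likert scale (averaged across raters).
measures = {
    "editing": np.clip(act / 8 + rng.normal(0, 1, n_students), 1, 4),
    "addresses_question": rng.uniform(1, 4, n_students),
    "organization": rng.uniform(1, 4, n_students),
    "support": rng.uniform(1, 4, n_students),
    "length": rng.uniform(1, 4, n_students),
    "improvement": rng.uniform(1, 4, n_students),
}

for name, scores in measures.items():
    r, p = pearsonr(act, scores)
    flag = "significant" if p < 0.05 else "n.s."
    print(f"{name:20s} r = {r:+.3f}   p = {p:.3f}   ({flag})")
```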

 

How Collaboration Led to Discussion of Writing in the Curriculum

As we developed our assessment tool, we realized that it provided a significant occasion for talk about writing and writing instruction. Our discussions of the assessment process necessarily included exchange of views on how we saw the functions of writing in our curriculum. We agreed on certain principles around which the tool was formed, but we also learned to respect the different uses for writing in our separate fields.

 

Because of our common goal of helping students succeed in the academy, we found that certain assumptions about student writing, including the need for students to produce texts that will be valued in other courses, were shared by all of us. Textual features such as organization, focus, and support for points are recognizable in most analytical college writing that is judged to be effective.

 

Nor did we disagree that students would achieve these goals through different strategies. For example, we imagined that some students would organize their essays by using personal experience as the more heavily controlling factor in the writing, while others would lean more heavily on concepts from the course and use their experience primarily as supporting material. Demonstration of critical thought through good writing is possible in either format. Allowing for both approaches laid the ground for students to write to their strengths.

 

We also learned that the act of assessment foregrounds assumptions about epistemologies and writing. Where the very act of assessment traditionally has been, as Huot (1996) has pointed out, an activity with positivist underpinnings, composition theorists have often adopted positions that question traditional views of writing. Current assessment approaches recognize the need for greater attention to the contextual factors of writing and are built on an assumption that the act of assessment is inherently a "sociopolitical process" (Selfe, 1997, p. 54). Collaboration involved an exchange of these views, even as we worked from assumptions governing our individual fields.

 

While we did not argue for one position over another, we recognized that our approach would be shaped by the particular circumstances and politics in which we found ourselves working. In the end, our tool has elements of both traditional and more current approaches to assessing writing. Although we used quantitative methods more associated with traditional assessment, we were careful to carry out those methods from within the context of our college mission. Focusing on the developmental education aspects of what we were doing gave us a way to negotiate our differences and respect our areas of expertise.

 

References

Bartholomae, D. (1985). Inventing the university. In M. Rose (Ed.), When a writer can't write: Research on writer's block and other writing process problems (pp. 134‑165). New York: Guilford.

Blevins‑Knabe, B. (1987). Writing to learn while learning to write. Teaching of Psychology, 14, 239‑241.

Elbow, P. (1996). Writing assessment in the 21st century: A utopian view. In L. Z. Bloom, D. A. Daiker, & E. White (Eds.), Composition in the 21st century: Crisis and change (pp. 83-100). Carbondale, IL: Southern Illinois University Press.

Faigley, L., Cherry, R. D., Jolliffe, D. A., & Skinner, A. M. (1985). Assessing writers' knowledge and processes of composing. Norwood, NJ: Ablex.

Herrington, A. (1985). Writing in academic settings: A study of the contexts for writing in two college chemical engineering courses. Research in the Teaching of English, 19, 331‑361.

Herrington, A. & Curtis, M. (2000). Persons in process: Four stories of writing and personal development in college. Urbana, IL: National Council of Teachers of English.

Hettlich, P. (1976). The journal: An autobiographical approach to learning. Teaching of Psychology, 3, 60‑63.

Higbee, J. L. & Dwinell, P. L. (Eds.) (1996). Defining developmental education: Theory, research, and pedagogy. Cold Stream, IL: National Association for Developmental Education. (ERIC Document Reproduction Service No: ED 394 415).

Huot, B. (1996). Toward a new theory of writing assessment. College Composition and Communication, 47, 549‑566.

Polyson, J. (1985). Students' peak experiences: A written exercise. Teaching of Psychology, 12, 211-213.

Selfe, C. L. (1997). Contextual evaluation in WAC programs: Theories, issues, and strategies for teachers. In K. B. Yancey & B. Huot (Eds.), Assessing writing across the curriculum: Diverse approaches and practices. Greenwich, CT: Ablex.

Sternglass, M. S. (1997). Time to know them: A longitudinal study of writing and learning at the college level. Mahwah, NJ: Lawrence Erlbaum.

Wambach, C. & Brothen, T. (in press). Content area reading tests are not a solution to reading test validity problems. Journal of Developmental Education, 25.

Wambach, C., Brothen, T., & Dikel, T. (2000). Toward a developmental theory for developmental educators. Journal of Developmental Education, 24, 2‑10, 29.

White, E. (1996). Writing assessment beyond the academy. In L. Z. Bloom, D. A. Daiker, & E. White (Eds.), Composition in the 21st century: Crisis and change (pp. 101-111). Carbondale, IL: Southern Illinois University Press.

Yancey, K. B. (1999). Looking back as we look forward: Historicizing writing assessment. College Composition and Communication, 50, 483‑503.

Yancey, K. B., & Huot, B. (Eds.). (1997). Assessing writing across the curriculum: Diverse approaches and practices. Greenwich, CT: Ablex.