California English Journal
When writers write, they respond to an inside locus, an outside locus, or some combination of the two. In this piece, the locus is us, mostly. Some people would say that because we selected the topic, we have an opportunity to display how well, or ill, we write. But the editor dictated the deadline. Some of our colleagues would say that an other-dictated deadline risks compromising our opportunity to display how well, or ill, we write. Our purpose here is to focus narrowly on who dictates topic and time. “On-demand” means someone else does.
Professionals’ criticism of “on-demand” writing for testing purposes has a long history. Both James Gray (1995) and Donald Graves (cited in Fearn and Farnan, 2003), among others, suggested that large-scale writing testing fails because the pressure to perform is too high and because prompts are other-selected. We cited both Gray and Graves in an earlier argument (Fearn and Farnan, 2003) that neither time nor prompt is a significant variable in students’ high-stakes writing performance. We suggested that students “…write lifeless, voiceless prose because that is how they write. Perhaps they would write flat prose absent energy if they identified their own topic and had an hour to write” (p. 14). Then we took up our own challenge to begin measuring the influence of demand on writing performance.
Gray referred to time pressure as a reason why large-scale writing testing fails. Reduction of pressure is the focus of this piece. We suggest that pressure might be at least partly an artifact of making young writers produce against the clock, on artificial time rather than the authenticity of their own time.
If we are right, then perhaps the more time students have to write, the better they will write. Stated another way, perhaps the latter portion of students’ papers might be better-written than the early portion. That is a place to start, though not without some important caveats.
There is also the question of what better means. We get that question routinely when we talk and write about the influence of intentional instruction (Fearn and Farnan, 2001). “What do you mean by ‘better’,” someone invariably asks. For our purposes here, “better” means increasingly fluent, mature, disciplined, focused, organized, and textured (Spandel and Stiggins, 1997; Fearn and Farnan, 2001).
Each of those attributes can be either counted or identified and explained. The first three are explicitly objective, and interrater reliability over the past two decades of scoring, perhaps 10,000 writing samples, is in the mid 90s. The last three attributes are less objective, though they can be described and agreed upon by trained raters (e.g., “There are two adjectives, one adverb, a metaphor, and an uncommon simile [‘…like coins in disarray along the grassy path’]”). Interrater reliability on the last three attributes, among trained raters, ranges between .94 and .97.
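For readers curious how agreement figures like these are computed, one common index is the correlation between two raters’ scores on the same set of papers. The sketch below is purely illustrative: the scores are invented, and the article does not specify which reliability statistic the authors used, so the Pearson correlation here is an assumption, not their method.

```python
# Illustrative only: Pearson correlation between two raters' holistic
# scores on the same papers. The scores below are hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

rater_a = [4, 3, 5, 2, 4, 3, 5, 4]   # hypothetical scores, rater A
rater_b = [4, 3, 5, 3, 4, 3, 5, 4]   # hypothetical scores, rater B

print(round(pearson(rater_a, rater_b), 2))  # close agreement yields r near 1
```

With raters who disagree on only one paper by one point, the correlation lands in the mid-.90s, the range the article reports.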
If we could not make an argument about what better means, we could not plan for instruction, for there would be nothing toward which to direct writing lessons. We define better as increasingly fluent, mature, disciplined, focused, organized, and textured. Every time we talk with teachers about the students they would like to receive at the start of a year, they agree those attributes would be highly valued.
If we are right about the influence of time on students’ writing performance, it is possible that the latter portion of students’ papers might be better-written than the early portion. We can score some writing samples and either eliminate or confirm a relationship between early and later portions of writing samples written under various time circumstances. We did just that. We scored student writings prepared under five very different circumstances.
We scored writing samples for the purpose of answering two questions: Are the latter portions of these pieces of writing different in quality from the early portions? Does students’ writing get better as they have more time to write?
The evidence from this analysis indicates that writing quality is not affected by allotted time. Across the board, whether third graders or graduate students, five minutes or up to a year, the writing at the beginning of the papers was about the same as the writing at the end. It turns out that students write about as well as they can, no matter the circumstances of time.
We should not be surprised. As shown in caveat #3 above, most students are like most other writers (self-reported by practicing writers in many interviews and autobiographical tracts). Their writing reflects an investment in their own perspectives, so going back over the paper produces, at best, surface adjustments. That is why good editors come at a considerable price and are the “fresh eyes” so many writers covet. It is also the reason why Writers’ Workshops (read and feedback groups; as distinct from writing workshops, which are management plans [Kauffman, 2002]) are such a valued addition in the larger instructional program (Fearn and Farnan, 1993).
We may not have sufficiently confronted the argument that timing compromises the quality of student writing. Rather, we hope we have opened the possibility that what students write on-demand (timed circumstances) isn’t wholly influenced by the time constraint itself. It may be something else. But it may be that what students write when they write is precisely how they write, no matter how much time the testing circumstances allot.
And we may have surfaced something else, too. If the latter portion of students’ papers is not substantively different from the earlier portion, and a major problem for teachers is the staggering reading load, why assign longer pieces? Why not capitalize on the fact that brevity is a value, that students who know their topic are better able to capture it in short space than those who do not, and that better writers need less space? If we assign at least five pages, and it is possible to write to the assignment in four, why not assign four and reduce the reading load by a fifth? It isn’t more literate or more demanding of topic knowledge to write longer; it is both more literate and demanding to write shorter. And if not five pages, but four, why not three? After all, our students are not different from who we were when our teachers assigned five pages and we wrote two good ones and padded the other three. Why not just assign the three?
Maybe that is what emerges from this study.
Bereiter, C. and Scardamalia, M. (1987). The psychology of written composition. Hillsdale, NJ: Erlbaum.
Fearn, L. and Farnan, N. (1993). Writers’ Workshops: Middle school writers and readers collaborating. Middle School Journal, 24, 61-65.
Fearn, L. and Farnan, N. (2001). Interactions: Teaching writing and the language Arts. Boston: Houghton Mifflin.
Fearn, L. and Farnan, N. (2003). Writing instruction: Theories and responsibilities. California English, 8, 13-15.
Flower, L. (1979). Writer-based prose: A cognitive basis for problems in writing. College English, 41, 19-37.
Gray, J. (1995). Why large-scale testing fails. California English, 1, 11.
Kauffman, D. (2002). Writing workshops: A descriptive study. National Reading Conference, San Antonio, Texas.
Spandel, V. and Stiggins, R.J. (1997). Creating writers: Linking writing assessment to instruction (2nd edition). New York: Longman.
On-demand writing assessments ask our students to turn in their first drafts, and, perhaps, we are too kind to our students regarding what we will accept as first drafts. Many times we are so pleased that they wrote anything at all that we give points or credit to work that is weaker than what they can actually produce. As we teach throughout the year, we need to help students write stronger first drafts—drafts that answer all parts of the question and reflect the students’ abilities to compose sound sentences using strong vocabulary.
Our students can practice on-demand assessments, but if we don’t work with those essays and help our students improve, then what are we accomplishing? If we give practice on-demand tasks, score and record them, then give another practice and score it, how will our students improve? When we give them practice on-demand assignments, we need to show them how to improve, ask them to rewrite, and hold them accountable for showing improvement.
As I work through the school year, I try to weave together lessons that will help students improve the content of their on-demand writing, the sentences they compose to express that content, and the vocabulary they choose to color their ideas.
Work at the Content Level: Analysis
Work at the content level is the most critical. In order to pass any on-demand assessment, students must answer all parts of the question. If students omit any portion of the writing task, they cannot make the cut-off score—and nearly all on-demand questions require two- to three-part answers. Students might be asked to: (1) read a poem or passage, then write about the author’s purpose and how that purpose was achieved or (2) read a passage, summarize the main points, agree or disagree with the author’s view and explain their own opinions or (3) select an event from history, describe the event, and explain its importance.
Analyze the Response: Color Coding
Once students have written to an on-demand question, teachers need to help students analyze their responses to see if they have answered all parts of the writing task. One classroom activity that helps students see whether or not they have written to all parts of the question is Carol Booth Olson’s color coding strategy. I adapt and use Olson’s strategy many ways, depending on the class and task at hand. Students need a draft of writing and colored pencils. I have red, green, blue, and yellow pencils in plastic containers, and we use them all the time.
To use color coding, assign different colors to the different parts of the question. For instance, the question might have asked students to read a passage, then write an essay in which they summarized the author’s main points and agreed or disagreed using evidence from the passage. Ask students to use red to color the part of their essays in which they summarized the author’s main points, use green to color the part of their essays in which they agreed or disagreed, and use blue to mark the quotes or evidence from the passage. Very quickly students, and the teacher, can see if they answered all parts of the question. If they did not, then the number one focus for rewriting is adding the task they omitted the first time.
Helping students analyze their own essays is not easy, so first they need practice using the rubric and some model essays. Use the published rubric and sample essays that come with test preparation materials, and make overhead transparencies. Go over the rubric first so students understand how their writing will be assessed. Then use sample essays to help students see how scorers will assess their writing. If published sample essays aren’t available, select a couple of student models, take off the names, and make overhead transparencies of those. I often use student essays from previous years. At the overhead, I use the same colors students are going to use to mark the sample essays and then their own essays. Sometimes I make copies of sample essays so students can practice color coding on their own or in small groups.
Analyzing the question and analyzing an essay to see if it answered all parts of the question is difficult and at the heart of understanding the on-demand task. Color coding the different parts of the answer helps students see if they accomplished the task, and if they did not, then they must go back and complete it. If they wrote a cursory response to part of the question, then they must go back and write a more thorough response.
My sophomores generally need to answer a part of the question they omitted; my AP students invariably need to add quotes and examples from the passage. Regardless of the levels of students in our classrooms, they need more work at the content level, and rewriting for content helps students learn what they need to accomplish in one draft in order to score well on their on-demand writing assessments.
Work at the Sentence Level
Studying scoring rubrics for on-demand writing tasks reveals what kind of stylistic measures the readers will be using to score the tests. To score a 3 or 4 on the CAHSEE, for instance, students must write a variety of sentence types. For the SAT, sentence variety is required for scores of 4, 5, or 6. Teachers, then, need to teach students to write a variety of sentences, and there are many good strategies to help students learn to do that.
Sentence combining is one strategy that helps students develop stylistic maturity. The works of Strong (1973), Weaver (1996), and Hillocks (1995) all indicate sentence combining instruction results in gains for all levels of students. There are many sentence combining texts available for purchase, but textbooks are not necessary. Teachers can use their own students’ writing to demonstrate sentence combining techniques at the board or on the overhead projector. Ask students to count the number of sentences in each of their paragraphs, then find a way to combine two sentences in each paragraph—keeping all the details, but reducing by one the number of sentences per paragraph. Practice with sentence combining helps students learn to write sentences of varying lengths and types so they can mature stylistically as writers.
Sentence expansion is a strategy that helps students combine a number of details into one well-written sentence. Start with a basic sentence, such as The student talks, and keep adding details. What kind of student? What kind of talk? When does the student talk? To whom does the student talk? As students give ideas, the sentence grows: Whenever the teacher turns to write on the board, the sneaky senior student talks quietly to his good-looking neighbor.
Sentence awareness: I ask students to think about their sentences as they write and, before they turn in their essays, I instruct them to underline one sentence—the best sentence they wrote, not on the basis of content, but on the basis of style or syntax. "Underline the sentence you are most proud of, the one you did the best job of constructing." After hearing me say this a number of times, students make a conscious effort to construct at least one compound or complex sentence or begin a sentence with an opening phrase or clause.
Another way of creating sentence awareness is careful reading of published texts. Pointing out specific sentences in the novels, stories, and essays we are reading teaches analysis and style. That’s the kind of work we ask of AP students all the time, and we need to make sure we ask all of our students to look at the kind of sentences published writers compose. With my sophomores, I usually read Steinbeck’s Of Mice and Men first and work with Harper Lee’s To Kill a Mockingbird the next quarter. I copy a couple pages from each novel and ask students to read them side-by-side specifically to compare the sentences and word choices.
As the year progresses and we continue to rewrite our on-demand essays, I tell students that by now they should have grasped the idea of answering all parts of the question. We are now going to add writing better sentences to our rewriting task. I ask them to pay particular attention to the first and last sentences of their essays and to select two other sentences in the body of their essays to rewrite, to turn into sentences they are proud they wrote.
Work at the Word Level
Signal Words or Transition Words
Teaching students to use transition words actually affects the sentences they compose. If students connect sentences with words such as because, as soon as, and as a result, then they create sentences that show relationships. When students do that, they are organizing their ideas, connecting actions, and signaling their readers—and their sentences will become stylistically mature. Teaching students to use transition words will improve their writing because it makes them analyze, makes them think about the relationship of ideas and/or actions. Lists of transition or signal words are available from many sources. Writers Inc calls the list "Connecting Your Details."
Taboo Words and Word Banks
Some words are better than others; it’s just so. I make a list of Taboo Words for my students, words they cannot use. I start with one or two: happy and sad. I explain that these are overused and vague, and I want my students to use more specific, accurate, and effective words. We then brainstorm a list of alternatives, such as ecstatic, pleased, content, morose, dejected, and melancholy. We copy the alternatives in our journals, and use this list as a word bank to refer to as we are writing.
The next word I place on the Taboo list is said, and we brainstorm words to use in its place —words such as questioned, retorted, whispered, stammered. As the year progresses, the list of Taboo Words grows to include good, bad, thing, very, and important. Students become aware of the taboo words and stop each other from using them even during group or class discussions.
As they are rewriting their on-demand essays, then, I ask them to become aware of their word choices and to try to avoid using the taboo words. If the taboo words make their way into those first drafts, then they need to be replaced on the rewrite.
Publish on-demand writing
When my students have finished an on-demand writing and I am reading and scoring them, I am on the lookout for models to use in class. I use Post-it notes to flag good writing. When I finish a class set, I look over the flagged essays and make a list of the essays I want to use for specific purposes, such as one for transition words, one for strong sentences, one for superior word choices. I make copies of these essays to post in the room, and I have the students read their essays to the class. As they are reading, I am at the board or at the overhead projector writing down the transitions or sentences or word choices. I also keep the copies from the bulletin board to use as models in the future. We publish final drafts of our students’ stories, essays, and poems, and we need to publish strong examples of their on-demand writing as well.
Practice On-Demand Writing
Obviously if students are asked to practice on-demand tasks, they will have opportunities to improve, so give essay tests and ask the teachers of all subjects at your school to ask at least one essay question each time they give exams. Every time I give an in-class exam, I include an essay question or two, and I hear teachers of other subjects talking about "open-ended" questions. Our students need practice with all kinds of writing, so we need teachers in other subject areas to give students practice also.
Our school gives a School-wide Write, and this on-demand writing helps students and teachers. Each March, for ten years now, our students have written on-demand essays in their third period classes—across the school, in all classes. Then the administrators, counselors, and teachers score the essays in two afternoon sessions. Session one is for training the readers and scoring the first round. Session two is for the second reading and tallying the scores.
Students get their essays and scores in third period, and hand the essays to each of their teachers over the next few days to get a score/grade in every class. The type of grade—classwork, homework, extra credit—is up to each teacher; we just agree across the school to count the school-wide write as a grade in every class. The school-wide write gives teachers confidence to write sound essay questions and use a rubric to score them, increasing the chances that more teachers will incorporate on-demand writing as part of their usual classroom assessments. This means more students have more opportunities to practice on-demand writing, and other teachers besides English/Language Arts teachers are helping students improve their writing skills.
I give an on-demand writing task to my students very early in the school year, within the first week or two. I read these and write notes regarding the strengths and weaknesses I observe in order to plan lessons, then I clip the essays together and keep them in a desk drawer. Just before THE test, the CAHSEE or AP or state assessment, I give a last practice test. As I hand out the topic, I remind students of all the strategies they have practiced and all the writing techniques they have learned, and then they write. After I have read and scored these last practice tests, I return those first on-demand essays along with the last ones, and I ask students to read both and write in their journals about what they see.
Almost every student notes improvement in his/her on-demand writing skills. One student wrote, "I write with a lot more finesse. My sentences are stronger and I use better vocabulary." Another exclaimed, "I’ve improved tremendously—I now answer the question!" I find the next one rather dubious. "I have improved a lot. I used to write good essays only when I liked the topic, but now I can write good essays on any topic." What, exactly, did I teach this child?
Try a Little Magic
Even after teachers and students have done everything humanly possible to prepare, there’s still test anxiety, so why not try a little magic? I go to the school supply store and purchase magic pencils, pencils in our school colors or with an upbeat saying. Just before THE test, I give magic pencils to all my students and tell them to use them to write the essay question(s). A few students don’t use them, but most do. If nothing else, it gives them a sense of unity to look across the testing room and see other magic pencils writing away. And maybe that sense of unity will give them a boost in confidence and they will score higher. It certainly can’t hurt to try a little magic.
Hillocks, George, Jr.(1995). Teaching Writing as Reflective Practice. New York: Teachers College Press.
Kemper, D., Sebranek, P., and Meyer, V. (2001) Writers Inc: A Student Handbook for Writing and Learning. Wilmington, MA: Write Source.
Language Network: Grammar, Writing, Communication. (2001) Evanston, IL: McDougal Littell.
Olson, Carol Booth (2003). The Reading/Writing Connection: Strategies for Teaching and Learning in the Secondary Classroom. Boston: Allyn & Bacon.
Strong, William (1973). Sentence Combining: a composing book. New York: Random House.
Weaver, Constance (1996). Teaching Grammar in Context. Portsmouth, NH: Heinemann.
Until thoughtful educators are in charge of schools and allowed to make and enforce sensible decisions about assessment, teachers will continue to live in a world where externally mandated, large-scale, high-stakes tests are an inescapable part of the educational landscape. The current testing juggernaut looms with its inequities, its inconsistencies, and its inherent unfairness.
In our schools and wider communities we can continue to point out these issues, work for modification and change, and, as a last resort, as in some regions in America, even encourage the boycott of the tests themselves. But at this writing it appears that high-stakes tests and large-scale assessments are not going away anytime soon.
The testing system is undeniably flawed, but until real change is initiated, our students must face those tests and must, if they wish to advance their academic careers, do well on those tests. With that reality in mind, it is irresponsible to leave our students to fend for themselves. Teachers who remain in the classroom must actively help students meet the demands of these high-stakes tests and give them strategies for success. In addition, teachers must be aware of and sensitive to the psychological consequences of a test-drenched atmosphere. For many students there is an emotional component to tests and testing, and caring teachers can do much, in instruction and in classroom affect, to help students not only meet the demand but also remain unthreatened by it. When students feel they have mastery of skills, they are confident, and the perceived threat of tests, testing, and their consequences is far less overwhelming. While no teacher can make stress and tension disappear in the face of an on-demand writing test, he or she can do much to boost student confidence, depersonalize the test, and even neutralize testing’s often negative atmosphere.
That said, how can teachers help our students meet the challenge of writing on demand in the context of instruction that meets student needs and satisfies standards for best practice? Drilling students in disconnected practice is not the best test preparation. Instead, teachers should emphasize strategies that are both effective in the testing situation and intellectually defensible in light of what we know about how real writers write. In this article I would like to show how teachers can help students work inductively and think backward from rubrics and polished writing to the way such writing is produced.
Thinking Backward: Three Steps
Teachers can begin to help students prepare for writing on demand by inviting them to think backward. Specifically, before they practice any kind of timed, prompt-driven writing that also uses a rubric for evaluation (and a rubric toward which students would write), they should follow three preparatory steps. Over a series of days, both with literature and with student-written samples of writing, students should (1) read and discuss in general terms a range of models; (2) assess in specific terms the qualities of those models; and (3) speculate on the impetus for the models and what they are trying to accomplish.
It’s worth remembering that students may need instruction and discussion regarding terminology. While teachers might assume that students understand terms such as structure, sentence rhythms, organization, and transitional words, that may not be the case at all. If students do not have a working vocabulary in these terms, it is important that discussion and examples be provided. Otherwise, students will be relatively clueless regarding rubrics or evaluative terms.
When students are comfortable with this kind of consideration and can address texts and answer such questions with a sense of ease and confidence—working backward from writing to rubric to prompt—they are ready to move forward. Using an established rubric and a prompt, students can create a sample response within a time limit. With this kind of background they may well have a better appreciation of how a prompt and a rubric can govern and help create an effective piece of writing.
Thinking backward is part of literary study, and it is what teachers can most effectively do to help students consider and meet the requirements of writing on demand. Rather than have students immediately practice writing to a prompt in a timed situation and then scoring that writing using a rubric, thinking backward takes students from acknowledged models back to their rubric and then back to the prompt. Once students are confident that they can speculate about why a good piece of writing is good, how it fulfills the requirements of a rubric, and how it addresses a prompt successfully, they are then better able to move forward, looking at prompts with an eye to a rubric and to creating their own successful writing sample.
Thinking Backward with “My Name”
The model for this kind of thinking backward is not artificial or even, in the broad scope of the English language arts class, unfamiliar. As noted above, it has precedent in literature teaching, especially when teachers present a text to students and ask them to consider its merits, a consideration that often occurs without the support of author biography, history, or even, initially, specific consideration of the context of the writing. In English classes, teachers and students look at the piece and discuss it generally, decide how and where, specifically, it is effective and well written, and then imagine the conditions—political, personal, historical, sociological—that inspired the writer to create this poem, short story, play, novel, or essay.
Imagine: a ninth grade class reads “My Name,” one of the brief sections from Sandra Cisneros’ widely read House on Mango Street. While students know that when a work is published and presented in class it is likely to be considered worthwhile, they are often encouraged to discuss their reactions—both positive and negative—and, essentially, to assess general quality and to discuss why the selection is a strong piece of writing. In the case of “My Name,” students would most likely conclude that it is short but effective, not only establishing the main character as a person but also giving some family background, cultural details, and a hint of what is to come in Mango Street. The humor of the piece might be a standout, as well as the professed individuality of the writer.
In this ninth grade class, beyond a general discussion, it would be routine for a teacher to ask students to explore specifically why they think the selection is an effective one—in answer, students might look at the use of vivid characters, the specific details, the figurative language. Essentially, then, students would consider what elements of craft this piece uses and how it fulfills these expectations of excellence.
Certainly in “My Name” many qualities of language and general appeal are present: Esperanza’s great grandmother is “a wild horse of a woman” and spent years in depression where she “looked out the window her whole life, the way so many women sit their sadness on an elbow.” The pronunciation of Esperanza’s name is made both poetic and a comment upon cultural differences: “at school they say my name funny as if the syllables were made out of tin and hurt the roof of your mouth.” The end of the piece hints at how Esperanza would like to change: “I would like to baptize myself under a new name, a name more like the real me, the one nobody sees.”
Using a Rubric with “My Name”
What rubric could be used to assess this piece? For instance, does “My Name” tell a fully developed narrative (no); does it use poetic language (absolutely); does it provide detail (yes); does it give us a full character sketch of Esperanza (yes and no)? If we had to create a rubric that would fairly evaluate “My Name,” what would it look like? Students might consider a sample rubric using the heads of narrative, word choice, use of detail, and general impression. Pertinent questions related to each heading could prompt students to examine different parts of “My Name.”
Creating Prompts for “My Name”
Finally, students might be asked to consider why and for whom the speaker wrote this piece—what, essentially, was the motivation or intent (or prompt)? If we had to write a prompt that might have generated something such as “My Name,” what would we include in it? Let’s look at some possibilities.
Is this a memory of the family? Certainly a father, great-grandmother, great-grandfather, and sister are all mentioned. The connections are strong as is the narrator’s relation to these various people in her family group. If this is true:
Or: is this an effort to set the facts right? Surely Esperanza gives us her interpretation of her great-grandmother and great-grandfather’s life; she critiques how her Anglo schoolmates pronounce her name as well as critiques her sister (at least through her less attractive name). If so:
Or is “My Name” more of a literal discussion of the name Esperanza, where it came from, how it is used, and why it is, at least to the speaker, less than ideal? If we approach the piece from that angle, we might conclude:
The point is: if we had to write a prompt that might have generated something such as “My Name,” what would we include in it?
All of these considerations are similar to what teachers can ask students to do with writing models, working backward to consider general writing quality, a possible rubric, and, finally, a prompt. By considering what factors a writer might have had in mind, students may see more clearly that writing can be truly purposeful, directive, and targeted. Further, the connection of prompts to rubrics can also be explored, although, in most situations, the rubric is broad enough to accommodate a fairly wide range of prompts. Teachers could assign a journal entry and invite students to respond:
Thinking Backward and Considering Student-Written Models
For many students, the frustration and fear of responding to test-situated prompts comes from the sheer uncertainty of the enterprise: students are often unclear about what is required of them and what they should be concentrating on in such an essay. It is, accordingly, vital that students be exposed to a number of sample essays that try to meet the demands of a prompt and fulfill the expectations of an evaluative rubric. Literature is one thing—students expect that what is in the anthology or published in a book is what teachers consider the real thing, worthy of reading and critique. They may not always like what they read, but the unspoken assumption—assented to by most students, even the disaffected—is that the literature presented is something that someone thinks is good.
Not so with writing in a test situation. What constitutes an effective or acceptable response for a test may, for some, be far murkier. Accordingly, students must have chances to see, analyze, and discuss sample—if not always exemplary—pieces of peer-level work. They need, in class, to spend some time examining not just what could be improved but what makes a good response or essay good: how it effectively addresses a prompt and conforms to the criteria of an evaluative rubric. While many students are certainly adept at pointing out (for some, even hunting out) writing weaknesses—identifying fragments, for instance, or citing a weak conclusion or insufficient detail—they may not have much practice discussing what specifically contributes to the success of a strong piece of writing.
If students are to think backward and consider what makes a strong essay, they might start by looking at pieces of writing that are good, if not exemplary. Taking a number of essays that are either descriptive or persuasive (the dominant rhetorical modes required in most test situations), students can read these in class, respond individually by writing in a journal, and then discuss openly—in large group, small group, or both—what makes each piece effective. While it might be tempting to coach students into categories of evaluative classification—precise language, clear transitions, and unity of development—it is important that students have an initial opportunity to describe in their own words, without pre-set structures, what makes the piece effective.
After this general discussion, students can go back and discuss the writing specifically. Only then, using the characteristics that emerge from the discussion, should they be invited to craft a rubric for the essay/response itself, one that reflects its characteristics. The question is: if we were to devise a rubric that would capture the distinctive marks of the essay, what would it include?
Finally, students can go backward another step and create a possible prompt for which these essays might have been written. Our question to students is: what kind of question or prompt inspired or guided this essay?
Considering A Student Essay
Let’s take a look at an essay from an actual tenth grader who responded in class to an as-yet-undisclosed prompt. The setting was a practice session for the state writing exam.
Throughout my school career I feel the most valuable course I have taken is keyboarding/computer applications. Not only did I learn to type faster, but I was taught the correct way to type. I also gained general knowledge about the computer and some of its important programs. This course has greatly helped me in my high school career, but I guarantee it will help me in the future.
So far, I have been able to type my essays and reports quickly, which helps with time. Also, I’ve been able to produce amazing powerpoint presentations for school projects. This all benefits from the keyboarding/computer applications course I took. However, I also feel this course will benefit me in the future. Such as, I will know how to use databases and spreadsheets to help me organize important information, like tax payments or bills. Also, as time progresses computers are used more and more everyday. So, when applying for jobs or college, once they see I’ve taken a course in keyboarding/computer applications, that will be a positive factor, and might help me get the job or get into that college. I’ve also heard that college professors like for papers to be typed. This will also decrease my time writing papers in college.
Keyboarding/computer applications has definitely got to be the most valuable course I’ve taken. Not only has this course been a help to me now, but will also provide much help in the future. I’m very happy that I chose to take this course. After taking the course my knowledge of the computer grew. This is why I feel it is the most valuable.
As a first step, students would be encouraged to, for the moment, ignore length and usage issues and simply read and discuss the essay (note: usage and spelling are reproduced as in the original). Students can consider the following:
After a general discussion, students will certainly note that this writer presents his or her ideas directly and makes them clear throughout the essay. Students then might get specific: while there is a mechanical kind of restatement at the end (This is why I feel it is the most valuable), this is a solid job and, essentially, gets the point across. It is well developed and clear, giving more than one or two reasons for the utility of a keyboarding/computer class and features clear transitions (also, such as, so), some interesting sentence construction (not only has this course; not only did I learn), a definite introduction, and a conclusion.
Using a Rubric with the Student Essay
After students have generally and then specifically discussed the essay, they might be invited to create a rubric that could be used to evaluate it. Certainly, from the class discussion, it would emerge that the paper is doing a number of things:
With regard to weaknesses in the paper, it might be helpful to include in the rubric headings such as:
These headings could be organized and could then be used to score the paper. A sample rubric such as the following might serve as a guide:
After establishing these headings, students could also discuss weighting these five—they may wish to emphasize the argument and detail more than flavor in writing; usage, however, for some students, is perennially important. Here’s a possible rubric:
Sample Rubric with Weighting
Students could use this rubric and score the paper; the weighting could be changed, obviously, but the rubric will show students how, in varying situations, different components can “count” more than others.
Creating Prompts for the Student Essay
Finally, students could be asked to create a prompt which might encompass the piece. From the essay itself, it would probably emerge that the prompt is to make a case for an important course in school—what it was and why it was valuable. Students might create something such as the following:
While #3 is the actual prompt used with this tenth grader, creating variations based on slightly different perceptions of the essay widens students’ writing horizons and shows them how an essay can relate and respond to a prompt.
Writing on Demand: Practice and Evaluation
Once students have discussed a relatively successful essay and created a rubric and a prompt, they might benefit from quickly looking at other examples that are more of a mix and that provide a starker comparison and contrast. Students can then compare and contrast essays through the rubric and gain a better sense of what makes an effective response to a prompt. At this point students will be ready to move forward, not backward, going now from prompt to writing to scoring: practicing writing to a specified prompt and then evaluating it with a set rubric. Given a variety of prompts and opportunities to practice (interspersed with other class work), students will be better prepared for writing on demand. Further, they will become better writers, linking solid test preparation and good teaching.
After such preparation and practice, students can, in small groups or in pairs, score each other’s papers and also benefit from a teacher’s response and his or her discussion and scoring. Finally, if student-constructed rubrics do not contain the following heading areas, it might be good to present these to a class, as they represent, in the main, the qualities that are routinely used in rubrics:
Sample Standard Rubric
Writing on demand is an artificial construct. It is, however, neither an insurmountable task nor a mystery. Guided by teachers, students regularly discuss literature using a structure that invites them to work and think backward; that same structure can be used to coach and prepare students for writing on demand. With experience and practice, students can become comfortable addressing prompts effectively and with an eye to a set evaluative rubric.
Certainly there is a connection between test preparation and good writing instruction, and with our help, students will achieve both. The strategies they use to write efficiently and effectively and meet the needs of a predetermined topic can be used to advantage in many other academic and writing situations. While success is never a foregone conclusion, by working backward and then practicing writing forward, students can acquire confidence and use specific strategies to meet the challenge of writing on demand.
Have you seen “the cartoon”? Published in the Washington Post last May, it depicts a student hunched over a piece of lined paper, hard at work as he composes his response to his SAT writing test. His essay begins: “Reports say that the highest grades on this test go to the longest essay regardless of content. Is this a good test of writing ability? I have a lot to say about that. Oh, and it turns out that factual errors do not count against the essay. With that in mind, I’d like to say this is a good test of writing skills. How good? Really, really, really, really, really, really, really, really, really. . . .” The “reallys” continue, line after line, until the page is filled (Toles).
The cartoon, humorous as it is, raises some big questions about the role of standardized tests of writing in our nation today, and the new SAT—with its mandatory writing section—has become the focus of much of the controversy. Les Perelman, an MIT professor, made headlines around the country this spring when he reported his study of the test’s scoring: he found a near-perfect correlation between the number of words written and particular scores—with no attention paid to errors of fact within the essays. “If you just graded them based on length without ever reading them, you'd be right over 90 percent of the time,” he explained (Winerip).
The National Council of Teachers of English has raised even more questions about the test, summarized in a report entitled “The Impact of the SAT and ACT Timed Writing Tests.” The release of this report last May led to a new round of media coverage in newspapers and on the radio as representatives from the College Board, English teachers, college and university spokespersons, and administrators around the country began to weigh in on the test and whether or not they believe it can accurately measure a student’s ability to write. As members of the NCTE Task Force that spent months studying the new test, we have to say we are delighted by the amount of discussion that has surrounded the launching of this new test. We are encouraged that people around the country are talking about writing: about the purposes for writing in our worlds, about what effective writing might look like, about the processes that lead to effective writing. We are heartened that newspapers and radio talk shows are discussing writing in ways that recognize the complications inherent in putting pen to paper and producing substantive prose. We are pleased that universities are weighing in, talking about what purposeful writing looks like at the college level, raising the possibility of increased understanding between K-12 schools and colleges. And if the College Board’s decision to include a mandatory, 25-minute, standardized writing prompt on the newly revised SAT did nothing more than that—raise important, complex discussions about writing—we’d feel pretty good about the direction writing instruction is headed in our country.
Unfortunately, this test does much more than merely raise the level of conversation. . . and, as English teachers ourselves, we fear that the negative reverberations from this test will be far-reaching, even damaging to students and teachers.
We’re concerned that a large scale test of writing, created by a testing company, will begin to define writing in our schools and determine what counts as good writing in ways that will narrow the broad range of writing experiences that students should have. We worry that the meticulous and thoughtful research and practice of teachers and researchers over the past several decades—work that literally has created a discipline known as composition studies and that has slowly changed the way writing is taught in our nation’s schools—will take a back seat to a high-stakes test. And we worry that veteran and new teachers alike, knowledgeable about the intricacies of writing instruction, will increasingly have their classroom practices threatened by an over-emphasis on test prep curriculum.
As we studied the test itself, analyzed the sample essays provided at the time by the College Board, reviewed the available research about this kind of testing, and drew on our own knowledge about writing instruction, we came to a unanimous conclusion: We think the test is problematic for a number of reasons (from issues of reliability to issues of equity), reasons we’ll discuss in some depth below.
Beyond our own concerns, we also believe this: it’s vital that students, parents, teachers, English educators, college admissions offices, and even legislators understand this test better and realize its potential implications. Only through that knowledge can we, individually and as a community, take a stand about it—a stand that may be manifested in how teachers teach in the classroom, how parents respond to the College Board, how students view themselves as writers and learners, how administrators deal with curricular issues, how colleges decide on the test’s usefulness, and even how legislators connect it to current high-stakes testing.
Why do so many high school students even take the SAT? And why do colleges require this test (or the ACT)? We’ve been told since we were high school kids ourselves that the tests help predict how well students will do in college; thus, millions of students across the country take these tests (literally: about 1.5 million take the SAT and 1.2 million take the ACT), and many colleges rely on them—in some cases quite heavily—in their admissions criteria. When our Task Force began to look at available studies to see how accurately a writing test of this sort could predict either students’ success in college generally or even success in their first-year composition courses, we found little to no correlation between students’ test scores and their writing performance. In fact, in terms of assessing how well students might do as college writers, doing well on a timed, standardized writing test may even be a negative indicator of their ability to do college writing; one study noted how the expectations of college teachers depart quite radically from the kinds of writing encouraged in the SAT essay: the complex, thoughtful, and nuanced writing most college teachers encourage, as opposed to the kind of formulaic, superficial writing that too often receives high scores on standardized writing tests (Yancey et al.).
So if the writing test really doesn’t help in predicting success for student writing in college, why else might the College Board want to make this test a mandatory component of the SAT? Representatives from that organization have articulately expressed the same concern that many of us hold: Too many schools in the U.S. simply don’t do enough with writing. Because teaching writing effectively is challenging and time-consuming, writing often gets short shrift in the classroom. Thus, the argument goes, by requiring a writing test that does more than merely test grammatical constructions (although the SAT test does that, too), schools that are not doing enough writing instruction might begin to change their instructional patterns. The possible result: more students spending time writing and receiving effective instruction in writing.
But will a 25-minute writing test—one that calls for one-draft writing on a topic students might never have thought about before and about which they have little knowledge or interest—lead to the effective, research-based, best-practice instruction that will truly help student writers develop effective writing skills and a sufficient understanding of the nature of writing? Will teachers—already faced with severe time constraints in their classrooms—be able to spend the kind of time necessary to help students understand issues of audience, purpose, and genre or the kinds of processes needed to produce insightful writing? Or will they take that precious time and use it to prepare students for a very different kind of writing—the narrow, one-draft type that must be hastily conceived and written?
To be fair, the College Board agrees that test-prep writing should not be all that students encounter in school. According to Wayne Camara, vice president for research at the College Board, “The SAT essay should not be the primary way kids learn to write . . . It's one basic writing skill. If that's all the writing your high school English department is teaching, you have a problem” (Winerip).
We agree. But the realities of time and the pressures of high-stakes testing can create a real conflict for teachers trying to do the best job they can to prepare their students. Realistically, many teachers will see no choice but to cut back on or eliminate instruction in important elements of writing that simply are not tested on the SAT. And the disparity between the assumptions about writing that underlie this test and those that inform the profession more generally (through such organizations as NCTE and the National Writing Project—organizations devoted to research-based, best-practice pedagogy) is indeed great. When NCTE, for example, names its assumptions in its statement “NCTE Beliefs about the Teaching of Writing,” it emphasizes such beliefs as the connection between writing and thinking, noting that writing is less a “prerecorded tape” and more a process of discovery and exploration. Thus, in order to produce effective writing, writers need to move through multiple drafts, drafts in which issues of conventions and editing may arise toward the end. Good writing, they argue, is rewriting. This set of beliefs sits in sharp contrast to the assumptions underlying the SAT (beliefs that shine through when you read the sample essays provided): that writers can be given a topic they’ve never seen before and in 25 minutes write coherently, cohesively, and insightfully about it; that effective writing conforms to only a few narrow criteria related to the form of the essay and the correctness of the writing; and that writers should focus on producing something that looks good—long enough, with the right kinds of transitions, introductions, and conclusions.
These contrasting assumptions and their potential impact on classroom instruction concern us greatly. As we explain in our task force report, “Because such assumptions about writing can be ‘invisible’ and therefore unexamined, they can powerfully influence attitudes about writing instruction among teachers, students, parents, and administrators. In this sense, the potential for the SAT timed writing test to reinforce problematic attitudes about writing is a serious matter” (p. 7).
We are also concerned with what may be some unintended consequences of the test. A number of states, we discovered, are considering using the SAT or ACT as their high school exit tests (growing out of the need for testing mandated by the Bush Administration’s No Child Left Behind educational reform legislation). If this trend becomes widespread, we will see an even greater impact of these standardized writing tests on high school curricula: the pressures on teachers to have students prepared for test-writing (as opposed to other kinds of writing) grow exponentially as, in some cases, a teacher’s reputation (or salary increments or even job) is linked to the number of students who can pass the tests. Who can blame the teacher who, at that point, chooses test prep rather than high-quality writing instruction?
Underlying all these concerns are issues of equity and diversity. The timed component, for example, certainly disadvantages speakers of other languages, whose need for time to translate and respond is well documented. In addition, the single, specific prompt may create unfair challenges for students who bring varied background experiences to the test. In one large-scale study, for example, researchers found that gender and race, when combined with choice, had an impact on student scores; Black and female students tended to score higher when given a choice of topics (Gabrielson et al.). Other studies have found that less experienced student writers in particular have difficulty responding to writing prompts that lack authentic purposes and a clear audience (Ketter and Pool). And, of course, issues of equity lead into our concern about costs: the new SAT costs almost double the previous version, in part, it’s been suggested, because of the expense of scoring the writing test.
So what’s a teacher to do?
Our deepest fear, of course, is that teachers (and thus students) will begin to see “test writing” as “all writing”—a true step backward in our progress to encourage strong, thoughtful, lifelong writers among the students we all teach. So what can a teacher do to keep those research-based, best practices alive and well in schools—and still help students deal with the reality of this and other tests? Briefly, let us lay out a few ideas:
Be knowledgeable about the test itself. Take the time to read the test, noticing the language used by the test makers in the directions and rubrics and by the test readers in the responses to the sample essays. Pay special attention to how the test readers explain their scores and therefore reveal what they consider to be the characteristics of “good” writing. Try writing your own response to the prompt (in 25 minutes!) so that you are aware of the sections that are easy for you and those that are more challenging. Read as much as you can about the test, both pro and con, so you can decide for yourself if you think it is a useful or problematic means for promoting writing.
Keep research-based best practice writing at the center of your teaching. Think carefully about the kinds of writers you want your students to be when they leave your class. Students who write a lot, who write from what they know, who know how to revise, and who understand the craft of language, will be the kinds of flexible writers who will be able to use writing in their lives beyond school.
Examine test-writing as a genre of writing. Just as you teach students about the choices and constraints available in particular genres (such as memoir, poetry, analysis, business letters, etc.), invite them to consider the choices and constraints inherent in the genre of test writing. Spend some time immersing students in that genre, having them infer the purposes of the test and the intended audience. Read the rubrics and the sample essays with them to discover what the test writers believe about “good” writing, and explore the similarities and differences between test writing and other kinds of writing they do in your classes. Encourage them to try some practice tests under timed conditions and then discuss what they discover about themselves as writers from this exercise (and share what you have discovered in your own responses to the prompts). Help them see the similarities between this genre and other genres: what counts as good writing across the board, and what is peculiar to this genre.
Talk to others about your concerns. Knowledgeable teachers can have a voice in what happens in their school or district. If you are concerned that your school or district may be adopting a particular writing curriculum simply to meet the needs of this test, start by talking to others about your concerns. You might form a study group with other teachers and administrators, using the multiple resources available (including the NCTE Beliefs about Writing document and the Task Force report) as a way to think together about what quality writing instruction entails. Gather samples of your students’ writing to share with your group, especially examples which demonstrate your students’ ability to write in multiple genres and situations and which show how revision aids in writing. Work with this group to develop some shared beliefs about writing, beliefs that might then find their way into a strong curricular statement or a presentation for upper administration or the school board. You might even create some inservices for parents to help them understand what quality writing instruction looks like and the differences between it and test preparation.
Times are tough for teachers who are truly committed to writing. We believe, though, that the best way for teachers to maintain high quality writing instruction in their classrooms in the face of test-based curricula is for them to be informed, knowledgeable, and proactive. We encourage you to take on the task of understanding these tests from the inside out, think carefully about their implications, and take a stance toward helping others understand. Only then can we maintain ways of teaching writing that truly are in the best interests of a whole generation of students.
As a confirmed control freak, I find a certain satisfaction in the words “on-demand,” so it is no surprise that my classroom is a place where students practice on-demand writing assessments. And now, given the ever-increasing pressure on middle and high school students to produce this type of writing, requiring it from them is more than justifiable; it is mandatory. The more I see my struggling writers and English Language Learners gain confidence in approaching this task, the more convinced I am that they must be given opportunities to practice. Furthermore, sidestepping arguments about “teaching to the test,” the expectation of on-demand writing assessment must become part of a school’s culture.
At a CATE 2005 conference workshop, I was thrilled to see almost half of the participants’ hands go up when asked if their schools conducted an in-house on-demand writing assessment. Some of the participants confirmed that they had a pre- and post-test for each grade level. Almost all of the teachers said that their assessments were scored not only by English teachers, but that all departments and staff were invited and encouraged to participate in the scoring. Those fortunate teachers are part of a culture that sends a strong message to all their students: We are committed to supporting you in accomplishing this task successfully. As part of our commitment, we are giving you opportunities to try to perform this task well.
In the fall of 2004, our continuation high school enthusiastically embraced the idea of a school-wide on-demand writing assessment. All English teachers administered the test on the same day and in the same way, trying to mimic the conditions students would face in taking the CAHSEE. Not surprisingly, when our staff met to score the essays, we found many papers blank or containing only a few sentences. Some were screeds on the “pointlessness” of the exercise. Using the four-point CAHSEE Response to Literature Rubric to holistically score the 198 papers that had writing on them, we found that only 38 of our students received a score of three or four.
The somewhat dismal student results of this in-house on-demand assessment were secondary to the benefits the exercise produced for the staff. Calibrating our scoring of the papers and discussing what constitutes competent on-demand writing, in order to help our students pass the Exit Exam, proved invaluable. English, social science, special education, math, and science teachers, the career counselor, and the principal all participated in reading and scoring the papers. We are aware, now more than ever, of the need to practice, practice, practice those skills we need to improve. The staff response was so overwhelmingly positive that we have scheduled a second assessment date for later this year, armed with a new resolve for all staff to teach academic writing and a commitment to the endeavor as part of our school’s culture.
The CAHSEE is a dreaded reality for many of our students. Most of our juniors have been absent on the days it has been administered. When questioned, many are fatalistic about the outcome. “I know I’m going to fail” is often the student mantra. The results of the November CAHSEE testing showed that of the forty-six juniors tested, nineteen, or 41%, passed. Even among those passing, the highest essay score was 2.5. Seven of the forty-six students received an “NS,” or non-scorable, on the essay portion.
But last week a minor miracle occurred. As I was trying to teach, again, how to embed quotes in a response to literature essay, a young person sighed loudly, “This is boooooooooring.” As I struggled to put on my most pleasant and patient face, preparing to explain, again, why the ability to do this may have some importance, Becky, a student who had been in and out of my class since the beginning of the year, came to my rescue.
“Hey, leave her alone,” Becky intoned, “Just keep on going, Cismowski. I took the (expletive deleted) Exit Exam in November because I thought that if I just tried to remember all this (expletive deleted) she told me I could do it, and I did. I wrote that (expletive deleted) essay no problem because she’s telling us just what to do. So if you don’t like it, change your schedule, but I know I’m glad I knew what to do on that thing.” The room was quiet; my detractor nodded. I thanked Becky for her kindness and started writing again at the overhead projector.
Coincidentally, later that week the exit exam results came in. Becky had indeed passed the English portion of the Exit Exam! Her essay score, however, was only a 1.5; she had passed the other portions with flying colors. Even so, I remain a firm and passionate believer in teaching to the essay portion of this test. Becky attempted the test this time, including the essay, when previously she did not have the confidence to do so, because she felt sure she knew what needed to be done. She overcame this barrier to high school graduation because she had the confidence she could. I am proud of her, and I believe that as we continue to assimilate the idea of on-demand writing into our school culture, more and more of our students will pass the English Language Arts portion of this test.
This represents a shift in thinking for me about what I can reasonably expect to accomplish with my students in one year. No English teacher can single-handedly turn the on-demand writing assessment tide for all of her students. English departments and, in fact, entire schools, must work together to create an environment that supports on-demand writing, across curricula and grade-levels. To learn more about making this happen at your school, contact the Writing Project or Writing Project Teacher Consultant nearest you. If you aren’t already, consider being a part of changing your school’s culture about writing.
This is the dream: you’re in your classroom, surveying your thirty-or-so students busily revising their essays. They’re working independently except for brief conversations when they gather in twos or threes to make sure a turn of phrase or transition or conclusion they’ve just added enhances what they’re trying to say. You’ve managed to build time into your schedule to allow three weeks for this project, and you’ve only stopped students from their writing to introduce some fresh prewriting activities (drawing, perhaps, or interviewing a classmate) and to review the form and function of a counterargument. You’ve had the luxury to spend time (during the daily hour reserved for writing) conferring with individual students about their drafts. This period is popular with the students, all of whom you’ve been able to engage—opening their writing to some interesting topics, coaching the hesitant writers via frequent mini-conferences. The well-oiled machine of your writers’ workshop is humming smoothly.
You’re lucky. This vision of a classroom writing program that provides focused attention to individual writers seems a remote ideal for most of us. Faced with increasing demands to prepare students for high stakes multiple choice tests and required to implement increasingly prescribed literacy programs (making individual instruction almost impossible), many teachers have had to abandon well-established, thematically-driven, student-centered writing programs. We teachers have less time to craft instruction around current high-interest topics that motivate students to write. Instead, we feel compelled to prepare students for spontaneous writing in response to impersonal and generic topics, robbing them of opportunities to see the integral link between reading and writing, to develop their writing over multiple drafts, and to practice writing for different purposes. Students simply don’t develop as writers.
Meaningful writing instruction has been shortchanged as “assessment readiness” has taken a lead role in many schools. Day after day of classroom instruction is spent on “drill and kill” test readiness booklets, force-feeding students, their eyes glazed, with information national or state bodies have deemed important.
The result? Writing programs almost everywhere have suffered. As states have adopted lists of standards and multiple choice tests, meaningful writing instruction has devolved into a series of fragmented assignments driven by testing priorities. Formerly student-centered, meaning-based instructional programs have been replaced by prescriptive textbook series or formulaic writing agendas, leaving little room for students to move in directions of their own interests, to find their own voice, and still less room for teachers to branch off into areas of local or contemporary relevance. Such a “one size fits all” approach to writing has become common in many districts nationwide. As a result of these conditions, we’re already witnessing decreased student interest in writing as well as a diminished ability on their part to use writing as a way to extend and deepen their thinking. Students resist writing more than ever and do not have practice developing ideas over time, or see the value of doing so. Test results in many states (both performance measures and standardized test scores) show steadily declining results in writing.
Why Teach Assessment-Based Writing?
Are we advocating a test-free education? Not at all. Not only do writing tests help monitor and improve our teaching; they’re also a reality for students applying to high schools and colleges, for scholarships, and, of course, for jobs. They’re required components of many courses and of school and district “exit exams,” and we’re witnessing a renewed emphasis on writing tests as a “window” into the thinking and problem-solving abilities of students of all ages across the curriculum.
Though they take different forms, writing assessments abound. Curriculum standards in all states identify narrative and expository writing types as priorities for instruction and assessment. These genres are tested in a variety of ways at many different levels. Sometimes the assessments serve as “benchmarks” to signify whether or not students are on track for meeting graduation or exit requirements. Sometimes they are the “gates” that permit students to pass to the next level, gain admission to a school, or demonstrate proficiency in a subject to circumvent a required class. In addition to local varieties of writing tests administered by schools, districts and states, there are large-scale national writing tests. These are generally high-stakes assessments upon which important decisions are made. They’re also used to monitor the writing performance of large groups of students, both in the United States and internationally. The College Board has recently replaced the analogies section of the SAT with a writing sample, and almost all states administer formal writing tests. What all have in common is the expectation that students will respond to a reading selection or series of questions by writing a substantive, coherent, correct essay in a limited amount of time.
We need to prepare students to perform at their highest levels in these situations. And, of course, many skills required on these tests are necessary for more than test-taking. Much writing is done under time constraints. The ability to write succinctly and spontaneously is important for conveying and recording information, communicating effectively, and demonstrating what the writer has learned or experienced. Writing on demand is a useful lifelong skill.
The dilemma that we as educators face is to know where instruction and assessment meet: that is, what the criteria are for assessments that not only measure student achievement but also enhance and promote real learning. Unfortunately, many tests are poorly designed; they’re disconnected from students’ lives and from what students learn in the classroom. Further, they fail to provide timely information that teachers or students can use.
Our challenge becomes daunting. How do we, teachers who believe in the power of writing and the attendant search for meaning in young students' lives, preserve a quality instructional program when external demands rob us of the necessary time and focus such a deep program requires? While nothing can replace a flexible, personalized and process-oriented writing program, we do believe that it’s possible to continue to implement a quality writing curriculum through a series of comprehensive, custom-designed performance assessments. Our years as a curriculum coach, performance assessment coordinator, and teacher have shown us that integrating quality authentic writing tests into a classroom, a grade-level or a school-wide writing program can be powerful. It can drive whole school reform, teacher collaboration, and student writing improvement in profound ways.
In such a program, students develop versatility, and they progress as writers while at the same time teachers gather data about the effectiveness of their instruction. This program depends on lessons that are carefully designed to guide students through a series of activities that develop literacy, engage, and simultaneously prepare students for on-demand writing tests. Such lessons:
- connect students’ previous knowledge and experiences to the writing topic (they build schema)
- have students read with, listen to, and talk with classmates to expand their thinking on a topic
- generate relevant vocabulary through guided exercises
- consider the purpose of the writing task and the audience, adjusting formality and selecting vocabulary appropriately
- help students plan and organize their ideas for writing
- promote revision and editing in the context of a timed task
- provide a scoring guide and examples to use as instructional tools
Carefully crafted writing prompts can double as high quality writing lessons by embedding preparation for the writing task in a series of activities, each progressing gradually from familiar to unfamiliar, easy to difficult, simple to complex, culminating in the writing assignment itself. The final activity, the writing, is a synthesis of what students have practiced and prepared for. For example, students identify related experiences, learn associated vocabulary, observe another writer’s organization of ideas, and trade reactions to a reading selection with a classmate. Through such activities, students become “fluent” in the topic before they have to write and are much better equipped to do well than if they were simply given their prompt “cold.” This “breaking down” of one complex task (the final writing assignment) into smaller discrete skills is called, as you know, “scaffolded instruction.” On-demand writing prompts can and should carefully scaffold writing to maximize student engagement and elicit students’ best writing.
In addition, students have access to the scoring guide at the outset of the pre-writing section, mid-way through, or immediately before writing the essay itself. Giving students the opportunity to understand and internalize expectations for their writing helps them write more skillfully. Student examples representing a range of responses can “translate” the features of the rubric into real student writing. Presenting clear descriptions and student examples makes learning—and subsequent improvement—tangible, and therefore possible. The goal, ultimately, is to have the characteristics of excellent essays become “second nature” for each student—each not only conversant with rubric characteristics, but also able to integrate them into dynamic essays. By moving through the writing process in a compressed period of time, by exercising all language skills (listening, speaking, reading, writing), and by using multiple learning modes and styles, students are both interested and motivated, and tend to perform at their highest level.
Quality Writing Prompts
There’s an art to designing successful prompts. Just as in high quality instructional units, prompts must tap student interests. They must provide a focus that offers critical thinking opportunities. They must be written to a real, living, breathing, life-experiencing reader. A study conducted alongside the 1998 National Assessment of Educational Progress writing exam points to three critical components of well-crafted writing prompts: a balance between too much and too little information in the assignment, selecting topics that engage students in complex thinking, and identifying an authentic audience.
More and more, teachers are trying to design and/or request prompts that engage students and meet them at their developmental levels to enable the best "snapshot" of their writing and thinking. Obviously, since student interest and ability levels are so varied (even in the most homogeneous of classrooms), good prompts must be focused but at the same time “open” enough to allow for multiple departure points, to allow students’ bright, individual “selves” to shine through. They’re best when they revolve around current, relevant topics and embed a series of activities that galvanize students to want to write. Students need time, even in an “on-demand writing test,” to think and talk and read and plan. They need to generate vocabulary, remember relevant experiences and figure out how they really DO feel and what they really DO think about a particular controversial issue as they get ready to write.
Good prompts are also characterized by what they produce. Direct, authentic, “process-conscious” writing tests yield diverse and lively writing interesting for teachers (and other students!) to read. In addition, good prompts make student performance expectations explicit by providing rubrics or scoring guides and samples of scored student work. Finally, instructionally sound prompts provide important information about group and individual student progress for both students and teachers.
In addition to honoring interesting topics, prompts must be accessible to both boys and girls. From many years of designing, implementing, and evaluating hundreds of writing assessments, working with teachers, and interviewing students, we’ve found that, in general, prompts boys can relate to, girls relate to as well. The opposite, however, is not true: boys have a more difficult time relating to prompts and topics that we know girls favor. Sensitivity to gender preferences is therefore critical in designing prompts that produce earnest writing from boys and girls alike.
In summary, good prompts tap student interests, stay open enough to allow multiple points of entry, embed scaffolded pre-writing activities, make expectations explicit through rubrics and scored student samples, and are accessible to both boys and girls.
Finally, good writing lessons and good writing tests provide important information about individual students that few other vehicles provide. That information can show us individual student progress and enable us to further refine instruction based on score results. When writing prompts reflect instruction, they link learning and testing and make the time spent on assessment not just worthwhile, but richly valuable.
More and more, students are expected to write on demand. The California STAR assessments in grades 4 and 7, the California High School Exit Exam, NAEP writing assessments, English Placement Tests, and the new SAT and ACT all require students to demonstrate their writing skills by producing a piece of prose under controlled circumstances. This issue of California English focuses on how these exams are influencing writing instruction and offers suggestions for helping students meet the particular demands of this type of writing.
In her essay “Thinking Backward: Meeting the Challenge of Writing on Demand,” Leila Christenbury poses a key question, “How can teachers help our students meet the challenge of writing on demand in the context of instruction that meets student needs and satisfies standards for best practice?” Christenbury goes on to explain how teachers can “emphasize strategies which are both effective in the testing situation and intellectually defensible in light of what we know about how real writers write.” Together with Anne Ruggles Gere and Kelly Sassi, Christenbury has recently published Writing on Demand (Heinemann 2005). I highly recommend this book to anyone interested in learning more about teaching writing in a time of testing.
Approaching the issue from a different perspective, Cathy Fleischer, Jean Ketter, and Robert Yagelski warn of the ways emphasis on timed writing can distort good writing instruction. In “Teach Writing, Not Testing: Some Worries about the SAT Writing Test,” these members of the NCTE Task Force charged with examining the new SAT, express their concern that “a large scale test of writing, created by a testing company, will begin to define writing in our schools and determine what counts as good writing in ways that will narrow the broad range of writing experiences that students should have.”
California English invites CATE members to join in this conversation about on-demand writing by writing to us at email@example.com. Your classroom experiences can provide important insight to other teachers struggling to help students perform well on state tests as well as develop as multi-faceted writers. How have timed writing exams affected your English language learners? Do you have a strategy to share? CATE welcomes letters to the editor.