April 3rd, 2008
The first session I went to dealt with revision. “New Perspectives on Revision: Discourse and Practice” distilled several years of research into the hour-plus session relatively efficiently. The previous years’ research questions were foregrounded: 1) Does revision improve student scores on final portfolios? (the answer is yes, based on roughly 5,000 essays surveyed); 2) What kinds of changes work best or fail to work, and can students articulate their revision strategies? (the answer is below, based on a resampling of 450 essays from the original 5,000); and finally, 3) How often and how much do First Year Comp students actually use revision strategies, rather than just say they do?
The researchers first gathered key-words from two texts: a model prompt available to all faculty for a required end-of-term eportfolio, and the standard rubric (I was surprised to see that it is in fact the rubric I use in English 1!) mandatory for all composition classes. They distilled 49 key-words from these texts, labeling the prompt-based words “Process” and the rubric-based words “Product”: 16 “Process” terms, such as “revision,” “audience,” and “draft,” and 33 “Product” terms, such as “unity” and “coherence.” They hypothesized that student reflective portfolio introductions would reference these key terms, and that there might even be a tendency to parrot the wording of phrases and sentences from the rubric or prompt, which would indicate a sort of “writing to the prompt” approach rather than real articulation of actual revision strategies. They wanted to see if there was a correlation between the articulation of real strategies and successful final portfolios.
The answer was yes and no! First, they determined there was virtually no parroting of phrases or sentences; in fact, beyond two-word pairings there were very few matches, and none of those were longer than three words. Second, though they expected the standard rubric, to which all students were constantly exposed in all comp classes (and which generated the most key-words), to supply the most-referenced terminology, in fact “process” terms were the ones students most often used to describe what they had done to their essays, despite the fact that the model prompt was just that, a model adapted to each class by different instructors (so presumably it had different wording in different classrooms). Finally, they discovered there was no direct correlation between students’ ability to articulate their rhetorical moves and the relative success of their final portfolios, and in this finding lies perhaps the most interesting struggle.
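The parroting check the researchers describe amounts to looking for shared word sequences (n-grams) between a student’s reflective introduction and the prompt or rubric text. A minimal sketch of that idea in Python, my own reconstruction rather than the researchers’ actual tooling, with the sample `rubric` and `intro` strings invented for illustration:

```python
def ngrams(text, n):
    """Return the set of n-word sequences in a text (lowercased)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_phrases(student_text, source_text, n):
    """Find n-word sequences that appear in both texts."""
    return ngrams(student_text, n) & ngrams(source_text, n)

# Hypothetical rubric language and student reflection:
rubric = "revise for unity and coherence in each draft"
intro = "I worked on unity and coherence while revising each draft"

# Short pairings overlap, but longer exact strings quickly stop matching,
# which is the pattern the researchers reported.
print(sorted(shared_phrases(intro, rubric, 2)))
print(sorted(shared_phrases(intro, rubric, 4)))  # no four-word matches
```

Running the check at increasing values of `n` and watching the matches vanish is essentially how one would distinguish genuine parroting from the incidental two- and three-word overlaps the study found.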
The raw data showed a roughly 4% overall usage of key terms in the student reflective writing (so virtually no parroting of either the prompt or the rubric), with “process” terms accounting for 64% of the key-word usage versus 36% for “product” terms. The highest-scoring final portfolios showed a higher level of key-word usage (5% vs. 3.6% for low-scoring portfolios), and in the reflective introductions for these portfolios, 82% of the terms were “process,” as opposed to 51% “process” terms in the low-scoring portfolios. So, yes, high scorers more ably described their strategies than low scorers. However, the researchers also looked at successful vs. unsuccessful revision (i.e., the second paper received a higher or lower score in the final portfolio than it had in its last draft) independently of overall score, and here the numbers are closer: successful revisions used key words only about 0.5% more often than unsuccessful ones, and the “process” term percentages were also much closer.
The researchers offered several reasons for this narrowing of the results. First, there were students (and they had specific examples) who, though able to articulate the strategies they used quite well, had started from such a low spot that the revision didn’t move them up enough to pass. Then too, some made unfortunate choices in the belief that they were meeting audience expectations, or something similar: they knew what they were trying to do (and articulated it in the reflection), but they did not succeed. Finally, the researchers discovered that, contrary to teachers’ constant admonition to address “global” concerns, the faculty readers of the portfolios (the graders) in fact favored micro-level (grammar) revision over thematic or paragraph-level revision moves. The better the grammar in the end result, the better the rating, regardless of the global issues.
This was fascinating work with broad implications. Do we really value global revision, or are we suckers for a well-turned line, and what does this mean for our second-language learners? But we should not draw hasty conclusions: the measured revision movement was sometimes quite small simply because the study did not track the essays through all revisions, only through the last two drafts, so it was less likely to catch global revisions. And grammar presentation is important.
One last speaker did some micro-level research on student papers, asking students to eliminate as much as possible all “to be” verbs (often a marker of passive voice, though not every “to be” construction is passive) and prepositional phrases. The results she quantified showed a net gain in word count of about 25% across the surveyed essays. And while word count alone does not necessarily indicate successful revision, the exercise does require the rethinking of sentence strategies, and it at least gets students doing revision. It was an interesting item.
This was how the day started–I had two pages of notes and my mind was racing. Session 2 coming up!