April 16th 2008

4Cs NOLA Friday — 1st Session

“What Students Really Do With Feedback” was one of the better sessions I attended. These researchers, like the ones Thursday morning, designed their study based on Nancy Sommers’ work at Harvard assessing what students really do with instructor comments. They had two assumptions as they started out:

  • That students had poor (or lax) attitudes toward writing, which contributed to a lack of attention to revision, and
  • That past experiences (i.e., poor experiences) with writing functioned to discourage them from taking revision seriously

Dodie Forrest and Carolyn Calhoun-Dillahunt teach at Yakima Valley Community College in Washington State, a Hispanic-Serving Institution very similar to COS in both ethnic make-up and local economic situation. Three years ago, they began asking research questions to see what they could ascertain about student decision-making in response to instructor comments. They were initially interested primarily in students’ attitudes toward teacher comments. Their findings indicated:

  • students appreciated praise, but praise comments didn’t help them improve–they didn’t know what to do with it
  • compared to questionnaire responses at the beginning of the course, fewer students at course end wanted directive (edit-centered) comments and more students preferred end-comments
  • jargon comments were unhelpful (words such as “tweak” and “analyze”)
  • symbols and abbreviations were unhelpful
  • students had difficulty understanding teachers’ questions (when open-ended or rhetorical questions were part of the comments)

In the next year’s study, the research question was why students made the revision changes they made. They compared the actual changes students made to the comments teachers left. They found:

  • most students really did attend to instructor comments, though the moves they made did not necessarily improve the work
  • students were often confused by comments
  • at least some personal decision-making went into the changes students made

The research they reported on this year was based on in-depth case studies of students in the same English classes–our English 1 level courses. I won't go into their methodology, which was thorough, but they came up with a number of important implications that, combined with their previous results, say interesting things about students and about educators' assumptions about them. They found:

  • Students, in spite of what we might think, generally have a positive attitude about writing and about their abilities.
  • Students do attend to teacher comments and/or genuinely believe they did address teacher comments in their revisions.
  • Even when an instructor comment led to subsequent text revision, students tended to see the revision as coming from the student rather than the teacher–in other words, no matter what sparked the revision, students felt they owned the revision decisions they made.
  • Students intend to make content changes, not just editing changes, in revision.
  • Students generally attend to marginal comments and prefer comments that allow them to make choices about their writing (as opposed to directive comments).
  • Students do not rely solely on their instructor for revision decisions.
  • Students feel that their ideas are important–sometimes this leads them to make revision decisions that instructors feel are not productive. The point is that some student revision decisions should be viewed as resistance to instructor authority rather than as laziness or inattention.
  • Students are not sure how to use praise, although they like it. In interviews, students demurred when asked about praise comments.
  • Students ignore comments they don’t understand or don’t want to address.

Many of these findings were quite interesting and suggested that we don't know enough about what students think about when we say "revise." The researchers' initial assumptions about students were all debunked–and who among us doesn't share some of those assumptions? It is likely that we are approaching our students as if we understand them, when we don't.

For future reference, I will be emailing Dodie and Carolyn to get copies of some of the research tools they used, which were well-developed and could help us a lot in our own work at COS. Their presentation was exceptional and I just wondered how they managed to have lives, too, while they did this work!


April 3rd 2008

4Cs NOLA Thursday — 1st Session

The first session I went to dealt with revision. "New Perspectives on Revision: Discourse and Practice" distilled several years of research into the hour-plus session relatively efficiently. The previous years' research questions were foregrounded: 1) Does revision improve student scores on final portfolios? (the answer is yes, based on roughly 5,000 essays surveyed); 2) What kinds of changes work best or fail to work, and can students articulate their revision strategies? (the answer is below, based on a resampling of 450 essays from the original 5,000); and finally, 3) How often and how much do First Year Comp students actually use revision strategies, rather than just say they do?

The researchers first gathered key-words from two texts: a model prompt available to all faculty for a required end-of-term eportfolio, and the standard rubric mandatory for all composition classes (I was surprised to see that it is in fact the rubric I use in English 1!). They distilled 49 key-words from these texts, labeling the prompt-based words "Process" and the rubric-based words "Product": 16 "Process" terms, such as "revision," "audience," and "draft," and 33 "Product" terms, such as "unity" and "coherence." They hypothesized that student reflective portfolio introductions would reference these key terms, that students might even parrot the wording of phrases and sentences from the rubric or prompt, and that such parroting would indicate a "writing to the prompt" approach rather than a real articulation of actual revision strategies. They wanted to see whether the articulation of real strategies correlated with successful final portfolios.
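For the quantitatively inclined, here is a minimal sketch of the kind of key-word counting and parroting (shared n-gram) check this methodology implies. It is my own reconstruction in Python: the term lists and sample texts are invented placeholders, not the study's actual 49 key-words or its instruments.

```python
import re
from collections import Counter

# Placeholder term lists -- the study had 16 "Process" and 33 "Product" terms.
PROCESS_TERMS = {"revision", "audience", "draft"}
PRODUCT_TERMS = {"unity", "coherence", "thesis"}

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def keyword_usage(reflection):
    """Count process vs. product key-word hits in a reflective introduction."""
    tokens = tokenize(reflection)
    counts = Counter(tokens)
    process = sum(counts[t] for t in PROCESS_TERMS)
    product = sum(counts[t] for t in PRODUCT_TERMS)
    total = len(tokens)
    return {"process": process, "product": product,
            "keyword_rate": (process + product) / total if total else 0.0}

def shared_ngrams(reflection, source, n=3):
    """N-word sequences a reflection shares with the prompt or rubric --
    the 'parroting' the researchers checked for."""
    def ngrams(text):
        toks = tokenize(text)
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    return ngrams(reflection) & ngrams(source)

reflection = "In my revision I thought hard about audience before the final draft."
rubric = "The essay demonstrates unity and coherence across its paragraphs."
print(keyword_usage(reflection))          # process-heavy, as the study found
print(shared_ngrams(reflection, rubric))  # empty set = no parroting
```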

The answer was yes and no! First, they determined there was virtually no parroting of phrases or sentences–in fact, beyond two-word pairings there were very few matches, and none longer than three words. Second, though they expected the standard rubric (to which all students were constantly exposed in all comp classes, and which generated the most key-words) to supply the most-referenced terminology, in fact "process" terms were the ones students most often used to describe what they had done to their essays, despite the fact that the model prompt was just that, a model adapted to each class by different instructors (so presumably it had different wording in different classrooms). Finally, they discovered no direct correlation between students' ability to articulate their rhetorical moves and the relative success of their final portfolios, and in this finding lies perhaps the most interesting struggle.

The raw data showed roughly 4% overall usage of key terms in the student reflective writing–so virtually no parroting of either the prompt or the rubric–with "process" terms accounting for 64% of the hits versus 36% for "product" terms. The highest-scoring final portfolios showed a higher level of key-word usage (5% vs. 3.6% for low-scoring portfolios), and in the reflective introductions for these portfolios 82% of the terms were "process," as opposed to 51% in the low-scoring portfolios. So, yes, high-scorers described their strategies more ably than low-scorers. However, the researchers also looked at successful vs. unsuccessful revision (i.e., the second paper received a higher or lower score in the final portfolio than its last draft had) independently of overall score, and here the numbers are closer: successful revisions used keywords only about 0.5% more than unsuccessful ones, and the "process" term percentages were also much closer.
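To keep those figures straight, here is how I would tabulate them (the numbers are from the talk; the organization and variable names are mine):

```python
# Reported key-term usage in reflective introductions, as percentages.
# "keyword_rate" = share of all words that were key terms;
# "process_share" = share of those key-term hits that were "process" words.
reported = {
    "all_portfolios": {"keyword_rate": 4.0, "process_share": 64},
    "high_scoring":   {"keyword_rate": 5.0, "process_share": 82},
    "low_scoring":    {"keyword_rate": 3.6, "process_share": 51},
}

gap = reported["high_scoring"]["keyword_rate"] - reported["low_scoring"]["keyword_rate"]
print(f"high- vs. low-scoring keyword-rate gap: {gap:.1f} points")
# Compared by revision success instead of overall score, the gap
# shrank to roughly 0.5 points, per the presentation.
```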

The researchers determined there were several reasons for this narrowing of the results. First, some students (and they had specific examples) could articulate the strategies they used quite well but had started from such a low spot that the revision didn't move them up enough to pass. Second, some made unfortunate choices, assuming, for example, that they were meeting audience expectations; they knew what they were trying to do (and articulated it in the reflection), but they did not succeed. Finally, the researchers discovered that, contrary to teachers' constant admonition to address "global" concerns, the faculty readers of the portfolios (the graders) in fact favored micro (grammar) revision over thematic or paragraph-level revision moves. The better the grammar in the end result, the better the rating, regardless of global issues.

This was fascinating work with broad implications. Do we really value global revision, or are we suckers for a well-turned line–and what does this mean for our second-language learners? But we should not draw hasty conclusions: the revision movement was sometimes quite small simply because the survey did not track the essays through all revisions, only through the last two, so it was less likely to catch global revisions. And grammatical presentation is important.

One last speaker reported micro-level research on student papers in which students were asked to eliminate, as much as possible, all "to be" verbs (often markers of passive constructions) and prepositional phrases. The results she quantified showed a net gain in word-count of about 25% across the surveyed essays. While word-count alone does not necessarily indicate successful revision, the exercise does require rethinking sentence strategies, and it at least gets students doing revision. It was an interesting item.
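Her actual method wasn't shared, but a naive flagging pass of this sort is easy to sketch; everything below (the word lists, the sample sentences) is illustrative rather than from the study:

```python
import re

# Forms of "to be" and a few common prepositions to flag for revision.
BE_FORMS = {"be", "am", "is", "are", "was", "were", "been", "being"}
PREPOSITIONS = {"of", "in", "on", "at", "by", "for", "with", "to", "from"}

def revision_targets(text):
    """Count words, 'to be' verbs, and prepositions in a draft."""
    words = re.findall(r"[a-z']+", text.lower())
    return {"word_count": len(words),
            "be_verbs": sum(w in BE_FORMS for w in words),
            "prepositions": sum(w in PREPOSITIONS for w in words)}

draft = "The essay was written by the student in a hurry."
revised = "The student hurriedly wrote the essay."
before, after = revision_targets(draft), revision_targets(revised)
change = 100 * (after["word_count"] - before["word_count"]) / before["word_count"]
print(before, after, f"{change:+.0f}% word count")
```

Note that my toy revision shrinks the sentence, whereas the study reported a roughly 25% net gain in word-count, which suggests students rewrote and expanded their sentences rather than simply deleting words.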

This was how the day started–I had two pages of notes and my mind was racing. Session 2 coming up!
