A recent post to the Writing Program Administrators email listserv questions how grading works in a MOOC. The post’s author, Ed White, says that the goals of grading are to sort students and to help students self-assess. He goes on to observe that “some MOOCs seem to be ambiguous about both purposes.” I have to agree, but I think he’s understating the issue. The American obsession with standardized, “objective” tests has created an unhealthy reliance on grades in our classes. On one hand, grades are essential; on the other, they can be corrosive and distracting. Can classes improve if we remove grading from the picture?
In mid-December 2012, I participated in my department’s program assessment. I joined a team of sixteen teachers who read and evaluated the quality of writing portfolios from selected students in our first-year writing classes. I’ve been a part of this process three or four times before, and the chatter during the day is predictable. Silence for a bit, then laughter as amusing student comments get shared around the table, then quiet again as we buckle down to stay focused and meet our afternoon deadline.
But the conversation at the end of the day interests me. Teachers invariably say how much they appreciate reading portfolios from other classes. They value the glimpse into other teachers’ assignments and into the quality of writing produced in other classes, and they are reassured when another teacher assigns a portfolio the same grade they did.
By the end of the day, the consensus is always that the portfolio-assessment process, tedious though it may be, is beneficial and well worth the time. Indeed, we are now planning to do a department-wide shared read so that all teachers can see what a portfolio could look like, and so we can all get a general sense of where our grading standards should be. It’s an awful lot of time, labor, and paper just to ensure we see eye-to-eye on writing evaluation, but that’s a pretty significant alignment for a composition program. Grades, like them or not, are important in schools.
Why do we assign grades? Generally, they are designed for two goals: to indicate achievement (granting credentials) or to sort and rank students (by putting them in grade groups and identifying a bell curve). In the course of a single semester, we typically grade individual assignments, add points for various tasks, and create (and score) tests. Essay grading, for those in the humanities, is a great way to see how students think … and a great way to take up too much of a teacher’s time. Could we benefit students even more by training them to do the credentialing and ranking on their own?
These two tasks do not require a teacher, but the credentialing requires training. If we train our students to recognize the standards required for achievement, they should be able to assess the performance of one another. When teams of people work together to grade essays or portfolios—for the SATs or AP exams, for instance—the graders are given sample papers that illustrate the required abilities or characteristics. Then, each sample is scored by two raters to ensure greater consistency. When scores on a paper diverge, the paper is graded by a tie-breaker.
The people who score SAT and AP exams are typically English teachers (or professors), yet they still need specific training to properly assess the tests. Would that same training work for students who are not yet members of the teaching corps? If a college student is being trained in a field such as writing studies, a part of that training should include learning how to assess, rather than just to produce.
As for granting credentials, that is, certifying that someone has done something, students are often good at determining whether a thing has been done. Provided they have examples and specific guidelines for expectations, students can make those determinations. In my classroom experience, students often struggle to identify the quality of something, not whether something has been achieved. If our assessment criteria are phrased—as is often the case with writing rubrics—in terms of beginning, competent, good, or excellent, our students do not have sufficient experience reading student writing to be able to confidently distinguish among performance levels. However, if students are given training similar to standardized-testing preparation, they could serve as assessment teams in our classes. Participation in an assessment team would give students valuable experience reading the work of others, would help them see the variety of writing being done in their classes, and would refine their discernment regarding writing ability and quality.
As for ranking, students are quite capable of saying when (and why) one thing is better than another, if they have two samples to compare. How often do we give students that opportunity? And what if they were given more than two samples to discern? I’d like to see how well students can distinguish between papers when they have four or five to work with. Rather than traditional me-to-you peer review, I wonder how much richer me-to-many, or even whole-group we-to-many, conversations could be. Give students the resources and opportunity to look at a collection of products, and ask them to identify why one is better or worse than the others. Those differences would be phrased in terms of the desired qualities of the task or of the style of writing … which is exactly the kind of writing feedback our students need.
This spring, I’m going to try just that. My students will be assessing one another. They will rank one another’s papers and identify why each paper was ranked in its place. They will also have the authority to grant credentials: Students will assert whether a paper has met minimum requirements. I will work on ensuring the quality of the feedback, getting students to look for the right kinds of writing traits, and helping them to negotiate their responsibilities. I’m looking forward to outsourcing my grading this term.