Student expectations about grading
On the first day of my classes, I have my students write down any questions they have, and I leave it open: Ask about class, about campus, or about life in general. I want to establish a relationship of trust and show them that I listen to their concerns. This semester, two students asked questions about grading that relate to a disturbing assumption I commonly see in education:
- What is the expected study/writing time in order to achieve an A in this course?
- What are the chances I totally ace this class if I actually do everything?
My concerns over the first question are rather simple, and they lead to my major concerns over the second. This student is under the impression that everyone studies the same amount of time for the same content, or that writing takes everyone roughly the same amount of time. First, I do not have access to those numbers—I don’t time my students while they study—but I also suspect these numbers vary tremendously from one student to the next, even among students who earn the same grade. Writer’s block is neither selective nor predictable. I cannot tell when it will strike or for how long. There’s no way to tell how many drafts a particular writing assignment will take. My student is opening himself up to frustration and disappointment by thinking that writing is that predictable.
But it’s how that notion of predictability carries into the second question that really concerns me. My student hopes to “ace” a writing class—which, from casual observations of teacher comments about student writing, simply doesn’t happen. I suppose that, for courses where assignments have straightforward solutions to find, such absolute success is possible. But for something as rhetorically sophisticated as writing, I’m not sure there’s any equivalent. Is it possible to “ace” even a single writing assignment? This blog post might get my readers to think about the expectations of assessment in writing classes, but I’m even more certain those readers will have thoughts on how it could have been more effective in that goal.
Let’s go with that as an assumption—that “perfect” cannot apply to writing. What does that mean for how we grade writing? How do we align or calibrate our scoring scales? When perfection is possible, it becomes the standard for the 100% mark with little debate or surprise. If the attribute of “perfect” cannot apply, then it cannot be the standard of measure. In writing courses, we must approach measurement standards from a different perspective, and we must do so deliberately. I’ll get back to this issue.
The standards of grading writing
As part of a discussion in the MOOC MOOC (a course about massive open online courses), I noticed the obvious fact that checking a submission from every student becomes prohibitively tedious as enrollment grows. The MOOC response to the problem is essentially to ignore it—if it can’t be checked, don’t check it. One of our readings was an article from Cathy Davidson, co-founder of the Humanities, Arts, Science, and Technology Advanced Collaboratory (HASTAC), titled “How To Crowdsource Grading.” The principle, at least from the title, is alluring: have students grade themselves and eliminate the burden on teachers. But I found a major problem in the details of the solution:
Grading itself will be by contract: Do all the work (and there is a lot of work), and you get an A. Don’t need an A? Don’t have time to do all the work? No problem. You can aim for and earn a B. There will be a chart. You do the assignment satisfactorily, you get the points. Add up the points, there’s your grade. Clearcut. No guesswork.
This is a problem. It is not, as the author purports, a solution.
By stating, “Do all the work…and you get an A,” we devalue the A grade, inflate student expectations, and—most importantly—make the goal of our courses completion, not quality. This sentiment is confirmed six sentences later: “You do the assignment satisfactorily, you get the points.” Grading, in this system, becomes an issue of yes/no choices, not of rating quality. It’s a done/undone standard that functions as a checklist, not an evaluation. (See my post on the functions of rubrics for more details.) Debbie Weaver, Coordinator of Composition at the University of Central Florida, summarized the problem this way: “If an A is completion, there’s no way to account for excellence.” If full credit is issued merely for completion, there is no way to earn more credit for completing a task better than expected. Nothing above “satisfactory” can exist. Indeed, on a completion-based assessment, the quality of the work, if it is considered at all, gets measured as “okay enough” or “not okay”.
By the time students arrive in college, they know how to write. They may not write well, and they may not write appropriately in academic situations, but they can write. Their writing has been “okay enough” to get them into college. If in college, their writing is held to the same standard, they aren’t being given the opportunity for growth. If we don’t expect more, we won’t get excellence.
Setting the bar for writing quality
If the students entering first-year composition (FYC) programs in college can all write, then the average performance is completion of all writing assignments. Completion is the basic expectation of an FYC course, and it is the average assumed achievement level of all incoming students. Let us call that “average” a grade of C. By moving completion from an A to a C, we suddenly create a wide range of meaningful feedback within our arbitrary and overly simplistic letter-based system. D means attempted but not successful; F means not attempted. B is reserved for good (above-average) work, and A is used for exceptional work.
My brief grading outline should not seem unusual or revelatory, but it surprises me how quickly we willingly set it aside when presented with a completion-based scale like the one espoused in Davidson’s post. The simplicity of completion-based scales appeals to our need for efficiency and speed, yet they completely dispense with our students’ need for evaluative feedback. In an effort to be transparent and fair in our grading, we have eliminated the assessment from our assessment methods; completion-based systems serve as an accounting or a tabulation, not an assessment. They do not identify what has been done well or what could be improved upon.
The implications of properly balanced grade systems, with a C as the central measure of completion or basic success, go beyond an individual classroom. A sound grading system can change how a department assesses itself, places students in classes, or sets standards of quality. The Department of Writing at Grand Valley State University integrates balanced grade scales into their approach to directed self-placement:
Placing students in a basic writing course to help them get an A or a B instead of a C in the regular course is patronizing to the students. If a C is not good enough, it should not be a C. If it is good enough, we should allow students to make do with it. Forcing C students to become B or A students is at best a way to prop up a potentially unnecessary developmental course or program, and at worst a way to wring out extra tuition dollars from already cash-strapped students. If students writing at a C level in your “regular” composition course are, by the standards of your college or university, poor writers, then you’ve got a grading problem—which may be quite different from a placement problem.
The problems are different, yes, but they are surely related. We owe our students the respect, credibility, and feedback that comes with proper assessment. We need to ensure that an A means “excellent”, rather than simply “finished”.