For Thursday’s installment of MOOC MOOC (see this post for clarification), we were to explore assessment and learning outcomes in a massive course environment. Because I’m low man on the institutional totem pole (I’m a grad-student teaching associate), I’m not in a position to suggest—or even run—a MOOC. Instead of considering how we could better assess in MOOCs, I chose to think about how I could improve my own face-to-face classes with the techniques I’ve seen and read about that work for the MOOC environment.
I concluded that our traditional system puts students at a disadvantage, and that semantics could be holding us back from improving traditional courses. In this post, I’ll explore these assertions: Assignments are a matter of translation, and grading is a matter of perspective. Both work against our students, not in their favor. If we stopped calling it “grading”, we could hand much of the process over to our students, relieving grading stress for teachers, improving students’ evaluative skills, making classes more collaborative and transparent, and ultimately reducing the translation/perspective roadblocks.
The Nature of Assignments
When teachers give assignments, they start with an image in their minds of what they want students to accomplish. The initial image can be in the form of a specific objective, a specific kind of thinking, or even a specific finished product. That image also includes some form of quality standards, wherein the teacher identifies, perhaps subconsciously, what a good example of that anticipated outcome would be, do, or look like. From this initial image, teachers create an assignment sheet: a set of instructions, guidelines, or discussions that clarify and define the teacher’s expectations in a way that helps guide students toward successful completion of the task.
Anyone who has been present in the classroom the day before an assignment is due can attest to just how poorly assignment sheets tend to work. Students are often full of questions, uncertainties, and concerns that run the spectrum from insightful to myopic. Students get frustrated because they believe they have not been told what the instructor wants; instructors get frustrated because they believe students have neither paid attention to the assignment sheet nor taken ownership of their own work habits and ethic. Students panic; teachers grieve; and it is in this emotional arena that the assignment is completed and subsequently assessed.
The Nature of Grading
To the student, an assignment gets graded by disappearing into a black box, wherein the teacher consults instruments about as refined as a Magic 8 Ball and produces a final grade/score based on little more than random chance. Occasionally, comments are attached to the returned work; however, these comments are not reliably legible, interpretable, or helpful.
But for the teacher, grading is a protracted process of reading, considering, deliberating, evaluating, commenting, and labeling. It’s tedious and painful. We try to be fair and helpful. We sometimes get frustrated that students don’t produce work that matches the expectations we had in mind when creating our assignments. After all the time we spend in class working on the skills that go into the assignment, we often wonder at how blatantly the target gets missed.
One major difference between the teachers’ scenario here and the students’ above is that of perspective. Students see only the work that they create, and their work exists as an isolated instance in the midst of an intellectual vacuum. If they are lucky, students see examples of what’s expected before they make their own attempt. Sometimes, they get the chance for peer review, during which they see one or two other examples. But students don’t have the vast perspective of the teacher, which includes not only the work of every student in the class, but also the image of the outcome that prompted the assignment to begin with.
The FYC Spin
Regular readers of my blog have been expecting this: I’d like to look at the situation from the angle of first-year composition (FYC). Traditional composition courses arguably entail a greater-than-normal workload of grading, given the volume of essays and other writing students submit for these classes. FYC teachers stand to gain an awful lot of extra time if they can incorporate the grading process into the operation of the course, rather than leaving it as teacher homework.
Current discussions about composition curriculum point toward collaborative writing. Candace Spigelman did fascinating studies of student perspectives on peer workshops, and Andrea Lunsford and Lisa Ede frequently write about the benefits and implications of collaborative authorship. When the work of these and other encouraging authors combines with a broader view of intertextuality, we have an obligation to help our students learn to write collaboratively. Greater adeptness with balancing and integrating multiple sources becomes more important as the sources of information become more available, and more overwhelming. (Many participants in the MOOC MOOC have commented on the difficulty of keeping up with the volume of information being produced by those working in the course, so clearly this concern is only magnified in larger open classrooms.)
Additionally, the work of Amy Devitt, Anis Bawarshi, and Mary Jo Reiff positions critical genre awareness as a threshold concept in composition studies. I can’t do their theories justice in this brief discussion. But to start, the three argue that composition instructors need to teach students to better identify and navigate the changing genres they encounter in each writing situation. In order to identify a new genre, students must gather multiple examples from various sources and compare them, looking for traits/characteristics/trends. This is a problem in many classroom writing situations, as students are expected to take an assignment sheet and create from it a one-off document that essentially serves as a solitary example of a genre the student has never seen. Because the instructor is familiar with the genre (and, in fact, defines its characteristics), the instructor is aware of exceptions to those genre characteristics. Without examples to draw from, students are stuck guessing how their documents are expected to function. Again, we are left with a problem of perspective.
The “What If…” Factor
What if we could give our students more of the perspective that teachers bring to each assignment? Would it be helpful to them? Could it relieve some of the burden of grading while simultaneously empowering students and encouraging learning?
Without even diving into full-scale collaborative composition, face-to-face classes could start with more collaborative evaluation. Instead of telling students to submit their writing to a black box and hope for useful feedback from the instructor, why not have students submit their work to one another? This would create a collection of genre samples that students could use to determine appropriate characteristics for how the documents function; they could detect outliers and be tasked with identifying what, if anything, about any nonconforming documents needed to be changed. By articulating the rhetorical characteristics of one another’s work, students would be applying what they learn in their composition class to an evaluation of their peers’ writing.
The last time I taught first-semester composition, my students told me the greatest benefit they received from peer review was not the feedback they got on their own work but rather the ability to see what other students had done. In other words, they appreciated the perspective. If we gave that perspective to our students on a larger scale or more consistent basis, they would feel more comfortable with their writing while also learning how to critique and help improve drafts in progress.
But let’s take this a step further. Instead of stopping with peer review and revision, what could students do to more formally or officially assess one another? One essay-evaluation strategy I have heard others use is to create piles of papers based on general quality, perhaps by letter grade, before adding more specific scores/feedback to each. Students should be able to do this general sorting process without much trouble. If we give students a small collection of documents, say 5-6 of them, students should be able to rank them in order of quality; then, the trick would be to have students justify and explain their ranking with specific examples from the text, using concepts from their learning. Or if we give students larger collections, such as the entire class’s worth of documents, students should be able to group papers into large “piles” by generic category, such as poor, acceptable, good, and outstanding. Again, though, students would need to justify their categorizations using the concepts under discussion.
Could this sort of crowd-sourced evaluation be used for official grades? This is where my certainty and comfort level begin to break down. Given FERPA guidelines and general privacy considerations, how much can we allow other students to take control of things that ultimately go in our grade book? How much can we abdicate the responsibility for evaluation?