Chapter 8: Evaluating Student Work
Writing Rubrics
The University of Connecticut maintains a robust set of documents about assessment, covering not only the assessment of students but also the design and assessment of programs, which may be of use if you are designing a DH certificate, minor, or program. Education Services Australia hosts a very concise, to-the-point introduction to rubric creation.
If you are ready to make your own rubric but shudder at fussing with the table functions on your word processor, there are many free, automated tools that will help you quickly generate a beautiful-looking rubric. iRubric, possibly the most popular, includes options for filling out the rubric (i.e., grading) on your device, obviating the need to print out rubrics and ink them in individually. You can browse a set of premade rubrics, as well as save rubrics you have made so that you can use or revise them later on. If you are working with a team, training graduate teaching assistants, or wishing to collaborate remotely with another digital pedagogue, the Faculty Groups function can facilitate rubric sharing and discussion. If you are not quite ready to produce your own rubric but have exhausted the resources in the paragraphs above, this resource at Annenberg Learner provides clear, step-by-step prompts that will teach you how to write rubrics at the same time you are creating one.
For rubric models, don't forget that we have provided sample rubrics for our six assignment sheets:
- Archive Review: Assignment Sheet with Matching Rubric
- Digital Life-Writing: Assignment Sheet with Matching Rubric
- Digital Mapping: Assignment Sheet with Matching Rubric
- Mediated Text: Assignment Sheet with Matching Rubric
- Style Lab Report: Assignment Sheet with Matching Rubric
- Digital Edition: Assignment Sheet with Matching Rubric
Grading in the DH Community
Digital humanities assignments, as you might expect, often present specific challenges for rubric writing. EdTechTeacher maintains a carefully organized link roundup that collects guides and sample rubrics for grading technology-centered assignments, such as blogs, wikis, podcasts, websites, design projects, and digital portfolios. To sample some of the actual rubrics used by teachers in the digital humanities, try Carrie Schroeder’s general final grade rubric for her cross-listed Introduction to Digital Humanities course, or Jessica Pressman’s very precise final project rubric for her course in Digital Literature. If you are opting for the “point” system for rubrics mentioned in our chapter (each assignment is awarded 0-4 points per grading criterion), see this five-point rubric for blog posts in Stockton’s GAH 1095: Introduction to Digital Humanities.
The task of grading blogs has by itself spurred conversations in the DH community. In our chapter, we mention Jeff McClurken and Julie Meloni’s ProfHacker post, “How Are You Going to Grade This?” The post acknowledges that blogs can serve many purposes -- as a journal to record reactions to course readings, as a medium for formal mini-essays, as a platform for class discussion -- and that this may influence your grading policies. Furthermore, the size of the course will influence the type and amount of grading you do on students’ blog posts. Mark Sample’s 2009 post about grading blogs was an early intervention that drew on his extensive experience with student blogging and explained how he adopted the 0-4 point grading rubric out of a desire for transparency. He later shared a shortened version of that post on ProfHacker.
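To make the arithmetic of the 0-4 “point” rubric mentioned above concrete, here is a minimal sketch of how such a score might be totaled. The criterion names and the percentage conversion are our own invented illustration, not taken from any of the rubrics linked here.

```python
# A hypothetical 0-4 point rubric: each grading criterion earns 0-4 points,
# and the total is reported as a percentage of the points available.
# The criterion names below are invented for illustration.

CRITERIA = ["argument", "use of evidence", "design", "mechanics"]
MAX_POINTS_PER_CRITERION = 4

def score_post(points: dict) -> float:
    """Return a blog post's score as a percentage of available points."""
    total = sum(points[criterion] for criterion in CRITERIA)
    available = MAX_POINTS_PER_CRITERION * len(CRITERIA)
    return 100 * total / available

# A post earning 3, 4, 2, and 4 points totals 13 of 16 points, or 81.25%.
print(score_post({"argument": 3, "use of evidence": 4,
                  "design": 2, "mechanics": 4}))
```

One appeal of this scheme, as Sample suggests, is transparency: students can see exactly how each criterion contributed to the final number.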
Beyond the specific case of grading blogs, we suggest that you adapt any of the professional standards for the evaluation of digital work in your field. The Carolina Digital Humanities Initiative maintains a list of various standards that are used for many purposes, including peer review, tenure/promotion, and grant funding. These standards will need a bit of revision, of course, to be made appropriate for grading, but what better way to evaluate student work in DH than to apply our own professional standards? The requirements for meeting each criterion of judgment may be less stringent, but they should be similar in the kinds of intellectual tasks they demand and the scholarly values they reflect. Just as you might respond to graduate student seminar papers by putting yourself into the position of a peer reviewer for an academic journal, you could put yourself in the position of a peer reviewer for a DH grant or journal (see this editorial from DHCommons to enter the conversation about project review) or for a disciplinary DH consortium (such as NINES).
Experimental Grading Methods
Whereas some people can really wax enthusiastic over the technical aspects of grading, others are more excited when the grading is over for the day (or semester!). DHers are no exception here, and of course, in our chapter we advise that you pay the right amount of attention to each piece of student work -- but then move on without guilt. Similarly, Hook & Eye’s Aimée Morrison has written some great advice about how to grade faster, including advice about using averages, assigning tasks that are inherently fast to check, and designing tests to minimize grading time.
Beyond trimming the time you take to grade, you could also delve into some experimental methods of grading to kickstart your interest in assessment. Peer-calibrated grading and contract grading are two popular experiments in the DH community. Cathy Davidson has been a leader in the spread of contract grading. In a blog post on HASTAC, Davidson explains that she was motivated to find “alternative credentialing mechanisms” that displace some of the burden of assessment from the teacher while fostering a student-centered classroom. Another post, also on HASTAC, provides concrete details about how you can adapt her method: contract grading plus peer review. There, you will also find an interesting explanation to share with your students; it details how, with contract grading, students decide what grade they want, then sign a contract detailing exactly how much work, and of what quality, is needed to fulfill the contract. Writing to her students, Davidson explains, “The advantage of contract grading to the professor is no whining, no special pleading…. If you complete the work you contracted for, you get the grade. Done. I respect the student who only needs a C, who has other obligations that preclude doing all of the requirements to earn an A in the course, and who contracts for the C and carries out the contract perfectly.”
Alex Reid has also used contract grading in his ENG 399: Journalism; his course wiki describes the number of assignments required for each grade, as well as the consequences if students do not fulfill their contracts. In essence, the more assignments students complete (satisfactorily, of course), the higher the grade. Adeline Koh, by contrast, has constructed her contract for AMST 5011 the opposite way: she first defines the requirements for an A grade, then works downward to the B and C grades, specifying the absences, errors, or diminished quality of work that would lower a student’s grade to a B or C. The other typical grades, D and F, are reserved for students who have not fulfilled the terms of their chosen contracts. If you are still unsure whether contract grading is the right choice for you, Billie Hara explains the advantages and disadvantages of contract grading in her post on ProfHacker.
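The logic of a Reid-style contract -- more satisfactory assignments, higher grade -- can be sketched in a few lines. The thresholds below are invented for illustration and do not come from any actual syllabus.

```python
# A hedged sketch of a contract-grading scheme in which the grade depends
# on how many assignments a student completes satisfactorily.
# The thresholds are hypothetical, not drawn from Reid's or Koh's contracts.

CONTRACT = [          # (minimum satisfactory assignments, grade)
    (10, "A"),
    (8, "B"),
    (6, "C"),
]

def contracted_grade(satisfactory: int) -> str:
    """Return the highest grade whose threshold the student has met."""
    for minimum, grade in CONTRACT:
        if satisfactory >= minimum:
            return grade
    return "F"  # the terms of the contract were not fulfilled

print(contracted_grade(9))  # nine satisfactory assignments earns a B here
```

As Davidson notes, the appeal of such a scheme is its predictability: the mapping from work completed to grade is fixed in advance, so there is "no whining, no special pleading."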
A third post on HASTAC by Davidson fills in details about the place of peer review in her classroom and lists the pedagogical tools she uses (such as badges and GoogleDocs). Some instructors take peer grading even further, placing more grading tasks on other students. Much of the urge to develop peer-calibrated grading comes from MOOCs and from class sizes that have ballooned past instructors’ ability to keep up with assessment. This short paper from the DH 2014 conference in Lausanne investigates peer grading in MOOCs, for example. Many are skeptical about relying solely on peer grading in such courses; Jonathan Rees is vocal in questioning the spread of peer grading in posts like “Can peer grading actually work?” and “Peer grading (still) can’t work.” Most of us, however, are probably not teaching MOOCs but simply would like to ease some of our grading burden or bring our students into the assessment process; a judicious application of Davidson’s techniques is not the target of this skepticism.
Lee Bessette has also been experimenting with student-driven classrooms. In this post on Inside Higher Ed, Bessette explains her “peer-driven classes,” which allow students to (for example) determine readings and the formats of some graded work, all in the service of encouraging collaborative learning. These shifts in classroom management complement non-traditional grading policies very well, as they reflect a consistent perspective on the relationship between authority, pedagogy, and motivation. In a series of posts on his personal blog, John Victor Anderson reflects on his similar attempts to create a “collaborative classroom” while teaching technical communication. The introductory post explains his rationale for this experiment, while part 2 covers group work, part 3 covers grading, and the final part describes a sample assignment. Parts 2 and 3 are especially useful, particularly if you are wondering about how other teachers grade group assignments; part 2 is more about helping students make the most of group work, while part 3 is more about his “workflow,” or technological solutions, for grading group work. Finally, Andrea Rehn has created a very helpful Collaboration Self-Assessment Tool, in which students essentially evaluate the quality of their own work done in group settings.