Here, I’ll expand on and adapt my response to Reilly and Atkins’ “Rewarding Risk” from Digital Writing Assessment and Evaluation. (See the original post here.) The chapter builds on previous work on multimodal assessment but adds a focus on “risk” and risk-taking for students and “aspirational” assessment for instructors. The authors use “deliberate practice” as a model for creating open-ended assignment criteria that encourage individual and collaborative exploration. The model’s goals are to encourage exploration and to align assessment practices with pedagogical practices.
Reilly and Atkins adapt Lee Odell and Susan Katz’s assertion that multimodal assessment should be “generalizable and generative.” To this, the authors add that multimodal assessment should also be “aspirational, prompting students to move past the skills they have already learned to bravely take on unfamiliar tasks and work with new tools and applications that may cause them to re-vision their composing practices.” So they’re not so much challenging Odell and Katz as adding to them, though the idea of generalizability seems to push back pretty hard against their assertion that assessments should be designed for individual assignments. They also call for an addition to Michael Neal’s four criteria for responding to “hypermedia,” which they argue are useful and productive but do not address “how to encourage risk-taking and experimentation in conjunction with or through assessment processes.”
I’m intrigued by the way the authors propose a formative, rather than summative, approach to assessment. This means that composition students are taught about assessment and included in the creation of assessment criteria at the outset of an assignment. The thought here is that students will approach their assignments and take risks knowing from the beginning what assessment model will be used, and knowing that they themselves had a hand in shaping it.
I find Reilly’s guidelines for practicing aspirational assessment particularly interesting and, perhaps, helpful for future use in my own classroom:
• Allow time for play, exploration, and gains in proficiency prior to the discussion of assessment for a particular project.
• Look at (preferably externally identified) examples of excellent projects.
• Develop criteria in groups after reviewing the project description, client needs (if relevant), and the course student learning outcomes pertinent to the project.
• Allow student criteria to stand even if you, as the instructor, would have chosen other items on which to focus.
• Make room for peer review and revision time following the development of the assessment criteria.
I’m especially intrigued by the idea of “play, exploration, and gains in proficiency.” On this point, I’ve always approached multimodal assignments a little backwards, by my own admission. I’ve resisted providing students with examples of exemplary work for fear that it might unnecessarily narrow the scope of what they could do. But I’ve also found that this leads to confusion about expectations. I also like the idea of allowing criteria to stand even if I wouldn’t have chosen to focus on them myself. I’ve always been a fan of having students help me create rubrics for assignments, but I’ve also generally steered the discussion so that the rubrics turn out more or less the way I want them to. Multimodal composition is not easy to assess because it is almost never apples to apples; instead it’s apples to oranges to kiwis to cars to shoes to houses. There’s nothing generalizable about it, but if multimodal assessment is contextually driven and built on these ideas of rewarding risk and exploration, I think that assessing multimodal assignments at the classroom level is quite doable.