Making Excellence Visible: Clarifying Standards and Building Self-Assessment Skills
Blog Post 2 of 4 | Principles 1 & 2 of Nicol & Macfarlane-Dick (2006)
No matter how innovative or progressive we try to be in the classroom, when it comes to assessment, we often default back to conventional tasks and tools. Some studies (Buck et al., 2007; Ogan-Bekiroglu & Suzuk, 2014) have suggested that new teachers in particular may have a good theoretical grasp of formative assessment and assessment-for-learning practices, but revert to conventional assessment tools and evaluation in practice. The shift to online teaching gave many of us the impetus to experiment with different ways of assessing students, if for no other reason than our tried-and-true assessment practices were just not pandemic-practical.
In my last post, I gave an overview of Nicol and Macfarlane-Dick’s seven principles of effective feedback. In this post, I’ll focus on the first two of these principles and look at ways we can apply those principles in our own teaching.
Principle 1: Clarifying what constitutes good performance
The challenge
One of the courses I teach with the Université de Sherbrooke Master Teacher Program, called Thinking Through Text, begins with an exploration of what knowledge means in our respective disciplines. Consider, for example, the word “logic.” Does it mean the same thing in your discipline as it does in mine? Is it a formal proof, a Boolean structure, a persuasive statement, or argumentation? For Sadler (1989), this is “guild knowledge”: the tacit understanding within a discipline that we, as experts in our fields, seem to just inherently know.
The trouble is, the longer we’ve belonged to the guild, the harder it becomes to remember what it felt like to be on the outside of it. Heath and Heath (2007) called this the “curse of knowledge”—once we know something deeply, we lose access to what not knowing it feels like. We mark a student down for a lack of rigour, or logic, or originality, and we know exactly what we mean. Our students, more often than not, do not.
This is where Nicol and Macfarlane-Dick’s first principle becomes less a teaching strategy and more a kind of cognitive corrective. Making excellence visible means taking what lives in expert intuition—the guild knowledge we’ve accumulated over years of disciplinary immersion—and deliberately, carefully, bringing it to the surface.
How, then, do we make the invisible visible in assessment?
The seven principles roughly follow the chronology of an assessment task. Clarifying what constitutes good performance comes first because it is all about setting students up for success: we show them the gold standard not after they have already submitted, but before, so they know what to aim for.
You may be asking yourself how something we do before an assessment task can count as feedback. Clarifying our expectations and standards from the start means that our feedback is set up, too: if we are all clear on the meaning of “excellent,” then we as teachers know how to focus our feedback, and our students know how to interpret it. Think of it this way: even if you are new to ski jumping, it helps to know that judges are looking at distance, stance, and landing, and it helps even more to know what specific distance, what exact stance, and what kind of landing they’re expecting.
Here are some ways to stick the landing:
Strategy 1: Screencasts and video feedback
We often bemoan the amount of time our students spend looking at screens. We can, however, take advantage of the medium and use screencasting and short videos to support their learning and our assessment tasks. For instance, we can use screencasting tools such as Loom (www.loom.com) to make assignments and criteria more transparent and, at the same time, develop a “personal connection” (Waltemeyer & Cranmore, 2018, para. 2) with students, who see our faces and hear our voices. You can replicate the Loom approach with YuJa, Teams, or Zoom by sharing your screen in a recorded meeting.
For an upcoming assessment, screencasts become a resource that students can return to as they work through the task. I first used Loom during the pandemic to provide an overview video for each of my courses and a walkthrough of each major assignment; in one course of about 30 students, my overview video was viewed over 80 times in the semester, suggesting that students returned to it as they worked through the term.
Once you get comfortable with screencasting, you can use it post-submission as well, and walk individual students through your reading of their work. I do an audio version of this with Notability, which allows me to record my feedback and link the audio to the relevant point in the submission. But we’ll look at that phase in the next blog post!
Screencasting quick start
Tools: Loom (free tier), Screencast-O-Matic, Zoom recording, or Voice Memos for audio-only; Moodle’s PoodLL plugin allows for audio or video recording
Suggested length: pre-assessment, 5-7 minutes; individual feedback, 3-5 minutes per student (Guo et al., 2014, found that engagement drops off sharply in videos longer than about six minutes)
One tip: narrate what you’re looking at, not just what you think — “I’m reading this sentence and I’m noticing...”
Strategy 2: Assessment documentation
Assessment documentation—instructions, templates, rubrics, and checklists—provides an opportunity to engage students in the assessment process. It can be challenging for newer teachers to invite students to discuss, question, amend, or even co-create assessment documentation; however, being open to these engagements creates a sense of “authenticity, reciprocity, and inclusion” (Smith et al., 2021, p. 123) between students and teachers.
This approach to assessment can help students see assessment as another learning tool, rather than a verdict on their ability. Shifting from a focus on the top-down relationship of evaluator and performer to a more democratic, learning-focused view brings students into the conversation, and gives them the tools they need to make judgments about their own work (Yan & Carless, 2022, p. 1117). No matter how well we write our instructions, simply discussing an assignment with students before they leap in measurably improves their understanding and performance (Bloxham & Campbell, 2010).
We can’t just post the assignment and hand them a rubric and expect them to know what to do. Asking for their feedback, their input, and their concerns about the task and our criteria is a chance for us to hear and correct misunderstandings, as well as ensure that students know the expectations, the scope, and the purpose of each task. When students understand not only what we’re asking of them, but why, they’re more motivated and engaged.
Strategy 3: Rubrics — The essential tool
Perhaps the single most important pedagogical tool we have to guide students through assessments is the rubric. Most teachers—and students—have seen more than their fair share of rubrics, but there are good reasons that rubrics are everywhere. Stevens and Levi (2013) point out that rubrics save time, help us focus our feedback, show students what’s expected, and foster equity.
With the traditional analytic rubric (Excellent/Good/Needs Work), we can enhance understanding and motivation by bringing students into the discussion of what goes into the rubric (Struve, 2006; Yan & Carless, 2022). In my Technical Writing course, for instance, my evaluation of all assessments includes application of the principles of effective technical communication—and in the very first class, the students research, discuss, debate, and determine what those principles are.
But there’s more to rubrics than the standard five-by-five analytic table. While I started my teaching career using those detailed rubrics, I have experimented more recently with different approaches and have now developed my own version of the single-point rubric, which I will explore in more depth in a later post. Suffice it to say that there’s no hard-and-fast rubric template—find a format that works for you, make sure students know how to work with it, and keep refining it over time.
Three rubric types at a glance
Holistic rubric: one overall score — fast to grade, but less informative for students
Analytic rubric: criteria × levels grid — detailed, but can overwhelm and pre-describes failure
Single-point rubric (SPR): one standard column, space for feedback — explored fully in Post 4
Source: Gonzalez (2014).
Rubrics also serve as peer- or self-assessment tools: students can use the rubric as they work, and again to guide revision when they are ready to submit. If your rubric is weighted (i.e., some criteria are worth more than others), students can see where best to focus their revision effort, as the sketch below shows. Using the rubric grounds their self-assessment in your evaluation criteria and helps them focus on the relevant aspects of their own work.
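To make the arithmetic of weighting concrete, here is a minimal sketch in Python; the criteria, weights, and scores are all invented for illustration rather than drawn from any real course. It shows how a weighted rubric turns per-criterion scores into a total, and why one level of improvement pays off most on the most heavily weighted criterion that still has room to grow.

```python
# Illustrative sketch only: criteria, weights, and scores are invented.
# Weights sum to 1.0; each criterion is scored out of 5.
rubric = {
    "Clarity of argument":   {"weight": 0.40, "score": 3},
    "Use of evidence":       {"weight": 0.35, "score": 4},
    "Grammar and mechanics": {"weight": 0.25, "score": 5},
}

# Weighted total: the sum of weight * score across all criteria.
total = sum(c["weight"] * c["score"] for c in rubric.values())
print(f"Weighted total: {total:.2f} / 5")  # 0.40*3 + 0.35*4 + 0.25*5 = 3.85

# Where should revision effort go? Raising a criterion by one level
# adds exactly its weight to the total, so the heaviest criterion
# not yet at the top level offers the biggest payoff.
for name, c in rubric.items():
    if c["score"] < 5:
        print(f"+1 level on {name!r} adds {c['weight']:.2f} to the total")
```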
Principle 2: Facilitating self-assessment
Why teaching self-assessment matters
At all levels and across disciplines, self-assessment is a valuable skill to develop. Papanthymou and Darra (2019) found that self-assessment had comparable benefits for learners at different levels; in higher education, they noted improvements in academic performance and learning, enhanced self-regulated learning, and greater motivation.
But self-assessment must be more than just asking students to reread their work before submission. Poorly defined self-assessment, or self-assessment without subsequent revision, doesn’t appear to influence learning or improve performance (Andrade, 2019). Self-assessment is a skill that, like any other skill, needs practice and guidance. Building this skill begins by showing students how to “effectively seek, process, and use feedback from different sources” (Yan & Carless, 2022, p. 1116).
Over time, we want our students to develop autonomy; self-assessment is a sustainable skill that ultimately makes them less dependent on teacher feedback (Boud & Molloy, 2013). Just as training wheels exist to help a new rider become comfortable enough to take them off, our guidance for self-assessment aims at its own obsolescence: we show students how to do this so effectively that we make ourselves redundant.
Strategies for effective self-assessment
There are many ways to encourage and guide effective self-assessment, including:
having students keep self-assessment diaries (Yan et al., 2020);
directing them to set personal learning goals (Zimmerman, 2008);
including them in setting assessment criteria and creating rubrics;
breaking complex assessments into scaffolded stages with time for reflection and self-assessment between steps.
Adding technology and digital tools to the assessment toolbox gives us even more ways to support self-assessment. Models, templates, rubrics, and other parallel resources can be made available online during some or all of the assessment cycle, and the approaches listed above can all happen online as well. Strategies such as personal learning journals work especially well online: digital platforms, such as Moodle’s Journal activity or Microsoft OneNote, allow teachers and resource staff to interact with students individually through their journals.
In my own courses, I use the Moodle journal tool to have students write responses to a weekly prompt, and I write a short response to each one (really short—sometimes just a word or two, at most two sentences). Typically, I give them a choice of three prompts: two based on material covered in the week, and one “what other questions do you still have?” to give them a chance to check in. When major assessments are looming, I often make one of the prompts relevant to the task, or simply ask them, “what are you worried about?”
Tools such as e-portfolios have proven effective in supporting self-assessment and self-regulated learning (Chang et al., 2018). Developing and articulating individual learning goals helps students feel more engaged and invested in their assessment tasks; recording, reviewing, reflecting on, and revisiting these goals in an e-portfolio, online journal, or digital notebook does a great deal to support students’ learning.
Self-assessment journal prompts to try
“What are you most proud of in this submission?”
“What would you change if you had one more week?”
“Which criterion on the rubric do you feel you met most fully? Least fully?”
“What kind of feedback would be most helpful?”
“What question do you have for me after completing this?”
These prompts are adaptable across disciplines and immediately usable.
These principles work together
The first two principles of effective feedback are two sides of the same coin. Students can only self-assess if they have a clear idea of the standards. Those standards only become internalized when we ask students to apply them to their own work. Both principles ask us to be explicit about what we expect, and about how students can evaluate their own progress toward it. It’s work that comes before the assessment task, but it’s what makes the task, and our feedback on it, more meaningful.
Of course, even when students know what excellence looks like, and know how to assess their own work honestly, they still need to hear (and know how to apply) our feedback. That’s where the next principles come in—so stay tuned!
References
Andrade, H. L. (2019). A critical review of research on student self-assessment. Frontiers in Education, 4, Article 87. https://doi.org/10.3389/feduc.2019.00087
Bloxham, S., & Campbell, L. (2010). Generating dialogue in assessment feedback: Exploring the use of interactive cover sheets. Assessment & Evaluation in Higher Education, 35(3), 291-300. https://doi.org/10.1080/02602931003650045
Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698-712. https://doi.org/10.1080/02602938.2012.691462
Buck, G. A., Macintyre Latta, M. A., & Leslie-Pelecky, D. L. (2007). Learning how to make inquiry into electricity and magnetism discernible to middle level teachers. Journal of Science Teacher Education, 18(3), 377-397. https://doi.org/10.1007/s10972-007-9053-8
Chang, C.-C., Liang, C., Chou, P.-N., & Liao, Y.-M. (2018). Using e-portfolio for learning goal setting to facilitate self-regulated learning of high school students. Behaviour & Information Technology, 37(12), 1237-1251. https://doi.org/10.1080/0144929X.2018.1496275
Gonzalez, J. (2014). Know your terms: Holistic, analytic, and single-point rubrics. Cult of Pedagogy. https://www.cultofpedagogy.com/holistic-analytic-single-point-rubrics/
Guo, P. J., Kim, J., & Rubin, R. (2014). How video production affects student engagement: An empirical study of MOOC videos. Proceedings of the First ACM Conference on Learning @ Scale, 41-50. https://doi.org/10.1145/2556325.2566239
Heath, C., & Heath, D. (2007). Made to stick: Why some ideas survive and others die. Random House.
Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218. https://doi.org/10.1080/03075070600572090
Ogan-Bekiroglu, F., & Suzuk, E. (2014). Pre-service teachers’ assessment literacy and its implementation into practice. The Curriculum Journal, 25(3), 344-371. https://doi.org/10.1080/09585176.2014.899916
Papanthymou, A., & Darra, M. (2019). Student self-assessment in higher education and professional training: Conceptual considerations and definitions. European Journal of Education Studies, 6(3). https://oapub.org/edu/index.php/ejes/article/view/2495
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119-144. https://doi.org/10.1007/BF00117714
Smith, S., Akhyani, K., Axson, D., Arnautu, A., & Stanimirova, I. (2021). Learning together: A case study of a partnership to co-create assessment criteria. International Journal for Students as Partners, 5(2), 123-133. https://doi.org/10.15173/ijsap.v5i2.4647
Stevens, D. D., & Levi, A. J. (2013). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning. Stylus Publishing.
Struve, M. E. (2006). “Why do you want to co-create rubrics?” Relationship between co-created rubrics, student motivation, self-efficacy, and achievement [Master’s thesis, California State University San Marcos]. ScholarWorks. https://scholarworks.calstate.edu/downloads/6h440s858?locale=en
Waltemeyer, S., & Cranmore, J. (2018). Screencasting technology to increase engagement in online higher education courses. eLearn, 2018(12), Article 3302261.3236693. https://doi.org/10.1145/3302261.3236693
Yan, Z., & Carless, D. (2022). Self-assessment is about more than self: The enabling role of feedback literacy. Assessment & Evaluation in Higher Education, 47(7), 1116-1128. https://doi.org/10.1080/02602938.2021.2001431
Yan, Z., Chiu, M. M., & Ko, P. Y. (2020). Effects of self-assessment diaries on academic achievement, self-regulation, and motivation. Assessment in Education: Principles, Policy & Practice, 27(5), 562-583. https://doi.org/10.1080/0969594X.2020.1827221
Zimmerman, B. J. (2008). Goal setting: A key proactive source of academic self-regulation. In D. H. Schunk & B. J. Zimmerman (Eds.), Motivation and self-regulated learning: Theory, research, and applications (pp. 267-295). Lawrence Erlbaum Associates. https://doi.org/10.4324/9780203831076