Instructor or Peer assessment: Why can’t we have both when assessing coursework?
Because of the large number of participants in MOOCs, peer assessment, rather than instructor assessment, is usually considered the only viable option for assessing work of a more substantive nature, such as a digital artefact or an essay. This led us to ask: are peer assessments perceived to be as useful as instructor assessments?
Before we examine this question in more detail, it is important to note that peer assessment in MOOCs is not used purely for reasons of efficiency; admittedly, instructors cannot review the work of hundreds of participants. But the active process of assessing and giving feedback to someone else is also an important learning process in itself!
For example, by examining other educators’ work, you might get concrete ideas on how to implement course concepts in your own classroom. Furthermore, reviewing your peers’ work requires you to actively engage with and make use of rubrics, thereby developing a deeper understanding of the course concepts being assessed. Finally, course participants giving each other feedback can support a sense of community, which is especially important when the course takes place solely online.
Perceptions of peer assessment
Nevertheless, understanding participants’ perceptions of peer assessment, in contrast to instructor assessment, is important: these perceptions have a significant impact on the value participants place on our MOOCs more generally. To better understand these perceptions, we wanted to see how peer assessments differed from assessments by so-called experts — experienced teachers and pedagogical experts who might act as course instructors.
As part of the TeachUP project, a series of courses was organised on four different topics: formative assessment, personalised learning, collaborative learning and creativity. To understand and compare (perceptions of) peer and instructor assessments, 106 learning scenarios created by participants in these four courses were chosen at random. The selected learning scenarios were assessed by the experts using the same rubric that participants had used for the peer assessments.
A comparison of expert and peer assessment scores showed that peer scores were slightly higher: out of four, peer assessors gave an average score of 3.6 while expert assessors gave an average of 3.2. While this difference is notable, it is not substantial, suggesting that scores provided by peers offer a reasonably accurate reflection of the quality of the work assessed. That experts gave lower scores is not so surprising if you consider that peers often saw the process as being as much about supporting and motivating their colleagues as about assessment.
While the quantitative scores given by peers were slightly higher, there was little difference between the peers’ and the experts’ qualitative feedback. In line with the scores, peer feedback was slightly more positive in tone. Expert assessments, however, included more actionable items that educators could use to improve their learning scenarios.
We now have an idea of the pros and cons of both expert and peer assessment. But what are we doing now to improve the (peer) assessments in our MOOCs? Find out in next week’s blog!