Peer feedback has proven itself to be an effective educational approach to stimulate collaborative and active learning in students. In practice, however, the feedback produced often remains superficial and lacks critical depth. That's why teachers from Wageningen University & Research developed an innovative approach called Participation Grading to increase the quality of peer feedback, raise the level of participation, and offer a safe learning environment. In this article, we explore this approach: its development, implementation, and impact.
No one can deny the benefits of peer feedback in improving students' final work while promoting the transfer of lifelong skills. However, there are still plenty of hurdles to overcome. Implementation often faces two main issues: 1) low feedback quality, where peer feedback comments are superficial and not critical enough, and 2) free-riding, where unmotivated students contribute less than their teammates.
A common solution is to grade peer feedback on quantitative criteria alone (the timely delivery of feedback and the amount of feedback delivered). This solution is often criticized for ignoring the depth and quality of the feedback itself.
Instructors from Wageningen University & Research came up with a solution to increase students’ motivation to engage in the peer review process, while enhancing the quality of the feedback produced. Let’s take a look at what Participation Grading is and how it works.
Participation Grading is grounded in the principle of "Virtual Action Learning" (VAL), which holds that students demonstrate their competencies by sharing evidence of their learning activities.
Therefore, the approach involves grading only each student's best contribution to a peer feedback assignment, as selected by the student. These 'best contributions' function as evidence of a student's competence as a reviewer.
Participation Grading is believed to:
Participation Grading can be implemented without a specific learning platform. However, in order to streamline both the selection of the best contributions and the grading process, Wageningen University and FeedbackFruits joined forces to develop a platform for Participation Grading, which is now available as a beta feature in the Group Member Evaluation tool.
A complete process of Participation Grading can be summarized as follows:
Participation Grading was adopted in four BA and MA courses at Wageningen University & Research. Each of these courses involved a Peer Review and/or Online Discussion assignment to which Participation Grading was added in either a cross-over or non-equivalent group design. Details of the course architecture can be found in the image above.
At this point, you might wonder whether the approach was successful and what its effects were. From interviews with instructors at Wageningen University, we produced a use case documenting the results and impact of Participation Grading. Here is that story.
In her bachelor’s course ‘Food Quality Design', Dr. Cora Busstra wanted to help students develop problem-solving and critical thinking skills alongside subject-specific knowledge. She set up a group assignment where students worked in groups of four to co-write an in-depth report and then provided peer comments on other groups' reports. To enhance students' participation level and feedback quality, Dr. Busstra adopted the FeedbackFruits Peer Review tool with its beta feature, Participation Grading.
Within the Peer Review environment, students were able to submit their group report, individually give feedback on the work of another group, and select their best feedback comments for the instructor to grade. This guaranteed that errors or mistakes did not influence a student's course grade as long as they were not selected as a best contribution, giving students a chance to iterate and improve their feedback-giving skills without fear of being marked down. The implementation of Participation Grading proved to be a success, as acknowledged by Dr. Busstra:
"Students felt they had improved their understanding of the topic by giving feedback using Best Contribution Grading [Participation Grading]."
By enabling Participation Grading, the instructor was able to create a safe learning environment where students could learn from their mistakes and freely raise questions. Overall, this set both students and teachers up for an effective feedback process, one where self-doubt and insecurity were minimised, and critical thinking and deep reflection were made easier.
You can read the full use case detailing the set-up and evaluation of Dr. Busstra's course here.
D. Nicol, A. Thomson and C. Breslin, "Rethinking feedback practices in higher education: a peer review perspective," Assessment & Evaluation in Higher Education, vol. 39, no. 1, pp. 102–122, 2014.
J.J.M. Baeten, "Virtual Action Learning: an educational concept on Collaborative Creation with ICT." Amsterdam: KIT Publishers, 2011.
J.J.M. Baeten, "The Power of Peer Feedback. Research on the Learning Process within Virtual Action Learning." Delft: Eburon Academic.
M. C. Busstra, F. K. Garcia, K. A. Hettinga, L. Huijgen, M. C. Gresnigt and B. Hintemann, "Improving peer review quality by grading the best contribution of each student: educational principle and evaluation design," 2019 18th International Conference on Information Technology Based Higher Education and Training (ITHET), 2019, pp. 1–4, doi: 10.1109/ITHET46829.2019.8937372.