
Reduce free-riding and low-quality feedback with Participation Grading (Feedback series)

Nhi Nguyen
July 7, 2021

Peer feedback has proven to be an effective educational approach for stimulating collaborative and active learning. In practice, however, the feedback produced often remains superficial and lacks critical depth. That's why teachers from Wageningen University & Research developed an approach called Participation Grading - to increase the quality of peer feedback, raise the level of participation, and offer a safe learning environment. In this article, we will explore this approach: its development, implementation, and impact.

What's wrong with peer feedback?

No one can deny the benefits of peer feedback for improving students' final work while promoting transferable, lifelong skills. However, there are still plenty of hurdles to overcome. Implementation often faces two main issues: 1) low feedback quality - peer feedback comments are superficial and insufficiently critical - and 2) free-riding - unmotivated students contribute less than their teammates.

A common solution is to grade peer feedback on quantitative aspects alone (the timely delivery of feedback and the amount of feedback delivered). This solution is often criticized for lacking depth and ignoring the qualitative aspect [4].

Instructors from Wageningen University & Research came up with a solution to increase students’ motivation to engage in the peer review process, while enhancing the quality of the feedback produced. Let’s take a look at what Participation Grading is and how it works.

What is Participation Grading?

Participation Grading is grounded in the principle of “Virtual Action Learning” (VAL), which emphasizes the demonstration of students’ competencies via delivering information about their learning activities [2] [3].

Therefore, the approach involves grading only the best contributions to a peer feedback assignment, as selected by the students themselves. These ‘best contributions’ function as evidence of a student’s competence as a reviewer.

Why is Participation Grading effective?

Participation Grading is believed to:

  • Enhance the quality of peer feedback and the level of participation, as it requires students to pay more attention to producing good feedback for their peers,
  • Offer a safe learning environment: because students select their best contributions themselves, their errors or mistakes do not influence their grade,
  • Deliver a scalable teaching method which can be applied in different learning contexts,
  • Save time for teachers, since only the selected best contributions need to be graded [4].

How to exercise Participation Grading?

Participation Grading can be implemented without a specific learning platform. However, to optimize both the selection of the best contributions and the grading process, Wageningen University and FeedbackFruits partnered to develop a platform for Participation Grading, which is now available as a beta feature in the Group Member Evaluation tool.

A complete process of Participation Grading can be summarized as follows:

  • Assignment submission: Students complete and hand in the assignment, which can be in written, graphical, audio, video, or PowerPoint format.
  • Peer review: Students provide peer feedback for their peers’ work based on a teacher-designed rubric.
  • Best contributions selection: From all the peer comments they produced, students select the ones they consider their best.
  • Peer feedback processing: Students study the received reviews and respond to the reviewers’ comments (if required by teachers). Teachers provide feedback on both the original assignment and the peer feedback.
  • Best contributions grading (teacher): Teachers grade the selected best contributions within the peer review context.
  • Reflection: Students write reflections based on received feedback [4].
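The core grading rule of the steps above - that only a student's self-selected best contributions count toward their grade - can be sketched in a few lines of code. This is a minimal illustration, not the FeedbackFruits implementation; the class and function names, and the idea of averaging rubric scores over the selected comments, are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackComment:
    comment_id: str
    text: str

@dataclass
class StudentPortfolio:
    student_id: str
    comments: list = field(default_factory=list)      # all peer feedback this student wrote
    selected_ids: list = field(default_factory=list)  # the "best contributions" the student nominated

def grade_participation(portfolio, grade_comment):
    """Grade only the student's self-selected best contributions.

    `grade_comment` is a teacher-supplied rubric function mapping a
    FeedbackComment to a score (a hypothetical stand-in for the rubric);
    comments the student did not select never affect the grade.
    """
    selected = [c for c in portfolio.comments if c.comment_id in portfolio.selected_ids]
    if not selected:
        raise ValueError("student must select at least one best contribution")
    scores = [grade_comment(c) for c in selected]
    return sum(scores) / len(scores)
```

Because unselected comments are filtered out before any rubric score is computed, a weak early comment cannot lower the grade - which is exactly the "safe learning environment" property the approach aims for.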

Participation Grading was adopted in four BA and MA courses at Wageningen University & Research. Each course involved a Peer Review and/or Online Discussion assignment to which Participation Grading was added in either a cross-over or nonequivalent group design [4]. Details of the course architecture can be found in the image above.

Measuring the impact of Participation Grading: Use case

At this point, you might wonder whether the approach was successful and what its effects were. From interviews with instructors at Wageningen University, we produced a use case documenting these results and the impact of Participation Grading.

In her bachelor’s course ‘Food Quality Design', Dr. Cora Busstra wanted to help students develop problem-solving and critical thinking skills alongside subject-specific knowledge. She set up a group assignment where students worked in groups of four to co-write an in-depth report and then provided peer comments on other groups' writings. To enhance students' participation level and feedback quality, Dr. Busstra adopted the FeedbackFruits Peer Review tool with its beta function, Participation Grading.

Within the Peer Review environment, students were able to submit their group report, individually give feedback on the work of another group, and select their best feedback comments for the instructor to grade. This guaranteed that errors or mistakes did not influence students' course grades, as long as they did not select those comments as their best contributions. Students thus had a chance to iterate on and improve their feedback-giving skills without fear of being marked down. The implementation of Participation Grading proved to be a success, as acknowledged by Dr. Busstra:

"Students felt they had improved their understanding of the topic by giving feedback using Best Contribution Grading [Participation Grading]."

By enabling Participation Grading, the instructor was able to create a safe learning environment for students where they could learn from their mistakes as well as freely raise questions. Overall, this set both students and teachers up for an effective feedback process, one where self-doubts and insecurities were minimised, and critical thinking and deep reflection were made easier.

You can read the full use case detailing the set-up and evaluation of Dr. Busstra's course here.

References

[1] D. Nicol, A. Thomson and C. Breslin, “Rethinking feedback practices in higher education: a peer review perspective,” Assessment & Evaluation in Higher Education, vol. 39, no. 1, pp. 102–122, 2014.

[2] J.J.M. Baeten, “Virtual Action Learning: an educational concept on Collaborative Creation with ICT.” Amsterdam: KIT Publishers, 2011.

[3] J.J.M. Baeten, “The Power of Peer Feedback. Research on the Learning Process within Virtual Action Learning.” Delft: Eburon Academic Publishers, 2016.

[4] M. C. Busstra, F. K. Garcia, K. A. Hettinga, L. Huijgen, M. C. Gresnigt and B. Hintemann, "Improving peer review quality by grading the best contribution of each student: educational principle and evaluation design," 2019 18th International Conference on Information Technology Based Higher Education and Training (ITHET), 2019, pp. 1-4, doi: 10.1109/ITHET46829.2019.8937372.
