In this semester-long project course on data science and artificial intelligence, students worked in groups on research projects spanning a range of subjects within these fields. Following a problem-based learning approach, the groups chose projects from topics provided by third-party clients, and at the end of the course they delivered a report and a final presentation. The course took place in the third year of the degree program and was split into three phases across the semester.
The instructor chose to incorporate Group Member Evaluation to address the problem of free-riding in group work, give students a platform for giving each other feedback, and give them the chance to improve based on the feedback they received. Since group project management, planning, and evaluation were a strong focus throughout the course, it was important to provide students with a space to complete these feedback tasks while allowing the instructor to maintain oversight of the process.
• Students can practice giving and receiving constructive feedback, collaborating and communicating in a group setting, and reflecting on the group process.
Other learning objectives are part of the program but are not related to the peer feedback component.
This course consisted of three phases, the first of which covered setting up the project. At the end of this first phase, lasting seven weeks, groups handed in their project plan and gave a presentation to both the clients who provided the topics and the internal examiners. The second phase, also lasting seven weeks, followed a similar structure. The third and final phase, lasting three weeks, consisted of a final presentation and evaluation in which students were asked to defend their research directions and the choices they had made. Group Member Evaluation was used twice, in the second and third phases, and a final feedback discussion took place afterwards, in which students reflected on what they had learned from the feedback. Group consultations also took place, providing a chance to discuss feedback on group processes and performance in person.

The feedback given within the tool was based on three questions: what did the reviewee do well; where could they improve; and what did you especially appreciate in this person? Additional comments were allowed but not mandatory. Reviewers could see who they were reviewing, but the ratings were anonymous to the recipient.
Learning activities, according to Bloom’s Taxonomy, were mainly at the level of:
As this was a pilot course and a pilot feedback activity, Group Member Evaluation was a formative component and not graded. However, to meet the attendance and participation requirements, both instances of the activity needed to be completed.
“What made me happy is that some students really had some good ideas about what they could improve on.”
• The instructor left detailed instructions inside the tool about how, when, and what sort of feedback should be given, with particular emphasis on the benefit of considerate, constructive feedback.
• Announcements were also made to all students, signalling that the Group Member Evaluation tool would be used and explaining its purpose.
• Student activity was checked inside the tool before project meetings, as well as at the end of the second phase, to see whether anything in particular needed to be discussed with the co-examiners. In the third phase, data from the second feedback moment was compared with the first to see whether there was a trend.
Group dynamics can be harder to monitor in online courses than in face-to-face sessions. In addition to providing a digital overview of peers' ratings and performance, Group Member Evaluation simplifies class management for the instructor, since every student's self-assessment and feedback comments are available in one interface.