As the table shows, in course A the tool helped instructors evaluate students’ group performance during in-class discussions. The evaluation rubric focused on a few items, namely “Why are you there?”, “What did you prepare for?”, and “Participation”. For course B, the instructor used Group Member Evaluation for the group evaluation factor in a semester-long project, with a more extensive rubric. Course C, unlike the other two, was a shorter course, with Group Member Evaluation used in four group assignments to let group members evaluate each other at the end of the course.
Group Member Evaluation offers options to optimize the peer and group evaluation process
Not only did the tool allow for rubric and criteria setting for peer and group member evaluation, it also offered instructors several other options to modify and enhance the assessment activity.
The first functionality John mentioned was the seamless Canvas integration, allowing for automatic synchronisation of course groups from the native LMS. This, according to John, “is obviously extremely useful”.
Other features that John and his team utilized were:
- Anonymous rating: makes reviewers’ comments anonymous
- Reflections on assignment: lets students write a reflection on the process of giving feedback
- Grading: lets instructors assign different grade weightings to different course components
- Configurable group contribution factor: lets instructors adjust the thresholds that determine how strongly the group contribution factor (or group skill factor) affects each student’s grade outcome (a short sketch of this mechanic follows the list)
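To make this grading mechanic concrete, here is a minimal Python sketch of how a contribution factor could scale a shared group grade. The formula, the floor/ceiling thresholds, and the function name are illustrative assumptions for this article, not Group Member Evaluation’s actual algorithm.

```python
def adjusted_grade(group_grade: float, contribution_factor: float,
                   floor: float = 0.8, ceiling: float = 1.2) -> float:
    """Scale a shared group grade by a peer-rated contribution factor.

    contribution_factor: the member's average peer rating divided by the
    group's average rating (1.0 = contributed exactly their share).
    floor / ceiling: illustrative stand-ins for the instructor-configurable
    thresholds mentioned above.
    """
    clamped = min(max(contribution_factor, floor), ceiling)
    return round(group_grade * clamped, 1)

# A group earns a shared grade of 8.0; peers rated one member above
# average and another well below average.
print(adjusted_grade(8.0, 1.15))  # 9.2
print(adjusted_grade(8.0, 0.70))  # 6.4 (factor clamped at the 0.8 floor)
```

Clamping the factor keeps a single harsh (or overly generous) peer rating from swinging a grade too far, which is the kind of threshold an instructor would tune.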
For further information on these features, check out this article.
Students’ perceptions of self/peer assessment and Group Member Evaluation
Once the three courses ended, the instructors gathered feedback from students about their experience with self/peer assessment and Group Member Evaluation. The survey items were adapted from Elliott and Higgins (2005). Overall, the responses revealed that the implementation in course B was more successful than that in courses A and C.
This result led John to ask: “What is the reason for such a difference among the courses?”
To answer this, John and his team decided to look into the relationships between the survey items using factor analysis. Factor analysis is based on the idea that, for a collection of observed variables, there is a smaller set of underlying variables called factors. These factors can explain the interrelationships among the observed variables, in this case the survey items.
First, the team pulled all the survey data from the three courses. From this, the instructors were able to identify the items that were related and group them into five distinct factors (a code sketch of this kind of analysis follows the list):
- Perceived learning values indicates students’ opinions of the learning value of the peer assessment activities, along with their intention to take part in self/peer assessment in future courses.
- Previous experience captures students’ past participation in self or peer assessment activities supported by pedagogical technology.
- Motivation represents how self/peer assessment encourages students to be more active learners.
- Practical aspects covers students’ attitudes towards the practical side of self and peer assessment: instructions, understanding of the process, experience with the tool, and suitability of the rubric.
- Perception of fairness captures how students think Group Member Evaluation ensures fair contribution and evaluation among group members.
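For readers curious about the mechanics, below is a minimal, self-contained Python sketch of an exploratory factor analysis like the one described above, using scikit-learn. The item names, sample size, and randomly generated responses are placeholders; the team’s actual analysis ran on the real survey responses from the three courses.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical response matrix: one row per student, one column per
# survey item, values on a 1-5 Likert scale.
rng = np.random.default_rng(42)
items = [f"item_{i:02d}" for i in range(1, 16)]
responses = pd.DataFrame(rng.integers(1, 6, size=(120, len(items))),
                         columns=items)

# Fit a five-factor model, mirroring the five factors described above.
fa = FactorAnalysis(n_components=5, random_state=0)
fa.fit(responses)

# Each loading shows how strongly an item relates to a factor; items
# that load highly on the same factor are treated as measuring the
# same underlying construct and are grouped under one label.
loadings = pd.DataFrame(fa.components_.T, index=items,
                        columns=[f"factor_{k}" for k in range(1, 6)])
print(loadings.round(2))
```

Inspecting the loading matrix is how related survey items end up grouped under labels such as “perceived learning values” or “motivation”.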
A correlation analysis among these five factors revealed notable findings:
First of all, there was a positive relationship between perceived learning values, motivation, satisfaction with practical aspects, and perception of fairness: as perceived learning values increase, “so does motivation, satisfaction with practical aspects, and perception of fairness”. John then elaborated on this finding:
“The reason why students are motivated to engage in self and peer assessment, even in the future, is that they see an added learning value in terms of developing group work skills and effective team work behavior.”
Second, satisfaction with the practical aspects had a significant influence on motivation and perception of fairness. In other words, when students are positive about the practical factors of the course (the set-up, instructions, support, etc.), they tend to be more motivated to engage in the learning process. This also explains why course B saw higher levels of satisfaction: according to John, this course had a teaching assistant who helped monitor the implementation and results of the peer and self assessment activity. Moreover, the rubric of course B could serve as a template for courses A and C, and for future courses.
On the other hand, previous experience didn’t show any correlation with the other factors. Lack of prior knowledge or training is often considered a key barrier to students’ performance in self/peer assessment. The situation was different in the case of Reykjavik, probably because the instructors offered students “multiple opportunities to engage in self and peer assessment”, as John commented. Furthermore, they also “provided students with a dry-run, a practice opportunity when introducing the group work assignment.”
Group Member Evaluation encouraged both students and teachers to be more mindful about group work and peer assessment
Besides the survey questions, John asked instructors and students of the three courses to reflect on their experience of using Group Member Evaluation for self and peer assessment activities. Despite some technical issues, the comments received were positive, as both teachers and learners were satisfied with the tool.
“The tool makes it possible to implement the group member evaluation factor, and also allows me to follow all groups and provide grades,” noted the instructor from course A. In addition, the instructor remarked that Group Member Evaluation contributed to an increase in engagement. John then emphasized: “Previously you’d encourage students to come prepared, say how important it was, but now there is actually a way of documenting the process and making students alert.”
What the instructor liked most about Group Member Evaluation is that it formalized the process around preparation for and participation in class, thus making group contribution grading feasible.
Similarly, instructors of course B showed great satisfaction with the tool. “They found [the tool] really useful in terms of spotting group dynamics issues, presenting a clear process once explained properly, helping to lighten the grading load, and keeping everyone on their toes, both teachers and students.”
In terms of the tool’s features and interface, the fact that the entire group evaluation process happened in real time made it “super transparent, easy to flag issues and follow up on the learners.”
Most importantly, the teachers acknowledged that using the tool encouraged both the students and themselves to be more mindful about what good group work is and what goals they are working towards.
All instructors agreed that they would definitely use Group Member Evaluation for their future group assignments, and would also recommend the tool to other colleagues.