This first-year program, English for International Business, ran for 11 weeks and comprised 24 groups of around 15 students each. As a language course in the economics and business faculty, it focused on improving skills such as academic writing and speaking, as well as collaboration and teamwork. These skills were addressed and assessed with reference to the Common European Framework of Reference for Languages (CEFR), through individual and group written projects.
There were several motivating factors for adopting Automated Feedback in this course: the instructor wanted to spend less time correcting common errors in students' work, and also wanted students to take the initiative to proofread their own writing before submitting their final written projects.
Automated Feedback was used several times over the course to help students review their own writing before submitting final projects. On the first day of class, students completed a benchmark essay, which gave the instructor an idea of their average level of academic writing. In the next assignment, students chose from a range of topics and wrote a 500-word report, which was assessed according to CEFR levels. For this assignment, a rubric consisting of 22 criteria (the most common mistakes in students' academic writing) was used to assess the reports. These documents could be partially checked with Automated Feedback, which in this iteration checked only one of the criteria: the use of the first person. The other criteria were checked manually by the instructor and students. A second paper, written on a set topic, could also be partially checked with Automated Feedback before being handed in as a final version. The final project was a group writing project, for which some groups submitted separate parts individually and others submitted the whole paper as a group. Again, this could be checked with Automated Feedback to generate suggestions for improvement before the final version was handed in.
Feedback was given on areas needing improvement with regard to the learning objectives. Students were marked according to the 22 most common errors in academic writing. Automated Feedback could be used to check one of these criteria (the use of the first person) prior to final submission, after which the instructor carried out a final review. Based on the marked errors, a decision was made as to which CEFR level the student had achieved, according to range, coherence, and accuracy. Each project counted for 25% of the total grade.
"My enthusiasm over it is that it’s fantastic! We will definitely keep using Automated Feedback in our online classes." - Jane Mahoney, Teacher of English & Program Coordinator, University of Groningen
The instructor can incentivise more elaborate feedback by requiring students to answer extra questions when giving feedback ratings to peers. In a summative assessment, this extra element of feedback can also contribute to the overall grade of the assignment.