This first-year program, English for International Business, ran for 11 weeks and comprised 24 groups of around 15 students each. As a language course in the economics and business faculty, it focused on improving skills such as academic writing and speaking, as well as collaboration and teamwork. These skills were addressed and assessed with reference to the Common European Framework of Reference for Languages (CEFR), through individual and group written projects.
Two factors motivated the adoption of Automated Feedback in this course: the instructor wanted to spend less time correcting common errors in students’ work, and wanted students to take the initiative to proofread their own writing before submitting their final written projects.
Automated Feedback is used several times over the course to help students review their own writing before submitting final projects. On the first day of class, students complete a benchmark essay, which gives the instructor an idea of their average level of academic writing. In the next assignment, students choose from a range of topics and write a 500-word report, which is assessed according to CEFR levels. For this assignment, a rubric of 22 criteria (the most common mistakes in students’ academic writing) is used to assess the reports. These documents can be partially checked with Automated Feedback, which in this iteration checked only one of the criteria: the use of the first person. The remaining criteria are checked manually by the instructor and students. A second paper, written on a specific topic, can also be partially checked with Automated Feedback before being handed in as a final version. The final project is a group writing project, for which some groups submitted separate parts individually and others submitted the whole paper as a group. Again, this can be checked with Automated Feedback to generate suggestions for improvement before the final version is handed in.
Feedback is given on areas that need improvement with regard to the learning objectives. Students are marked according to the 22 most common errors in academic writing. Automated Feedback can be used to check one of these criteria (the use of the first person) prior to the final submission, after which the instructor carries out a final review. Based on the marked errors, the instructor decides which CEFR level the student has achieved, according to range, coherence, and accuracy. Each project counts for 25% of the total grade.
"My enthusiasm over it is that it’s fantastic! We will definitely keep using Automated Feedback in our online classes." - Jane Mahoney, Teacher of English & Program Coordinator, University of Groningen
The instructor can incentivise more elaborate feedback by adding extra questions for students to answer when giving feedback ratings to peers. In a summative assessment, this extra element of feedback can also contribute to the overall grade of the assignment.