Dr. Adam Cardilini’s 2nd-year bachelor's course in science and society brought together around 250 students of diverse international backgrounds for one trimester, exploring the interplay between scientific knowledge and societal issues. Students were encouraged to be innovative in their choice of medium and approach in completing their assignments, with some students submitting infographics or podcasts. A summative written report and portfolio marked the final assessment of the course, asking students to communicate their developed understanding of a controversial issue.
Dr. Cardilini ultimately aimed to provide detailed, real-time, and actionable feedback on each student's written assignments, as an essential part of developing their writing and argumentation skills. In practice, however, larger student cohorts make this aim increasingly unattainable, resulting in a lack of individually tailored guidance. Wondering whether at least a partial solution was possible, the instructor decided to use the AI-powered Automated Feedback tool to give students the chance to receive personalised feedback on their written work. Throughout the course, students could opt to use the tool inside their D2L environment to generate feedback suggestions on lower-order writing skills such as grammar and style. Using these suggestions, they could iterate towards an improved final submission, while also increasing their autonomy and self-guidance throughout the learning experience.
- Unit learning outcomes (ULOs) included, for example, asking students to articulate views and construct critical arguments relating to scientific knowledge and societal issues.
- Global learning outcomes (GLOs) related to the development of students' communication and critical thinking skills.
In this course, students practiced their essay writing, critical thinking and argumentation skills in various written assignments, deepening their understanding of the relationship between scientific and non-scientific knowledge. Students could choose whether to use the Automated Feedback tool to receive instantaneous feedback on their writing, against criteria determined by the instructor. The tool parsed each uploaded document, highlighted passages that potentially conflicted with the established criteria, gave the user an explanation and a suggestion for each, and let them rate whether the feedback was helpful or incorrect. The chosen criteria included, for example, sentence length, correct use of abbreviations, use of the passive voice, and use of the present tense.
In total, there were two assignments for which students could make use of the tool, and it was possible to use the tool multiple times per assignment, for example to re-check work after making suggested edits. Comments from the Automated Feedback tool could be included in students' final portfolios, as evidence showing how they responded to feedback and improved their work.
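The kind of rule-based checking described above can be illustrated with a minimal sketch. Note that the thresholds, criteria, and heuristics below are assumptions chosen for illustration only; they are not the actual tool's implementation.

```python
import re

# Assumed threshold for flagging a sentence as too long (illustrative only).
MAX_SENTENCE_WORDS = 25

def check_sentence_length(text: str) -> list[str]:
    """Flag sentences that exceed the assumed word limit."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [
        f"Long sentence ({len(s.split())} words): consider splitting it."
        for s in sentences
        if len(s.split()) > MAX_SENTENCE_WORDS
    ]

def check_passive_voice(text: str) -> list[str]:
    """Rough passive-voice heuristic: a form of 'to be' followed by a
    word ending in -ed/-en. A real tool would use proper parsing."""
    pattern = re.compile(
        r"\b(?:is|are|was|were|been|being|be)\s+\w+(?:ed|en)\b",
        re.IGNORECASE,
    )
    return [f"Possible passive voice: '{m.group(0)}'"
            for m in pattern.finditer(text)]

def give_feedback(text: str) -> list[str]:
    """Combine all checks into a single list of feedback comments."""
    return check_sentence_length(text) + check_passive_voice(text)
```

For example, `give_feedback("The experiment was conducted by the students.")` would flag the passive construction, while a clean, short sentence would return no comments.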
Learning activities, based on Bloom's taxonomy, are mainly at the level of:
The written report made up 30% of the overall grade, and the portfolio was also mandatory to hand in at the end of the course. The rest of the grade was made up of other assignments, such as quizzes.
• Automated Feedback allowed students to receive comprehensive, personalised feedback on their writing skills without having to rely on the instructor, allowing those skills to be improved without adding to the instructor's workload.
• Despite not being mandatory, a significant proportion of students made use of the tool to check their writing, with some using it several times. This suggests that students perceived value in receiving and acting on this feedback.
• Certain criteria were found to be more useful than others. For example, word count was checked by the tool but can usually also be checked in most modern word processors, so students often defaulted to those instead. On the other hand, passive voice was found to be an important stylistic element for which Automated Feedback proved useful, both for checking and for providing suggestions.
• Students were reported to have been happy to take the AI-generated feedback 'with a grain of salt'. Where the generated comments were not accurate, they still encouraged students to think critically about their writing.
"Ultimately I'd like to provide detailed feedback for every single assignment but that's unrealistic. Automated Feedback did something I couldn't provide for students." - Dr. Adam Cardilini, Lecturer, Deakin University
• At the beginning of the course, the instructor mentioned to students that the tool was available to check a limited number of elements in their writing.
• For students who had checked their work with the tool, analytics were generated, allowing the instructor to get a sense of general student activity and performance on the assignment.
• The instructor remarked that they would like to make the use of the tool an explicit part of the assignment process, asking students to show received feedback and how it was used to make improvements. This is already in place in other courses delivered by the instructor.
The instructor noted that the tool was more accessible than similar writing-check tools such as Grammarly, pointing to its consistency with the grading and assignment-marking process, as well as greater visibility into student activity. Giving students the opportunity to take responsibility for their own learning process with this tool heightens student autonomy and allows writing to improve without requiring the instructor to individually address each student's work.