We’re joined today by two researchers from Wageningen University in the Netherlands. Associate Professor Omid Noroozi and postdoc researcher Kazem Banihashem told us all about their investigation into the effects of peer review on the quality of students’ argumentative essays. We talked more generally about the space, place, and cultural connotations of feedback in education, and later on touched on the Automated Feedback tool as another innovation on the feedback front. So tune in to find out more from Omid and Kazem about their experiences with feedback!
The Learning Experience Lab is made possible by FeedbackFruits.
Hello and welcome to our tenth episode of the Learning Experience Lab. Thank you for your patience waiting an extra week for this one to come out. Last week the team and I were putting full focus into preparing and delivering our first-ever virtual conference, inspirED 2021 at FeedbackFruits, which turned out to be a joy for all of us involved, letting us hear about insights and innovations from North America to South Australia. And with it a week behind us now, we're already getting excited for inspirED 2022.
But for today we're going to revisit our roots in the Learning Experience Lab and take another look at something related to feedback. We got in touch with two researchers currently at Wageningen University in the Netherlands who've been looking at the relationships between peer review activities and the end quality of students' argumentative essays. Dr. Omid Noroozi and Kazem Banihashem were kind enough to talk to us about their research around FeedbackFruits tools and the effect on students' writing quality, and we also had the chance to touch upon the new and exciting Automated Feedback tool which they'd just recently gotten a first glimpse of. My thanks also to José of the FeedbackFruits Automated Feedback team for their contributions in the latter half of the conversation.
Looking through the backgrounds of our guests, I saw countries on every side of the world, so my first question was: what have you learned from studying and working in such a large variety of cultural contexts? Here's how Kazem answered that question, and the rest of the conversation follows.
---
[Kazem] Well, I received my PhD in educational technology from Allameh Tabataba'i University, which is located in Tehran, but I also had one year of experience as a researcher at the University of British Columbia in Canada. So if I consider it from an educational and a cultural perspective: my country is not that culturally diverse, we almost all have the same background. So in terms of intercultural communication, we don't have, I would say, such problems of misunderstanding, of not having a clear shared understanding of what learning means, how education should support students, these kinds of things. But when I was in Canada, one of the most important aspects of their education was the focus on cultural differences, since Canada is naturally diverse in terms of the backgrounds people have, coming from different countries. So their main focus is on cultural differences and how they can be nicely and decently handled in communication between cultures. For example, I remember that in one of the courses where I was a teaching assistant, there was some misunderstanding about the symbols that different countries have. I saw some students become anxious about whether to use these kinds of symbols in their communication or not. So I think one of the things that should be noted here is the cultural attention that we should have in education.
[Dan] Okay, very succinct. And thank you very much. And, Omid, maybe you could take the question.
[Omid] Yes. So, I have experienced different education systems in different countries. My background is that I did my Bachelor and Master in Iran, at Shiraz University and Tarbiat Modares University. That education culture is really different from what I experienced during my PhD in the Netherlands, at Wageningen University. In Iran at that time, there was not that much focus on group work, not much focus on feedback and that kind of thing. It was mostly a one-way street, a top-down approach from teacher to student, the transmission of knowledge. But when I came to the Netherlands, the situation reversed: it was a more student-centered approach, in which the students really communicated with the teachers and with their fellow students, and there was a lot of emphasis on feedback, on collaboration, on argumentation, on the higher-order skills, basically. Then I moved to a couple of other universities around the world, for example the University of Michigan in the United States, where I was for more than six months, then the University of Oslo in Norway, then the ESL school in France, and also some other countries. Over there, the situation was a little bit in between Iran and the Netherlands: again there was focus on feedback, on group work, on the student-centered approach, but not to the extent as in the Netherlands. Another difference I saw was that in those countries, especially in the United States, there is much focus on the theoretical and conceptual type of work; if teachers or researchers say something, that tends to stand on its own. But here in the Netherlands, we try to do empirical work, and that also makes the educational setting a little bit different. So this is a summary of my observations about these different countries with regard to education, basically.
[Dan] Yeah, it resonates, especially the Dutch picture; I can't speak for the American or Canadian ones personally. But the sheer fact that you address teachers by their first names here, I've become so used to it, and so I did it with you, Professor, even though where I'm from we were much more polite and only used surnames and titles to refer to our elders. Having been here long enough now, it's started to rub off on me. Thanks for that general summary of the background; I think it's interesting to know where we've come from, to see how we go forward and how we take these approaches. So I really want to hear about the research that you've been doing on improving argumentative essay writing with peer feedback. What made you want to start that project?
[Omid] Okay, so over the last maybe 13 years, I've been doing research on argumentation and learning, and how we can use technology to facilitate students' collaborative learning, argumentation-based learning, etc. In the past I mostly focused on oral argumentation, oral collaboration. The students have improved, and we have even introduced a couple of courses at Wageningen on argumentation, which could help them acquire argumentation competence and apply it in oral settings. But then I talked to a couple of teachers, and they said: okay, students are now good in terms of oral argumentation and oral collaboration, but when it comes to writing, they have some problems; their writing does not have that much structure. And we want them to write these essays, because we are dealing with a lot of controversial issues in the field, especially at Wageningen, which deals with a lot of life sciences issues like environmental education and biotechnology; there are many controversial issues in these fields. But when we ask them to write essays, their essays do not have a structure, they lack solid argumentation strategies. So they said: okay, what can we do now to also improve the quality of students' essays? Then we started to implement some tools which had some of the features we wanted, for example, they allowed us to embed the kinds of questions that we wanted the students to respond to, but they didn't have all the features together. That's why we then shifted to FeedbackFruits, and we were very happy this tool came in. It allowed us to have all the students write their essays, and then, based on the questions that we wanted, they could give each other feedback to improve the quality of the essays, especially the argumentative part, and they were able to revise their work based on the feedback and then resubmit. That was the good point, and also the automatic assignment of the submissions to one another for giving feedback was a very good feature that we used from FeedbackFruits. But now we are busy analyzing many of these courses where we have used FeedbackFruits to improve the argumentative essays, and the results are coming up. Actually in a few months, I guess?
[Dan] And are those gonna be quantitative data that you've been gathering from these courses?
[Omid] Yes, quantitative data, which means that students wrote their essays before we gave them any instruction; they just wrote. Then they received feedback on the argumentative essay from two other learning partners, and they had to revise their essay based on this feedback. We want to know to what extent students have been improving in terms of having structure, in terms of using solid argumentation strategies in their essay, basically to what extent they are now able to convince people with their final argumentative essay. So we have a lot of data gathered so far, and we are now using different types of rubrics, both for the quality of the essay and for the quality of the feedback. We are linking these to see, for example, what kind of feedback has led to significant changes from the beginning to the end. What are the typical patterns of successful feedback? What are the typical patterns of less successful feedback? So we are linking the learning process to the outcomes, basically, and hopefully this will give us quite a lot of rich data for this experiment.
[Dan] I'm sure my colleagues will agree, it's a fantastic thing for us at FeedbackFruits as well to be able to look at the data and look at the link between the before and after situations that you've been documenting. Of course we want to promote ourselves and talk about the value that we can bring to education, but it's research like yours which is actually putting that into some kind of tangible, workable system, rather than what I do, which is a few random use cases here and there and anecdotal evidence. Now you're looking at the hard data.
[Omid] It's more a researcher's perspective. And the good thing is that this is done in different courses with different backgrounds; we are doing it with bachelor students and with master students.
[Dan] How many students in total?
[Omid] We are talking now about 500 students, and we are still collecting data next year, which we expect to add around 500 more students. We are also doing that across domains: the beta domains and the gamma domains, and also the combination of beta and gamma domains. So we are redoing it in different domains, for example biotechnology, global health, environmental education; there are many courses that we are dealing with. This will really give us very good input that is not domain-specific anymore, that is just general, because we are collecting data from different courses, basically.
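For readers who like to make this concrete, here is a minimal sketch, in Python, of the kind of pre/post gain analysis the researchers describe: scoring essays before and after revision and linking feedback patterns to improvement. Every field name, the scoring scale, and the feedback labels below are invented for illustration; they are not the study's actual instruments or data.

```python
# Illustrative sketch of a pre/post gain analysis like the one described.
# All field names, the 0-10 rubric scale, and the feedback labels are
# hypothetical assumptions, not the researchers' actual instruments or data.
from statistics import mean

essays = [
    # Essay-quality scores before and after revision, plus a label for
    # the kind of peer feedback the author received.
    {"student": "s1", "pre": 4.0, "post": 7.5, "feedback_type": "specific+constructive"},
    {"student": "s2", "pre": 5.0, "post": 5.5, "feedback_type": "vague"},
    {"student": "s3", "pre": 3.5, "post": 6.0, "feedback_type": "specific+constructive"},
]

def mean_gain(records):
    """Average improvement from first draft to revised essay."""
    return mean(r["post"] - r["pre"] for r in records)

# Link the learning process (feedback type) to the outcome (gain),
# mirroring the "patterns of successful feedback" question.
by_type = {}
for r in essays:
    by_type.setdefault(r["feedback_type"], []).append(r)

for ftype, records in by_type.items():
    print(f"{ftype}: mean gain = {mean_gain(records):.2f}")
```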
[Dan] Okay. And Kazem, where do you fit into all this research?
[Kazem] Okay, so as Omid said, if we look at the literature, we can see that teachers are not really satisfied with the quality of argumentative essays in higher education. Students don't really produce well-structured essays that cover the pros and cons, integrate the different ideas, and finally make a conclusion based on those arguments. So this is an area that needs to be worked on, to do some research to see what the problem is. And based on the research, especially in classes with large cohorts of students, it is almost impossible for the teacher to provide the feedback, because it requires so much workload. So peer feedback is actually a promising and affordable, let's say, educational strategy to make it happen. In this research we use peer feedback as a tool: the students provide this information to one another, to see, okay, what is the problem in your essay, what is not the problem, what are the points for improvement? So the peer feedback in this case actually helped a lot. As for how I joined the project: well, we had a conversation with Dr. Omid Noroozi about these projects, which was very fruitful, and somehow I ended up on this project. Now we are in the middle of collecting our data. As we said, there are a couple of courses that we are involved in, with master's students and also bachelor students from different courses, which shows us that the peer feedback we are using is not course or domain specific. So we can say that this peer feedback can actually be generalized to different courses, which means that we can scale it up across the whole university, if the results are positive. So yeah, I would say we're doing well so far.
[Dan] Good to hear. And you both mentioned domain-specific feedback. That's something that actually came up when I was speaking with a previous guest on the podcast, John McCormick. We puzzled for a little bit about whether there is a universal framework of feedback or rubrics, where you can say: these are always good elements to have in a feedback process. We couldn't find an answer, but I'm happy to hear that you've been thinking about this and building it into the research design a little bit, taking it into consideration. And it's nice that you're able to get data from so many different sorts of courses.
[Omid] Yeah. So, this is a little bit difficult to answer. The thing is, I believe there are some general feedback features that could be applied in a lot of cases and that are not domain dependent, but there is also specific feedback that is designed for the content, the type of assignment, the type of task at hand; in our case, argumentative essay writing. Normally, there is a typical structure for such an essay. We know this because we have interviewed many teachers and also many experts in the field of argumentation, and we have come up with the general features of the argumentative essay, which we expect students to follow when they write an essay on these controversial issues. For example, they first have to introduce the topic, then they have to take a position on that topic and support their position, then they need to come up with the counter-arguments against the position, they need to respond to those counter-arguments, and then they need to integrate and make a conclusion. This is a typical, general argumentative essay, and the feedback is designed in such a way as to help the students follow this structure. For example, a feedback prompt is: to what extent has your learning partner been able to provide scientific evidence for the position they have taken? Then they answer: okay, not that much, or quite a lot, or very little. Then we ask them: okay, what could you add to that, what kind of feedback would you give, in order to improve the scientific evidence in favor of the position? So, again, you see, the feedback is, how to say, directly related to the type of essay that we expect from the students. That is why, in this specific case, we have general feedback which could be applied to many courses and many domains, regardless of their background and domain.
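As a reader's aid, the rubric structure Omid walks through, with a feedback prompt tied to each essay element, could be represented roughly as below. The element names follow his description; the prompt wording is our own invention, not the actual instrument used in the study.

```python
# A rough sketch of the argumentative-essay rubric described above.
# Element names follow the description in the interview; the prompt
# wording is an invented illustration, not the study's actual instrument.
ARGUMENTATIVE_ESSAY_RUBRIC = [
    {"element": "introduction",
     "prompt": "Does the essay clearly introduce the controversial topic?"},
    {"element": "position",
     "prompt": "Does the author take an explicit position on the topic?"},
    {"element": "arguments",
     "prompt": "To what extent is the position supported with scientific evidence?"},
    {"element": "counter-arguments",
     "prompt": "Are counter-arguments against the position presented?"},
    {"element": "response",
     "prompt": "Does the author respond to the counter-arguments?"},
    {"element": "conclusion",
     "prompt": "Are the arguments integrated into a conclusion?"},
]

# Each prompt pairs a rating with an open suggestion, mirroring the
# two-part feedback described: "to what extent?" then "what would you add?".
def feedback_form():
    return [{"element": e["element"], "prompt": e["prompt"],
             "rating": None, "suggestion": ""} for e in ARGUMENTATIVE_ESSAY_RUBRIC]
```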
[Dan] Right. And Peer Review has been the main tool which you've used for this, but maybe you could tell me a bit about Automated Feedback: where did you first hear about that, and what did you think?
[Omid] Okay, this Automated Feedback is something that we actually just heard about, I think, two months ago, and I was curious to see what it is and how we can apply it in our work. This automated feedback is something that helps the students follow some basic aspects of the essay, the thesis, the article, or whatever, depending on the type of task the teachers want. And I found it quite useful, because normally we ask the students, for example, to come up with five pages that include certain elements, etc., but they do not follow that, for different kinds of reasons. So this gives them a very good overview of the first draft that they have produced. But for the argumentative essay, this is not yet something we can count on too much, because in the argumentative essay we go for the higher-order level, which requires argumentation, and that is a little bit difficult to automate at this point, because that is a natural language processing type of work. But we think it's very useful for many other courses that we offer here in Education and Learning Sciences. As a result, we have a meeting next week where I'm going to give a demonstration to all the teachers in our group, who have shown their interest. Giving that demonstration should really be the task of one of your colleagues, but I'm doing it on their behalf, because the teachers know me, and they know that I'm an educational innovator kind of person, and maybe that has a little bit more impact on them than if someone else tells them: okay, this is something you could use. That's why I decided to do it myself. We already had one demonstration from one of your colleagues, who showed us exactly how it works, even though I know how these things work, because I'm in the field of educational technology myself. So next week I will give the presentation to my colleagues who are interested, and perhaps then they will contact your people, because I assume there may be some extra follow-up questions and things like that, and you also need to activate it in their FeedbackFruits environment. So I think you will hear quite a lot of questions from my colleagues after we give this presentation.
[Dan] Alright. I already have a lot of questions for you right now about that. But I wanted to ask Kazem: did you have any experience or thoughts about Automated Feedback?
[Kazem] Yeah, I worked a little bit with the Automated Feedback, because, as we said, we are going to present this feature to our colleagues next week. One thing that I really like about the Automated Feedback is that it has the potential to save teachers' time on some basic feedback. I mean like: okay, you have to meet the word limit for your work; you have to remember that your references should be in APA style; or, for example, you have some problems in wording or grammar. This kind of thing doesn't require higher-order thinking skills or complex cognitive thinking skills; it could actually be done by machine learning or AI, I would say. So the one main benefit, I would say, of the Automated Feedback is that it could save teachers' time on these basic kinds of feedback, so they can mainly focus on something more important, like, as Omid said, the quality of the work, the parts of the assignment where this kind of feedback cannot easily be provided by AI. This is one of the things that I realized by doing some work with the Automated Feedback. However, I would say it is still in progress; they're working on it to see how it can be fitted better to the courses that are going to use this function. But yeah, that's the potential I can see in Automated Feedback so far.
[Dan] Okay, thanks. And while José is here, maybe we could go a little bit more into depth, because José works closely with Automated Feedback. On some of those lower-order spelling, grammar, style, and semantic checks: what are some of the most useful ones, which really save the most time on the most common things that come up?
[Jose] Well, Automated Feedback currently has 11 stable checks, and it's always growing, with more checks being promoted. Right now, from teachers, what usually works best is checking the references, making sure that they're all in the same style. From the beta functions that are in development, I've heard that personal pronouns are a big one: for example, being able to check where students use 'I' or 'we', since that's not really formal language. And recently, as of last week, we promoted grammar as stable, I'm pretty sure, I can double-check with my team. Professors who used that as a beta function were really requesting that it be promoted, since it was quite useful.
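To give a feel for what such lower-order checks do, here is a toy sketch of a word-limit check and a personal-pronoun check in Python. It only illustrates the idea; it says nothing about how FeedbackFruits actually implements its checks.

```python
# Minimal illustration of two "lower-order" document checks like those
# discussed: a word-limit check and a personal-pronoun check. This is a
# toy sketch, not FeedbackFruits' implementation.
import re

PERSONAL_PRONOUNS = {"i", "we", "me", "us", "my", "our"}

def check_word_limit(text: str, limit: int) -> str | None:
    """Return a message if the text exceeds the word limit, else None."""
    n = len(text.split())
    return f"Text has {n} words; the limit is {limit}." if n > limit else None

def check_personal_pronouns(text: str) -> list[str]:
    """Flag informal first-person pronouns, with a snippet for context."""
    issues = []
    for m in re.finditer(r"\b\w+\b", text):
        if m.group().lower() in PERSONAL_PRONOUNS:
            start = max(m.start() - 20, 0)
            issues.append(f"'{m.group()}' near: ...{text[start:m.end()]}")
    return issues

draft = "We argue that I have shown the effect is robust."
print(check_word_limit(draft, limit=500))   # None: under the limit
print(check_personal_pronouns(draft))       # flags "We" and "I"
```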
[Dan] Thanks. And Omid, did you have any questions for José while he's here, to get any more information on Automated Feedback?
[Omid] So we have not yet experienced it in a formal setting in the classroom. But with regard to our work, what could be very beneficial is knowing that certain elements are in there, for example in the thesis or the essay, or the number of words for each of those elements, or the number of words in total, or the reference style; that could be useful. That's my guess, but we still have to actually try it and see which functions work better, and at what level. I think this is fantastic. As a kind of educational innovator, I know this will be quite appreciated by teachers, because it makes their lives much easier, and they can focus on the real things, the higher-order aspects of the thesis or the work, instead of just going through it for the grammar, for the language, for the spelling, for the structure kind of thing. So they can really focus on the most important part and let this job be done by the Automated Feedback.
[Dan] Yeah. So, something I heard recently is that when a student receives the work of their peer to give feedback on, the first thing they do is look at the document and make a value judgment: is this work better than mine? They look through it and say: they did this, and I didn't do this. They run an internal feedback process, right, and make a comparison. So that's one thing. And also, when you're reading through someone's work, the cognitive load of easily scanning it is made much heavier if there are grammatical and stylistic issues. So what I see as the added benefit of Automated Feedback for peer review is that if students are able to sort out all of those things which might trip you up or take attention away from the content, then peers can provide better feedback on each other's argumentation, on each other's essays. With all the research you've done, all the looking at argumentation and peer feedback, have you seen either of these themes come up: value judgments, students making improvements on their papers based on their peers' work, anything like this?
[Omid] I think this is a good point that you mention: not only for the teachers but also for peer feedback, this Automated Feedback would be very helpful. Because what we want in this whole peer feedback setting is that the feedback goes towards the real things, not the, let's say, low-level kinds of things like the spelling, the grammar, and so on. For me too, because I also review quite a lot for the top international journals, to be honest, these things have consequences in my mind. When I see a paper or an article that has, for example, some simple grammatical mistakes or simple spelling mistakes, or a bit of a problem with the structure, then my mind tends to judge the work negatively. I just look for reasons to reject the paper, because that has already planted something negative in my mind. And I assume this is also the case with peer review. So when the students see: oh, what is it, again a grammatical mistake, again this spelling problem, then this might lead to a judgmental type of review, which is not what we want from students. We could say: okay, despite the fact that this has some problems, still give your feedback on the real thing, that is what we appreciate. But with this Automated Feedback, I think things could become much easier; if we get rid of those kinds of things, then this judgmental type of review would disappear to some extent.
[Dan] Let's hope so. Yeah, I absolutely understand where you're coming from; this happens in my mind too. I make this judgment all the time when I'm reading either work at school or a published article in a respected journal. If you see these errors, it takes away from the content, it takes away from the meaning of what you're actually reading, I think.
[Omid] Indeed, and the reason is very simple: we are not only rational beings, we also rely on emotions. It's as simple as that.
[Dan] Yeah. And I think we need to acknowledge that in the feedback process as well. Well, thanks for sharing your thoughts. Nhi, José, are there any questions you'd like to ask at this point?
[Jose] Regarding the Automated Feedback, you haven't used this in, let's say, student assignments yet, right?
[Omid] Not yet, not yet. So, I was talking to one of your colleagues, and we just came across this tool, and I thought: okay, it's interesting, let me figure this out a little bit more. Then we had one demonstration, and I became even more convinced that this is going to be useful for teachers. That's why we have set up an appointment with all the teachers in our chair group, to actually show them what it is, what the functions are, and how we can use it in education. And from then, maybe in period six, we will be using it in some of our courses.
[Jose] That's amazing.
[Dan] Yeah, very cool. Okay, I think we've rounded things off quite well there, and we've reached the end of how long I usually like to spend on these, 35-40 minutes. So, are there any things that you still want to get off your chest? What should we be focusing on in peer feedback research? What do people need to pay more attention to? What's something that surprised you from your research, something unexpected that's come up, perhaps?
[Kazem] There's one suggestion I would make, based on our experiences with FeedbackFruits, of something we would like to see in FeedbackFruits. For now, we have the first submission of the original essay in FeedbackFruits, and students can provide feedback on that submission, again within the FeedbackFruits platform. But for the second submission, the revised version of the essay, they have to go outside the FeedbackFruits platform and upload it in Brightspace. So this is kind of a problem for us, because we would like to have the whole package of our study design within FeedbackFruits.
[Dan] So if there were such a function, to have another submission in FeedbackFruits after they actually provide the feedback, that would be a nice thing, for your research and maybe for feedback activities in general. Well, I think I have good news for you: it's coming, and it's called chaining. And what you said is right on the money. The reason I love FeedbackFruits is that I started using it in a course where I'm still enrolled. Our peer feedback assignment beforehand meant that we had to download the document in one program, open it in another, email it through our email server, and deal with so many tabs and actions and parts of the learning trajectory which weren't about the learning, but were about locating, sharing, and exporting documents. And as you mentioned, a lot can happen within FeedbackFruits, but if you need to go back to Brightspace, back to the learning management system environment, and come out of that interface, then you've already lost the flow, you've already made it twice as hard to locate all the documents. But it's coming, it's called chaining, and the idea is that revisions to documents, multiple uploads, can happen within the same activity, or the same activity flow.
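A quick aside for the technically curious: a chained activity like the one described, where drafting, peer feedback, reflection, and revision all live in one flow, might be modeled as a simple sequence of steps, as in the sketch below. The step names are invented for illustration; this is not FeedbackFruits' actual data model.

```python
# A toy sketch of a "chained" activity: every step of the
# write -> review -> reflect -> resubmit cycle lives in one flow instead
# of bouncing between the LMS and other tools. Step names are invented;
# this is not FeedbackFruits' actual data model.
from dataclasses import dataclass, field

CHAIN = ["submit_draft", "give_peer_feedback", "reflect", "submit_revision"]

@dataclass
class Activity:
    student: str
    completed: list[str] = field(default_factory=list)

    def next_step(self) -> str | None:
        """First step in the chain the student hasn't completed yet."""
        for step in CHAIN:
            if step not in self.completed:
                return step
        return None  # chain finished

    def complete(self, step: str) -> None:
        assert step == self.next_step(), f"expected {self.next_step()!r}"
        self.completed.append(step)

a = Activity("s1")
a.complete("submit_draft")
print(a.next_step())  # -> "give_peer_feedback"
```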
[Omid] That's great, that's perfect. And apart from this chaining, actually having everything in one package, I also have another suggestion, which is about the reflection on the feedback. So now there is a place where students can reflect generally on the type of feedback they received, and this is optional for the teachers, they can turn it on or off. But what I would also like to see is that the system allows a teacher to ask specific questions in the reflection cycle, in the reflection part, because now it's very general: the teacher can just put a text there, like 'reflect on the feedback that you received based on these points'. My experience from previous studies shows that if we have specific questions that the teachers can put there, and the students can reflect on each of them in its own box, that is much easier for the teacher and also for the students, because the students typically forget the whole thing and only focus on the first question, or the second question. But if we have them structured, each in its own box, then they can address each one. Also, sometimes teachers have very specific questions; for example, sometimes I want the students to reflect on one specific thing, so I want to put it in a question and give them a box to respond to it. Now there is only one general box, which makes it a little bit difficult for me to focus on the real things that I want as a teacher.
[Dan] And I can imagine, in terms of wanting to collate and extract that data, if you have reflection fields which are each addressing a specific question, then it can be easier to collect that data over many instances.
[Omid] Basically, like the feedback part itself, where you can use different questions in the feedback and the students respond to those questions when they give feedback. I would like this to be implemented in the reflection part as well, because not only the feedback is important; the reflection is even more important, I think, if we put the emphasis on it.
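Another aside: the structured reflection Omid asks for, with each teacher-defined question getting its own response box instead of one general field, amounts to a small data structure like the sketch below. The question texts are invented examples.

```python
# Sketch of the structured reflection described above: each teacher-defined
# question gets its own response box instead of one general field.
# Question texts are invented examples.
reflection_questions = [
    "Which piece of peer feedback changed your essay the most, and why?",
    "Which feedback did you decide not to follow, and why?",
]
reflection_form = [{"question": q, "answer": ""} for q in reflection_questions]
print(reflection_form[0]["question"])
```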
[Dan] Well, I'll make some calls and see what I can do. Yeah. Thank you both so much for your time. It's been a pleasure to hear about your research and your experiences.
---
The fact that we can take these 'feature requests', as we call them, these ideas from instructors and researchers at all the different institutions we work with, on board, work with them, and co-create a better-tuned tool at the end of the day: it's a beautiful thing to see unfold. And you could probably hear my enthusiasm when I was able to relay that some of these ideas were already in the works and are currently in testing. It's not tech companies, support, or service providers that know best how to teach; it's teachers. And that's why I love being able to find out about all these different approaches and methods with regard to teaching styles, so we can build our instructional design around them, rather than the other way round. Another huge thanks to Omid and Kazem for sharing their research, and I wish you both the best of luck rounding off the project. In the meantime, it's been a pleasure to have you with us, listener, for 10 episodes of the Learning Experience Lab - so take care, stay safe, and I'll catch you in two weeks for the next one.
Associate Professor at Wageningen University
Dr. Omid Noroozi is an Associate Professor at Wageningen University and Research. He has contributed heavily to the field of pedagogical research and technology, with numerous papers in publications such as the British Journal of Educational Technology.
Researcher at Wageningen University
Seyyed Kazem Banihashem is an Educational Technology Ph.D. researcher at Wageningen University and Research. He also previously carried out research at the University of British Columbia concerning learning analytics and student engagement, among many other topics.