Low End-of-Term Survey Response Rate Got You Down? Consider the Approach of Two Members of the Drexel Community to Increase Responses
Student feedback is a critical tool for faculty to gauge whether a course is effective and whether all course objectives have been met. While some faculty build feedback loops into their coursework, including topic-relevance discussions, many rely solely on end-of-term evaluations for student reflection on the course content and their teaching style. This makes these evaluations immensely important to the instructor. Questions on these evaluations can range from those about the instructor’s delivery methods to whether the student would recommend the course to another student. Likert-scale questions are often mixed with qualitative, open-ended questions that give students more opportunity to reflect.
The vast majority of courses across the Drexel campus, as at many other universities, have moved to online evaluations to cut down on the administrative work involved in processing them, as well as the enormous paper usage. Helene Engel, Department Administrator for the Performing Arts Department, states, “It used to take me three weeks to process all of the evaluations and get the results to faculty. It was just not sustainable.” Engel moved her department’s course evaluations onto the university-supported tool AEFIS beginning in the summer of the 2013-14 academic year. According to the last Assessment Inventory Audit, conducted each October, 70% of programs are now using AEFIS to collect course evaluations. The move to an online survey method comes with a price, though: by and large, response rates have proven to be much lower than when paper evaluations were used.
Despite the importance of student feedback, many students question whether the results are acted upon or even read. Engel recently asked students about their feelings toward end-of-course evaluations and discovered some startling, but not surprising, information. “Students don’t see a value in filling them out. They don’t know what happens to the comments or if anyone reads them,” Engel stated. “And if someone did read them, they wanted to know what was in it for them, since they were finished with the course.”
Engel went on to say that students were asking for some sort of incentive to complete the evaluation, such as extra credit. Some universities actually require the completion of these evaluations before students can receive their grades. Incentivized or required evaluations, however, could skew the data that students provide, and neither approach is feasible at most universities, including Drexel.
So how do we get students to fill out these evaluations?
Perhaps one way to get students to fill out the evaluations is to make use of mobile devices in class. Some professors on campus provide class time for students to do this, and they tend to have a higher completion percentage than those who do not. Barbara Hornum, PhD, Associate Professor in the Department of Culture and Communication, does allow students time in one of her classes to complete the evaluations. “It is difficult, though, because of the 10-week terms at Drexel, to give up class time to complete these evaluations,” Hornum stated. “If I didn’t think that the evaluations were necessary, I would not give up class time for it.”
Engel believes this approach is imperative for higher response rates. “My dream would be that a professor could check out 30-50 iPads to take to class with them. That professor would open up AEFIS and they could watch the completion percentage rise while the students were completing the evaluations.”
Giving class time to complete evaluations is one way instructors can demonstrate to students that the evaluations are valuable. Another is for the professor to demonstrate their importance by referencing his or her own use of past and future evaluations. Engel suggests that a faculty member could follow up with an email to the whole class after the evaluations are complete, summarizing key themes that were raised and how they may be used in future iterations of the course. “This way the students would know that the professors were using and reading the evaluations and thinking about them critically.”
Hornum takes it one step further in her courses. Early and often, she references changes she has made that were driven in large part by student feedback. Her current spring-term course features a new text and workbook because of feedback she received from students over a number of terms. She also encourages students to provide feedback throughout the term to help ensure learning. “To me, the end of course evaluation is only one small piece of gauging student learning. You need more opportunity to engage with the students during the whole term to see if they are actually learning what you want them to know.” The feedback culture created in these courses makes completing the end-of-term evaluation a natural step. Dr. Hornum generally has a response rate between 80% and just over 90%.
Since end-of-term evaluations are used by faculty to review the courses they teach and are also a factor in faculty evaluation, their importance cannot be stressed enough. The challenge seems to be communicating that importance to the students. Hopefully, with more focus on, and follow-up from, the evaluations, faculty will be able to raise their response rates and increase the wealth of information provided by student feedback.