It’s that time of year when our students are receiving communications telling them to complete their course evaluations, and we’re actively push-push-pushing them to evaluate. “Don’t wait – evaluate!” we tell them. We email them every few days with reminder after reminder, but our response rates still hover around 50%. Ideally, we’re shooting for 70%, which is where we were with the paper evaluations.

To try to close that remaining 20-percentage-point gap, we’ve extended the window to complete the evaluations all the way through final exams. This is a change in policy for us: we had typically closed the evals before finals because faculty did not want students’ experiences in the final exam to “cloud” their overall perception of the class at the very end. But students, and the data, have told us otherwise. Students want to evaluate the ENTIRE class, from start to finish, and the final exam is part of that course experience. Preliminary results from last semester suggest that overall evaluation ratings do not differ between responses submitted before finals and those submitted after.

In fact, as I’ve said here before, significant research has been done on how and when students form opinions about their courses and instructors, and, while many faculty don’t like to hear this, it is not after they’ve delivered all of their brilliant lectures. Some studies have shown that student evaluations collected after the first lecture are equivalent to evaluations at the end of the course.
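For the statistically curious, here is a minimal sketch of how a before-vs-after-finals comparison of ratings might be checked. Everything in it is hypothetical: the rating values are made-up placeholders, not our data, and a Welch’s t-test is just one reasonable way to compare the two groups.

```python
# Hypothetical sketch: do mean evaluation ratings differ between
# responses submitted before finals and those submitted after?
# The rating lists below are illustrative placeholders, not real data.
from scipy import stats

# Ratings on a 1-5 scale, grouped by when the student responded.
pre_finals = [4.2, 3.8, 4.5, 4.0, 3.9, 4.4, 4.1]   # submitted before finals
post_finals = [4.0, 4.3, 3.7, 4.2, 4.1, 3.9, 4.4]  # submitted during/after finals

# Welch's t-test (equal_var=False) does not assume the two groups
# have equal variances or equal sizes, which they rarely will.
t_stat, p_value = stats.ttest_ind(pre_finals, post_finals, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# A large p-value here would be consistent with the preliminary finding
# that ratings do not differ between the two submission windows.
```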
We are continuing to follow this and will work with our faculty and governance to evaluate whether the evaluation end date affects the ratings. In the meantime, we’re open to hearing others’ thoughts about student evaluation behaviors and what really drives their responses and response rates.