Accountability is a Two-Way Street (Part 1)

Another semester has come and gone, course evaluations are finished, and reports are ready to be released to faculty. Our campus made the decision to keep course eval surveys open through the end of final exams, which was a tricky decision because our campus also has rolling grade release.  Instructors submit grades within 72 hours of the completion of the final exam, and when they click ‘Submit’, the grades become available to students; if the final exam is early in the week, students could potentially see their grade before they evaluate the course.  This is where one of the accountability pieces comes in: one of the reasons we evaluate courses and instructors is to hold them accountable to the students for improving their courses and teaching.  Students want to provide feedback that can improve the student learning experience.

Yikes!  From all of the literature and “best practices” I have read, this practice is strictly taboo, for the following stated reasons:

  • Surveys should close before final exams because students’ perception of the course or instructor could become biased after taking the final exam (for instance, if a student thought the exam was hard or felt they performed poorly).
  • Surveys should close before grades are released because students might unfairly evaluate a professor based on the grade they received in the course.

For the past two semesters, we’ve thrown caution to the wind and done both, under the rationale that the final exam and even the course grade are part of the course and that students should be able to evaluate the course in its entirety. This has become a cause for consternation among a few vocal faculty who believe that students will unfairly evaluate them and bias their overall evaluation results because of the grade they receive or the difficulty of the final exam.

To perform due diligence, we are able to parse the evaluation data by date submitted, and we are now looking at a sample of courses to compare responses to the question, “What overall grade would you give this course?” (Scale: A-F).  Early indications are that there is no significant difference between course ratings submitted before the final exam/grade release and those submitted after.  In fact, there appears to be no correlation whatsoever.  Bear in mind that we’ve pulled a small sample to provide administration with some preliminary findings on this issue, but we are now beginning to look at the data en masse to determine whether any biases appear among specific courses, disciplines, class sizes, etc.
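For readers curious what this kind of before/after comparison looks like in practice, here is a minimal sketch. The field names (`submitted`, `grade`), the grade-to-number mapping, and the toy data are all hypothetical stand-ins, not our actual survey export format:

```python
# Hypothetical sketch: split "overall grade" responses by whether they were
# submitted before or after grade release, then compare the group means.
from datetime import date
from statistics import mean

# Assumed mapping of letter grades to points (A-F scale from the survey)
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

def split_by_release(responses, release_date):
    """Partition responses into ratings submitted before vs. on/after grade release."""
    before = [GRADE_POINTS[r["grade"]] for r in responses if r["submitted"] < release_date]
    after = [GRADE_POINTS[r["grade"]] for r in responses if r["submitted"] >= release_date]
    return before, after

# Toy data standing in for a real per-course survey export
responses = [
    {"submitted": date(2014, 12, 8), "grade": "B"},
    {"submitted": date(2014, 12, 9), "grade": "A"},
    {"submitted": date(2014, 12, 15), "grade": "B"},
    {"submitted": date(2014, 12, 16), "grade": "A"},
]

before, after = split_by_release(responses, release_date=date(2014, 12, 12))
print(mean(before), mean(after))  # compare the pre- and post-release averages
```

With real data one would follow the mean comparison with a proper significance test (e.g. a two-sample t-test) per course before drawing any conclusions.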

Stay tuned for more results and data!


Evaluations and Grades

It’s that time of year when our students are receiving communications telling them to complete their course evaluations, and we’re actively push-push-pushing them to evaluate.  Don’t wait – evaluate! we’re telling them.  We email them every few days with reminder after reminder after reminder, but still we have response rates hovering around 50%.  Ideally, we’re shooting for 70%, which is where we were with the paper evaluations.

To try to bridge this remaining 20% of student responses, we’ve extended the time to complete the evaluations all the way through final exams, which is a change in policy for us; we had typically ended the evals before finals because faculty did not want students’ experiences in the final exams to “cloud” their overall perception of the class at the very end.  But students – and the data – have told us otherwise.  They want to evaluate the ENTIRE class, from start to finish, and the final exam is part of that course experience.  Preliminary results from last semester are telling us that overall student evaluation ratings do not change before finals versus after finals.

In fact, as I’ve said here before, significant research has been done on how and when students form opinions about their courses and instructors, and, while many faculty don’t like to hear this, it is not after they’ve delivered all of their brilliant lectures.  Some studies have shown that student evaluations after the first lecture are equivalent to evaluations at the end of the course.

We are continuing to follow this and will work with our faculty and governance to evaluate whether the evaluation end dates affect the ratings.  We’re open to hearing others’ thoughts about student evaluation behaviors and what really drives their responses and response rates.