Accountability is a Two-Way Street (Part 1)

Another semester has come and gone, course evaluations are finished, and reports are ready to be released to faculty. Our campus made the decision to keep course eval surveys open through the end of final exams, which was a tricky call because our campus also has rolling grade release. Instructors submit grades within 72 hours of the final exam, and when they click ‘Submit’, the grades become available to students; if the final exam falls early in the week, students could potentially see their grade before they evaluate the course. This is where one of the accountability pieces comes in: one of the reasons we evaluate courses and instructors is to hold them accountable to students for improving their courses and teaching. Students want to provide feedback that can improve the student learning experience.

Yikes!  From all of the literature and “best practices” I have read, this practice is strictly taboo, for the following stated reasons:

  • Surveys should close before final exams because students’ perception of the course or instructor could become biased after taking the final exam (if they thought the exam was hard or felt they performed poorly).
  • Surveys should close before grades are released because students might unfairly evaluate a professor based on the grade they received.

For the past two semesters, we’ve thrown caution to the wind and done both, under the rationale that the final exam, and even the course grade, are part of the course and that students should have the ability to evaluate the course in its entirety. This has become cause for consternation among a few vocal faculty who believe that students will unfairly evaluate them, and bias their overall evaluation results, because of the grade they receive or the difficulty of the final exam.

To perform due diligence, we are able to parse the evaluation data by date submitted, and we are now looking at a sample of courses to compare responses to the question, “What overall grade would you give this course?” (Scale: A–F). Early indications are that there is no significant difference between course ratings before and after the final exam and grade release; in fact, there appears to be no correlation whatsoever. Bear in mind that we’ve pulled only a small sample to provide administration with some preliminary findings, but we are now beginning to look at the data en masse to determine whether any biases appear among specific courses, disciplines, class sizes, etc.
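Since the analysis hinges on splitting submissions by date, the before/after comparison can be sketched in a few lines of Python. Everything in this sketch is hypothetical for illustration: the sample responses, the grade-point mapping, and the finals start date are stand-ins, not our actual campus data or tooling.

```python
from datetime import date
from statistics import mean

# Hypothetical mapping of the A-F course rating to grade points.
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

def split_by_finals(responses, finals_start):
    """Partition (date, letter-grade) responses into those submitted
    before finals vs. on/after the start of finals."""
    before = [GRADE_POINTS[g] for d, g in responses if d < finals_start]
    after = [GRADE_POINTS[g] for d, g in responses if d >= finals_start]
    return before, after

def pearson_r(xs, ys):
    """Plain Pearson correlation. With a 0/1 'submitted after finals'
    flag as xs, this is the point-biserial correlation between timing
    and rating."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up sample: two responses before finals, two after.
responses = [
    (date(2014, 5, 5), "B"), (date(2014, 5, 6), "A"),
    (date(2014, 5, 14), "B"), (date(2014, 5, 15), "A"),
]
before, after = split_by_finals(responses, date(2014, 5, 12))
flags = [0] * len(before) + [1] * len(after)
print(mean(before), mean(after), pearson_r(flags, before + after))
```

In this toy sample the before and after means are identical and the correlation is zero, which mirrors the pattern we are seeing in the preliminary data; the real analysis, of course, runs over the full response set and also needs a significance test, not just a point estimate.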

Stay tuned for more results and data!

 

Evaluations and Grades

It’s that time of year when our students are receiving communications telling them to complete their course evaluations, and we’re actively push-push-pushing them to evaluate. Don’t wait – evaluate! we’re telling them. We email them every few days with reminder after reminder after reminder, but still we have response rates hovering around 50%. Ideally, we’re shooting for 70%, which is where we were with the paper evaluations. To try to bridge this remaining 20% of student responses, we’ve extended the time to complete the evaluations all the way through final exams, which is a change in policy for us; we had typically ended the evals before finals because faculty did not want students’ experiences in the final exams to “cloud” their overall perception of the class at the very end. But students – and the data – have told us otherwise. They want to evaluate the ENTIRE class, from start to finish, and the final exam is part of that course experience. Preliminary results from last semester tell us that overall student evaluation ratings do not change from before finals to after finals. In fact, as I’ve said here before, significant research has been done on how and when students form opinions about their courses and instructors, and while many faculty don’t like to hear this, it is not after they’ve delivered all of their brilliant lectures. Some studies have shown that student evaluations after the first lecture are equivalent to evaluations at the end of the course.

We are continuing to follow this and will work with our faculty and governance to evaluate whether the evaluation end dates affect the ratings, so we’re open to hearing others’ thoughts about student evaluation behaviors and what really drives their responses and response rates.

 

 

 

 

Standing Ovations


Students give Tim Evans, an associate professor of toxicology at MU, a standing ovation as he is presented a 2013 William T. Kemper Fellowship for Teaching Excellence award on Monday in the Adams Conference Center.

I have heard from several colleagues (including my husband, also a faculty member) that they have received applause on the final day of the semester – what an honor! I have never received applause from college students, but I have from my other students, my faculty, at the end of a seminar or workshop (and yes, it was for doing a good job, not a polite ‘thank you’ for presenting!). While I do teach undergrads and graduate students, I consider my primary “students” to be the faculty, TAs and postdocs whom I teach, advise, consult with and, in some cases, mentor. Interestingly, these are my colleagues but also my students, and I am proud to participate in their learning and progression as teachers in this time-honored profession.

It is rewarding when a student comes to you at the end of the course, or even years later, and tells you that they remember something you said to them, often something you don’t remember saying. The fact that this person remembers, and that it made a difference to them and their outlook on the world, is a powerful thing for those of us in the classroom.

Many, many faculty tell me that the course evaluations come and go, but the students who return after semesters or years to tell us that we made a difference in their lives are all the validation we need that our jobs are important. We may feel some days like we’re standing up there in the front of the room and no one cares, but then we receive a message like this one, from a former graduate student at the Mountbatten Institute:

“Dear Patricia, I wanted to thank you for your class this evening and to let you know that I was listening and I really liked what you said about…”

This student went on to share with me her experiences with the topic of the class, her career aspirations and her blog http://www.mysiteanniemetcalfe.com/. It was refreshing to hear that even though she hadn’t actively participated that first night, she WAS listening, she was learning and she was engaged with the content. She has continued to be successful in her academics and her career, and I was fortunate to join her for part of her journey.

Because I’m an adjunct instructor (and full-time administrator), I don’t have the pleasure of interacting with hundreds of students each semester like my husband does. He is a dynamic, engaging instructor who has what I affectionately call his “groupies” — those students who seem to hang on his every word, take as many of his classes as possible and come regularly to his office hours, hoping to glean a nugget of wisdom. He regularly receives applause and even a standing ovation from time to time, but not because he’s easy or buys them food or gives extra credit points, because he doesn’t – nothing, nada. He’s actually a real tough nut with extremely high expectations, and it sometimes takes students most of the semester to move from hating him to thinking he’s the best.

So while we can’t focus on accolades every time we teach, we need to keep in mind the end goal: the student learning experience. Long after they’ve forgotten the facts and figures, students will remember you and your efforts to engage them.

 

 

 

 

 

 

Making Soup: What Teaching and Cooking Have in Common

Every fall and spring semester, many professors conduct what we call “mid-semester assessments,” where students provide their instructors written feedback on how the course is going so far. Sounds like a reasonable thing to do, right? Well, not according to some instructors. Here is some of the feedback I’ve received from our faculty:

“For courses like mine that are taught with advanced pedagogical approaches, mid-semester evaluations are like asking an audience what they think of a play after the first act – clearly silly. Moreover, these unhelpful evaluations also reduce our ability to get students to invest still more time in the year-end reviews – the evaluations that really are appropriate and useful.”

“I think it is risky during the middle of the semester to empower students to think they can dictate how and what a professor should teach. The proof is in the pudding and the final grades. The most common complaint I get from students is that I talk too fast, my response is that the typical person speaks about 125 words per minute, and that a good note taker can capture about 45 words per minute…” and so on.

These comments are concerning to me for several reasons:

1) Instructors believe that students need to have experienced the entire course before their feedback is valuable.  Translate: My course is perfectly scripted and nothing you can say or do will change what I need to accomplish.

2) Student feedback dictates how the course should be taught. Translate:  I’m the expert in this subject, not the student.  Only I know how it should be taught.

What’s wrong with these assumptions? First, consider this definition of formative assessment: “…a process used by teachers and students during instruction that provides feedback to adjust ongoing teaching and learning to improve students’ achievements of intended instructional outcomes…the labeling of assessments and tests can lead to misunderstanding. Formative assessment is vulnerable as it is often misunderstood or misinterpreted as a particular test or product, as opposed to a process used by teachers and their students as an ongoing gauge of the current status of student learning.” (http://www.ccsso.org/Documents/FASTLabels.pdf)

A good example is this:  Formative assessment is when the chef tastes the soup; summative assessment is when the customers taste the soup.  Think of yourself as the chef–you need feedback on whether the course is going well. Think of the end of course evaluations as the customer tasting the soup–this is when you hear how good or bad it really was, but at a point when you can’t do anything about it.

The #1 best use of mid-semester assessments?  Determine what you want to know from your students that you can change about the course, and ask them.  Remember: It’s not about YOU. It’s about the students’ learning experience in the course–so ask them that. Prompts that could be used:

What about this course supports and encourages your learning?

What about this course inhibits your learning?

Notice that the questions aren’t about what the instructor is doing well or could improve; they’re about how the course could better support student learning. The responses will generally be the same, but the framing puts the focus on the class as a whole, not on the instructor’s choice of sweaters or heavy accent. Typical student responses read:

“He should spend more time explaining and giving real-world examples and less time reading off the powerpoint.  His examples are interesting and help me to better understand the topic.”

“I like how she takes time to make sure that everyone is following along.  The group work helps to break up the long sessions.”

“In such a large class, it would help if he would wait a little longer for students to respond when he asks questions, especially for those of us who sit in the back of the auditorium, we never get called on to answer because he can’t even see us.”

These are all things that the instructor can change; they do not impact the content or tell the teacher how he/she should be teaching the course. These comments explain, from the students’ perspective, what they need to learn better or what is working well to promote their learning.

Deciphering Student Comments


Ever think that student evaluations of teaching are just a popularity contest?  Is your feedback looking like a bi-modal distribution–students either love you or hate you?  Are student course evaluations a waste of time because the data are worthless? Maybe it’s time you took another look at your students’ feedback in light of what the research tells us…

According to the IDEA Center Paper #50 (Benton & Cashin, 2010), misconceptions about course evaluations are common at universities like ours – misconceptions which are unsupported by research and which make faculty less willing to place value in student feedback and less likely to incorporate changes in their teaching based upon said feedback (Aleamoni, 1987; Feldman, 2007; Kulik, 2001; Svinicki & McKeachie, 2011; Theall & Franklin, 2007).

Some of the most commonly held misconceptions include:
• Students cannot make consistent judgments.
• Student ratings are just popularity contests.
• Student ratings are unreliable and invalid.
• The time of day the course is offered affects ratings.
• Students will not appreciate good teaching until they are out of college a few years.
• Students just want easy courses.
• Student feedback cannot be used to help improve instruction.
• Emphasis on student ratings has led to grade inflation.

The paper goes on to point out that there are more than 50 years of “credible research on the validity and reliability of student ratings,” yet these misconceptions persist, “largely due to ignorance of the research, personal biases, suspicion, fear, and general hostility toward any evaluation process” (Theall & Feldman, 2007).

But what about the qualitative comments students provide on the evaluations? Faculty tell us that they most value the student comments on their evaluations, but students tell us that they don’t think their instructors value their feedback. So which is it? Sadly, both are true. Far too many professors dismiss the evaluations because they believe one or more of these misconceptions, and students know this because they read the RateMyProfessor reviews and learn that nothing about an instructor’s teaching has changed in the last 10 years, regardless of what students are saying.

Want to get the best feedback from your students that will provide you with reliable data with which to make changes in your teaching?  Tell your students and show your students that their feedback matters.  I’ve even seen a professor put student evaluation comments into his syllabus.  I myself spend time at the beginning of the semester discussing the importance of feedback and how I’ve used student comments to change the course for the better, by saying something like, “This semester I’m changing the group project.  In past semesters, students told me how much they hated the group work because scheduling meetings was difficult or because some students felt they did more work than others. So this semester you’ll still be doing group projects but we’ll be doing all of our planning and preparation in Google Docs and Google Slides.  No more scheduling hassles and everyone is held accountable for their portion of the work, because I can see exactly what each person has contributed.”

All I know is that I don’t want to be one of those professors using the same lecture notes and syllabus 10 years from now.  I’m going to continue to solicit and incorporate my students’ feedback, a practice that I know will keep my courses fresh and exciting for me and my students.

 

 

Why Rate My Class?


One of my responsibilities in the Faculty Center is the administration and utilization of the campus course evaluations – typically not a subject that one would get excited about…in fact, not something that I get particularly excited about. BUT, what does excite me about course evaluations is finding ways to bring the teaching and learning process full circle. From the moment a student steps into the classroom on the very first day of the semester to that same student exiting the room after completing the final exam, an age-old process has played out: what we think of as teaching and learning.

What often occurs inside the traditional campus classroom involves:

  • An instructor lecturing from a PowerPoint;
  • An instructor talking and writing on the chalk/white board, Sympodium, etc.;
  • An instructor reading from his/her notes;
  • Students sitting passively in the audience soaking up said instructor’s wisdom (and hoping to retain enough to pass the next exam);

or, it could look like this:

  • Instructor walking around the room lecturing and involving students in the discussion;
  • Instructor using technology to engage students actively in the learning process;
  • Students working in pairs or small groups;
  • Students teaching students with the instructor at the back of the room, facilitating.

So that’s what it LOOKS like in the traditional classroom, the teaching part.  But what about the learning?  What about the student experience while in that classroom?  There is an entire discipline devoted to learning, teaching and how students experience learning, and I’ll devote this space to discussing that discipline.  My idea is to highlight how we can better identify what students have learned and how they have learned it through our use of course evaluations, how we can bring this process full circle for students and faculty.