Lunch and Learn: Evaluating Teaching Effectiveness

This post summarizes recent Lunch and Learn discussions among Homewood faculty about methods for evaluating teaching effectiveness. These discussions supported the work of the Provost’s ad hoc Committee on Teaching Evaluations. Provost Kumar established the committee in response to the Second Commission on Undergraduate Education report, which recommended establishing a new system for assessing teaching and student mentoring by faculty. These were the first of multiple conversations the committee will hold with faculty, graduate students, and undergraduates.

Fifty faculty joined one of two discussions (February 16 and 22) moderated by the Vice Deans of Undergraduate Education, Michael Falk and Erin Rowe, along with Mike Reese of the Center for Teaching Excellence and Innovation (CTEI).

The Vice Deans reviewed principles and objectives drafted by the committee in order to collect faculty feedback and suggestions for improvement. Attendees then discussed methods, grounded in these principles and objectives, for evaluating teaching beyond the current system of course evaluations. The following summarizes some of the attendees’ comments.

  • Teaching evaluations serve multiple purposes: students use them to choose courses, faculty use them to improve their teaching, the Homewood Academic Council uses them in promotion and tenure decisions, and schools use them for program assessment and accreditation. One mechanism – course evaluations (or student evaluations of teaching) – should not serve all of these purposes.
  • Using multiple methods of evaluating teaching (peer evaluation, review of course materials, etc.), rather than relying on traditional course evaluations alone, can lessen the impact of student bias against underrepresented minorities.
  • An attendee shared that a more teaching-focused college where she previously worked had a committee of peer evaluators: faculty trained to provide feedback. The instructor would meet with the peer evaluator before class to discuss lesson plans and then debrief after the observation. Junior faculty were reviewed more frequently than senior faculty, and evaluators were matched to instructors by discipline where possible. The review served as a formative assessment and also as a summative assessment when someone came up for promotion.
  • Another professor shared that at West Point, instructors attended formal training on how to teach. Senior faculty came to class three times during the semester to observe new instructors, using a defined rubric that was shared beforehand. As for course evaluations, only the instructor saw them; the department chair did not, and student comments played no role in promotion.
  • With more faculty recording their classes, peer evaluators could review the recordings and provide feedback asynchronously; this is reportedly done at Harvard Business School.
  • Someone asked what is considered “quality teaching” and suggested there must be some standard. We need to consider how much weight to give the entertainment value or comfort level of sitting in class, as opposed to students being inspired to pursue a career or to dig deeper into the topic. Who decides that focus for course evaluations, and how? Another person asked the individual who raised the question for his own thoughts. He responded, “In engineering, every class must have an objective, and we need to demonstrate we are collecting data to show we are meeting it. Another option is to have an expert – maybe in sociology or psychology – write a question that measures whether the class is interesting or stimulating to be in.”
  • One instructor raised the question of who the experts are who should conduct evaluations. Attendees mentioned instructional design staff at the Center for Teaching Excellence and Innovation but also felt that discipline-based education experts were needed. Teaching faculty familiar with discipline-specific teaching strategies (e.g., in math, engineering, or humanities seminars) should also be considered.
  • For evaluating teaching effectiveness, the most important thing is measuring what students have learned and, ideally, retained over long periods. We need concept inventories and tests of student knowledge that extend beyond the end of a class, possibly into future semesters. Peer evaluations could be part of the process of helping instructors improve, but they don’t really measure learning.
  • An instructor shared, “At my previous institution, we assessed teachers through narratives describing changes they made [in their course] based on new studies published in their areas, in addition to participation in teaching workshops and conferences and adoption of new practices. I would also suggest we evaluate teaching rigor in some of the same ways we evaluate scientific rigor. Look at whether faculty make their resources open and accessible, use OER [open educational resources], etc.”
  • One professor suggested that faculty share with students the purpose of course evaluations and how they will be used; doing so may discourage complaining. “I tell them I read their comments and my boss reads them. I’ve seen biased comments decline since I shared that with students.”
  • Another instructor said the dean’s offices could provide a script for instructors to read explaining how course evaluations are used and the importance of students communicating feedback civilly.
  • Course evaluations should be filtered for racist and misogynistic comments so faculty are not subject to them.
  • Course evaluations could include a checklist of statements so student feedback is more specific. Students would select the statements that apply to their instructor in addition to entering open comments.
  • Someone suggested that every comment a student makes be required to include at least one specific example as evidence, though this would make the surveys longer.
  • One instructor asked, “How do we tease apart teaching effectiveness so [evaluations] focus on learning and not grading?”
  • Is it possible to ask students to reflect on gateway or core classes sometime in the future to identify how the course provided foundational skills for later courses or co-curricular activities (e.g., internships)? Someone added, “This week I had a senior tell me, ‘I didn’t realize that those concepts really would come up over and over in my other classes, but they did!’” Another attendee shared, “They also report back 3+ years after graduating to say that they use something they learned in class that at the time they thought would be useless.”
  • Perhaps it would also be helpful to leverage our alumni network as one way to capture the enduring effects of learning from various classes.
  • Another instructor mentioned that it can also help to ask students for feedback during the semester so you can adjust instructional methods.
  • Whatever system is developed should not place undue burden on faculty, who are already shouldering growing administrative loads.
  • An instructor remarked that once a change is implemented, faculty will need training on the new evaluation system; training will also encourage professors to be open to the process. Support for instructors in interpreting the results of evaluations will be critical.

One faculty member shared two books to inform the committee’s future work.

If you have questions or comments about the teaching evaluation process, feel free to email Mike Reese.

Mike Reese, Associate Dean and Director, CTEI
Mike Reese is Associate Dean of University Libraries and Director of the Center for Teaching Excellence and Innovation. He has a PhD from the Department of Sociology at Johns Hopkins University.

Image Source: Lunch and Learn Logo, Unsplash