Greg Reihman, Director of Center for Innovation in Teaching and Learning
First Published, Summer 2004
Contact: email grr3 at lehigh.edu, 8-6840, 370a Fairchild-Martindale
What is the best way to make sense of student evaluations of your teaching? What do these evaluations mean, and what role can they play in the improvement of your teaching? There is a diverse body of opinion regarding the validity and usefulness of student evaluations of teaching; in fact, thousands of studies have been published about the reliability and validity of such evaluations. In this posting, however, I set aside these broader questions and offer instead some advice for reacting to your own set of instructor evaluations. My view is that, although no measure of teaching effectiveness is perfect, we as teachers can nevertheless gain a great deal by taking our student evaluations seriously. To that end, I offer some suggestions for how to approach them.
Suggestion 1: Prepare to change one or two things about how you teach.
With so much data in front of you and so many associated concerns on your mind, it is easy to get distracted from the fundamental purpose of these evaluations. They are meant to help you identify your teaching strengths and weaknesses and to help you decide where to focus your improvement efforts. You may have reasons to feel skepticism about these evaluations or frustration about the role they play in your professional life, but if you are genuinely interested in improving your teaching, it will help to set those other concerns aside and read your evaluations determined to find the most important things you can do to become a better teacher. Prepare yourself to read the evaluations as honestly as you can, and to determine fairly which feedback is worth reacting to and which isn't. Lastly, note that if you find yourself in the extreme position of either uniformly dismissing or uniformly endorsing all the criticism put before you, you are likely not approaching your evaluations in a frame of mind that will let you acquire the quality feedback needed to make authentic changes in your teaching.
Suggestion 2: Read your evaluations with thick skin.
It is difficult to absorb criticism if you feel defensive or insulted. If you want the evaluations to be of use, prepare for the worst before you start reading them, because you never know when you will get an insensitive, rude, or even vulgar comment. If you receive such an inappropriate comment, or a particularly harsh score, try not to let it color the overall message of the evaluations as a whole. It may help to bear in mind that a student who writes such a comment is likely young, rash, and unprepared to consider the effect such feedback has. Moreover, recall that no one has trained students in how to give this sort of feedback, and--even though students are in fact reasonably accurate judges of their own learning--they likely have little experience with the tactful expression of their judgments. Such a student may be angry, frustrated, or disappointed about something and has, unfortunately, chosen this opportunity to vent. In some cases you may be justified in dismissing such comments completely; in others, you may want to consider what to do next time to detect and correct such frustration or hostility earlier in the course. If you find yourself unable to read the evaluations without getting upset at the coarser comments that may be among them, you may wish to ask a peer or a friend to read them for you and translate the comments into more constructive language.
Suggestion 3: Look for general trends and overlook the outliers.
While you may learn something through a close reading of each individual evaluation, the best approach is to look for the general trends. What are the recurring observations made by students? What do the numbers and the comments in these trends reveal about your strengths and weaknesses? As for the outliers, bear in mind that even the greatest teacher in the most highly successful class may get one or two evaluations that contain comments or ratings that don't match up with those of the other students. Although in some cases these outliers may reveal something to you about your teaching, in most cases you can safely ignore these and look for broad trends indicated by the class as a whole. Note that this applies both to unusually positive and unusually negative outliers.
Suggestion 4: Seek explanations, but don't explain away all the data.
If you want to use evaluations to improve your teaching, it doesn't help to read them in a defensive posture. While you may want to consider whether there are reasons, independent of the quality of your teaching, that might be affecting your scores, you don't want to explain away all feedback so that you can gain nothing from it. For example, some faculty may want to write off the evaluations entirely because of factors that bias their accuracy. However, even if some factors really do bias the ratings somewhat, the existence of some bias does not delegitimize all the data. For example, some studies have shown that a required course will likely have lower average evaluations than an elective, so if your scores in a required course are low, this 'requirement penalty' may be a factor. But, again, it is not wise to use this fact as an excuse to dismiss all of the data. Rather, try to mentally correct for the bias, and then look at what the numbers and comments tell you, all possible bias aside. Also, note that research has shown many common conceptions about bias (instructor appearance, gender, rank, class size, perceived easiness) to be inaccurate; that is, many factors that you might think bias students for or against you probably don't.
Suggestion 5: Focus on the elements of the evaluation that matter most.
If there's something you're working on in your teaching, pay particular attention to that portion of the form to gauge how it's going. Probably the most important categories are #1 (Overall effectiveness), #2 (Overall quality), #14 (Learned a great deal), and #21 (Would recommend the course). The other questions may help you fill in the details of your teaching efficacy, but these four most reliably get at the heart of the question of whether you have done a good job teaching your students.
Suggestion 6: Decide if it's appropriate to focus on student perception.
If, after much soul searching and consideration of other evidence, you are convinced that the ratings don't match the reality of your teaching, you may want to focus your efforts on correcting the perception that generated the ratings. For example, if you think you have clear goals but the students think you don't (#15, Goals), then the pedagogical benefit of having clear goals is effectively lost and, for all practical purposes, you don't have clear goals. Or, if you are regularly available in your office, have extra office hours during tests, and meet students whenever asked, but still get low numbers on #13 (Availability), then the problem may be that students don't know that you are as available as you really are, or they may find it difficult to approach you. In either case, student perception matters and has an impact on how students learn. So you may want to state your availability more prominently in your syllabus (and remind students frequently), or you could organize informal events (a lunch, coffee, etc.) that highlight your willingness to meet with them. If students see grading as unfair (#16), you may wish to explain your grading procedures more carefully. If students think that the assignments didn't help them learn the material (#18), you may want to make the purpose of the assignments clearer.
Suggestion 7: Choose one or two areas and commit to improving in those areas.
When you have identified the general trends of the evaluation, corrected for possible biases, and persuaded yourself that the trends accurately reveal something about your teaching, focus on one or two substantial things to work on. You can't change everything about your teaching overnight, and trying to do so may do more harm than good. Meet with a peer, a chair, a colleague, or someone in Faculty Development to discuss ways to make these improvements.
Suggestion 8: Supplement the end-of-semester evaluation form.
The common end-of-semester evaluation form shouldn't be the only way you get feedback from your students. Design smaller versions of this evaluation form and give them out to students periodically during the semester. You may want to focus on something you are working on, perhaps some new assignment, method, or technology. At a minimum, use a mid-semester feedback form, asking general questions about what students like about the class, what they dislike, and what suggestions they have for you to help them learn more. Note that if you solicit this feedback, you should be prepared to act on it and to tell students how you have done so. This doesn't mean that you have to radically change a course based on every piece of negative feedback; rather, you should let your students know what you have decided to change (and why) and what you have decided to keep the same (and why). This process informs you of problems so you can make mid-course corrections. It also makes students more aware of how they learn and trains them in giving feedback, all of which can't help but improve the quality of the feedback you get at the end of the course.