POLICY PEARLS
Year : 2018  |  Volume : 1  |  Issue : 1  |  Page : 7-10

Improving response rates for course and instructor evaluations using a global approach


1 Department of Veterinary Population Medicine, University of Minnesota College of Veterinary Medicine, St. Paul, MN 55108, USA
2 Department of Veterinary Clinical Sciences, University of Minnesota College of Veterinary Medicine, St. Paul, MN 55108, USA

Date of Web Publication: 1-Oct-2018

Correspondence Address:
Dr. Erin D Malone
Department of Veterinary Population Medicine, University of Minnesota College of Veterinary Medicine, St. Paul, MN 55108
USA

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/EHP.EHP_7_18

  Abstract 


Obtaining sufficient survey responses to make course and instructor evaluation results meaningful is a challenge in many, if not most, health professions training programs. This paper describes a series of policy changes that significantly improved data quality at one college of veterinary medicine located in the United States. The steps consisted of minimizing the number of items appearing on the instruments, providing students adequate time and space for completion, clearly explaining the purpose and value of the evaluations, simplifying data collection, collecting verbal feedback, and closing the loop with student participants by informing them of any changes that were made as a result of their feedback. The steps outlined in this model may be easily extended to other health professions programs that involve cohort models, multi-instructor courses and limited resources with respect to time and people.

Keywords: Course evaluations, instructor evaluations, survey response rates


How to cite this article:
Malone ED, Root Kustritz MV, Molgaard LK. Improving response rates for course and instructor evaluations using a global approach. Educ Health Prof 2018;1:7-10

How to cite this URL:
Malone ED, Root Kustritz MV, Molgaard LK. Improving response rates for course and instructor evaluations using a global approach. Educ Health Prof [serial online] 2018 [cited 2023 Mar 27];1:7-10. Available from: https://www.ehpjournal.com/text.asp?2018/1/1/7/242555




Introduction


Students' perspectives on their instructors and courses (student ratings of instruction, student ratings of teaching, or student evaluations of teaching; here referred to as SETs) continue to be a topic of extensive research and debate.[1],[2],[3],[4] The use of SETs is expanding globally, with increasing value placed on the data for promotion, tenure, and retention decisions.[5] While SETs are not infallible and should not be used in isolation, most agree that student input is essential for evaluating teaching and course effectiveness.[6],[7],[8] Many suggest that the problems associated with these surveys have small effects on the final score as long as enough information is collected and interpretation is appropriate.[7],[8],[9] Adequate response rates for the population surveyed are key to valid results, minimizing the risk of sampling error and biased results.[8],[10],[11]

SETs are required at the University of Minnesota by university policy as well as by departmental and collegiate promotion and tenure guidelines. Each semester, students are asked to rate each of their courses and instructors. For many years, course evaluation forms included nine separate criteria and instructor evaluations included twelve, with comments requested for each category. For our class sizes (approximately 100 students per cohort) and survey design, a >50% response rate was calculated as necessary for reasonable score validity when the average score variability (SD) was ≤1.0.[10],[11] For many years, we experienced such low response rates that score validity was in question. Multiple methods were attempted to improve response rates, including using forms with fewer questions, requesting surveys only if an instructor taught more than three sessions in a course, opening surveys mid-semester for earlier input, offering rewards for high participation rates, holding prize drawings, and attempting to withhold grades until surveys were returned. Despite these efforts, response rates continued to decline. However, without an obvious alternative, the data were still used by the Curriculum Committee for course decision-making and were considered for promotion and salary decisions by departments.
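For readers who want to reproduce a threshold of this kind, the sketch below applies the standard finite-population sample-size formula for estimating a mean. The ±0.2 margin of error at 95% confidence is our assumption for illustration; the article does not state which margin underlies the >50% figure.[10],[11]

```python
import math

def min_respondents(cohort: int, sd: float, margin: float, z: float = 1.96) -> int:
    """Minimum respondents needed to estimate a mean rating within
    +/- `margin` at the confidence level implied by `z`, applying the
    finite-population correction for a cohort of `cohort` students."""
    n0 = (z * sd / margin) ** 2        # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / cohort)   # finite-population correction
    return math.ceil(n)

# Cohort of ~100 students with rating SD <= 1.0; the +/- 0.2 margin at
# 95% confidence is an illustrative assumption, not a value stated in
# the paper.
print(min_respondents(cohort=100, sd=1.0, margin=0.2))  # -> 50, i.e., >50% of 100
```

Under these assumptions, roughly 50 of 100 students must respond, consistent with the >50% response-rate threshold cited above.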

In recent years, we changed our practices and significantly improved both course and instructor evaluation response rates and data quality in a time-efficient manner. The purpose of this article is to describe this model to help educators at other institutions similarly optimize the effectiveness of their course and instructor evaluation efforts.


Course Evaluations


Modifications

In 2015, with the approval of the department chairs, the Curriculum Committee reviewed the existing course questions and revised them into two global thematic questions scored on a 5-point Likert-type scale: (1) Expectations for successful completion of the course were clear to me; and (2) Overall, I would rate the course. These items were supplemented by two open-ended prompts: (1) Things I liked best about this course; and (2) Suggestions for improvements.

Students were invited by cohort to a midday session devoted to course evaluations and told that lunch would be provided. Attendance was strongly suggested but not mandated. The session leaders explained the plan for the session, as well as the plan for data dissemination and use. While students ate, scores for each course were collected anonymously through an audience response system, and written comments were collected through an online survey. Verbal comments were solicited during the remainder of the session. The academic associate dean, assistant dean, and/or the curriculum coordinator listened, took notes, and asked follow-up questions as well as questions about the semester as a whole. In general, explanations or counterpoint arguments were avoided. After surveys were closed (1 week after the end of examinations), comments were compiled into themes, with verbal and written comments kept separate to avoid overweighting more outspoken participants. Course coordinators, department chairs, and the Curriculum Committee received both scores and de-identified comments. In later years, students also received a summary of planned next steps to show how the college was responding to their suggestions and concerns.
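The compilation step lends itself to simple tooling. The following is a minimal sketch, assuming de-identified records tagged by course and feedback channel; the record layout and sample comments are hypothetical, and the paper does not prescribe any particular software for this step. It simply groups comments so that verbal and written feedback remain separate, as described above.

```python
from collections import defaultdict

# Hypothetical de-identified records: (course, channel, comment text).
comments = [
    ("Anatomy", "written", "More practice quizzes would help."),
    ("Anatomy", "verbal", "Lab sessions felt rushed."),
    ("Anatomy", "verbal", "Liked the case discussions."),
    ("Pharmacology", "written", "Expectations were clear."),
]

# Group by (course, channel) so verbal and written feedback stay
# separate and outspoken participants are not over-weighted.
grouped = defaultdict(list)
for course, channel, text in comments:
    grouped[(course, channel)].append(text)

for (course, channel), texts in sorted(grouped.items()):
    print(f"{course} ({channel}): {len(texts)} comment(s)")
    for text in texts:
        print(f"  - {text}")
```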

Post-intervention results

With the single evaluation session per class and the shortened surveys, response rates increased substantially, with related improvements in data quality. We have been using the process described in [Table 1] for course evaluations since 2015. Surveys (scores and comments) are now collected through Qualtrics® for ease of reporting. We continue to strongly suggest attendance but have not mandated the sessions due to the risk of “survey satisficing” (similar scores across all courses) or carelessness.[12] The students generally seem to appreciate the chance to voice their concerns, and those who prefer to enter them electronically are reassured that this input carries equal weight. Because all courses are evaluated and comments are generally constructive, faculty have been very supportive of the process. If course coordinators want more specific information about their course, they can survey students during regular class time via the course management system or submit additional questions for the verbal feedback session. Department chairs and the Curriculum Committee are equally satisfied, as response rates and data quality are much improved.
Table 1: Course and instructor evaluation model




Instructor Evaluations


Modifications

In 2016, we proposed a more global approach to instructor evaluation, targeting early career faculty and suggesting only one survey per instructor per year, regardless of the number of courses in which that instructor taught. This change affected only the preclinical veterinary curriculum. We did not alter the process for collecting evaluations for undergraduate courses or clinical rotations.

Using the guidelines in [Table 2], the preliminary list of instructors for each semester and cohort was presented to the department chairs. Two of the three departments elected to participate in the first year; the third continued with the prior process of one evaluation per instructor in each course, with surveys distributed electronically starting mid-semester. For the two participating departments, minor edits to the proposed list were made based on upcoming promotions and faculty feedback needs.
Table 2: Evaluation criteria for UMN-CVM instructors



The questions on the instructor evaluations were unchanged from the SETs administered before 2016 and were delivered electronically using the same process as for course evaluations. The survey also included a picture of each instructor evaluated and, later, a description of the topics the instructor taught. The time allotted for collecting evaluations was expanded to 90 min to accommodate both course and instructor evaluations, following the process in [Table 1]. Students took approximately 20 min to submit electronic scores for both instructors and courses. The verbal discussion was restricted to courses.

Post-intervention results

Before the changes implemented in 2016, the total number of evaluations requested per student was excessive, with over 50 instructor evaluations requested most semesters. The list of instructors included many full professors, and many instructors had several surveys due to teaching assignments in multiple courses. With only a few evaluations on their list and food in front of them in 2016, students seemed very willing to complete the full set.

The response rate data convinced the third department to join the new format in fall 2017. Response rates remained high, even when the number of instructor evaluations stretched to 11 surveys per student. The focused list meant the evaluations performed matched the list of faculty needing SETs for promotion purposes. We have had enough flexibility to allow those going up for promotion to add an additional semester of evaluations. Due to the format of annual review packets, it has been important to ensure that review committees do not penalize faculty for omitting evaluations from a particular semester; reminders at the time of packet review have been an important step. We have encouraged faculty not officially reviewed in a given semester to collect their own formative evaluations, using their class time and not the standard instructor evaluation forms. Instructors can also request mid-semester evaluations to gather information outside of this format.


Lessons Learned


By carefully considering which questions and which surveys are most useful, we have been able to shorten the course and instructor evaluation processes to something attainable within an extended lunch hour. Listening to student comments may be one of the most crucial components; a similar approach has been effective at Cornell University (Katherine Edmondson, personal communication, March 3, 2018).

The restricted number of faculty evaluations was the most challenging hurdle due to the importance of SETs for faculty promotion. However, once those who were initially hesitant to adopt the process saw the improved data quality, their concerns quickly dissipated. North Carolina State University College of Veterinary Medicine implements a similar model with even fewer evaluations per faculty member: over a 5-year span, assistant professors must be evaluated three times, associate professors twice, and full professors once (Lizette Hardie, personal communication, March 4, 2018).


Conclusion


The purpose of this article was to describe a course and instructor evaluation model that has proven very effective at the University of Minnesota College of Veterinary Medicine. Using the elements described in this work, we were able to make dramatic improvements in data quality that have greatly strengthened our evaluation processes. Although the methods and policies described here are specific to one college of veterinary medicine, we believe the process could be easily extended to other health professions programs that involve cohort models, multi-instructor courses, and limited resources with respect to time and people.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
References

1. Flaherty C. New study could be another nail in the coffin for the validity of student evaluations of teaching. Inside Higher Ed; 2016. Available from: https://www.insidehighered.com/news/2016/09/21/new-study-could-be-another-nail-coffin-validity-student-evaluations-teaching. [Last accessed on 2018 May 19].
2. Uttl B, White CA, Gonzalez DW. Meta-analysis of faculty's teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Stud Educ Eval 2017;54:22-42.
3. Benton SL, Li D. IDEA Editorial Note #1: Response to “A Better Way to Evaluate Undergraduate Teaching”. IDEA Center; 2015. p. 1-9. Available from: https://www.ideaedu.org/Portals/0/Uploads/Documents/A_Better_Way_to_Evaluate.pdf. [Last accessed on 2018 May 19].
4. Benton SL, Ryalls KR. Challenging misconceptions about student ratings of instruction. IDEA Paper #58; 2016. p. 1-22. Available from: http://www.ideaedu.org/Portals/0/Uploads/Documents/Challenging_Misconceptions_About_Student_Ratings_of_Instruction.pdf. [Last accessed on 2018 May 19].
5. Miller JE, Seldin P. Changing practices in faculty evaluation. Academe 2014;100:35-8.
6. Beran TN, Donnon T, Hecker K. A review of student evaluation of teaching: Applications to veterinary medical education. J Vet Med Educ 2012;39:71-8.
7. Benton SL, Li D. IDEA student ratings of instruction and RSVP. IDEA Paper #66. IDEA Center; 2017. Available from: https://www.ideaedu.org/Portals/0/Uploads/Documents/IDEAPapers/IDEAPapers/PaperIDEA_66.pdf. [Last accessed on 2018 May 19].
8. Linse AR. Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees. Stud Educ Eval 2017;54:94-106.
9. Ryalls K, Benton S, Li D. Response to “Zero Correlation Between Evaluations and Learning”. IDEA Center; 2016. Available from: https://www.ideaedu.org/Portals/0/Uploads/Documents/Response_to_Zero_Correlation_Between_Evaluations_Teaching.pdf. [Last accessed on 2018 May 19].
10. Berk R. Top 20 strategies to increase the online response rates of student rating scales. Int J Technol Teach Learn 2012;8:98-107. Available from: http://www.sicet.org/journals/ijttl/issue1202/2_Berk.pdf. [Last accessed on 2017 Dec 18].
11. Royal K. A guide for making valid interpretations of student evaluation of teaching (SET) results. J Vet Med Educ 2017;44:316-22.
12. Krosnick J. The threat of satisficing in surveys: The shortcuts respondents take in answering questions. Survey Methods Newsletter 2000;20:4-8.



 
 

