ORIGINAL ARTICLE

Year: 2018 | Volume: 1 | Issue: 1 | Page: 11-18
Using a model board examination and a case study assessing clinical reasoning to evaluate curricular change
Margaret V Root Kustritz1, Aaron Rendahl2, Laura K Molgaard3, Erin Malone3
1 Department of Veterinary Clinical Sciences, University of Minnesota College of Veterinary Medicine, St. Paul, MN, USA
2 Department of Veterinary and Biomedical Sciences, University of Minnesota College of Veterinary Medicine, St. Paul, MN, USA
3 Department of Veterinary Population Medicine, University of Minnesota College of Veterinary Medicine, St. Paul, MN, USA
Date of Web Publication: 1-Oct-2018
Correspondence Address: Dr. Margaret V Root Kustritz, University of Minnesota College of Veterinary Medicine, 1352 Boyd Avenue, St. Paul, MN 55108, USA
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/EHP.EHP_2_18
Background: This study compared student ability to integrate basic science and clinical information before and after implementing a curriculum revision that introduced a problem-oriented case approach as required coursework.
Materials and Methods: Student knowledge and competence were assessed just before entry into clinical training by completion of 100 multiple-choice questions mirroring the breadth and type of questions on the national licensing examination (Part I) and by completion of 10 cases to discern clinical decision-making (Part II). Scores from students from the classes of 2015 and 2016 (previous curriculum) were compared to those from students from the classes of 2017 and 2018 (current curriculum).
Results: Part I scores were not significantly different between any classes in the previous and current curriculum. Part II scores for 3rd-year students in the current curriculum were higher than those for comparable students in the past 2 years of the previous curriculum. Mean scores for the class of 2016, the last year of the previous curriculum, were significantly lower than those of all other classes.
Conclusion: Students benefit from measured and repetitive practice in clinical reasoning.
Keywords: Clinical decision-making, curriculum, licensure, outcome assessment, veterinary medicine
How to cite this article: Root Kustritz MV, Rendahl A, Molgaard LK, Malone E. Using a model board examination and a case study assessing clinical reasoning to evaluate curricular change. Educ Health Prof 2018;1:11-8
Introduction
Clinical decision-making is a foundational skill for a veterinary practitioner. All veterinary colleges or schools accredited by the American Veterinary Medical Association Council on Education are required to demonstrate that all graduates have attained a set of core clinical competencies that together comprise clinical decision-making.[1]
Students come to veterinary training generally well versed in the scientific method or hypothetico-deductive reasoning. Some argue that clinical reasoning is the same thing: veterinarians use the first set of information collected (history, physical examination findings, preliminary labwork) to determine the most likely set of differentials and then design their diagnostic testing scheme to "prove" that the primary differential is the correct diagnosis.[2] Others argue that clinical reasoning is instead the use of inductive logic, whereby presentation of a given problem leads one to consider differentials and to make decisions in a broader context that includes societal factors, such as an owner's financial resources.[3],[4]
Teaching clinical reasoning is complicated by the design of most current veterinary curricula. Students in the preclinical portion of the curriculum receive a large volume of information that is usually not presented within the broader clinical context, and so they encode that information in memory without linking it to a clinical presentation.[4] Whether students are taught to think through cases using hypothetico-deductive reasoning or inductive logic, specific attention must be paid in the veterinary curriculum to giving students opportunities to practice these skills, with specific feedback and opportunities to try again.
The most common means by which clinical decision-making traditionally has been taught are problem-based learning and problem-oriented case analysis.[4] Problem-based learning, where students are provided with a problem and facilitated through self-guided inquiry to understand relevant supporting information, has not been demonstrated consistently to provide students with a foundation in clinical decision-making.[5],[6],[7] Problem-oriented case analysis may use either hard copy or electronic cases through which students work with or without instructor facilitation, either independently or in small groups. Independent use of electronic cases has been demonstrated to be less valuable than facilitated case presentation with specific questioning and feedback to guide student learning.[8]
Assessing student competence in clinical decision-making is difficult. Because students generally learn information within the framework of a discipline or a species, it may be difficult for them to retrieve the mix of information required to address a clinical problem.[9] As novice decision-makers, students also have a limited body of information that would permit them readily to ignore extraneous data in the history and physical examination and to compare and contrast likely rule outs when diagnostic information is interpreted.
Case scenarios can be used both to teach and to assess clinical decision-making.[10],[11],[12] Students need to practice the steps of clinical decision-making with guidance and feedback to ensure accuracy of their knowledge and to help them prioritize rule outs from common to uncommon depending on the unique circumstances of the case.[13] Students should first be provided with less complex or ambiguous cases and moved toward more complex cases only as they demonstrate competence. Practicing cases can increase student self-confidence as they get to demonstrate knowledge gained and can see their progress in understanding how to work through a case.[14]
Curriculum review and revision was undertaken at the University of Minnesota College of Veterinary Medicine.[15] Both the previous and current curricula include a series of core (required) courses in the first 3 years and permit tracking beginning in the 3rd year, followed by a great variety of internal and external clinical experiences offered in the 12.5-month-long final "year" of training. Tracks available are equine, food animal, mixed animal/interdisciplinary, research, and small animal.
When the faculty voted to accept the current curriculum, it was implemented beginning with the fall semester of the 1st year, such that all students completed their training either entirely in the previous curriculum or entirely in the current curriculum. One of the stated goals of the revision was "to create graduates with entry-level competence and confidence and to provide them with the scientific foundation required for acquisition of discipline or species expertise, including an emphasis on critical thinking that integrates basic science and clinical learning, and appropriate understanding and use of the veterinary literature." To help achieve this goal, both required and elective courses with a problem-based approach were introduced.
Any curriculum change is associated with extra work for instructors and costs to the college. Assessment of outcomes provides evidence of successful achievement of goals of curricular change and provides direction for continuing change. Outcomes assessment also helps the college demonstrate to faculty members, staff, and students the value of the work they do to improve student learning. This study assessed outcomes of these labor-intensive problem courses to see if the anticipated improvement in clinical decision-making in students entering the clinical year had occurred. Hypotheses were:
- Students in the current curriculum (classes of 2017 and 2018) will have similar scores on the multiple-choice "mini-board examination" compared to students in the previous curriculum (classes of 2015 and 2016). Student scores were not expected to change significantly because this portion of the curriculum, while reorganized, had not changed drastically in content and because students historically have done well on the national licensing examination, leaving little room for demonstration of improvement.
- Students in the current curriculum will have better scores on the assessment of clinical decision-making compared to students in the previous curriculum. Students were expected to show significant improvement because of the concerted effort that had been made in the current curriculum to train students in clinical decision-making, an effort that had been available to a much lesser extent previously.
Materials and Methods
This study was approved by the Institutional Review Board. Problem courses are defined in this study as courses that require students to approach clinical cases from the perspective of the presenting problem(s) for a given patient.
In the previous curriculum, which was organized by body system, there was one elective problem course, Small Animal Problems, which routinely enrolled one-half to two-thirds of the 3rd-year class in spring semester. Other opportunities to practice clinical decision-making happened within the body system courses on an ad hoc basis.
In the current curriculum, cases still are provided in various classes on an ad hoc basis. Approximately one-third of each class enrolled in an elective multi-species problem course in the spring semester of their 1st or 2nd year. All students were required to take a species-focused problem course in spring semester of the 3rd year. Students in a species track took the related species problem course (small animal, equine, or food and fiber). Students tracking mixed/interdisciplinary or research chose from among those classes. In all problem-approach courses in the current curriculum, students were presented with cases about an individual animal or population of animals, were given opportunities to gather historical information, were provided with physical examination findings, and then were expected to generate a problem list with associated rule outs and diagnostic tests. Students were provided with diagnostic test results and were required to interpret those results to come to some sort of conclusion, which varied with the case. Some cases also included communications with clients or other health professionals, consideration of client finances and knowledge of the cost of examinations and diagnostic tests, and ethics. Courses varied in the learning techniques used (small group versus large group discussion, assignments versus primarily in-class work, and expectations of literature review) and in assessments and grading.
Participants
Students completing their 3rd year in the DVM curriculum were required to participate in an assessment. The assessment consisted of multiple-choice questions (MCQs) and case studies requiring short answer responses to evaluate clinical decision-making. Participating students were from the classes of 2015 and 2016 (previous curriculum) and 2017 and 2018 (current curriculum).
Instruments
Part I: One hundred MCQs were drawn from online and hard-copy resources to mirror the type and breadth of questions on the North American Veterinary Licensing Examination (NAVLE). The published NAVLE blueprint was used to generate 18 bovine questions, 23 canine questions, 15 feline questions, 16 equine questions, and 28 other questions (other species or multiple species). The 100 questions were similarly divided by discipline (7 general practice, 29 medicine, 14 surgery, 46 specialties, and 4 other [public health]). Each question was worth 1 point, with a possible total score of 100.
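As an illustration only (the vectors and check below are not part of the original study materials), the blueprint tallies can be recorded in R, the language used for the study's statistical analysis, and verified to account for all 100 one-point questions:

    # NAVLE-style blueprint counts by species and by discipline,
    # taken from the text above
    species <- c(bovine = 18, canine = 23, feline = 15, equine = 16, other = 28)
    discipline <- c(general_practice = 7, medicine = 29, surgery = 14,
                    specialties = 46, other_public_health = 4)

    # Each question is counted once in each classification, so both
    # groupings should sum to the 100-point total
    stopifnot(sum(species) == 100, sum(discipline) == 100)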
Part II: The primary author wrote ten cases to assess clinical decision-making. Five cases each were about large animals and small animals. Fifteen faculty members, including clinicians from both small animal and large animal and representing a wide range of disciplines, and one external practitioner completed the cases just before the beginning of the study. The primary author used their responses to create a grading rubric [Appendix].
Procedures
Part I: MCQs were posted on a secure Moodle site and were open to students during a specific 2-week window at the end of their 3rd year. The same examination was provided to students in all 4 years of the study. Students were informed that this was intended to be formative, not summative, and were given no specific guidance regarding whether or not they should study for the examination. Students were reminded of the collegiate honor code. Identical messages were sent to students in all 4 years. The order of the questions did not vary by student. Students could complete the questions in any 2-h span of their choosing during the testing window without use of external resources, mimicking the experience of taking the NAVLE at a computer testing center. When that window closed, the primary author provided students with feedback on their performance by species and discipline and with their total score if they requested it.
Part II: For the case studies, students in all 4 years of the study received the same set of cases. The cases alternated large and small animal, and the order of the cases did not vary by student. Students were asked to respond with short answers to specific questions for each case. Case study questions were answered within the 2-h testing window described above. A science educator not associated with the veterinary college used the rubric to score student responses, which were blinded and presented to the grader in randomized order with respect to year.
Data analysis
Chi-squared analysis was used to assess differences in the percentage of male students between the classes. ANOVA followed by pairwise Student's t-tests was used to assess differences in mean age and mean grade point average (GPA) on required courses upon entry to veterinary school. For statistical analysis of Part I, data from students who did not finish the examination were excluded. Some students skipped only a few questions; if students completed at least 92 questions, their data were included and the skipped questions were scored as incorrect. An ANOVA was performed with significance set at P < 0.05. For statistical analysis of Part II, all students were included. Cases with no response were excluded. A student's score was determined by dividing the total score on cases completed by the number of cases completed by that student. The highest possible score was 15.0. An ANOVA was performed with differences further evaluated using Tukey's honestly significant difference (HSD) pairwise comparison, with significance set at P < 0.05. Analysis of the number of cases skipped by year was performed using pairwise Wilcoxon tests with the Bonferroni-Holm correction. All calculations were performed in R version 3.3.3 (R Core Team, 2017; R Foundation for Statistical Computing, Vienna, Austria; https://www.R-project.org/).
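A minimal sketch of this analysis pipeline in R, assuming a hypothetical per-student data frame scores with columns class (a factor for graduating class), part1 (points out of 100), total_points and cases_completed (for Part II), and skipped (number of cases not completed); the column names are illustrative, not taken from the study:

    # Part II score: total rubric points divided by cases completed,
    # giving a per-case mean on the 15-point scale
    scores$part2 <- scores$total_points / scores$cases_completed

    # One-way ANOVA by class for each part, significance at P < 0.05
    summary(aov(part1 ~ class, data = scores))
    fit2 <- aov(part2 ~ class, data = scores)
    summary(fit2)

    # Pairwise differences between classes for Part II using Tukey's HSD
    TukeyHSD(fit2)

    # Number of cases skipped by year: pairwise Wilcoxon tests with
    # the Bonferroni-Holm correction
    pairwise.wilcox.test(scores$skipped, scores$class, p.adjust.method = "holm")

    # Demographics (assuming sex, age, and gpa columns): chi-squared test
    # for sex distribution; pairwise t-tests for age and GPA
    chisq.test(table(scores$class, scores$sex))
    pairwise.t.test(scores$age, scores$class)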
Results
Part I: For the MCQs, scores for 100 of the students in the class of 2015, 93 in the class of 2016, 99 in the class of 2017, and 92 in the class of 2018 were included in the statistical analysis. For the case studies, 95 students out of 100 in the class of 2015 participated, 91 out of 98 in the class of 2016, 94 out of 99 in the class of 2017, and 93 out of 100 in the class of 2018. Although students were instructed that the college required them to complete this exercise, it was not associated with any specific course. All of the students who failed to complete the examination and cases did so for personal reasons, none of which were academic in nature and all of which were brought to the primary author for discussion. The four classes were not significantly different in percentage of male students (21% for the class of 2015, 11% for the class of 2016, 15% for the class of 2017, and 13% for the class of 2018) or mean GPA on required courses upon entry to veterinary school (3.6 ± 0.3, 3.6 ± 0.3, 3.7 ± 0.2, and 3.6 ± 0.2, respectively). The classes of 2015, 2016, and 2017 did not differ significantly with respect to mean age at the time of admission to veterinary school (25.4 ± 3.6, 25.5 ± 3.5, and 25.3 ± 3.9 years, respectively), and students in all three classes were significantly older at admission than those in the class of 2018 (23.9 ± 3.0 years) (P = 0.005).
The coefficient of internal consistency, Cronbach's α, for the multiple-choice examination administered across all 4 years was 0.73. No significant differences were identified by year for total score, by species, or by discipline [Figure 1]. In general, scores were lowest for the equine questions and highest for the feline questions in all years.
Figure 1: Multiple-choice question scores as least squares means and 95% confidence intervals for each year, for each grouping. Species included canine, feline, bovine, equine, and other species. Disciplines included specialties (spec), medicine (med), surgery (sx), general practice (gp), and other disciplines.
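For reference (the formula is standard and not given in the original), Cronbach's α for k items with item-score variances \sigma_i^2 and total-score variance \sigma_X^2 is

    \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right)

with k = 100 here; values near 0.7 are conventionally read as acceptable reliability for a formative examination.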
Part II: A small number of students in each year failed to complete some or all of the cases. Some students contacted the primary author, citing lack of time or lack of interest in completing the cases. There was a significant difference in the average number of questions not completed by class, with the class of 2016 having a significantly higher average number of questions not completed compared to the classes of 2015 (P = 0.0001), 2017 (P = 0.0138), and 2018 (P = 0.0086).
A significant difference was identified in total score for the case studies between the classes (P < 0.001), with a mean total score of 7.4 for the class of 2015, 6.3 for the class of 2016, 7.9 for the class of 2017, and 8.3 for the class of 2018 [Figure 2]. Significant differences also were identified in overall large animal score (P < 0.001) and in overall small animal score (P = 0.003). The significant difference was due to lower scores in the class of 2016 than in all other classes for total score and for large animal and small animal scores. For all years, small animal scores were higher than large animal scores.
Figure 2: Case study scores as least squares means and 95% confidence intervals for each year, for each grouping. Years that share a letter are not statistically significantly different (P < 0.05) using Tukey's honestly significant difference pairwise comparison. SA: Small animal, LA: Large animal.
Discussion
Mean case study scores for 3rd-year students in the current curriculum were higher than those for comparable students in the past 2 years of the previous curriculum. Mean scores for the class of 2016, the last year of the previous curriculum, were significantly lower than those for all other classes. Equivalence designs, which compare teaching paradigms, ideally are evaluated statistically not by direct comparison of scores but by comparison of scores in two or more study groups to a predetermined standard.[16] The authors are unaware of data to inform a standard that could have been set a priori and hope that the data generated by this study may be used, by this group and others, to set such standards for further research. Students from the class of 2019, the third class in the current curriculum, completed only the case study portion of this exercise; their total mean score, at 9.9, was significantly higher than those for both years in the old curriculum, suggesting that the change identified in this study is real. Those data were not included in this study because testing conditions were not identical between the study groups and the class of 2019, adding variability.
One might argue that scores for Part II for students in the current curriculum may have been artificially elevated if those students had chosen not to complete cases about which they were unsure. Evaluation of the number of questions skipped showed a difference only for one class in the former curriculum, supporting improved student competence in clinical decision-making by the end of the 3rd year of the program in the current curriculum. This demonstrates the value of targeted training in problem-oriented thinking and repetitive practice in clinical decision-making as part of the required curriculum. One limitation of the study is that faculty members throughout the curriculum were not instructed either to avoid or to specifically use similar cases in the problem-oriented courses or elsewhere in the curriculum. Because the case assessment was written, and faculty recruited for creation of the rubric, well before the first class took the assessment, the authors believe it is very unlikely that faculty would have chosen specifically to emphasize this content over any other. However, it is possible that the class of 2018 happened to get a set of cases throughout the early curriculum that emphasized the same set of information presented in the case assessment used in this study. The data would have supported the conclusions more strongly if both classes in the previous curriculum had had lower scores than both classes in the new curriculum.
Another limitation of the study is the possibility that students from previous years provided information to students in following years about the content of the examination. Students are required to follow an honor code, which is overseen by a student-led honor code commission and is taken seriously. This does not, of course, completely remove the possibility of students cheating by using resources during the examination or by writing down questions to pass along to others. Overall scores for both parts of the study hovered between 50% and 60%. This is consistent with other studies evaluating students as they begin their clinical year and before they have begun dedicated study for the NAVLE.[15] Given the competitive nature of veterinary students, it is unlikely that they chose to cheat, as cheating likely would have produced scores significantly higher than 50%-60%.
Scores for small animal cases in Part II were higher than those for large animal cases. Possible reasons for this include availability of the small animal problem course for all cohorts in this study; common use of case-based materials throughout the curriculum, which has more credits dedicated to small animal than to large animal medicine and surgery; and the greater amount of small animal experience among students attending this veterinary school. It is also possible that the small animal cases in this study more directly corresponded with information presented in the current curriculum than did the large animal cases.
Colleges must balance efficient use of instructors with provision of authentic assignments and assessments and appropriate, timely feedback to guide student learning. Problem-based courses are relatively labor-intensive to teach, as students must work through the information provided with guidance and facilitation to help them achieve gains in clinical decision-making.
Assessment of clinical decision-making also is labor-intensive. Written case studies were chosen as the assessment measure for this study, with generation of a rubric to enhance consistency in grading. In an ideal setting, students would also be provided with the rubric, to help them see how an expert viewed the case and to guide their trajectory from novice to expert clinician. MCQs can also be used to assess clinical decision-making. Complex MCQs include context-dependent MCQs, which base questions on information presented, such as a case; demonstrated reasoning MCQs, which include opportunities for students to explain as short answers why foils are right or wrong; and confidence-level MCQs, which are created with foils that permit students to exhibit both content knowledge and confidence in their response.[17] Even though the use of MCQs seems more straightforward, complex MCQs require more direct instructor attention for grading and provision of feedback. Information such as that generated in this study supports colleges in their decisions to devote personnel resources to the teaching of these labor-intensive courses, as such coursework clearly promotes attainment of valuable competencies for veterinary students entering clinical training.
Conclusion
Curriculum revision is a difficult process for any college. Analysis of specific desired outcomes of curricular change is valuable in helping the college justify use of resources and plan for ongoing improvements. Further enhancements to the curriculum that have already been enacted based on this work include mandatory problem-oriented coursework for 1st- and 2nd-year students and the addition of a mixed animal problem course for students tracking mixed animal/interdisciplinary. Further research will include continuing to evaluate student clinical decision-making through the use of these case studies as formative assessments just before students begin their clinical training; assessing for correlations between the findings of this study and NAVLE first-pass rate and score; and assessing for correlations between scores for clinical decision-making on this assessment and grades on the rotation assessment tool used at this college.
Acknowledgments
The authors would like to thank Mary Burch (2015–2018) and Cecilia Kustritz (2019) for grading case studies.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.

References
1.
2. Cockcroft PD. Clinical reasoning and decision analysis. Vet Clin North Am Small Anim Pract 2007;37:499-520.
3.
4. May SA. Clinical reasoning and case-based decision making: The fundamental challenge to veterinary educators. J Vet Med Educ 2013;40:200-9.
5. Berkson L. Problem-based learning: Have the expectations been met? Acad Med 1993;68:S79-88.
6. Maudsley G. Do we all mean the same thing by "problem-based learning"? A review of the concepts and a formulation of the ground rules. Acad Med 1999;74:178-85.
7. Schmidt HG, van der Molen HT, Te Winkel WW, Wijnen WH. Constructivist, problem-based learning does work: A meta-analysis of curricular comparisons involving a single medical school. Educ Psychol 2009;44:227-49.
8. Abendroth M, Harendza S, Riemer M. Clinical decision making: A pilot e-learning study. Clin Teach 2013;10:51-5.
9. Bowen JL. Educational strategies to promote clinical diagnostic reasoning. N Engl J Med 2006;355:2217-25.
10. Feldman MJ, Barnett GO, Link DA, Coleman MA, Lowe JA, O'Rourke EJ, et al. Evaluation of the clinical assessment project: A computer-based multimedia tool to assess problem-solving ability in medical students. Pediatrics 2006;118:1380-7.
11. Fletcher OJ, Hooper BE, Schoenfeld-Tacher R. Instruction and curriculum in veterinary medical education: A 50-year perspective. J Vet Med Educ 2015;42:489-500.
12. Hardie EM. Current methods in use for assessing clinical competencies: What works? J Vet Med Educ 2008;35:359-68.
13. Wolcott SK. Designing assignments and classroom discussions to foster critical thinking at different levels in the curriculum. In: Borghans L, Gijselaers WH, Milter RG, Stinson JE, editors. Educational Innovation in Economics and Business. The Netherlands: Kluwer Academic Publishers; 2000. p. 231-51.
14. Patterson JS. Increased student self-confidence in clinical reasoning skills associated with case-based learning (CBL). J Vet Med Educ 2006;33:426-31.
15. Root Kustritz MV, Molgaard LK, Malone E. Curriculum review and revision at the University of Minnesota College of Veterinary Medicine. J Vet Med Educ 2017;44:459-70.
16. Royal K. Robust (and ethical) educational research designs. J Vet Med Educ 2018;45:11-5.
17. Root Kustritz MV. Alternative assessment tools. Clin Theriol 2010;2:472-6.