Evaluating a Global Assessment Measure Created by Standardized Patients for the Multiple Mini Interview in Medical School Admissions: A Mixed Methods Study
Ann Blair Kennedy;
Cindy Nessim Youssef Riyad;
Ryan Ellis;
Perry R. Fleming;
Mallorie Gainey;
Kara Templeton;
Anna Nourse;
Gail Hardaway;
April Brown;
Pam Evans;
Nabil Natafgi
ABSTRACT
Background:
Standardized patients (SPs) are essential stakeholders in multiple mini interviews (MMIs), which are increasingly used to assess medical school applicants' interpersonal skills. However, there is little evidence regarding their inclusion in the development of assessment instruments.
Objective:
This study aims to describe the process and evaluate the impact of having SPs create a global assessment question that measures applicants' readiness for medical school and its association with acceptance status.
Methods:
This study used an exploratory sequential mixed methods design. A single question to measure readiness for medical school was collaboratively developed in a workshop by 21 SPs, 3 simulation specialists, and 2 researchers. This question and the additional rubric items were then evaluated statistically using applicant data. Internal reliability of the MMI was measured with Cronbach's alpha, and the ability to predict admission status was tested with forward stepwise binary logistic regression.
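As a rough illustration of the analyses described above, the sketch below shows how Cronbach's alpha and a forward stepwise binary logistic regression might be computed in Python with pandas and statsmodels. The data frame, column names, and entry threshold are assumptions for illustration, not details taken from the study.

```python
# Minimal sketch, assuming a pandas DataFrame `mmi` with one row per applicant,
# rubric item scores in `item_cols`, and a binary `accepted` column (1 = accepted).
# All names here are hypothetical.
import pandas as pd
import statsmodels.api as sm

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of rubric items (rows = applicants)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def forward_stepwise_logit(X: pd.DataFrame, y: pd.Series, alpha_in: float = 0.05):
    """Forward stepwise binary logistic regression: at each step, add the
    remaining predictor with the smallest p-value below `alpha_in`."""
    selected: list[str] = []
    while True:
        remaining = [c for c in X.columns if c not in selected]
        if not remaining:
            break
        pvals = {}
        for c in remaining:
            model = sm.Logit(y, sm.add_constant(X[selected + [c]])).fit(disp=0)
            pvals[c] = model.pvalues[c]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha_in:
            break
        selected.append(best)
    final = sm.Logit(y, sm.add_constant(X[selected])).fit(disp=0)
    return selected, final

# Example usage (hypothetical data):
# alpha = cronbach_alpha(mmi[item_cols])
# predictors, model = forward_stepwise_logit(mmi[item_cols], mmi["accepted"])
# print(alpha, predictors, model.summary())
```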
Results:
The developed rubric was evaluated using data from 1,084 applicants across 3 cohorts. Cronbach's alpha was >0.8 overall and within each cohort year. The final stepwise logistic regression model for all cohorts combined was statistically significant (p < .001), explained 9.2% (R²) of the variance in acceptance status, and correctly classified 65.5% of cases. The final model consisted of 3 variables: Empathy, Rank of Readiness, and Opening the Encounter.
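For context, the variance explained and classification rate reported above would typically be derived from the fitted model along the lines sketched below. The Nagelkerke pseudo-R² formula and the 0.5 classification cutoff are assumptions, since the abstract does not specify which variants were used.

```python
# Hypothetical continuation of the earlier sketch, using the fitted statsmodels Logit result.
import numpy as np

def nagelkerke_r2(model) -> float:
    """Nagelkerke pseudo-R^2 (assumed; the abstract does not name the R^2 variant)."""
    n = model.nobs
    cox_snell = 1 - np.exp((2 / n) * (model.llnull - model.llf))
    max_cs = 1 - np.exp((2 / n) * model.llnull)
    return cox_snell / max_cs

def percent_correct(model, y, threshold: float = 0.5) -> float:
    """Share of applicants whose acceptance status is predicted correctly."""
    predicted = (model.predict() >= threshold).astype(int)
    return float((predicted == y).mean() * 100)
```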
Conclusions:
This study indicates that SPs can effectively create a global question to evaluate applicants in the MMI. SPs bring a critical perspective that can improve the medical school admissions process.
Clinical Trial: none
Citation
Please cite as:
Kennedy AB, Riyad CNY, Ellis R, Fleming PR, Gainey M, Templeton K, Nourse A, Hardaway G, Brown A, Evans P, Natafgi N
Evaluating a Global Assessment Measure Created by Standardized Patients for the Multiple Mini Interview in Medical School Admissions: Mixed Methods Study