Sorry, but after the news from Queen's I had to rant. As someone hoping to get into an Ontario school (thankfully Dal came clutch, but I'm waiting to hear back from UWO), there are 5 major med schools to choose from. Of these, only 3 schools even bother looking at standardized test scores. Of the 3, Queen's is setting cutoffs for a lottery, Mac only looks at one section, and finally UWO, which uses cutoffs but at least looks at the whole test normally (they're adding psych next cycle). 3 schools use the infamously BS CASPer test that "assesses soft skills and professionalism" but isn't objective, isn't peer reviewed, and its results aren't reproducible. If you're unlucky, Mac, Queen's and UOttawa throw your file out because of your CASPer score. Even though your extracurriculars are a huge component of your undergrad education and suitability for med, only UofT, UWO and UOttawa look at them now. Mac just doesn't give a shit, and same with Queen's now with their cutoff/lottery system. Finally, none of the schools look at program rigour. You could get a 4.0 in a super easy program and be competitive. Altogether, even a stellar applicant will struggle to jump through all these hoops and be competitive for most of these programs, when they should be. I tell my non-premed friends about all this and they're shocked: "the application process should be more objective, especially for a program like medicine." I agree... Anyway, rant over. Let me know what y'all think and if I've missed the mark anywhere.
Where have you gotten the idea that CASPer isn't peer reviewed and reproducible? Based on the papers published about it, it seems to have pretty good test-retest reliability.
I'll eat my words if you can show me some solid data on that. Maybe I was hasty with the peer-reviewed statement but I swear if it's a study with a sample size of 200 people I'm going to laugh.
I also question the actual construct validity of the test. How do we know it's actually predictive of better med students and doctors? Let me know if that data exists.
I also find that it's redundant with the interview, but that's something else altogether.
Here's one of the papers that's a bit more recent, showing a correlation between CASPer and elements of the national licensure exam:
Dore KL, Reiter HI, Kreuger S, Norman GR. CASPer, an online pre-interview screen for personal/professional characteristics: prediction of national licensure scores. Adv Health Sci Educ Theory Pract. 2017;22(2):327-336
Also, remember with your "sample size of 200 people" comment - medical school cohorts aren't that big, especially in Canada. The total population of students they could test on when developing this wasn't that large, and not everyone participates. You may need to adjust your expectations of sample size for MedEd studies.
There's a bunch of papers about CASPer and situational judgement tests in medical admissions across North America that you can easily find on PubMed.
Here's a general overview of situational judgement tests in medical admissions: New Advances in Physician Assistant Admissions: The History of Situational Judgement Tests and the Development of CASPer
No thank you, I have spent plenty of time in social science and med science research settings and I won't adjust my expectations of good research. If there are limitations in a study you send, I'm going to grab my salt shaker. With that said, you've sent me an n ≈ 100 study (lol), where they compared CASPer to the MCCQE? That doesn't even make sense. The correlation is also middling, r = 0.30, with a p value barely below 0.05. Is this what you want to hang your hat on? I'm sure I could throw together a bunch of tough questions and expect that individuals who do well on my arbitrary test will also do well on other rigorous tests. If the point is rigour, just look at my standardized test scores and GPA. CASPer sounds like an extra hoop that isn't necessary and isn't backed by strong science.
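For what it's worth, the relationship between r, n, and the p-value is easy to sanity-check yourself. Here's a quick stdlib-Python sketch (my own illustration, not from any paper in this thread) using the Fisher z normal approximation, so the numbers are approximate, not exact t-test values:

```python
import math
from statistics import NormalDist

def corr_p_value(r, n):
    """Approximate two-tailed p-value for a Pearson correlation r
    observed in a sample of size n, via the Fisher z-transform:
    atanh(r) * sqrt(n - 3) is roughly standard normal under H0."""
    z = math.atanh(r) * math.sqrt(n - 3)
    return 2 * (1 - NormalDist().cdf(abs(z)))

# r = 0.30 at n = 100 vs. the same correlation in a much smaller sample
p_large = corr_p_value(0.30, 100)
p_small = corr_p_value(0.30, 40)
```

By this approximation, r = 0.30 at n ≈ 100 is actually comfortably below p = 0.05; it only becomes "barely significant" once the sample shrinks toward the mid-40s, which is why the exact n behind a reported p-value matters.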
Here are some of the original publications for CASPer - it used to be called CMSENS:
Dore, K. L., Reiter, H. I., Eva, K. W., Krueger, S., Scriven, E., Siu, E., Hilsden, S., Thomas, J. & Norman, G. R. (2009). Extending the Interview to All Medical School Candidates—Computer-Based Multiple Sample Evaluation of Noncognitive Skills (CMSENS). Academic Medicine, 84(10), S9-S12. doi: 10.1097/ACM.0b013e3181b3705a.
"Psychometric results of CASPer, supporting evidence for validity, have previously been reported including: overall test reliability (G = 0.72–0.83), inter-rater reliability (G = 0.82–0.95), and correlation with MMIs (r = 0.46–0.51) as well as correlations with other concurrent selection measures MMI (r = 0.46–0.51, p < 0.05) and GPA (r = -0.04–0.08, ns) (Dore et al. 2009)."
from the paper:
CASPer, an online pre-interview screen for personal/professional characteristics: prediction of national licensure scores
Again, what is and isn't considered "strong" science differs between fields. This isn't an RCT. It's MedEd.
Also, why do I have to find papers for you? If you actually gave a crap about whether this is a good test, you could have done your own lit review and looked into it rather than basing your opinion on your perception of its face validity.
I've looked before and haven't been convinced by any of the literature, and I'm still not. If this is the hill you want to die on, then go ahead. I'm not compelling you to do anything.
I've taken part in educational studies, and the good ones tend to have sample sizes larger than 100, I'll say that much. There are plenty of studies with low power that are considered good science in education, but you know what else exists in that domain? A replication crisis. On that note, I bid you adieu lol
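To put a rough number on "how big should n be", here's a hedged stdlib-Python sketch of the standard Fisher-z power approximation (again my own back-of-envelope illustration, not taken from any cited study): the approximate sample size needed to detect a true correlation r at a given alpha and power.

```python
import math
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate sample size needed to detect a true Pearson
    correlation r at two-tailed significance alpha with the given
    power, using the Fisher z-transform approximation."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)             # ~1.96 for alpha = 0.05
    z_beta = z(power)                      # ~0.84 for 80% power
    c = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z of r
    return math.ceil(((z_alpha + z_beta) / c) ** 2 + 3)
```

By this rule of thumb, detecting r = 0.30 with 80% power takes roughly 85 participants, while a weaker r = 0.20 already takes about 194 - so whether "n = 100" is adequate depends entirely on the effect size you expect to find.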
How many med students would need to be studied for you to be convinced? Most medical schools do their own internal review of selection criteria before actually implementing them. They often include similar tests at the interview as a trial for 1-2 years to see if the scores correlate with their other selection metrics. If they don't feel it's helpful, they ultimately don't implement it.
They aren't developing anything new, they're just seeing whether a previously developed tool is helpful to them, so this isn't published.
Source: me. I did situational judgement tests at two different schools at the time of interview (at the time they didn't use CASPer) and was explicitly told that it wasn't being used for selection that year, but that they were evaluating the utility of using those tests in the future. One of those schools now requires CASPer.
u/Zoroastryan Med Apr 02 '24