A pilot study of the relationship between experts’ ratings and scores generated by the NBME’s computer-based examination system

David J. Solomon, Janet R. Osuch, Kimberly Anderson, James Babel, James Gruenberg, John Kisala, Mary Milroy, Willard Stawski

Research output: Contribution to journal › Article › peer-review

Abstract

This pilot study evaluates the consistency of experts’ ratings of students’ performances on the National Board of Medical Examiners’ Computer Based Examination (CBX) cases and the relationship of those ratings to the CBX’s scoring algorithm. The authors were investigating whether an automated scoring algorithm can adequately assess an examinee’s management of a computer-simulated patient. In 1989-90, at the Michigan State University College of Human Medicine, eight students completing a surgery clerkship each managed eight CBX cases and took a computer-administered, multiple-choice examination. Six clerkship coordinators rated the students’ performances in terms of overall management, efficiency, and dangerous actions. The ratings correlated highly with the scores produced by the CBX’s scoring system.

Original language: English (US)
Pages (from-to): 130-132
Number of pages: 3
Journal: Academic Medicine
Volume: 67
Issue number: 2
State: Published - Feb 1992
Externally published: Yes

ASJC Scopus subject areas

  • Education
