Comparison of automated scoring methods for a computerized performance assessment of clinical judgment / by Polina Harik, Peter Baldwin, and Brian Clauser
Material type:
- text
- unmediated
- volume
| Item type | Current library | Call number | Vol info | Copy number | Status | Notes | Date due | Barcode |
|---|---|---|---|---|---|---|---|---|
| | Main Library - Special Collections | BF39 APP | Vol. 37, No. 8, pages 587-597 | SP17345 | Not for loan | For in-house use only | | |
Growing reliance on complex constructed-response items has generated considerable interest in automated scoring solutions. Many of these solutions are described in the literature; however, relatively few studies have been published that compare automated scoring strategies. Here, comparisons are made among five strategies for machine-scoring examinee performances on computer-based case simulations, a complex item format used to assess physicians’ patient-management skills as part of Step 3 of the United States Medical Licensing Examination. These strategies use expert judgments to develop either (a) case-specific or (b) generic scoring algorithms. The compromises among efficiency, validity, and reliability that characterize each scoring approach are described and compared.
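The abstract does not describe the scoring algorithms themselves; as a loose illustration of the case-specific versus generic distinction only, the sketch below scores a hypothetical list of examinee actions two ways. All action names, categories, and weights are invented for illustration and are not drawn from the article.

```python
from typing import Dict, List

# Hypothetical expert-derived weights (illustrative only).
# A case-specific scheme keys weights to concrete actions within one case;
# a generic scheme reuses one weight table keyed by broad action categories.
CASE_SPECIFIC_WEIGHTS: Dict[str, Dict[str, float]] = {
    "case_01": {"order_ecg": 2.0, "start_aspirin": 1.5, "order_mri": -0.5},
}
GENERIC_WEIGHTS: Dict[str, float] = {"beneficial": 1.0, "neutral": 0.0, "harmful": -1.0}
ACTION_CATEGORY: Dict[str, str] = {
    "order_ecg": "beneficial", "start_aspirin": "beneficial", "order_mri": "neutral",
}

def score_case_specific(case_id: str, actions: List[str]) -> float:
    """Sum the case-specific weights for the actions an examinee ordered."""
    weights = CASE_SPECIFIC_WEIGHTS[case_id]
    return sum(weights.get(a, 0.0) for a in actions)

def score_generic(actions: List[str]) -> float:
    """Sum generic category weights, ignoring which case produced the actions."""
    return sum(GENERIC_WEIGHTS[ACTION_CATEGORY.get(a, "neutral")] for a in actions)

if __name__ == "__main__":
    performance = ["order_ecg", "start_aspirin", "order_mri"]
    print(score_case_specific("case_01", performance))  # 3.0
    print(score_generic(performance))                   # 2.0
```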