%0 Journal Article
%T Adopting RCA2: The Interrater Reliability of Safety Assessment Codes
%A Brian M. Cummings
%A Elizabeth A. Mort
%A Merranda S. Logan
%A Timothy L. Switaj
%J American Journal of Medical Quality
%@ 1555-824X
%D 2019
%R 10.1177/1062860618793945
%X Safety assessment codes (SACs) are one method to evaluate adverse events and determine the need for a root cause analysis. Few facilities currently use SACs, and there is no literature examining their interrater reliability. Two independent raters assigned frequency, actual harm, and potential harm ratings to a sample of patient safety reports. An actual and potential SAC were determined. Percent agreement and Cohen's κ were calculated. Substantial agreement existed for the actual SAC (κ = 0.626, P < .001), fair agreement for the potential SAC (κ = 0.266, P < .001), and low agreement for potential harm (κ = 0.171, P = .002). Although there is subjectivity in all aspects of assigning SACs, the greatest is in potential severity. This presents a problem when using the potential SAC and is in agreement with previous literature showing significant subjectivity in determining potential harm. An operational framework is needed to strengthen reliability.
%K RCA
%K SAC
%K interrater reliability
%K safety assessment code
%K root cause analysis
%U https://journals.sagepub.com/doi/full/10.1177/1062860618793945