May 17, 2013
Update from the SIRC Instrument Review Task Force
Presenter: Cara C. Lewis, PhD
Cara C. Lewis, PhD,1 Ruben Martinez, BA,1 Cameo Borntrager, PhD,2 & Bryan Weiner, PhD3
1Indiana University; 2University of Montana; 3University of North Carolina at Chapel Hill
Abstract: It is critical that researchers use psychometrically validated instruments when studying implementation efforts, both to build a strong knowledge base and to avoid drawing incorrect or inappropriate conclusions. Navigating the dense landscape of instruments used in implementation science is an arduous task even for the most experienced reviewer. To move the field forward, the Seattle Implementation Research Collaborative (SIRC) has coordinated a multi-site effort to systematically review, compile, and empirically rate instruments relevant to the study of implementation. SIRC first identified 33 distinct constructs integral to the implementation process, as delineated by the Consolidated Framework for Implementation Research (Damschroder et al., 2009) and the implementation outcomes framework (Proctor et al., 2010). Through its systematic review of these constructs, SIRC has identified and categorized over 450 instruments to be empirically rated and made available to SIRC members. SIRC’s Instrument Review Task Force (comprising over 50 members) will use modified evidence-based assessment criteria to rate each instrument’s psychometric strength (e.g., reliability and validity). This talk will present the results of the rating process for approximately 115 instruments tapping implementation outcomes (i.e., acceptability, adoption, appropriateness, feasibility, penetration, sustainability) and will unveil the interactive SIRC Instrument Review Project website page.