Are Existing Frameworks Adequate for Measuring Implementation Outcomes? Results From a New Simulation Methodology

Friday 2:30 – 3:45 Breakout C5

Presenter: Richard Van Dorn

Richard A. Van Dorn, RTI International; Stephen J. Tueller, RTI International; Jesse M. Hinde, RTI International; Georgia T. Karuntzos, RTI International

Existing implementation frameworks guide the measurement of implementation outcomes. However, empirically validating implementation outcomes, such as those identified by Proctor and colleagues, is often challenged by limited data sources, a constrained item pool, and inadequate sample sizes. To establish the minimum requirements for sufficient power to detect Proctor and colleagues’ eight implementation outcomes going forward, we conducted an exploratory factor analysis simulation. We assumed a fixed population and sampled from an infinite pool of items to mimic realistic item selection, in which data can be collected from only one sample and researchers have limited control over the loadings, cross-loadings, and error variances of the items selected from the pool. Our simulation varied sample size (200, 500, 1,000), item pool size (24, 40, 80), item response distribution (normal, binary, Likert), and a range of loadings, cross-loadings, and error variances. Results show that the adjusted Bayesian Information Criterion was the most accurate factor extraction criterion and that, across response distributions, item pool size and sample size had larger impacts on correctly detecting eight factors than ideal item characteristics (e.g., high loadings, low cross-loadings, low error variances). These results can inform instrument development and power calculations for future implementation science research.
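
A minimal Python sketch of the simulation logic described above. The specific loading ranges, the candidate factor counts, and the use of scikit-learn's FactorAnalysis with Sclove's (1987) sample-size adjustment of the BIC are illustrative assumptions, not the authors' implementation; binary or Likert responses could be produced by thresholding the normal responses generated here.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_factors, n_items, n_obs = 8, 40, 500  # one cell of the simulation design

# Draw loadings, cross-loadings, and error variances from a "pool" of item
# characteristics rather than fixing them, mimicking realistic item selection.
loadings = np.zeros((n_items, n_factors))
primary = rng.integers(0, n_factors, n_items)
loadings[np.arange(n_items), primary] = rng.uniform(0.5, 0.8, n_items)
cross = (primary + rng.integers(1, n_factors, n_items)) % n_factors
loadings[np.arange(n_items), cross] = rng.uniform(0.0, 0.2, n_items)
error_sd = np.sqrt(rng.uniform(0.3, 0.6, n_items))

# Generate a single sample of normally distributed item responses.
eta = rng.standard_normal((n_obs, n_factors))
X = eta @ loadings.T + rng.standard_normal((n_obs, n_items)) * error_sd

def adjusted_bic(X, m):
    # Sample-size adjusted BIC (Sclove, 1987): ln(n) replaced by ln((n+2)/24).
    n, p = X.shape
    fa = FactorAnalysis(n_components=m).fit(X)
    loglik = fa.score(X) * n              # score() is the mean log-likelihood
    k = p * m + p - m * (m - 1) // 2      # loadings + uniquenesses - rotation
    return -2 * loglik + k * np.log((n + 2) / 24)

retained = min(range(1, 13), key=lambda m: adjusted_bic(X, m))
print("factors retained by adjusted BIC:", retained)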

Development and Validation of Implementation Measures for Low-Resource Global Contexts

Friday 2:30 – 3:45 Breakout C5

Presenter: Emily E. Haroz

Emily E. Haroz, Johns Hopkins Bloomberg School of Public Health; Laura K. Murray, Johns Hopkins Bloomberg School of Public Health; Amanda J. Nguyen, Johns Hopkins Bloomberg School of Public Health; Judith K. Bass, Johns Hopkins Bloomberg School of Public Health; Shannon Dorsey, University of Washington; Paul Bolton, Johns Hopkins Bloomberg School of Public Health

Global mental health (GMH) has few, if any, implementation measurement instruments suitable for low- and middle-income countries (LMICs) and diverse cultural contexts. Our group’s research has shown that existing measures developed in high-resource settings do not fit lower-resource settings (e.g., items asking about insurance or technology). Without appropriate instruments, examining the factors involved in implementing and scaling up interventions is challenging. The aim of this study was to develop and evaluate practical instruments, specifically for LMICs, that measure implementation outcomes across the domains of acceptability, adoption, appropriateness, feasibility, penetration, and sustainability. Instruments were created for four stakeholder levels: government/policy, organizational directors/staff, service providers, and consumers. Questions were based on leading implementation frameworks (CFIR, RE-AIM, and a conceptual model of evidence-based implementation in public service sectors) as well as feedback from local partners and experts in GMH and implementation science. Testing examined the acceptability, reliability, and validity of the provider and consumer instruments using a mixed-methods approach. A total of 183 individuals provided data. Preliminary results indicate the scales are acceptable, reliable, and valid at the provider level; at the consumer level, reliability and validity results were mixed.
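
The abstract does not specify which reliability statistics were computed; as a hedged illustration of the kind of internal-consistency check such testing involves, the following Python sketch computes Cronbach's alpha on fabricated provider-level responses (the scale size and data are placeholders, not study results).

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, n_items) matrix of responses to one scale
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 6-item provider scale answered by 183 respondents; the shared
# "trait" induces the inter-item correlation that alpha is sensitive to.
rng = np.random.default_rng(1)
trait = rng.normal(size=(183, 1))
responses = trait + rng.normal(scale=0.8, size=(183, 6))
print(f"alpha = {cronbach_alpha(responses):.2f}")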

A Treatment Integrity Measure for Early Childhood Education Settings

Friday 2:30 – 3:45 Breakout C5

Presenter: Ruben G. Martinez

Martinez, R.G., Virginia Commonwealth University; McLeod, B.D., Virginia Commonwealth University; Sutherland, K.S., Virginia Commonwealth University; Conroy, M.A., University of Florida; Snyder, P., University of Florida; Southam-Gerow, M.A., Virginia Commonwealth University

Research on improving the quality of instruction in early childhood classrooms is a national priority. Many preschool children exhibit problem behaviors that place them at risk for emotional and behavioral disorders. Teacher-delivered, evidence-based programs (EBPs) have demonstrated positive effects for children with problem behavior; however, efforts to implement and evaluate EBPs across early childhood settings face barriers. Early childhood classrooms differ across a variety of structural and process dimensions, which may influence whether EBPs are implemented with integrity. Additionally, few measures can characterize both the process used to implement instructional practices with integrity and business-as-usual instruction, which makes findings from effectiveness trials difficult to interpret. To address these barriers, the field needs a validated, pragmatic measure capable of assessing the delivery of instructional practices found in existing EBPs. This presentation will describe the development of the Treatment Integrity Observational Coding System for Early Childhood Settings and an accompanying teacher self-report, designed to characterize teachers’ implementation of evidence-based instructional practices in early childhood classrooms. Validating and disseminating such a measure would give stakeholders the ability to assess the integrity of multiple EBPs, making the assessment of EBPs in school settings more efficient.
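
Observational integrity measures of this kind are typically validated in part through inter-rater reliability on double-coded sessions. The abstract does not state which index will be used; purely as an illustration, this Python sketch computes a two-way random-effects intraclass correlation, ICC(2,1), on fabricated coder scores.

import numpy as np

def icc_2_1(ratings):
    # ratings: (n_sessions, n_coders) matrix of integrity scores
    n, k = ratings.shape
    grand = ratings.mean()
    ms_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    resid = (ratings - ratings.mean(axis=1, keepdims=True)
             - ratings.mean(axis=0, keepdims=True) + grand)
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Five hypothetical sessions scored by two coders on a 1-5 integrity scale.
scores = np.array([[3, 4], [2, 2], [5, 4], [4, 4], [1, 2]], dtype=float)
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")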