Breakout S – May 17, 2017
1. Measuring an Evidence-Based Model of Implementation: Preliminary Development of a Survey Instrument
Presenter: Josef I. Ruzek, PhD
Authors: Josef I. Ruzek, PhD (1,2); Joan M. Cook, PhD (1,3); Richard Thompson, PhD (4); Stephanie Dinnen, MS (3); James C. Coyne, PhD (5); Paula P. Schnurr, PhD (1,6)
Affiliations: (1) National Center for PTSD; (2) Stanford University; (3) Yale School of Medicine; (4) University of Illinois at Chicago; (5) University of Pennsylvania & University of Groningen, the Netherlands; (6) Geisel School of Medicine at Dartmouth
Abstract: To facilitate testing of comprehensive models of implementation, measures are needed that assess a broad range of theory-based constructs within a single measurement instrument. We developed a measure assessing the six broad elements of the Greenhalgh et al. (2005) model: perceived innovation characteristics; individual characteristics of potential adopters; communication and influence; system antecedents and readiness; outer context; and implementation process. Survey and interview measures were developed via a systematic literature review of measures for the associated constructs. Keywords representing 53 separate model constructs were searched to identify existing measures for each construct, which were then assessed for adequate reliability, validity, and applicability to healthcare settings. Items meeting these criteria were used to guide survey/interview design; where no existing measure was deemed appropriate, items were created through a consensus-based process. Approximately two items were used to assess each construct, to keep respondent burden low. The measure was used to assess factors affecting implementation of two PTSD treatments, Prolonged Exposure (PE) and Cognitive Processing Therapy (CPT). All 229 PTSD treatment providers in 38 VA residential PTSD treatment settings were asked to complete the measure, and 216 (94.3%) did so. Internal consistency was generally good, and results suggested that the measure may be helpful in researching and planning implementation. CPT scored significantly higher than PE on a number of factors (including relative advantage, compatibility, trialability, potential for reinvention, task issues, and augmentation-technical support) and lower on perceived risk. The measure has several possible applications for research and implementation and can be adapted for other organizations, settings, and innovations.
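The abstract reports internal consistency for the construct subscales but does not specify the statistic or software used; as an illustration only, the sketch below shows how Cronbach's alpha might be computed for a two-item subscale, assuming item-level responses are available as a respondents-by-items array. The subscale name and response values are hypothetical, not data from the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses from six providers to a two-item "relative advantage" subscale
relative_advantage = np.array([
    [4, 5],
    [3, 4],
    [5, 5],
    [2, 3],
    [4, 4],
    [3, 3],
])
print(f"Cronbach's alpha: {cronbach_alpha(relative_advantage):.2f}")
```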
2. Measurement Issues in Implementation Science
Presenter: Ruben Martinez, BA
Authors: Ruben Martinez, BA, & Cara C. Lewis, PhD, Indiana University
Abstract: According to Siegel (1964), “Science is measurement,” which raises the question: is measurement necessarily scientific? Unfortunately, the answer is “no,” particularly for the field of implementation science, which, in its nascent state, has become vulnerable to measurement issues that threaten the strength of its developing knowledge base. The demand for the implementation of evidence-based practices seems to be outpacing the science (Chamberlain, Brown, & Saldana, 2011), resulting in measurement that is not always scientific (Cook & Beckman, 2006; Proctor et al., 2011; Weiner, 2009). This situation presents an alarming paradox: investigators are drawing conclusions from instruments that have not been psychometrically validated, leaving the field on unstable methodological ground. In order to interpret data, draw conclusions, and confidently generalize findings to different settings, it is imperative that the measures we use assess what we think they are measuring, a claim that only repeated psychometric analysis and reporting can affirm (Hunsley & Mash, 2011). If our measures are not empirically validated, we run the risk of constructing “a magnificent house with no foundation” (Achenbach, 2005). Perhaps a bold question is worthy of our consideration: what is the value of conducting evaluative research if it is not possible to place confidence in one’s interpretations of the data? We will present data from a survey completed by 80 implementation stakeholders who shared their perspectives on the most pressing measurement issues in the field. Specifically, we will report on their use of measures, theoretical frameworks, and mixed methods, and their recommendations for advancing the science of implementation.
3. Common Elements for Implementing Evidence-Based Practices in Children’s Mental Health
Presenter: Lisa Saldana, PhD
Authors: Lisa Saldana, PhD, & Patricia Chamberlain, PhD, Oregon Social Learning Center
Abstract: With the increased focus and effort to scale up evidence-based practices (EBPs) in real-world settings comes recognition of the complexity of the process, which involves planning, training, quality assurance, and interactions among developers, system leaders, practitioners, and consumers. Little is known about which aspects of these methods are essential for successful implementation, or how to measure whether they have been carried out well. The Stages of Implementation Completion (SIC) was developed to assess sites’ implementation process and attainment of implementation milestones for Multidimensional Treatment Foster Care (MTFC). The SIC is an 8-stage tool that maps onto three phases of implementation (pre-implementation, implementation, and sustainability). Items are defined by the activities identified by the developer/purveyor as necessary to implement MTFC. The SIC is being adapted for other EBPs in children’s services that address highly prevalent (e.g., anxiety) and costly (e.g., parenting for child welfare populations) mental health and behavioral problems. This presentation will examine the common/universal elements of the implementation process that have been identified through the SIC adaptation process with multiple EBP developers/purveyors. Preliminary data will be presented to demonstrate the frequency with which these elements are completed by adopting sites and the influence that this completion has on successful scale-up.
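Purely as an illustration of how a staged completion tool like the SIC could be represented and scored, the sketch below tracks which implementation activities a site has completed and reports the proportion completed per phase. The phase groupings, activity labels, and scoring rule here are simplified assumptions for illustration, not the actual SIC items or its scoring method.

```python
from dataclasses import dataclass, field

# Hypothetical phase/activity structure loosely modeled on a staged
# implementation-completion tool; not the actual SIC item content.
PHASE_ACTIVITIES = {
    "pre-implementation": ["engagement", "feasibility review", "readiness planning"],
    "implementation": ["staff hired and trained", "fidelity monitoring in place",
                       "services delivered", "ongoing consultation"],
    "sustainability": ["competency certification"],
}

@dataclass
class SiteRecord:
    """Tracks which implementation activities an adopting site has completed."""
    name: str
    completed: set = field(default_factory=set)

    def phase_completion(self, phase: str) -> float:
        """Proportion of a phase's activities the site has completed."""
        activities = PHASE_ACTIVITIES[phase]
        done = sum(1 for a in activities if a in self.completed)
        return done / len(activities)

site = SiteRecord("Site A", completed={"engagement", "feasibility review"})
print(f"{site.phase_completion('pre-implementation'):.2f}")  # 0.67
```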