Improving the Efficiency of Standardized Patient Assessment of Clinician Fidelity: A Comparison of Automated Actor-Based and Manual Clinician-Based Ratings

Friday 10:30 – 1:45 Breakout A3

Presenter: Benjamin C. Graham

Benjamin Graham, Ph.D., National Center for PTSD, Dissemination and Training Division; Katelin Jordan, B.A., National Center for PTSD, Dissemination and Training Division


Standardized patient (SP) assessment is a well-established method of measuring skill acquisition in the dissemination of training in evidence-based practices.  Trainee interactions with actors offer a more ecologically valid approach than proxy measures, but they are time-consuming and difficult to scale (Dickinson et al., 2010).  Clinician statements are dynamic and challenging to code, whereas actor responses are less so.  Scoring based on actor statements may therefore offer a more parsimonious yet effective method for certain areas of SP assessment.

This presentation compares automated actor-based scoring to manual clinician-based ratings in the appraisal of SP interviews.  Data for this study are based on SP sessions conducted for a large training dissemination study involving 414 clinicians treating veterans with PTSD.

Pilot reliability scores between methods were promising yet varied, ranging from very good (Cohen’s kappa = 1.0) to fair (Cohen’s kappa = .21) depending on the item.  We will share more comprehensive analyses based on the forthcoming assessment of 414 transcripts.

The methodology is described, including a cost analysis, strategies for ensuring actor fidelity, and circumstances in which actor-based scoring is inappropriate.  We discuss design implications for future projects evaluating clinician performance and how this method might interface with technology-based and traditional approaches.

Measuring Treatment Fidelity on the Cheap

Friday 10:30 – 1:45 Breakout A3

Presenter: Rochelle F. Hanson, Ph.D.

Rochelle F. Hanson, Medical University of South Carolina; Angela Moreland, Medical University of South Carolina; Benjamin E. Saunders, Medical University of South Carolina


One significant challenge for implementation researchers is identifying a cost-effective yet reliable and valid measure of treatment fidelity. While observational measurement represents the ‘gold standard,’ such methods are expensive, time-consuming, and generally not feasible or sustainable in community-based settings. This presentation examines data on clinician self-reported fidelity in delivering TF-CBT components throughout participation in a community-based learning collaborative (CBLC). All clinical participants completed weekly online reports of their use of, and perceived competence in delivering, TF-CBT components to their training cases, as required for the CBLC.

In total, 268 clinicians participating in 8 TF-CBT-focused CBLCs completed weekly metrics related to 593 training cases (mean number of weekly metrics completed per case = 7.72). Of these 593 training cases, 433 completed treatment. For these cases, clinicians reported completing an average of 8.87 (out of 11) TF-CBT components, and at least 10 of 11 components with 51.8% of clients. Clinicians reported an average competence of 2.06 (out of 4) across all TF-CBT components.

Participation in the CBLC training requirements (i.e., attendance at learning sessions, consultation calls, rostering) was significantly related to self-reported use of and perceived competence in TF-CBT. Self-reported use of TF-CBT components was related to significant pre- to post-treatment declines in rates of PTSD and depression for completed training cases. Specifically, post-treatment declines in both PTSD and depression were significantly related to self-reported use of the trauma narrative and in vivo exposure components; declines in depression were additionally related to completion of the PRAC components. These findings yield promising directions for measuring treatment fidelity in a cost-effective, feasible, and sustainable manner.

Leveraging Routine Clinical Materials To Assess Fidelity

Friday 10:30 – 1:45 Breakout A3

Presenter: Shannon Wiltsey Stirman

Shannon Wiltsey Stirman, National Center for PTSD, VA Boston Healthcare System and Boston University; Cassidy Gutner, National Center for PTSD, VA Boston Healthcare System and Boston University; Jennifer Gamarra, National Center for PTSD, VA Boston Healthcare System and Boston University; Dawne Vogt, National Center for PTSD, VA Boston Healthcare System and Boston University; Patricia Resick, Duke University; Jennifer Schuster Wachen, PhD, National Center for PTSD and Boston University; Katherine Dondanville, PsyD, The University of Texas Health Science Center at San Antonio; Jim Mintz, PhD, The University of Texas Health Science Center at San Antonio; COL Jeffrey S. Yarvis, PhD, Carl R. Darnall Army Medical Center, Fort Hood, Texas; Alan L. Peterson, PhD, The University of Texas Health Science Center at San Antonio and the STRONG STAR Consortium


A critical barrier to efforts to monitor and support fidelity in routine care settings and large systems is the lack of feasible, scalable, and valid fidelity measurement strategies. Indirect methods of fidelity monitoring have limitations. However, observation and expert fidelity ratings, considered the “gold standard,” are not feasible in large systems in which thousands of providers have been trained, nor are they likely to be feasible in smaller, less well-resourced community settings. We will present a strategy for assessing fidelity to Cognitive Processing Therapy (CPT) for PTSD that uses existing clinical materials (worksheets and clinical notes) and was developed to be feasible and scalable, with potential broader applicability to other CBTs. We will present results from Phase I of development, based on materials from 158 patients. Rater agreement and internal consistency for adherence and competence across different sessions ranged from acceptable to excellent, and there was a high correlation between the aggregated competence items and observer-rated fidelity. Data will also be presented for a second set of ratings with over 100 patients enrolled in a practical clinical trial using the refined measure. Implications for implementation research and assessment of fidelity in routine care will be discussed.