Assessing Intervention Fidelity in Randomized Field Experiments

Abstract

This research focuses on the "what" of the "what works" questions posed by the Institute of Education Sciences (IES). In particular, it addresses critical conceptual and methodological issues related to assessments of the fidelity of educational interventions. The work is timely in that it contributes to the current emphasis on evidence-based practices (EBP) in education.

EBP has increased the technical and methodological demands on researchers in two ways. First, it has directed attention to the use of randomized field trials (RFTs). Second, the emphasis on EBP has made it clear that evidence of effectiveness must be accompanied by clear evidence of what produced the effects. Studies of an intervention's efficacy and effectiveness involve tests of an intervention in multiple sites, providing ample opportunity for slippage (infidelity) between the conceptual model of the intervention and the version realized in the field.

Given the relative newness of these interests, it is not surprising that the literature on fidelity assessment has neither provided a coherent definition of fidelity as it applies to studies of the effectiveness of educational practices nor offered guidance on several other key issues in assessing intervention fidelity. In particular, there has been limited discussion to date of how to develop indices of fidelity and how to match chosen assessment techniques (e.g., observations of a sample of participants or a sample of program sessions) to the overall purpose of the assessment for a given component. Each fidelity assessment requires reliable and valid measures, yet, with few exceptions, these measurement issues have not been highlighted in the literature. Another perplexing challenge is whether and how to combine multiple indices of fidelity within and across intervention components.
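To make the index-construction problem concrete, here is a minimal sketch, in Python, of one way multiple component-level fidelity scores might be combined into a single composite index. The component names, scales, and weights are hypothetical, and the normalize-then-weight rule shown is only one of many defensible choices, not a method endorsed by this research.

```python
# Illustrative sketch: combining per-component fidelity scores measured on
# different raw scales into one composite index. All names, scales, and
# weights below are hypothetical.

def normalize(raw, lo, hi):
    """Rescale a raw fidelity score to the 0-1 range."""
    return (raw - lo) / (hi - lo)

def composite_fidelity(components, weights):
    """Weighted mean of normalized per-component fidelity scores.

    components: dict mapping component name -> (raw score, scale min, scale max)
    weights:    dict mapping component name -> relative weight
    """
    total = sum(weights.values())
    return sum(
        weights[name] * normalize(raw, lo, hi)
        for name, (raw, lo, hi) in components.items()
    ) / total

# Hypothetical intervention components scored on different raw scales.
components = {
    "adherence": (4.2, 1, 5),   # mean observer rating, 1-5 scale
    "dosage":    (18, 0, 24),   # sessions delivered out of 24 planned
    "quality":   (3.1, 1, 4),   # delivery-quality rubric, 1-4 scale
}
weights = {"adherence": 0.5, "dosage": 0.3, "quality": 0.2}

print(f"Composite fidelity: {composite_fidelity(components, weights):.2f}")
```

Whether such a composite is meaningful at all, as opposed to keeping component indices separate, is precisely one of the open questions this research addresses.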

Finally, a major goal of studies investigating the effectiveness of educational innovations is to determine whether they work, for whom, and under what circumstances. Fidelity assessments can provide some answers to these questions, but evidence of site-to-site variability in achieved fidelity needs to be incorporated into analyses of effectiveness, and little experience has accumulated on how to undertake such analyses efficiently. To shed light on these issues, we begin with a conceptualization of fidelity assessment that is more closely aligned with contemporary ideas of causal analysis and with statistical models for answering the "what works" questions. This conceptualization provides a framework for specifying the methodological demands associated with assessing intervention fidelity in studies of educational effectiveness.
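As one illustration of what incorporating achieved fidelity into an effectiveness analysis might look like, the sketch below fits a mixed-effects model on simulated data, letting the estimated treatment effect vary with a site-level fidelity score. The data-generating model, variable names, and moderation specification are assumptions for illustration only, not this project's analysis plan.

```python
# Illustrative sketch: a multisite trial in which the treatment effect is
# moderated by each site's achieved fidelity. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_sites, n_per_site = 20, 30

rows = []
for site in range(n_sites):
    treat = site % 2                   # half the sites receive the intervention
    fidelity = rng.uniform(0.5, 1.0)   # site-level achieved fidelity score
    site_effect = rng.normal(0, 0.3)   # unobserved site intercept
    for _ in range(n_per_site):
        # Assumed data-generating model: the treatment effect scales with
        # achieved fidelity (0.6 * fidelity in treated sites).
        y = 0.6 * treat * fidelity + site_effect + rng.normal(0, 1)
        rows.append({"site": site, "treat": treat,
                     "fidelity": fidelity, "outcome": y})
df = pd.DataFrame(rows)

# Random intercept for site; the treat:fidelity term captures how the
# estimated treatment effect varies with site-level fidelity.
model = smf.mixedlm("outcome ~ treat * fidelity", df, groups=df["site"])
print(model.fit().summary())
```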

To contribute to the future development of such assessments, the research has three parts. In Part 1, we will "take stock" of what has been learned about assessing intervention fidelity by conducting a systematic review of the methods used in past fidelity assessments. This review will summarize how, and how well, prior assessments were conducted, along with what they revealed about intervention fidelity. In Part 2, we will re-analyze several existing RFTs that included fidelity assessments in order to explore and refine different ways of assessing fidelity. In Part 3, we will develop and refine a comprehensive approach to intervention fidelity by gathering new data within a recently funded cluster randomized trial to be undertaken in Nashville, TN. The products of this work (e.g., data on the reliability and validity of fidelity measures, evidence about which methods "work best", and algorithms for constructing indices) will be made available to the educational research community through the Vanderbilt website, peer-reviewed papers, presentations, and other publications.
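For the reliability side of the planned measurement work, a standard starting point is agreement between independent observers; the sketch below computes Cohen's kappa for two raters' categorical session ratings. The ratings and category labels are fabricated for illustration, and kappa is only one of several reliability statistics such a study might report.

```python
# Illustrative sketch: chance-corrected agreement (Cohen's kappa) between
# two observers rating the same fidelity sessions. Data are fabricated.
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical ratings of the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected chance agreement from each rater's marginal frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of 10 observed sessions ("hi"/"lo" adherence).
a = ["hi", "hi", "lo", "hi", "lo", "hi", "hi", "lo", "hi", "hi"]
b = ["hi", "lo", "lo", "hi", "lo", "hi", "hi", "hi", "hi", "hi"]
print(f"kappa = {cohen_kappa(a, b):.2f}")   # 0.47 for this fabricated data
```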