In the face of proliferating data sources and research technologies, budget cutbacks, and the growing prominence of performance-based programs and evidence-based policymaking, demand for faster, cheaper program evaluation has grown. To address this need, the Center for Improving Research Evidence (CIRE) at Mathematica Policy Research has produced the Evidence Insight video series, featuring four innovative research methods presented by different Mathematica researchers. The goal of the series is to make policymakers, researchers, and practitioners aware of the strengths, limitations, and appropriate contexts of these new evaluation techniques, enabling continuous improvement of program delivery and effectiveness.
A summary of the transcribed video on each of these techniques follows:
Rapid-Cycle Evaluation: Determining What Works in Less Time
Small changes to program operations and service delivery can improve a program's efficiency and effectiveness. This video, delivered by Alex Resch of Mathematica, describes how rapid-cycle evaluation can quickly determine the impact of such changes. These evaluations rely predominantly on existing program data and rigorous research methods to measure the impact of changes on program delivery, to identify causal links between the changes and the outcomes, and to focus on results that can be observed in a short time. Because rapid-cycle evaluations proceed relatively quickly, the results give decision makers and practitioners the timely, reliable evidence needed for continual program improvement.
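The core of a rapid-cycle impact estimate can be sketched in a few lines. The following is a minimal illustration, not Mathematica's specific method: it assumes the program already records a short-term outcome (here, a hypothetical weekly attendance rate) for a randomly assigned pilot group and a comparison group, and all names and numbers are invented for the example.

```python
# Minimal sketch of a rapid-cycle impact estimate using existing program
# data: difference in mean outcomes between a pilot (treatment) group and
# a comparison (control) group, with a conventional standard error.
from statistics import mean, stdev
from math import sqrt

def rapid_cycle_impact(treatment, control):
    """Estimate the impact of a small program tweak as the difference in
    mean outcomes, with a standard error for a quick read on whether the
    change is worth keeping."""
    diff = mean(treatment) - mean(control)
    se = sqrt(stdev(treatment) ** 2 / len(treatment) +
              stdev(control) ** 2 / len(control))
    return diff, se

# Hypothetical short-term outcomes drawn from existing program records.
treatment = [0.82, 0.91, 0.78, 0.88, 0.85, 0.90]
control   = [0.75, 0.80, 0.72, 0.78, 0.81, 0.76]
impact, se = rapid_cycle_impact(treatment, control)
print(f"estimated impact: {impact:.3f} (SE {se:.3f})")
```

Because the outcome is observable within weeks rather than years, an estimate like this can feed directly back into the next cycle of program changes.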
Bayesian Methods: A Faster, Probabilistic Approach to Research Design
Bayesian methods allow researchers to rigorously use external data to supplement traditional data collection sources and techniques, yielding insights about programs and policies that would not be possible with traditional evaluation techniques alone. In this video, Lauren Vollmer of Mathematica explains how these methods produce precise probabilities, enabling policymakers to target programs and policies to the needs of specific audiences. Moreover, the increased precision enables detection of meaningful effects even when sample sizes are small, reducing the cost of data collection. And the ability to draw conclusions in probabilistic terms along a spectrum gives researchers and policymakers greater opportunity to apply subject-matter expertise in determining whether effects are meaningful in context. While there are costs involved, such as sometimes unreliable external data and computational complexity, these obstacles have been addressed through comprehensive guidelines and sensitivity checks, as well as advances in computation that enable faster and less computationally demanding Bayesian analysis.
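The small-sample benefit described here can be illustrated with the simplest Bayesian model. This is a hypothetical sketch, not taken from the video: a beta-binomial model in which external evidence enters as an informative prior, and the resulting posterior supports direct probability statements about a program's success rate. The prior counts and pilot data are invented.

```python
# Beta-binomial sketch: prior Beta(a, b), data = k successes in n trials.
# The posterior is Beta(a + k, b + n - k); an informative prior built from
# external evidence tightens the posterior even when the new sample is small.

def posterior(a, b, k, n):
    """Return posterior mean and standard deviation of a success rate
    under a Beta(a, b) prior after observing k successes in n trials."""
    a_post, b_post = a + k, b + (n - k)
    post_mean = a_post / (a_post + b_post)
    post_var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
    return post_mean, post_var ** 0.5

k, n = 14, 20                       # small pilot: 14 of 20 participants improved
flat = posterior(1, 1, k, n)        # no external information
informed = posterior(35, 15, k, n)  # prior worth ~50 external cases at a 70% rate

print(f"flat prior:     mean {flat[0]:.2f}, sd {flat[1]:.3f}")
print(f"informed prior: mean {informed[0]:.2f}, sd {informed[1]:.3f}")
```

The informed posterior is noticeably tighter than the flat-prior one, which is exactly the precision gain that lets small, cheap samples support meaningful conclusions, provided the external data are trustworthy.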
Adaptive Randomization: A Fresh Perspective on Traditional Research Design
Randomized controlled trials (RCTs) are considered the gold standard for providing valid and rigorous evidence of program effectiveness because they control for non-program factors, such as the characteristics of program participants, that could otherwise explain observed outcomes. However, they can be time-consuming and expensive to implement. In this video, Mariel Finucane and Ignacio Martinez discuss adaptive randomization, an innovative extension of traditional RCTs. Faster and cheaper than conventional RCTs, adaptive randomization enables researchers to adjust their methods in response to accumulating evidence without abandoning the methodological principles of randomization. Combined with access to rich datasets such as electronic health records, this new research method contributes, efficiently and rigorously, to our understanding of how specific program elements can be refined to better meet the needs of particular subgroups.
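One common way to implement adaptive randomization is Thompson sampling, in which each new participant is randomly assigned, but with odds that shift toward whichever program variant currently looks better. The video does not specify which algorithm Mathematica uses; the sketch below is a generic illustration with simulated outcomes and invented response rates.

```python
# Thompson-sampling sketch of adaptive randomization between two program
# variants. Each arm keeps a Beta posterior over its response rate; each
# new participant is assigned by drawing a plausible rate from each
# posterior and picking the arm with the higher draw. Assignment stays
# random, but allocation adapts as evidence accumulates.
import random

random.seed(0)
successes = [1, 1]          # Beta(1, 1) priors for the two variants
failures = [1, 1]
true_rates = [0.45, 0.60]   # hypothetical true response rates

for _ in range(500):
    draws = [random.betavariate(successes[i], failures[i]) for i in (0, 1)]
    arm = draws.index(max(draws))          # assign to the better-looking arm
    if random.random() < true_rates[arm]:  # simulate the participant's outcome
        successes[arm] += 1
    else:
        failures[arm] += 1

n = [successes[i] + failures[i] - 2 for i in (0, 1)]
print(f"participants per variant: {n}")  # allocation typically favors the better arm
```

The adaptive element is why such designs can be faster and cheaper than a fixed 50/50 trial: fewer participants are spent on the weaker variant, while randomization is preserved at every step.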
Predictive Analytics: Transforming Decision Making in Three Steps
Predictive analytics is an emerging research field in which data and evidence are used to predict outcomes and inform decision making. In this video, Phil Killewald of Mathematica breaks the predictive process down into three steps:
- Define the problem: gain thorough knowledge of the extent and quality of relevant databases and of agency activities, and pose evaluation questions to address shortcomings in program effectiveness;
- Design the predictive model: gather historical data on past activities where a shortcoming is to be addressed; design, build and evaluate a number of predictive models using these data; and test the model using separate data in order to identify the best performing models; and
- Make the predictions: use the best performing model to predict outcomes with real-world data where the true outcomes are not yet known; make the model available as early as possible; and periodically assess its accuracy, updating it as necessary or as new data become available.
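The three steps above can be sketched end to end in a few lines. The data, the screening-score feature, and the simple threshold model below are hypothetical stand-ins; a real application would use the agency's own records and a richer model.

```python
# End-to-end sketch of the three-step predictive process on simulated data.
import random

random.seed(1)

# Step 1 (define the problem): predict whether a case needs follow-up,
# using a single screening score drawn from historical agency records.
historical = [(random.gauss(60 if label else 40, 10), label)
              for label in [0, 1] * 100]

# Step 2 (design the model): hold out test data, fit a simple threshold
# classifier on the training set, and evaluate it on the held-out set.
random.shuffle(historical)
train, test = historical[:150], historical[150:]
threshold = sum(score for score, _ in train) / len(train)
accuracy = sum((score > threshold) == label for score, label in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")

# Step 3 (make the predictions): apply the model to new cases whose true
# outcomes are not yet known, and plan to reassess it as new data arrive.
new_cases = [35.0, 62.0, 48.0]
flags = [score > threshold for score in new_cases]
print(f"flagged for follow-up: {flags}")
```

Evaluating on held-out data, as in step 2, is what distinguishes a model that genuinely predicts from one that merely memorizes its training records.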
When applied to policy-relevant research questions, predictive analytics can be a valuable tool for organizations seeking to improve their services and their decision making processes by identifying what works for whom under what conditions on the basis of evidence-based outcome predictions.
This information should increase awareness of, and receptivity to, these new techniques not only among researchers, but also among practitioners seeking to improve the effectiveness of the outcomes they deliver.