What’s the problem with making data-driven and evidence-based decisions to guide social policy? If the data and evidence are derived from randomized controlled trials (RCTs), then perhaps quite a lot.
In a recent Stanford Social Innovation Review article, Srik Gopal and Lisbeth B. Schorr make a compelling case that the uncritical application of the “Moneyball” ideal to social policy is a flawed approach that overlooks “the fundamental realities of how complex social change happens.”
To make their case, Gopal and Schorr challenge the view that evidence from RCTs — often regarded as the “gold standard” of evaluation methodology — is automatically more valuable than other types of evidence. In particular, they question the assumption that an effective intervention in one setting can be transferred to a new context with fidelity.
Despite their reservations about the usefulness of RCTs, they do not advocate abandoning the rigorous use of data altogether. Rather, in acknowledging that “complex problems demand complex interventions,” they call for an expanded methodological approach that 1) broadens the base of evidence, 2) focuses on principles of practice, and 3) embraces adaptive integration over fidelity.