Elected officials and policymakers increasingly seek evidence of social programs’ effectiveness both to improve program design and to help make difficult budget decisions. One sound way to get this information is through classically designed experiments, in which the outcomes of an intervention on one group of people are measured against the outcomes of another group that does not receive the intervention.
However, social experiments are often criticized for having a variety of flaws relating to scientific standards, feasibility, and ethics. These issues can generally be overcome through thoughtful experiment design, sufficient funding, and other measures, according to “Obstacles to and Limitations of Social Experiments: 15 False Alarms,” a paper by two methodological experts in Abt Global’s Social and Economic Policy division.
“While some of these common criticisms have degrees of merit, many are based on faulty premises or stand on weak ground when examined closely,” said Stephen H. Bell, Abt Vice President and Senior Fellow, who co-authored this paper. The paper’s other co-author is Abt Principal Scientist Laura R. Peck.
The paper is part of the Abt Thought Leadership Paper Series, a collection of working papers, white papers, and re-publications that undergo an internal peer-review process and aim to present provocative ideas and cutting-edge methods for consideration by other analysts and funders of evaluation research.
Criticisms and responses discussed in the paper include:
Criticism: Experiments are too slow to produce results in time to inform decisions that must be made regularly and rapidly.
Response: Creating an overlapping series of shorter-term experiments can provide a steady flow of relevant new information.
Criticism: Experiments don’t compare participants who receive services – such as children in Head Start – to people who receive no services at all, and therefore produce convoluted results.
Response: The goal of such social research is to compare the impact of Head Start to the range of whatever else exists, including other similar pre-kindergarten programs, not to a world in which no such programs exist.
Criticism: The conclusions drawn from randomized trials in social experiments are not valid for the general population because most study samples do not statistically match the general population.
Response: Some evaluations of ongoing social programs have achieved generalizability through careful sample selection. More could do so.