This report summarises the findings from the evaluation reports of projects funded in the second round of the Children’s Social Care Innovation Programme. Between 2016 and 2020, the second round of the English Children’s Social Care Innovation Fund supported 50 projects to encourage innovative practice across the country. Each project was accompanied by an independent evaluation, commissioned by the Department for Education. These evaluations adopted a variety of methodologies, ranging from qualitative insights drawn from interviews and focus groups to large-scale randomised controlled trials and quasi-experimental evaluations that aim to identify causal impacts. The reports of these evaluations were published in late 2020 and early 2021, and contain a huge amount of information and insight, running in total to more than 2,000 pages.
To help busy professionals navigate this volume of research, this booklet summarises each study's key findings and the quality of its impact evaluation. The aim is to make the many evaluation reports published under the Innovation Programme more accessible, drawing out some of the key insights and highlighting where gaps in the evidence remain.
How we went about it
For each project, we’ve tried to condense some key features: what was done, whether there was an impact evaluation (and the strength of that evaluation), and whether there was a cost-benefit analysis and what it found. For each outcome measure considered by the impact evaluations, we’ve also summarised the result: did the evaluation find an increase in that measure, a decrease, or no change?
Our summaries indicate whether the researchers conducting the original study carried out an impact evaluation using a robust causal inference design (i.e. Interrupted Time Series, Difference in Differences, Propensity Score Matching, Coarsened Exact Matching, or a Randomised Controlled Trial). These types of analysis try to identify what happened as a consequence of the intervention. Some studies collected data and compared it to a comparator group; however, if they did not use a sufficiently robust design (such as those listed above), we have marked them as not having conducted an impact evaluation. We have also rated the quality of each impact evaluation, assessing a variety of factors: the sample size, the quality of the data used, the appropriateness of the methodology, whether a suitable comparator group was used, whether the duration of the study was sufficient, and the risk of bias through confounding, sample attrition, or sample selection.
The overall picture painted by the evaluations is of a serious need for more research, and particularly more impact evaluations. Tellingly, the less robust the impact evaluation, the more likely it is to find a positive impact of an intervention, reflecting the fact that selection bias, survivorship bias, and omitted-variable bias are all likely to favour finding a positive effect.
While there are encouraging signs from several projects, there is also a broader lesson about the targeting of investment in evaluation. In many cases, programmes were too small, or insufficiently understood, to allow a full evaluation. In other cases, large investments in programmes affecting many families have been permitted to proceed without an impact evaluation to tell us whether the work is making a difference. In still other cases, the absence of a cost-benefit analysis makes it difficult for local authorities with constrained finances to choose between competing approaches.