This month sees the publication of a research paper by Mary Baginsky, Jo Moriarty and Jill Manthorpe in the Journal of Children’s Services, assessing the existing evidence base around Signs of Safety. This review differs in its approach from the ‘realist review’ of Signs of Safety conducted by our research partners at Cardiff University, which we published in November last year.
Despite some differences in methodology (and hence slight differences in the slant and the limitations of each paper), the conclusions drawn by Baginsky and her colleagues are much the same as those of our review – that there are significant and important gaps in the evidence base around Signs of Safety.
As Baginsky and colleagues point out, there have been only limited attempts to date at controlled studies. Controlled studies attempt to identify the causal impact of an intervention – in this case Signs of Safety – by comparing families supported under that model with families that are as similar as possible but are supported under a different, ‘business as usual’ model.
The use of ‘controlled’ studies – either a randomised controlled trial, or a quasi-experimental approach that uses statistical methods to approximate the effect of randomisation – is important if we’re to understand how big a change something like Signs of Safety makes to outcomes for young people and their families. Without a good control group, we can’t be confident in findings – whether from a Danish study reporting better outcomes, or from an Australian one finding worse outcomes after Signs of Safety had been implemented.
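To make that logic concrete, here is a minimal sketch in Python of a difference-in-differences comparison, one common quasi-experimental design. Every number is invented for illustration, as is the ‘re-referrals’ outcome measure – none of it is drawn from the studies discussed above.

```python
# A minimal, hypothetical sketch of why a comparison group matters.
# All figures are invented for illustration; they are not real outcomes data.

# Average re-referral rates (per 100 families), before and after roll-out.
sos_before, sos_after = 30.0, 24.0            # authorities using Signs of Safety
control_before, control_after = 31.0, 28.0    # similar 'business as usual' authorities

# A naive before/after estimate attributes the whole change to the programme.
naive_effect = sos_after - sos_before                                      # -6.0

# A difference-in-differences estimate subtracts the change the comparison
# group experienced anyway (secular trends, wider policy changes, etc.).
did_effect = (sos_after - sos_before) - (control_after - control_before)   # -3.0

print(f"Naive before/after estimate: {naive_effect:+.1f} per 100 families")
print(f"Difference-in-differences estimate: {did_effect:+.1f} per 100 families")
```

The point of the sketch is that the comparison group absorbs whatever change would have happened anyway, so the remaining difference is a more credible estimate of the programme’s effect.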
As well as a shortage of these types of study, Baginsky et al also highlight other gaps in the evidence base that future research could and should address. The first is the importance of context – understanding, for a programme now implemented in two thirds of local authorities in England and in a large number of overseas jurisdictions, what it is about a local authority, its social workers and its management that contributes to (or hinders) the successful implementation of Signs of Safety, or indeed of any other change.
The second is the importance of understanding whether the culture and circumstances of a family are well suited to some elements of a model. One evaluation of Signs of Safety found a clash between the approach and some of the values of Indigenous communities in Canada.
Finally, there is the matter of implementation and ‘fidelity’. As Baginsky and her colleagues found in their evaluation of Signs of Safety for the DfE’s innovation programme, there is a lot of variation in how, and how well, Signs of Safety is implemented. The question of whether Signs of Safety ‘works’ isn’t a singular one – does it work well only when implemented absolutely perfectly, or can it still be effective with 50% fidelity? How about 25%?
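One way evaluators sometimes probe this is to score each site on fidelity and look for a ‘dose-response’ pattern in outcomes. The sketch below uses entirely invented site-level data – the fidelity scores and the outcome measure are hypothetical – simply to show the shape of the analysis.

```python
# A hypothetical 'dose-response' look at fidelity. Site-level data are
# invented; 'fidelity' is the share of model components delivered as
# designed, 'outcome' an illustrative site-level outcome score.

sites = [
    {"fidelity": 0.95, "outcome": 7.8},
    {"fidelity": 0.90, "outcome": 7.5},
    {"fidelity": 0.55, "outcome": 6.9},
    {"fidelity": 0.50, "outcome": 7.1},
    {"fidelity": 0.25, "outcome": 6.2},
    {"fidelity": 0.20, "outcome": 6.0},
]

def mean_outcome(lo, hi):
    """Average outcome across sites whose fidelity falls in [lo, hi)."""
    scores = [s["outcome"] for s in sites if lo <= s["fidelity"] < hi]
    return sum(scores) / len(scores)

for label, lo, hi in [("high (>=75%)", 0.75, 1.01),
                      ("medium (50-75%)", 0.50, 0.75),
                      ("low (<50%)", 0.00, 0.50)]:
    print(f"Fidelity {label}: mean outcome {mean_outcome(lo, hi):.2f}")
```

A sharp drop in outcomes below some fidelity threshold would tell implementers something quite different from a gentle, linear decline.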
These questions should of course be of interest to anyone looking to implement Signs of Safety in the future, and Baginsky et al are right to highlight both their importance and the need for such research to be properly funded into at least the medium term. That’s why we’re pleased to be working with Baginsky and her colleagues to support their ongoing evaluation of Signs of Safety using a quasi-experimental approach – although we recognise that some types of intervention, and some programmes, can’t be evaluated in this way.
However, there is much that other evaluations can learn from their review – in particular the importance of taking account of context and fidelity, and of ensuring that our methods are fit for the questions we are asking.