Last week we hosted our first-ever conference for our evaluator panel at WWCSC. This day-long conference, held remotely, brought together representatives from most of the 22 organisations that form our panel of independent evaluators to hear about research methods, social work, and some of the projects we've funded so far – and, powerfully, to hear Wayne Reid talk about tackling racism in social work.
Our panel of evaluators is central to so much of what we do at What Works for Children's Social Care, and building it was a critical early step in our establishment. We commission independent evaluators to design and execute evaluations of the many projects that we fund – from small-scale pilots to large-scale randomised controlled trials (RCTs) like our Social Workers in Schools programme.
Our evaluators come from a wide variety of academic backgrounds and organisations: from social work departments in universities to economic research consultancies, and more besides. Collectively, they possess a wealth of expertise in research methods and understand the intricacies of the social care sector. They're committed to conducting high-quality research to find out how we can do the best for the young people we're here to support.
But our evaluators have a much more important role than that: telling the truth. We rely on our independent evaluators to be just that – independent. Independent of us, of the Department for Education, and of the developers of the interventions we evaluate.
Often, this will mean drawing conclusions that are uncomfortable. Of the more than 200 RCTs funded by the Education Endowment Foundation, fewer than 20% show meaningful positive effects on attainment. This is disappointing, but it's true.
We wish as much as anyone that it weren't so – that a great many good ideas don't produce detectable changes in young people's outcomes. When we decide to fund a project, or otherwise to evaluate it, we want it to succeed: we want a positive result that we can advocate for, and that we can fund to expand. We'll have read the funding application, spoken to the intervention's developers, and sometimes to the young people and families who feel they've benefited from it. We want it to work – and in some cases, we might already believe that it does.
This desire, this belief, however natural, does not help. If an intervention aims, for example, to safely reduce the need for children to enter care, then the fact that we want it to work, or feel that it will, ultimately doesn't matter – what matters is whether it does. Working that out means standing apart from the intervention and trying, dispassionately, to assess its effectiveness. If evaluators cannot do that – if they become convinced that an intervention works before the results are in, or if they think their role is to 'prove' that something works – then they become propagandists rather than researchers.
This, along with the usual arguments about efficiency, is why it’s important to find out what doesn’t work. Nobody truly believes that everything we do across the sector works, or is even a great idea. We know some types of practice must be less effective than others. So, if all our evaluations were to show that interventions were effective, they might make us feel better, or maybe make us more popular, but they wouldn’t be credible. Only by finding that some things don’t work can we really believe that anything does.
We cannot have light without darkness to offer contrast and to provide a realistic picture of the world – and independent evaluation is the best way of providing that contrast.