“Move fast and break things” was reportedly the internal motto at Facebook until 2014 – at which point the phrase may have fallen out of use, but neither their speed nor their tolerance for destruction appears to have.
The phrase is often invoked by those who believe in engineering or scientific solutions to major problems – the idea that we can, by applying the same mindset as Facebook, Amazon, or Google, tackle the great challenges of our times. With vastly more powerful computers and immensely cleverer algorithms than we had even a decade ago, one approach in which much stock has been placed is the use of machine learning and predictive analytics – essentially, using computers to make thousands of calculations to try to predict the future.
It’s an attractive prospect to many of a more technical or scientific bent. I count myself among them, having in a past life led the establishment of a team dedicated to doing exactly this kind of work, focusing on public policy problems. Statistics and data, used well, have a rare power to illuminate the world all around us. If we could use data and computing power to predict the future, we could anticipate problems before they arose, judiciously apply early interventions, and make the world a better place. Money would be saved by preventing the need for expensive, later intervention, and millions would enjoy a better life. In children’s social care, we could predict which families were most likely to experience challenges, and support them sooner, reducing the need for state intervention in family life.
Unsurprisingly, the approach is not uncontroversial. The idea of intelligent machines predicting the future brings to mind either Philip K Dick’s Minority Report or the various installments of the Terminator franchise. The use of data in this way was certainly unethical, and probably illegal – or so the argument went. Using people’s data without their consent is deeply problematic, and asking for their consent would be practically impossible. The tendency of algorithms to mimic existing patterns would ingrain even more deeply the racial and social biases inherent in society.
Evidence from the United States, particularly in a criminal justice context, supported both sides of this argument. Predictive models tended to fare pretty well at predicting the future – but they were also systematically targeting African American men for arrest, for punishment, and for denial of parole. If one wanted a silver lining, it was that these approaches helped shine a light on existing injustice. As if anyone didn’t know it was there before.
In the UK, a more cautious approach by government has generally been the norm, although there remain many advocates. One area in which there has been particular interest – driven by the potential for good to be done, and by cuts to local government funding in particular – is children’s social care. If we could intervene early, we could help improve the lives of the most vulnerable children.
An early research project, which I oversaw, suggested that the approach showed considerable promise, albeit one to be managed carefully.
Today, we at What Works for Children’s Social Care have published a different kind of research report on this. Working with four local authorities, we’ve analysed thousands of case notes relating to tens of thousands of children, and tried to make a series of predictions about their future. What we find is not encouraging.
Across 32 models, none meets the threshold we set in advance for success, and most fall far short of it. Models that attempt to predict the future – i.e. those that would actually be useful in practice – do even worse, meaning that more families could see unnecessary intervention in their lives, and more opportunities for support could be missed. The models don’t perform any worse for specific groups – defined by race, age, or disability – but this is cold comfort when the models don’t perform well anyway. Increasing the sample size might help, but populations change quickly enough that, in waiting for more data, the older data becomes obsolete; and local authorities’ contexts differ enough that combining their data is unlikely to help either.
Our research is just one piece of a wider landscape. An ethical review we commissioned from The Alan Turing Institute and the University of Oxford’s Rees Centre set stringent guidelines for when these approaches might be ethically deployed. Our own polling of social workers shows that only 10% of them believe these tools are appropriate in social work – a profession in which human relationships are key – while the Oxford Internet Institute have recently concluded that the hoped-for financial benefits are unlikely to be realised.
These problems need not be catastrophic. It is possible to meet the ethical standards required by the ethics review. With better models, or far more data, it is possible that performance will improve dramatically and, with it, the usefulness of these tools in improving the lives of children, or in producing savings.
I am not a luddite. I believe that data and statistics are valuable tools. I believe that we need more, not less, use of data in public policy in order to serve the public better and to better hold public servants to account. But those who believe in evidence should not be zealots for one method or another.
At the moment, the case has not been made. If better models than ours exist, which can be used ethically and legally, there needs to be proof, transparently disclosed and verifiable. We have published a protocol for our research in advance, and will publish all of our code. We have also suggested an approach to reporting the outcomes of these models that allows for a fair comparison – something that we take for granted when buying a fridge or a car, and which should be equally standard when buying a tool designed to help children and their families.
Now is a good time to stop. With the global coronavirus pandemic, everything has changed, all our data scrambled to the point of uselessness in any case. Let those who believe in these approaches reflect on what to do next. Let those who believe they have already cracked it, prove it.