It’s been almost a year since we first announced that we would be conducting research on the use of machine learning in children’s social care. Both the topic itself and our decision to research it were controversial.
Many people are uncomfortable with the idea that opaque algorithms processing large quantities of data – often collected for other purposes – might be making judgements, or even decisions, about families and what should happen to them. Many are concerned about the implications of this for the balance between the state’s power to intervene in family life, and the rights of some of Britain’s most vulnerable families.
A different, perhaps smaller, group sees positive potential in the use of advanced analytics. They believe the ability to analyse textual data held by local authorities, or data merged from different sources within local government or the wider public sector, could provide an opportunity to spot which families most need support, and to provide that support as early as possible. This could reduce intervention where it isn’t needed, and stop it from becoming necessary in the first place.
Either side of this argument could be right. Both could be right. It’s not possible to draw a firm conclusion on the basis of what we know right now – and that’s a personal view as well as an organisational one. Even if we believe that data is, in general, good and that statistics are a useful tool, it doesn’t follow that we must be supportive of every use of statistics and data. More than anything, those who are most familiar with these approaches should be the most alive to their potential weaknesses.
We’ve said before that a debate is needed on two fronts. The first, on which we’ll be publishing a report later this year, is about effectiveness – the promise of machine learning is that it will outperform our current approaches in its ability to identify who needs support. This is a testable claim, but one that is too rarely tested, and for which there is no widely accepted standard of reporting. We’re putting these claims of effectiveness to the test, and at the same time working to develop a standard of reporting that is clear and accurate.
The second part of the debate is about the ethics of applying machine learning in a children’s social care context. Even if the tools are effective, that doesn’t mean they’re ethical to use. Yesterday we published a report on just this question, written by researchers at the Alan Turing Institute and at the Rees Centre at the University of Oxford. The report is thorough and detailed, both in its consideration of the issues raised by machine learning and in how those issues play out in a children’s social care context.
The ethics review isn’t prescriptive – it can’t tell you what to do. What it can do is make clear what principles you should be following if you want to use machine learning in an ethical way in children’s social care. It also delineates some barriers – concerning data quality, representativeness, and availability – which will heavily curtail the circumstances in which these tools can be ethically deployed, at least in the current context.
The review is intended to be a guide and a spur to further debate and discussion. We don’t want to discourage innovation or good practice, but practice which is unethical cannot be good.
This is only the start of the debate, so if you have any questions or comments, please do get in touch.