Summary

Children’s Services departments have substantial amounts of data available to them. This, combined with advances in computing power and algorithms, opens up the possibility of using machine learning to identify children at risk – allowing social workers to use their time to work directly with families.

However, to date it has been unclear how effective machine learning models are at predicting which cases will escalate in future.

In this project, we worked with four local authorities to develop models to predict eight outcomes for individual cases. These models learned from historical data and then predicted whether unseen cases were at risk of the outcomes.

We found that:

  • On average, if the model identifies a child as at risk, it is wrong six out of ten times, and it misses four out of every five children who are at risk. None of the models’ performances exceeded our pre-specified threshold for ‘success’.
  • Adding information extracted from reports and assessments does not improve model performance.
  • Our analysis of whether the models were biased was unfortunately inconclusive.
  • There is a low level of acceptance amongst social workers of the use of these techniques in children’s social care.

Download the Summary Report
Download the Technical Report

Objectives

The aim of this project was to understand whether machine learning models could identify cases at risk of defined outcomes, and whether these models worked equally well for all groups.

How we went about it

We worked with four local authorities to develop models to predict eight outcomes for individual cases. Each prediction focused on a point in the child’s journey where the social worker would be deciding whether to intervene in a case and, if so, at what level, and looked ahead to see whether the case escalated at a later point in time.

We used natural language processing techniques to turn reports and assessments into information that could be used as input to a model. We then used machine learning techniques to learn patterns in historical data associated with risks and protective factors, and to examine whether those factors were present in unseen cases. We sought to understand whether machine learning models, applied in this way, correctly distinguish the cases at risk of the outcome from those that are not, and whether they do this equally well for different groups. We also compared four different ways of designing the models.
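The reports do not specify the exact text-processing pipeline, but the general idea of turning free-text case records into model input can be sketched in pure Python. The vocabulary, field names, and assessment snippet below are entirely invented for illustration:

```python
import re
from collections import Counter

def bag_of_words(text, vocabulary):
    """Turn free text into a fixed-length count vector over a vocabulary."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return [counts[word] for word in vocabulary]

# Invented vocabulary and assessment snippet, purely for illustration.
vocab = ["missed", "appointments", "support", "school"]
report = "Missed two appointments; school reports good support at home."
features = bag_of_words(report, vocab)

# 'features' is now a numeric vector that could be fed, alongside
# structured case data, to a machine learning classifier.
```

In practice such pipelines typically use richer representations (e.g. TF-IDF weighting or word embeddings), but the principle is the same: unstructured text becomes numeric features a model can learn from.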

We only used historical data, and no decisions about live cases were made by the models.

Key Findings

  • We did not find any evidence that the models we created using machine learning techniques ‘work’ well in children’s social care.
  • On average, if the model identifies a child as at risk, it is wrong six out of ten times, and it misses four out of every five children who are at risk.
  • None of the models’ performances exceeded our pre-specified threshold for ‘success’.
  • Adding information extracted from reports and assessments does not improve model performance.
  • Our analysis of whether the models were biased was unfortunately inconclusive.
  • There is a low level of acceptance amongst social workers of the use of these techniques in children’s social care.
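In standard evaluation terms, being wrong six times out of ten when flagging a child corresponds to a precision of roughly 0.4, and missing four out of five at-risk children to a recall of roughly 0.2. A minimal sketch, using an invented confusion matrix chosen only to reproduce those rates (not figures from the study):

```python
# Hypothetical counts, invented purely to illustrate the reported rates.
true_positives = 40    # at-risk children correctly flagged
false_positives = 60   # children flagged who were not at risk
false_negatives = 160  # at-risk children the model missed

# Precision: of the children the model flags, how many are truly at risk?
precision = true_positives / (true_positives + false_positives)  # 0.4

# Recall: of the children truly at risk, how many does the model flag?
recall = true_positives / (true_positives + false_negatives)     # 0.2
```

A precision of 0.4 means the model is wrong on six of every ten flags; a recall of 0.2 means it finds only one in five of the children actually at risk.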

Implications

We did not seek to definitively answer the question of whether machine learning will ever work in children’s social care across all types of outcomes and in all contexts, but we hope to have shown some of the challenges faced when using these approaches in children’s social care. We encourage local authorities already piloting machine learning to be equally transparent about the challenges they experience.

Given these challenges, and the real-world impact that a recommendation from a predictive model used in practice could have on a family’s life, it is of the utmost importance that we work together as a sector to ensure that these techniques are used responsibly, if they are used at all.

What next

As part of this project, we developed a standardised way of reporting machine learning models in the sector to enable transparent communication about and comparison of models.

Related Reports: Ethics Review of Machine Learning in Children’s Social Care