Part of the role of a social worker is assessing risk: for example, whether a child is at sufficient risk of significant harm to justify a child protection conference. If such outcomes could be predicted sooner, it is thought that harm to children could be prevented or addressed earlier, which might reduce the need for higher levels of statutory social care involvement. This project aims to assess the technical feasibility of using predictive analytics to make such predictions.
A large proportion of the information known about a child or young person and their family is gathered by social workers and recorded in reports and case notes. The analysis explores whether including such text improves the predictive accuracy of the model. For example, the algorithm uses the topics of the documents, keywords and structured data associated with cases that escalate to predict, using only the information available at referral or assessment, whether the case will escalate within a medium period of time. The model is “trained” (i.e. it learns the patterns associated with the outcome) and “tested” (i.e. checked for correctness) on historical data. This allows the model to be evaluated in a safe environment and gives timely feedback on its quality.
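To illustrate the train-and-test idea in miniature, the sketch below trains a toy classifier on "historical" cases that combine free-text notes with a structured field, then checks it on held-out cases. Everything here is invented for illustration: the keyword list, the field names, the records and the hand-rolled logistic regression stand in for the project's actual topic models and richer features, which are not described in detail here.

```python
import math

# Illustrative risk keywords (invented; a real model would learn
# topics and vocabulary from the documents themselves).
KEYWORDS = ["neglect", "injury", "missed", "absent"]

def featurise(case):
    """Turn a case record into numbers: keyword counts from the
    free-text notes, plus a structured field (prior referrals)."""
    text = case["notes"].lower()
    return [text.count(kw) for kw in KEYWORDS] + [case["prior_referrals"]]

def train(cases, labels, lr=0.1, epochs=200):
    """'Training': fit a tiny logistic-regression model by gradient
    descent, so it learns patterns associated with escalation."""
    n = len(featurise(cases[0]))
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for case, y in zip(cases, labels):
            x = featurise(case)
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = p - y  # prediction error drives the weight updates
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(model, case):
    """Predict 1 (will escalate) or 0 (will not) for a new case."""
    w, b = model
    x = featurise(case)
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Synthetic "historical" data: label 1 = escalated, 0 = did not.
data = [
    ({"notes": "unexplained injury, missed appointments", "prior_referrals": 3}, 1),
    ({"notes": "signs of neglect, often absent from school", "prior_referrals": 2}, 1),
    ({"notes": "routine visit, no concerns noted", "prior_referrals": 0}, 0),
    ({"notes": "family engaging well with support", "prior_referrals": 1}, 0),
    ({"notes": "neglect concerns raised, child absent", "prior_referrals": 4}, 1),
    ({"notes": "stable home, no further action", "prior_referrals": 0}, 0),
]

# 'Testing': train on one portion of the history, then score the
# model on held-out cases it has never seen.
train_set, test_set = data[:4], data[4:]
model = train([c for c, _ in train_set], [y for _, y in train_set])
accuracy = sum(predict(model, c) == y for c, y in test_set) / len(test_set)
```

Because the evaluation uses only historical records, the model can be judged, and revised, without any decision about a real case depending on it; that is the "safe environment" referred to above.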
The analysis began in summer 2019 and will run until spring 2020, when a final report will be published. It is part of a broader project assessing the appropriateness of predictive analytics in children’s social care. An independent ethics review, carried out by the Rees Centre, Oxford University, and the Turing Institute, will examine existing ethical frameworks, assess their applicability to current machine learning practice in the sector, and identify the specific complexities of children’s social care that would influence the ethics of using machine learning there.