Is evidence like a GPS?

30 September 2019

This month, I was at a conference where I heard machine learning – using raw computing power and a lot of data to predict what’s likely to happen next – described as being like a GPS in your car.

The speaker, a sceptic about the value of this approach, said they were reluctant to trust a device when they knew the shortcuts and back-alley routes better than they believed a GPS ever could. As an experienced driver, they wouldn’t surrender their judgement to a machine anytime soon.

I returned to this thought while on holiday later in the month. I had driven part of the way to our destination and, unlike the speaker at the conference, I am far from an experienced or confident driver – nor, sadly, am I a native of Cornwall, where we were staying. The GPS was therefore helpful to me in navigating a situation far outside my experience.

This got me thinking that machine learning is actually less like a GPS than I had assumed while listening to the talk. The GPS knows where all the roads are, which ones you’ll reach next, and which routes are most efficient. Modern software can even detect accidents on the road ahead in plenty of time to amend the route – something I wouldn’t have known was a good idea until I reached the back of a traffic jam. An algorithm predicting from past data, in an ever-shifting environment, can’t do those things.

From this, it was a fairly short jump (or drive) to thinking about driving aids and evidence more generally. To me, a lot of evidence feels more like a blindspot sensor, which lets you know when there’s something you can’t see. These sensors can be really useful in alerting you to danger somewhere you aren’t directly looking, or physically can’t see at all – a definite advantage. But they have their drawbacks. Because they pick up anything in your blindspot, they also do a pretty good job of telling you the bleeding obvious – that you’re driving next to a hedgerow you can perfectly well see. And, as I learned with a mild swerve, they can lead you astray – particularly if you focus on them to the exclusion of everything else going on around you.

At the risk of taking the driving analogy too far, a lot of the evidence that researchers create – be it ethnographic research, interviews, randomised controlled trials or data analysis – can also be thought of as one of a car’s extra features. These can be useful, helping to guide the way or fend off risk, but ultimately they can’t drive the car, and they are a poor substitute for what my driving instructor used to call “vehicle sympathy” – or what social workers call good relationships.

This is why we’re trying to make our research accessible but not prescriptive. We hope our work will signpost where (or indeed whether) particular interventions or approaches could help social workers, and the systems they work in, achieve better outcomes for children, young people and families. By calling on evidence when it’s useful – to focus activity, and perhaps to reduce the number of decisions or the amount of uncertainty a social worker has to handle every day – we could free up social workers’ time to do what they do best: forging human connections.