UK local councils and police forces are using personal data they own and algorithms they bought to pre-empt crimes against children. It could go horribly wrong.

Wei Shi

March 1, 2019

4 Min Read
UK police are using AI to make precrime a reality

UK local councils and police forces are using personal data they own and algorithms they bought to pre-empt crimes against children, but there are many things that could go wrong with such a system.

New research by Cardiff University and Sky News shows that at least 53 UK local councils and 45 of the country’s police forces are relying heavily on computer algorithms to assess the risk of crimes against children, as well as of people cheating on benefits. The findings have raised many eyebrows over both the method’s ethical implications and its effectiveness, with references to Philip K. Dick’s concept of precrime inevitable.

The algorithms the authorities sourced from IT companies use the personal data in their possession to train the AI system to predict how likely a child in a certain social environment is to become the victim of a crime, giving each child a score between 1 and 100 and then classifying each child’s risk level as high, medium, or low. The results are then used to flag cases to social workers for intervention before crimes are committed. This does not read too differently from the famous Social Credit system that China is building on a national scale, though without the benefits of faster housing loans or good schools for kids as a reward for good behaviour.
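
Neither the councils nor their suppliers have published how the scoring actually works, but the score-and-band step described above is simple to illustrate. The Python sketch below is purely hypothetical: the threshold values and case data are invented for illustration and are not taken from any deployed system.

```python
# Illustrative sketch only: the councils' real models are proprietary.
# Band thresholds and case data below are hypothetical.

def band_risk(score: int) -> str:
    """Map a 1-100 risk score to a high/medium/low band."""
    if not 1 <= score <= 100:
        raise ValueError("score must be between 1 and 100")
    if score >= 70:   # hypothetical cut-off
        return "high"
    if score >= 40:   # hypothetical cut-off
        return "medium"
    return "low"

# A flagging step like the one described: only "high" cases
# are passed to social workers for possible intervention.
cases = {"child_a": 82, "child_b": 45, "child_c": 12}
flagged = [child for child, score in cases.items()
           if band_risk(score) == "high"]
print(flagged)  # ['child_a']
```

As the sketch makes plain, everything hinges on where the cut-offs sit and on the quality of the score itself; the banding adds no new information.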

The Guardian reported last year that data from more than 377,000 people was used to train the algorithms for similar purposes. This may have been a big underestimate of the scope. The research from Cardiff University disclosed that in Bristol alone, data on 54,000 families, covering benefits, school attendance, crime, homelessness, teenage pregnancy, and mental health, is being used in the computer tools to predict which children are more susceptible to domestic violence, sexual abuse, or going missing.

On the benefit assessment side, the IT system supporting the Universal Credit scheme has failed to win much praise. A few days ago, computer-generated warning letters were sent out to many residents in certain boroughs, warning them that their benefits would be taken away because they had been found cheating. Almost all the warnings turned out to be wrong.

There are two issues here. One is administrative: how much human judgement can be used to overrule the algorithms. Local councils insisted that analytics results would not necessarily lead to action. Privacy activists disagreed. “Whilst it’s advertised as being able to help you make a decision, in reality it replaces the human decision. You have that faith in the computer that it will always be right,” one privacy advocacy group told Sky News. Researchers from Cardiff University also found that “there was hardly any oversight in this area.” Over-enthusiastic intervention, for example taking children away from their families when circumstances do not absolutely demand it, can be traumatic to the children’s development. Controversies of this kind have long been hotly debated in places like Norway, Sweden, and Finland.

The other is how accurate the output of the algorithms is. Kent Police, whose algorithm was used on over a third of all the cases on its hands, believed that 98% of its assessments were accurate. If this is true, then either Kent Police has a rather relaxed definition of “accuracy”, or it knows something the technology world does not. IBM’s Watson, one of the world’s most advanced AI technologies, has been used by Vodafone to help provide digital customer service. It has won Vodafone prizes and was hailed as a big AI success by IBM during MWC 2019. Watson’s success rate at Vodafone was 68%.
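
Part of the gap may come down to what “accurate” means. When the event being predicted is rare, a headline accuracy figure can look impressive even if the model catches almost none of the real cases. The Python arithmetic below uses invented numbers purely to make that point visible; it is not based on Kent Police’s data.

```python
# Hypothetical numbers showing why a 98% "accuracy" claim can be
# a relaxed one when the predicted event is rare (class imbalance).
total_cases = 1000
actual_positives = 20   # suppose 2% of cases are genuinely high-risk

# A model that flags nothing at all is "right" on all 980 negatives.
correct = total_cases - actual_positives
accuracy = correct / total_cases
print(f"accuracy: {accuracy:.0%}")  # 98%, while missing every real case

# Recall (how many real cases were caught) tells the opposite story.
true_positives_found = 0
recall = true_positives_found / actual_positives
print(f"recall: {recall:.0%}")      # 0%
```

Without knowing which metric Kent Police used, a single percentage says very little about how many at-risk children were actually identified.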

Late last year the Financial Times reported that one of China’s most ambitious financial services firms, Ant Financial, which is affiliated with Alibaba, has never used its credit scoring system to make lending decisions, despite the system having been four years in the making and having access to billions of data points in the Alibaba ecosystem. “There was a difference between ‘big data’ and ‘strong data’, with big data not always providing the most relevant information for predicting behaviour,” an executive from Ant Financial told the FT. A think-tank analyst put it more succinctly: “Someone evading taxes might always pay back loans, someone who breaks traffic rules might not break other rules. So I don’t think there is a general concept of trustworthiness that is robust. Trustworthiness is very context specific.”

It is understandable that UK police and local councils are increasingly relying on algorithms and machine learning, as they have been under severe spending cuts. The output of algorithms could be used as a helpful reference but should not be taken at face value. It is probably safer to admit that AI is simply not yet good enough to drive or guide decisions as important as policing, criminal investigation, or social worker intervention. Getting Vodafone’s customer service more accurate is a more realistic target. Even if the bot still failed to help you set your new phone up properly, you would not end up queuing at the foodbank, or have your children taken away for “crime prevention” purposes.

About the Author(s)

Wei Shi

Wei leads the Telecoms.com Intelligence function. His responsibilities include managing and producing premium content for Telecoms.com Intelligence, undertaking special projects, and supporting internal and external partners. Wei’s research and writing have followed the heartbeat of the telecoms industry. His recent long form publications cover topics ranging from 5G and beyond, edge computing, and digital transformation, to artificial intelligence, telco cloud, and 5G devices. Wei also regularly contributes to the Telecoms.com news site and other group titles when he puts on his technology journalist hat. Wei has two decades’ experience in the telecoms ecosystem in Asia and Europe, both on the corporate side and on the professional service side. His former employers include Nokia and Strategy Analytics. Wei is a graduate of The London School of Economics. He speaks English, French, and Chinese, and has a working knowledge of Finnish and German. He is based in Telecoms.com’s London office.