In recent years, governments around the world have increasingly relied on machine learning to strengthen domestic crime prevention efforts. While most governments, including the U.S., employ machine learning for security purposes, this article focuses specifically on China, the fastest-growing market for AI security technologies and home to roughly 200 million surveillance cameras.
China ranks 128th in the world in police officers per capita. To effectively govern the world's largest population with a relatively small security force, China has turned to machine learning for help. The application of facial recognition and big data analytics allows China to build an effective crime prevention system, which functions in three important ways. First, facial recognition tools help police identify and capture suspects who pose security risks to the public. Recently, police in Zhengzhou, aided by AI-powered facial recognition glasses, detained heroin smugglers at the local train station. Similar technology helped police in eastern China capture 25 fugitives at a local beer festival. Second, big data tools allow police to analyze motion and behavior data to detect criminal activity. For example, traffic authorities in Jinan used gait analysis to identify jaywalkers and track down violators. The government of Chongqing analyzed residents' activities to identify suspicious individuals linked to a local crime; factors such as individuals' visits to knife stores, interactions with victims, and facial expressions were fed into the analysis. Third, machine learning allows the government to predict criminal intent and prevent crimes. Working with AI company CloudWalk, China is developing a 'police cloud': a vast database of information on every citizen, including criminal and medical records, travel bookings, social media comments, and store visits. The result is a big-data rating system that flags highly suspicious groups based on background and behavior signals. Though still in development, the tool has the potential to help police identify high-risk individuals and streamline crime monitoring and prevention efforts.
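At its core, a rating system of the kind described above reduces to scoring behavioral signals and flagging individuals above a threshold. The sketch below is purely illustrative: the features, weights, and threshold are all invented for this example and are not CloudWalk's actual model.

```python
# Illustrative sketch of a behavior-based risk score.
# All features, weights, and the threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Profile:
    knife_store_visits: int    # visits recorded in the last 30 days
    victim_interactions: int   # recorded contacts with a victim
    prior_offenses: int        # entries in the criminal record

# Hypothetical weights a simple linear scorer might use.
WEIGHTS = {
    "knife_store_visits": 0.4,
    "victim_interactions": 0.5,
    "prior_offenses": 0.8,
}

def risk_score(p: Profile) -> float:
    """Combine signals into one score; higher means more suspicious."""
    return (WEIGHTS["knife_store_visits"] * p.knife_store_visits
            + WEIGHTS["victim_interactions"] * p.victim_interactions
            + WEIGHTS["prior_offenses"] * p.prior_offenses)

profiles = [Profile(2, 1, 0), Profile(0, 0, 3)]
# Flag anyone whose score exceeds an (arbitrary) threshold.
flagged = [p for p in profiles if risk_score(p) > 1.0]
```

Even this toy version makes the policy questions concrete: the weights encode value judgments, and the threshold directly trades false positives against missed risks.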
Notwithstanding encouraging results in China, machine learning as a crime prevention tool has several limitations. First, hardware and software shortcomings pose significant challenges. Surveillance cameras, for example, cannot scan more than 1,000 faces at a time. Furthermore, most facial recognition software struggles to achieve a high level of accuracy; a recent FBI study indicates that the average facial recognition tool yields a large percentage of false positives. These shortcomings seriously limit the reliability and scalability of facial recognition programs. Second, data collection is time-consuming and technically challenging. The vast majority of data files in China are not digitized, and reconciling information from disparate systems requires extensive effort. Lastly, the regression models behind China's crime prediction tools are crippled by systematic biases and currently unfit for large-scale deployment. Much as BrightMind struggled with gender and regional biases in its employment prediction model, crime prediction tools in China are affected by ethnic and socioeconomic biases. A recent study by Human Rights Watch found that one such tool assigns disproportionately high risk ratings to Uyghurs and other minority groups. Left unaddressed, these biases could lead to wrongful detentions and arrests.
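The kind of bias Human Rights Watch describes can be surfaced with a simple audit: compare the false positive rate, that is, innocent people flagged as high-risk, across demographic groups. The sketch below uses synthetic records invented for illustration; the group labels and data are not from any real system.

```python
# Per-group false-positive-rate audit on synthetic data.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_offended)
records = [
    ("majority", True,  False), ("majority", False, False),
    ("majority", False, False), ("majority", True,  True),
    ("minority", True,  False), ("minority", True,  False),
    ("minority", True,  False), ("minority", False, False),
]

def false_positive_rates(records):
    """FPR per group: flagged-but-innocent / all innocent."""
    false_pos = defaultdict(int)
    innocents = defaultdict(int)
    for group, predicted, actual in records:
        if not actual:                # person did not offend
            innocents[group] += 1
            if predicted:             # ...but was flagged anyway
                false_pos[group] += 1
    return {g: false_pos[g] / innocents[g] for g in innocents}

rates = false_positive_rates(records)
# A large gap between groups is exactly the disparity described above:
# the model flags innocent members of one group far more often.
```

Running this kind of check before deployment, and continuously afterward, is a minimal precondition for catching the ethnic and socioeconomic disparities discussed above.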
Despite these constraints, China is deepening its commitment to machine learning. The government plans to invest $150 billion in machine learning programs by 2030, creating an AI-powered security system that is 'omnipresent, fully networked, and fully controllable'. In addition, well-funded startups including Watrix and LLVision are working to improve facial and motion recognition technologies.
To fully unlock the potential of machine learning as a crime prevention tool, I believe the Chinese government needs to pursue several initiatives. First, the government should invest in data standardization. The effectiveness of a machine learning program is limited by the quality of its input data, so the government needs to digitize records, reconcile mismatches, and connect disparate legacy systems to build a complete and accurate data source. Second, the teams building crime prediction algorithms must address biases in their models. They must consider gender, ethnic, regional, and socioeconomic factors to surface and remove discrimination against particular groups. Once an algorithm is deployed, its model logic should be continuously updated to proactively combat bias and improve prediction accuracy. Lastly, and most importantly, the government must establish appropriate legal frameworks and procedures to minimize human rights violations. Crime prediction tools should supplement, not replace, the judgment of jurists. China needs to update its due process laws and establish greater checks and balances to eliminate unfair treatment and protect the rights of suspects.
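Data standardization of the sort recommended above amounts, at its simplest, to mapping records from disparate systems onto one canonical schema and merging duplicates. The field names, ID formats, and sample records below are all hypothetical, chosen only to show the shape of the work.

```python
# Minimal sketch of reconciling records from two legacy systems.
# Field names, ID formats, and sample data are all hypothetical.

def normalize(record: dict) -> dict:
    """Map heterogeneous field names onto one canonical schema."""
    return {
        "national_id": (record.get("id")
                        or record.get("citizen_no", "")).strip(),
        "name": (record.get("name")
                 or record.get("full_name", "")).strip().title(),
    }

def reconcile(*sources):
    """Merge records across systems, keyed on the national ID."""
    merged = {}
    for source in sources:
        for raw in source:
            rec = normalize(raw)
            # Later sources fill gaps without erasing earlier values.
            merged.setdefault(rec["national_id"], {}).update(
                {k: v for k, v in rec.items() if v})
    return list(merged.values())

# Two legacy systems holding the same person under different schemas.
system_a = [{"id": " 1101 ", "name": "li wei"}]
system_b = [{"citizen_no": "1101", "full_name": "Li Wei"}]
records = reconcile(system_a, system_b)
```

Real reconciliation is far messier, with transliteration variants, missing IDs, and conflicting values, which is why the essay treats it as a major investment rather than a quick fix.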
As China and the rest of the world continue to make great strides in crime prediction technology, the question remains whether the technology can balance the public's need for security against individuals' right to privacy. Are there areas (e.g., particular types of crime) in which machine learning can have a bigger impact? And how can we prevent willful abuse of this powerful tool?
Schneier, B. (2017). Ubiquitous Surveillance and Security. IEEE Society on Social Implications of Technology (SSIT). Accessed November 12, 2018, from http://technologyandsociety.org/ubiquitous-surveillance-and-security/
Del Greco, K. (2017). Law Enforcement's Use of Facial Recognition Technology. Federal Bureau of Investigation (FBI). Accessed November 11, 2018, from https://www.fbi.gov/news/testimony/law-enforcements-use-of-facial-recognition-technology
Mozur, P. (2018). Inside China's Dystopian Dreams: A.I., Shame and Lots of Cameras. The New York Times. Accessed November 12, 2018, from https://www.nytimes.com/2018/07/08/business/china-surveillance-technology.html
United Nations Secretary-General (2010). State of Crime and Criminal Justice Worldwide. Twelfth United Nations Congress on Crime Prevention and Criminal Justice. Accessed November 11, 2018, from https://web.archive.org/web/20140211174006/http://www.unodc.org/documents/commissions/CCPCJ_session19/ACONF213_3eV1050608.pdf
Jiang, S. (2018). You Can Run, But Can't Hide From AI in China. CNN. Accessed November 11, 2018, from https://www.cnn.com/2018/05/23/asia/china-artificial-intelligence-criminals-intl/index.html
The Associated Press. (2018). Chinese 'Gait Recognition' Tech IDs People by How They Walk. The New York Times. Accessed November 11, 2018, from https://www.nytimes.com/aponline/2018/11/05/technology/ap-as-tec-china-gait-recognition.html
Denyer, S. (2018). Beijing Bets On Facial Recognition In a Big Drive For Total Surveillance. The Washington Post. Accessed November 11, 2018, from https://www.washingtonpost.com/news/world/wp/2018/01/07/feature/in-china-facial-recognition-is-sharp-end-of-a-drive-for-total-surveillance/?utm_term=.e641c7d19b3a
Yang, Y. (2017). China Seeks Glimpse of Citizens' Future With Crime-predicting AI. Financial Times. Accessed November 11, 2018, from https://www.ft.com/content/5ec7093c-6e06-11e7-b9c7-15af748b60d0
HRW. (2018). China: Big Data Fuels Crackdown in Minority Region. Human Rights Watch. Accessed November 11, 2018, from https://www.hrw.org/news/2018/02/26/china-big-data-fuels-crackdown-minority-region