Applying Machine Learning for the Common Good – Is it Always Worthwhile?

How do we thoughtfully use data to increase efficiency for the greatest number of people in a sector that has historically been driven exclusively by human judgement? And where is human oversight non-negotiable?

City of Boston image

Government systems often remain plagued by bureaucracy and stagnation, and they lag behind the private sector in innovation. As such, the potential for machine learning within local government is massive and should be approached with measured optimism about the benefits such technology can offer society by making government smarter and its decisions more efficient and just[1]. However, as machine learning becomes an increasingly large part of government[2], it is imperative for Boston to remain cognizant of its models’ out-of-sample accuracy and of potential biases in the underlying data.

The City of Boston prides itself on data-driven governance. Over the last four years, Boston has leveraged city-wide data to deliver faster, more efficient services to Boston residents[3]. The Citywide Analytics team collects and analyzes data on city performance in nearly every service area – from trash collection, to potholes filled, to stabbings and homicides[4]. As in any effective machine learning system, this data is then used to find predictable patterns and, in turn, to make sounder policy decisions[5].
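CityScore, for instance, rolls these service metrics up into a single public number. The sketch below is only illustrative – it assumes a simple actual-over-target scoring scheme, and the metric names, values, and targets are hypothetical, not the City’s published figures.

```python
# Minimal sketch of a CityScore-style aggregation: each metric is scored as
# actual performance divided by its target, and the city-wide score is the
# average of those ratios. Metric names, values, and targets are hypothetical.

metrics = {
    # metric: (actual, target) – higher is better for these examples
    "on_time_trash_collection_pct": (94.0, 95.0),
    "potholes_filled_within_48h_pct": (88.0, 80.0),
    "311_calls_answered_in_30s_pct": (72.0, 75.0),
}

def metric_score(actual: float, target: float) -> float:
    """Ratio of actual performance to target; 1.0 means 'on target'."""
    return actual / target

scores = {name: metric_score(a, t) for name, (a, t) in metrics.items()}
cityscore = sum(scores.values()) / len(scores)

for name, s in scores.items():
    print(f"{name}: {s:.2f}")
print(f"Overall score: {cityscore:.2f}")
```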

City of Boston’s CityScore

Boston is addressing the need for data-based decisions through dedicated, increased staffing and intentional research, and has moved in sync with other large metropolitan cities, like Chicago and Los Angeles, to make data a “hallmark of 21st century governance[6].” Over the short term, the Citywide Analytics team aims to build more real-time predictive decision making into areas such as restaurant inspections and crime prevention. Over the longer term, the City aims to combine citizen input[7] with data sources like traffic cameras to predict future car and cyclist crashes. The City will presumably employ an approach similar to that of companies like Tome, mapping at-risk areas through AI and factors like weather, road width, and daylight[8].
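To make the idea concrete, a crash-risk model of this kind could score road segments from a handful of features and flag the riskiest ones on a map. The sketch below is purely illustrative – the synthetic data, feature set (rain, road width, daylight), and choice of logistic regression are my assumptions, not the City’s or Tome’s actual pipeline.

```python
# Hypothetical sketch of scoring road segments for crash risk from features
# like weather, road width, and daylight, using synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Synthetic road-segment features: rain (0/1), road width in meters, daylight (0/1)
X = np.column_stack([
    rng.integers(0, 2, n),          # rain
    rng.uniform(3.0, 15.0, n),      # road width (m)
    rng.integers(0, 2, n),          # daylight
])
# Synthetic crash labels: narrower, rainy, dark segments are riskier
risk = 0.8 * X[:, 0] - 0.15 * X[:, 1] - 0.6 * X[:, 2] + rng.normal(0, 1, n)
y = (risk > np.quantile(risk, 0.8)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score held-out segments; high-probability segments would be flagged on a map
probs = model.predict_proba(X_test)[:, 1]
print("Held-out AUC:", round(roc_auc_score(y_test, probs), 3))
```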

As Boston grapples with machine learning becoming inherent to government processes, it would behoove the City to take a critical eye toward which predictions require human oversight. While predictive modeling appears to have a net-positive benefit in areas like traffic control, one area that may require intentional oversight is crime prediction. To be well suited to a machine-learning approach, a problem must require prediction rather than causal inference, and it must be insulated from outside influences. Crime, however, is multifaceted and often rooted in other systemic problems[9]. Predictive policing – the use of machine learning to predict crime based on common characteristics of individuals and historic data – raises concerns of racially biased targeting. Steven Bellovin warns that machine learning presents the possibility that “individuals with a propensity toward criminality could be identified and punished for crimes that they have not yet committed.”[10] To combat these potential biases, the City should ensure that any risk assessment instrument in use is regularly vetted and monitored by staff. Human oversight can allow Boston to minimize the possibility that the system might introduce bias or inaccuracies due to deficiencies in available data[2].
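What might that vetting look like in practice? One simple check – sketched here under my own assumptions about the data format and the disparity threshold, not a description of any instrument Boston actually uses – is to compare false positive rates across demographic groups and route large gaps to human reviewers.

```python
# Illustrative audit routine for a risk assessment instrument: compare false
# positive rates across groups and flag large gaps for manual review.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_high_risk, actually_reoffended)."""
    fp = defaultdict(int)   # predicted high risk but did not reoffend
    neg = defaultdict(int)  # all who did not reoffend
    for group, predicted, actual in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

def flag_disparity(rates, max_gap=0.1):
    """Flag for human review if group FPRs differ by more than max_gap."""
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, gap

# Toy example with hypothetical groups and outcomes
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, True),
]
rates = false_positive_rates(records)
needs_review, gap = flag_disparity(rates)
print(rates, "gap:", round(gap, 2), "needs human review:", needs_review)
```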

Imagine Boston City Image

 

As the City of Boston takes on machine learning, it is also critical to ensure that collected data is sourced across every geography, socio-economic class, and race. Given that poverty remains geographically concentrated[11], out-of-sample accuracy – the model’s ability to predict outcomes across environments[12] – is a major concern. Remaining vigilant about securing representative data will allow for stronger predictive decision making.
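One way to test for this kind of generalization failure – again a sketch on synthetic data, with hypothetical neighborhood groupings rather than real Boston data – is to hold out entire neighborhoods during cross-validation, so the model is always scored on areas it never saw during training.

```python
# Rough sketch of checking out-of-sample accuracy across environments by
# holding out whole (hypothetical) neighborhoods during cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(1)
n = 1500
X = rng.normal(size=(n, 5))               # synthetic service-request features
neighborhood = rng.integers(0, 10, n)     # 10 hypothetical neighborhoods
# Outcome depends partly on neighborhood, so pooled accuracy can be misleading
y = ((X[:, 0] + 0.3 * neighborhood + rng.normal(0, 1, n)) > 1.5).astype(int)

cv = GroupKFold(n_splits=5)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         groups=neighborhood, cv=cv)
print("Accuracy on held-out neighborhoods per fold:", np.round(scores, 2))
```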

The possibilities for machine learning’s impact within local government are vast and powerful. This analysis raises two questions:

  • To what areas of local government is machine learning potentially detrimental?
  • Can biases within predictive modeling for public policy ever be fully recognized and averted?

Word Count [797]

 

 

[1] Cary Coglianese and David Lehr, “Regulating by Robot: Administrative Decision Making in the Machine-Learning Era” (2017), Faculty Scholarship, http://scholarship.law.upenn.edu/faculty_scholarship/1734, accessed November 2018.

[2] Preparing for the Future of Artificial Intelligence, Report of the Executive Office of the President’s National Science and Technology Council Committee on Technology (Washington, DC: Government Printing Office, 2016), https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf, accessed November 2018.

[3] City of Boston, “CityScore,” https://www.boston.gov/cityscore, accessed November 2018.

[4] Katherine Hillenbrand, “Case Study: Boston’s Citywide Analytics Team,” Data Smart City Solutions, May 15, 2017, https://datasmart.ash.harvard.edu/news/article/case-study-bostons-citywide-analytics-team-1043, accessed November 2018.

[5] Anastassia Fedyk, “How to tell if machine learning can solve your business problem,” Harvard Business Review Digital Articles, November 25, 2016, https://hbr.org/2016/11/how-to-tell-if-machine-learning-can-solve-your-business-problem

[6] Jess Bidgood, “Tracking Boston’s Progress With Just One Number,” New York Times, October 8, 2015, https://www.nytimes.com/2015/10/09/us/getting-the-big-picture-in-boston-number-by-number.html, accessed November 2018.

[7] City of Boston, “Vision Zero Map,” http://app01.cityofboston.gov/VZSafety/, accessed November 2018.

[8] Amit Chowdhry, “How Tome Software is Tackling City Congestion and Safe Mobility,” Forbes, March 29, 2018, https://www.forbes.com/sites/amitchowdhry/2018/03/29/tome-software/#439ebea55979, accessed November 2018.

[9] Julia Angwin and Jeff Larson, “Machine Bias,” ProPublica, 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, accessed November 2018.

[10] Steven M. Bellovin, “When Enough Is Enough: Location Tracking, Mosaic Theory, and Machine Learning” (New York, NY, 2014), pp. 575–576.

[11] American Sociological Review, Vol. 59, No. 3 (June 1994), pp. 425–445.

[12] Mike Yeomans, “What Every Manager Should Know About Machine Learning,” Harvard Business Review Digital Articles, July 07, 2015, http://web.b.ebscohost.com.ezp-prod1.hul.harvard.edu/ehost/pdfviewer/pdfviewer?vid=1&sid=1ca2405f-dc72-4193-b11d-178ec9d0d33a%40sessionmgr101


Student comments on Applying Machine Learning for the Common Good – Is it Always Worthwhile?

  1. Great piece! Really appreciate your thoughtful comments.

I was aware that cities were starting to appoint Chief Data Officers, but I didn’t know the extent to which machine learning is already in action. It’s interesting to think about its limitations in public policy, especially within a system that already suffers severely from issues like racial bias. I echo your concerns about what inputs the algorithm may be trained on. Nevertheless, could it be true that any policy decision is rooted in some type of stance / political perspective / bias, even if it ultimately serves a just purpose? It’s hard to know where to draw the line on how we detect and interpret such biases, or even how we might effectively adjust for biases we already know exist.
