Government systems often remain mired in bureaucracy and stagnation, lagging behind the private sector’s pace of innovation. The potential for machine learning within local government is therefore enormous, and it should be approached with measured optimism about the benefits such technology can offer society by making government smarter and its decisions more efficient and just. However, as machine learning becomes a larger part of government, it is imperative that Boston remain cognizant of its models’ out-of-sample accuracy and potential biases.
The City of Boston prides itself on data-driven governance. Over the last four years, Boston has leveraged city-wide data to deliver faster, more efficient services to residents. The Citywide Analytics team collects and analyzes data on city performance in nearly every service area, from trash collection to potholes filled to stabbings and homicides. As in any effective machine learning system, this data is then used to identify predictive patterns and, in turn, to make sounder policy decisions.
Boston is addressing the need for data-based decisions through dedicated, increased staffing and intentional research, and has moved in step with other large metropolitan cities, such as Chicago and Los Angeles, to make data a “hallmark of 21st-century governance.” In the short term, the Citywide Analytics team aims to build more instantaneous predictive decision making into areas such as restaurant inspections and crime prevention. In the longer term, the City aims to combine citizen input with sources like traffic camera data to predict future car and cyclist crashes. The City will presumably take an approach similar to that of companies like Tome, mapping at-risk areas through AI and factors such as weather, road width, and daylight.
As Boston grapples with machine learning becoming inherent to government processes, it would behoove the City to take a critical eye toward which predictions require human oversight. While predictive modeling appears to have a net-positive benefit in areas like traffic control, one area that demands intentional oversight is crime prediction. For a problem to be well suited to a machine-learning approach, it must call for prediction rather than causal inference, and it must be insulated from outside influences. Crime, however, is multifaceted and often rooted in other systemic problems. Predictive policing, the use of machine learning to forecast crime from common characteristics of individuals and historical data, raises concerns of racially biased targeting. Steven Bellovin warns that machine learning presents the possibility that “individuals with a propensity toward criminality could be identified and punished for crimes that they have not yet committed.” To combat these potential biases, the City should ensure that any risk assessment instrument in use is regularly vetted and monitored by staff. Human oversight can help Boston minimize the chance that the system introduces bias or inaccuracies due to deficiencies in the available data.
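To make the notion of vetting a risk instrument concrete, one simple audit is to compare the tool’s false-positive rate across demographic groups, the kind of disparity ProPublica documented in its “Machine Bias” investigation. The sketch below uses entirely invented records and field names; it is an illustration of the audit idea, not a description of any tool Boston actually uses.

```python
# Hypothetical audit sketch: compare false-positive rates of a risk
# tool across two groups. All records below are invented, illustrative data.

def false_positive_rate(records):
    """Share of people who did NOT reoffend but were labeled high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Toy records: the tool's label ("high_risk") versus the observed outcome.
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": True},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": True},
]

for g in ("A", "B"):
    subset = [r for r in records if r["group"] == g]
    print(g, false_positive_rate(subset))
```

A staff review process could run a check like this on each vetting cycle; a widening gap between groups would be exactly the kind of data-driven deficiency human oversight is meant to catch.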
As the City of Boston takes on machine learning, it is also critical to ensure that collected data is sourced across every geography, socio-economic class, and race. Because poverty remains geographically concentrated, out-of-sample accuracy, the model’s ability to predict outcomes across environments, is a major concern. Remaining vigilant about securing representative data will allow for stronger predictive decision making.
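One practical way to stay vigilant is to report a model’s accuracy separately for each geography rather than only in aggregate, since a single city-wide number can mask poor performance in under-sampled neighborhoods. The sketch below uses invented neighborhood names and outcomes purely for illustration.

```python
# Hypothetical sketch: per-neighborhood accuracy. Aggregate accuracy can
# hide weak performance in areas the training data under-represents.
# All predictions and outcomes below are invented.
from collections import defaultdict

predictions = [
    # (neighborhood, predicted outcome, actual outcome)
    ("Neighborhood A", 1, 1),
    ("Neighborhood A", 0, 0),
    ("Neighborhood A", 1, 1),
    ("Neighborhood B", 1, 0),
    ("Neighborhood B", 0, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for hood, pred, actual in predictions:
    total[hood] += 1
    correct[hood] += int(pred == actual)

accuracy = {hood: correct[hood] / total[hood] for hood in total}
print(accuracy)
```

In this toy example the aggregate accuracy is 60 percent, but the breakdown shows the model is perfect in one neighborhood and wrong every time in the other, precisely the disparity that representative data collection is meant to prevent.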
The possibilities for machine learning’s impact within local government are vast and powerful. This research raises the questions:
- To what areas of local government is machine learning potentially detrimental?
- Can biases within predictive modeling for public policy ever be fully recognized and averted?
 Cary Coglianese and David Lehr, “Regulating by Robot: Administrative Decision Making in the Machine-Learning Era,” Faculty Scholarship (2017), http://scholarship.law.upenn.edu/faculty_scholarship/1734, accessed November 2018.
 Preparing for the Future of Artificial Intelligence, Report of the Executive Office of the President’s National Science and Technology Council Committee on Technology (Washington, DC: Government Printing Office, 2016), https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf, accessed November 2018.
 Katherine Hillenbrand, “Case Study: Boston’s Citywide Analytics Team,” Data Smart City Solutions, May 15, 2017, https://datasmart.ash.harvard.edu/news/article/case-study-bostons-citywide-analytics-team-1043, accessed November 2018.
 Anastassia Fedyk, “How to tell if machine learning can solve your business problem,” Harvard Business Review Digital Articles, November 25, 2016, https://hbr.org/2016/11/how-to-tell-if-machine-learning-can-solve-your-business-problem
 Jess Bidgood, “Tracking Boston’s Progress With Just One Number,” New York Times, October 8, 2015, https://www.nytimes.com/2015/10/09/us/getting-the-big-picture-in-boston-number-by-number.html, accessed November 2018.
 Amit Chowdhry, “How Tome Software is Tackling City Congestion and Safe Mobility,” Forbes, March 29, 2018, https://www.forbes.com/sites/amitchowdhry/2018/03/29/tome-software/#439ebea55979, accessed November 2018.
 Julia Angwin and Jeff Larson, “Machine Bias,” ProPublica, 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, accessed November 2018.
 Steven M. Bellovin, “When Enough Is Enough: Location Tracking, Mosaic Theory, and Machine Learning” (New York, NY, 2014), pp. 575–76.
 American Sociological Review, Vol. 59, No. 3 (June 1994), pp. 425–445.
 Mike Yeomans, “What Every Manager Should Know About Machine Learning,” Harvard Business Review Digital Articles, July 07, 2015, http://web.b.ebscohost.com.ezp-prod1.hul.harvard.edu/ehost/pdfviewer/pdfviewer?vid=1&sid=1ca2405f-dc72-4193-b11d-178ec9d0d33a%40sessionmgr101