Why inclusion matters for the future of AI
This article originally appeared on the Berkman Klein Center’s AI & Ethics blog.
The development and deployment of artificial intelligence (AI) technologies hold tremendous promise for much of the world, including the Global South. For example, in areas where effective medicine is too expensive or inaccessible, these technologies may lower costs and improve access while producing better outcomes. In areas where government services are unequally distributed, these systems can help maximize the value of limited government resources and provide critical services to those most in need. In areas of political and economic strife, AI-based technologies can serve as early-warning systems, alerting governments, NGOs, international organizations, and multistakeholder organizations to impending humanitarian and human rights crises.
Even the most optimistic accounts of what novel emerging technologies can do to improve the quality of our lives have to consider and address a set of important barriers (to say nothing of more fundamental questions about technology’s role in society). For instance, there are always gaps between the promise of technologies and their implementation within complex, highly contextual real-world applications. And despite the opportunities these technologies may offer, there is a real risk that — without thoughtful intervention — they may in fact exacerbate structural, economic, social, and political imbalances, and further reinforce inequalities based on demographic variables (including ethnicity, race, gender and sexual identity, religion, national origin, location, age, and educational and/or socioeconomic status).
For example, in the field of education, AI-based educational technologies such as digital tutors, curriculum plans, and intelligent virtual reality may enhance educational outcomes and provide engaging interactive learning experiences for young people. At the same time, the complex interplay between the data sets and algorithms that power these AI-based technologies often leads to pressing questions around discrimination, transparency and accountability, and the privacy and safety of those who use these rapidly emerging technologies.
Issues of exclusion and bias within AI are not new, but recent examples illustrate increasing attention to the many challenges we face. To name only a few: researchers continue to discover how facial recognition systems can reinforce structural biases by failing to read skin type and gender accurately; scholars are documenting the ways in which data discrimination may further oppress marginalized and underrepresented groups; and automated systems have been shown to reinforce inequality and bias in unintentional or less visible ways.
Although AI technologies can have global impacts, their development has often been siloed, both geographically and sectorally, with a small number of companies driving forward these technologies with little insight from different industries, disciplines, social classes, cultures, and countries. As a result, there is also a widening gap between those who have access to data collected about users, information about AI technologies, and the ability to understand their impact, and those who do not. Some global institutions are beginning to examine how AI can impact and contribute to the social good, but there is much work to be done. This emerging “AI Divide” — if allowed to continue — could jeopardize equal treatment of people within and among nations. This asymmetry is a critical issue that must be addressed locally and globally.
Over the past year, the Berkman Klein Center for Internet & Society, with our partners around the world within the Global Network of Internet & Society Centers (NoC) as well as the Digitally Connected network, has been exploring the intersection of AI and inclusion in an attempt to broaden the dialogue and engage many of the stakeholders, particularly those in Global South countries, who may be most impacted by the changes AI systems will bring to our everyday lives. Our role has been to support a conversation driven by our collaborators in Global South regions, including the NoC and its main coordinator, the Institute for Technology and Society Rio, which hosted a major symposium in November 2017 with experts and researchers hailing from Asia, Africa, Latin America, the US, and Europe, as well as a smaller regional symposium in January 2018 in Costa Rica. International solutions should encourage meaningful interdisciplinary sharing of information at the international, national, and local levels, and ensure that the benefits of AI technologies remain accessible to all. Our work is supported by and informs the Ethics and Governance of Artificial Intelligence Initiative, a joint effort of the Berkman Klein Center at Harvard University and the Media Lab at MIT.
As one contribution to the conversation on AI and inclusion, we have created a website that centralizes a broad range of resources based on the research and lessons of our work this past year. The materials found on this site include reading lists, research questions, salient opportunities and challenges, and a multitude of voices of experts, practitioners, and leaders from around the world sharing insights.
We also would like to acknowledge the amazing global team that has contributed to this effort and continues to drive it forward:
- At the Berkman Klein Center: Urs Gasser (PI), Jenna Sherman, Levin Kim, Ryan Budish, Elena Goldstein, Andres Lombana-Bermudez, Alex Shaw, Alexa Hasse
- In LatAm: Lionel Brossi, Carlos Affonso Souza, Fabro Steibel, Celina Beatriz and the entire ITS Team
- Collaborators across the globe associated with the NoC and who contributed to the Global Symposium on AI and Inclusion
Amar is a senior researcher at the Berkman Klein Center for Internet & Society. His current research areas include issues of networked policymaking, improving information for decision makers in the public and private sectors, and harmful and hate speech online.
Sandra is a Fellow at the Berkman Center and the Director of Youth and Media. She is responsible for coordinating Youth and Media’s policy, research, and educational initiatives, and is leading the collaboration between the Berkman Center and UNICEF.