December 16, 2019

Some assembly required: building an interdisciplinary superteam to tackle AI ethics


TL;DR

  • A diverse cohort of experts makes up Assembly 2019, with a singular mission — tackling AI bias and collaborating on meaningful solutions.


What do a communications studies professor, a politics PhD, a technology policy advisor, and a machine learning engineer have in common? They share deep expertise in the ethics and governance of artificial intelligence — and they’re members of the 2019 Assembly program. Hosted by the Berkman Klein Center for Internet & Society and the MIT Media Lab, Assembly brings together a small cohort of technologists, managers, policymakers, and other professionals to confront emerging problems related to the ethics and governance of AI.


AI technologies are increasingly embedded in our lives at home and work — powering our virtual assistants, moderating content on social networking platforms, and helping companies hire new employees. Yet, as AI technologies become more ubiquitous, applying them can raise serious ethical concerns. AI systems are trained on data from the past to make decisions or predictions about the future, which poses real risks: societal biases embedded in that data get baked into new technical systems. Biased algorithmic outputs are opaque; sometimes even a system’s programmers aren’t sure how a prediction was made. In a world plagued by systemic bias, how do we create AI systems that reduce inequality rather than perpetuate it? What frameworks can companies use to determine whether an application of a machine learning system is unethical? How do we bring communities impacted by AI systems into conversations about AI design and use?

“Biased algorithmic outputs are opaque; sometimes even a system’s programmers aren’t sure how a prediction was made.”

These are some of the questions being tackled in the 2019 Assembly program. During the fourteen-week program, the 17-member cohort identifies problems related to AI and ethics, and then proposes and develops interventions. For the first two weeks, the cohort engages in intensive team building, learning, and ideation sessions in Cambridge. Then, the cohort works in small teams to build their ideas. Last year, the cohort developed six projects, including “EqualAIs,” a privacy tool that circumvents facial recognition systems using adversarial attacks, and the “Data Nutrition Project,” which created a diagnostic label for datasets to drive higher data standards.

The cohort’s learning and projects are enriched by the group’s backgrounds and interdisciplinary expertise — in topics like communications, ethics, machine learning, media theory, technology policy, and project management. Participants work across sectors, in industry, academia, civil society, and government. This gives the cohort the ability to see problems and potential interventions from different perspectives, allowing them to innovate in areas that might otherwise have been out of reach. As Kasia Chmielinski, an Assembly 2018 alum who worked on the Data Nutrition Project, noted: “We’re thinking about art and media, learning, product management, and engineering. And that’s reflected in the outputs of our project: a prototype, but also a paper, and now we’re also speaking regularly across domains. I’m really glad for the opportunity to have these conversations across the industry.”

Project teams are advised by scholars and practitioners from the Berkman Klein and Media Lab communities. Academia has a long history of studying the social impact of technology and the ethical ramifications of technical and business decisions. Harvard and MIT are home to world class scholars working on AI, fairness, ethics, and a slew of other topics relevant to Assembly. The cohort benefits from this wealth of knowledge.

“The technology we create now — and the decisions we make about how to use it — will shape the future.”

Yet, over time, the locus of new technical developments in AI has shifted from academia to industry. We tend to see new developments and applications coming from companies like Google, Amazon, Microsoft, and IBM. But, the challenges and opportunities related to the application of AI-based technologies affect people and organizations across society. Unfortunately, it’s still fairly rare to bring academic and corporate centers of expertise together through cross-sector collaboration. That’s where Assembly comes in. We hope Assembly spurs meaningful learning and collaboration across sectors, and encourages other institutions to host similar programs. The technology we create now — and the decisions we make about how to use it — will shape the future. We urgently need more creative, innovative spaces to inform those decisions.

Assembly 2019 began in mid-March. This year, the cohort is tackling questions around bias in natural language processing, the presence and potential of surveillance systems and algorithmic warfare, inclusive data collection and models, and the illusion of objectivity in the classification of datasets.
