Laura N Montoya on the global cultural lens of AI
There is often a Western cultural lens attributed to modern systems of artificial intelligence. Yet remarkable advancements and innovative solutions have been developed and implemented all over the world — sometimes with limited resources and infrastructure. With advancements in AI poised to accelerate, threatening to further compound historical, economic, and geopolitical systems of inequality and bias, it is critical for underrepresented people, communities, and societies to have not only recognition but also equal access to the resources and tools that will empower them.
In this episode, our hosts Colleen Ammerman and David Homa speak with Laura N Montoya about the inherent correlation between underrepresentation and bias, the need to support communities building tech systems, plus advice for young people looking to get into the tech space. Laura is a scientist and engineer turned serial entrepreneur and startup adviser. She is also the founder and executive director of Accel.AI, a global nonprofit lowering the barriers to entry in engineering artificial intelligence.
Read the transcript, which is lightly edited for clarity.
Colleen Ammerman (Gender Initiative director): So, today, we’re talking with Laura Montoya. She is a scientist and engineer turned serial entrepreneur and startup adviser. She’s also the founder and managing partner of Accel Impact Organizations, which includes the Accel AI Institute, LatinX in AI, and the Research Colab network. Welcome, we’re really excited to talk with you today.
Laura Montoya (founder and managing partner of Accel Impact Organizations): Thank you. I’m really excited to be here.
David Homa (Digital Initiative director): Laura, thanks for joining us.
To start with, you work in an interesting space where you have expertise in the technical aspects of AI and ML, but also work in the social aspects around how people interact with it and how it affects society. I wonder, when was your first exposure to AI, and what was that about?
LM: You know, that’s a funny question. I feel like now that I better understand what AI is… I had exposure from a young age. And, being able to utilize the computer and really being able to even just play with little, like, robots and go to science museums — that’s something I did from [a] very young [age]. But, I didn’t have that conceptualization of what AI was. I didn’t understand its potential power and scalability and the impact that it could have on the world. But, I would say that the idea of a robot, or something that potentially you can interact with that would mimic a person or another entity — because I was really into animals when I was young — I think that it was something I was very drawn to as a child. And, it was something that I felt that I could really connect with — even my toys and the robots in general. So, I would say that as I grew into adulthood, I found that I drew from that experience as a child and my love of these — you could call them inanimate objects. But when you’re a kid, you breathe life into them through your imagination.
And, as time has gone on, (and obviously the technology has gotten better), now you have these robots that actually are very lifelike, right? One of my favorites, actually, sits on my desk. And, I can show him to you. It’s Cozmo. I’m sure you guys are familiar with him, from Anki. And, they have a new one out now, Vector. But this, I think to me personally, was one of the greatest innovations because of the way that they animated his face, right? And, it allows children really to connect with something that otherwise would just be a toy. And, the way that it breeds excitement, and in the way that it brings joy… And so, for me personally, that’s something that I have taken into my career now and my understanding of AI and how I want AI to really live in the world. I want it to be something that people can connect with, [that] they can appreciate, that can help them, that can solve problems. And, I think that that’s something that’s been missing in a lot of ways from the conversation around artificial intelligence so far.
CA: I would love to hear you talk about this notion of “Demystifying AI.” What is this program, what are you doing with it, and why is it important?
LM: Oh, of course. So, the Demystifying AI Symposiums are a program that we started with Accel AI Institute, actually from the beginning, about six years ago. And, the goal was really to help people better understand what AI is and how to apply it in the real world. And, that was a time when all the hype around AI wasn’t really there yet. We didn’t have all of these free courses that are available now through Coursera and Udemy, and even better courses through Udacity and so on. And so, there had been a winter in AI prior to that. We were at a place where people were just trying to figure it out and say, “Okay, well, what is this technology and how can I use it? And would it actually make my company better? Would it make my products better? And how can I basically ride this wave?” And so, for people like me and more of the underrepresented communities that we work with, I really saw this as an opportunity to help people get into the space. To provide education and ensure that, once a technology is applied and marginalized populations have to transition their work, they would also have that opportunity to reskill and move into that new economy.
The goal really of these symposiums is to ensure that people get that exposure. What we did is we actually hosted sessions, which were weekend-long to begin with, and we would invite people from academia, from industry, basically experts in the field, to come in and really provide that exposure, and not necessarily from a theoretical vantage point, but really an applied vantage point. The goal was to get people hands-on experience so they could really use scientific packages in Python, through the Anaconda distribution, and then ensure that they could say, “Okay, if I have one specific project that I want to achieve, how can I take that and actually make it happen using something like TensorFlow or PyTorch, and really utilize computer vision and natural language processing, and then make this a reality?” And so, that was the goal of these symposiums: to ensure that someone could have a very small idea, get exposure to people and a hands-on application, and then take that and turn it into something like a portfolio project. And then hopefully, also, really increase their curiosity and their interest, so they can continue to learn and grow on their own. So, from there, we would give them different resources so they can continue to learn. We have a full GitHub repository with thousands of free resources that are available, especially now in this area of AI and machine learning, which is what we would share with all of our attendees after they would work with us. So, that was really the goal of our Demystifying AI Symposiums.
CA: That’s great. And, it makes me wonder, is there anything that you learned from doing that work about how to do it well? You know, that you can offer to others? I mean, lots of organizations and companies are also adding more and more thinking about these issues. How do we make sure that we do create that access to marginalized communities? And, I think there’s lots of other programs and people who want to deliver on that same mission. So, what are the things you feel like are important to actually do that effectively?
LM: Oh, of course. I think the key, really, is understanding where someone is at before they start a program like that. Because you can’t expect that any program is going to be a one-size-fits-all solution. For us, we realize that a lot of people — especially those that hadn’t had any prior exposure — were really starting from a blank slate. Many of them had no prior experience in even software engineering, understanding any of the languages, or really using anything like even a terminal on their computer. They were starting from scratch. So, they had never downloaded any of the packages or anything like that. We had to really have a lot of patience and ensure that we were coupling people with the right volunteers and the right mentors, so that they would get that time that they needed to really ramp up from zero. And then from there, we would build them up to the next level. So, I think that was the biggest takeaway, is that… especially when you talk about AI today, and you think about, well, how do you actually do AI? Now there have been a lot of advances recently with APIs that could allow you to create something very, very simply in a day, right? Even within a few hours, if you have some technical and prior knowledge and experience.
But, if you’re starting completely from scratch, oftentimes people would really hit their head against the wall on the simplest issue. If something did not download correctly, if they couldn’t understand how to write a proper print statement, people get discouraged. And so, for us, we wanted to be there to encourage them, to make them understand that it is worth it, and that they can’t give up at the first step. And, they need to continue on if they really want to achieve this goal. So, for us, that was — I think for me, at least, my biggest personal takeaway — ensuring that we meet people where they’re at, and that we can encourage them so that they continue to go and really achieve the goals that they’re trying to achieve, whether that be a simple project just to experience it, or if they’re going to move further and actually study this field and go into a grad program, and then become an AI engineer or researcher.
CA: What I hear you saying is that it’s not just about imparting the technical skill or knowledge. I mean, that’s important, but it’s not just the training; it’s actually building people’s capacity in a more holistic way, where they feel empowered and feel like they have the ability to learn and grow with the technology.
LM: Of course, yes, and not just a safe space where they feel comfortable doing the work and feel like they can do the work, but also one where they’re surrounded by other people that look like them, right? That are on a similar journey, that are encouraging them and saying, “Hey, I’m right here with you, and I want to achieve this goal, too, and how can we work together and put our minds together to achieve this goal?” And, I think that that is something also that’s incredibly valuable. And, honestly, that’s something I took away from my time working with Women Who Code, because that was a wonderful organization where I was a director, and I used to run one of their chapters. That time and experience was really key for me when I was first getting into software engineering. Being surrounded by other women who really want to see you achieve and are there for you and show up every week after hours and really want to help you — that is such an amazing and valuable experience. And, really knowing that you can do that with other people and have those networking connections that, once you’re ready to take that next step and apply for a job and really get into the field, are going to be there for you as well. So, ensuring that we recreated that type of environment for our members is really key.
DH: Laura, you mentioned, in particular, focusing on people who are traditionally underrepresented in tech. To you, why is that important both for society, but also for the technology outcomes?
LM: Honestly — and this has been said many, many times — if the technology is not reflective of the larger population, it’s going to have bias, right? And, not only that, it’s not going to be something that is achievable and something that is reflective of the broader community. And, if we’re going to develop products, if we’re going to have things that people can connect with — can engage with — you want them to be able to see themselves, right?
If you have, for example, a smartphone, and all your friends can use the facial recognition technology to unlock their phones, but you can’t, then, obviously, that technology wasn’t made for you, and how does that really make you feel inside? This is something that should be available to everyone at this point. Things like smartphones, handwashing sinks in the bathroom… these are utilities, right? These are purposeful items. And so, if they’re not developed in a way that actually works for the broader society, the broader community, then you’re hurting people, you’re causing harm, and you’re making them feel like they’re less than. It’s not really fair to have a world that isn’t created for people like you, especially when you are representative of a very large part of the population. There is absolutely no excuse and no reason why more thought shouldn’t be put into creating these products so that they will scale and work for everyone, with everyone in mind. And also, putting in the time and the forethought to ensure that people from all different backgrounds [are represented], not just in the US. We have a very Western cultural lens when it comes to technology, even though a lot of our products are actually developed overseas. And, how unfair is that? I think it’s really adding to the issues around how the US tends to capitalize on other economies and other societies, and that we are developing these products and utilizing people from other places to do the work for something that isn’t even going to be used by them. I think that that is something that we really need to change in this world, and it’s unfortunate.
DH: And, it’s important also, like you said, for people interested in this space, or who might be interested in the space, to see people like themselves doing this work. How about for you? Have you faced issues personally with not enough people looking like you? And, how have you overcome that?
LM: Yes, of course. For me personally, being a Latina, being a woman in this field, when I was first getting started, I didn’t see a lot of people, especially in the space. When you think about artificial intelligence, you see a lot of white men with PhDs. They’ve gone to Stanford, they’ve gone to Harvard and MIT, and they’re the ones that are publishing the research, and they’re the ones that are taking on the position of CTO or AI Director within an organization. And, it is hard to look at that. For someone like me who wants to be… wants to embody that role, who wants to be that person, (and was the first person to graduate from university in my family), I think it’s invaluable to have role models that look like you and that you can see yourself in. And so, that’s a big reason why I created the LatinX in AI organization — because I wanted to provide that for Latinx people, not only within the US but also in South America. And, I think I, personally, occupy an interesting space, being a Latinx American, because I can see all sides of that culture and that ethnicity. And, oftentimes it can be hard as well, right? Because, when other people come from another country and they see you as an American, they don’t necessarily think that you have the same life experience as they do. And, they’re absolutely right. My life experience is completely different than their life experience. But that’s also why I want to provide that for them as well.
And, that’s also a huge driver of why I started the organization: to ensure that people that come from anywhere — whether that be a Latin American, whether that be someone who is born in Mexico or Uruguay or Brazil or Colombia — can see someone achieving in this space in the US. And not only in the US, but in other developed countries around the world as well, so they know they can compete in the global market. And so, that’s part of why we host these large AI and machine learning conferences — like the Neural Information Processing Systems Conference and the International Conference on Machine Learning, which are often hosted in places like Montreal and Toronto, and in different parts of Asia and Germany and so on — because we want to ensure that those people have the opportunity as well to get out of their comfort zone, get out of their personal space and the bubbles in which they’re currently working, and gain exposure, and also have representation in those spaces. Because if you — and it’s obviously not just the Latinx population; it’s the Black population, and being a woman in general in those spaces — if you go into that space, and all you see are people that don’t look like you, it’s not very welcoming, and it’s not encouraging. And so, just like that experience we had with Women Who Code, right? The way that space made me feel safe and welcome — again, that is what I want to provide for others. So, that was a driver for me in creating this part of our org, and it’s something that is incredibly valuable to me as well.
DH: I was wondering what important and interesting things are going on in South America that maybe people haven’t heard about?
LM: Oh, wow. There is so much, honestly, that’s happening [laughter] — it’s incredible. And it’s funny, because you don’t hear about it as much, obviously, from here. But, what I would say is really amazing are the startups that are now popping up around AI and machine learning as well. One of them that comes to mind is Rappi, which is similar to what you probably think of as Uber here, right? And, it’s a company that basically delivers anything that you need through a phone app. They have courier services that come in, and basically, they use AI and machine learning technology to connect all the couriers to the people that are trying to order these products. And, they also want to ensure that they do it in a way that hasn’t been done before here. One of the reasons why it’s pretty unique is because people don’t drive as many cars in that area. This is in Medellín, in Bogotá, in Colombia. And so, the delivery often happens by bicycle, it often happens by motorcycle. But still, they’re able to track these individuals and make sure that products are delivered at a really rapid pace. For me, I thought that having that company get so big so fast, and then expand to many other countries within South America, was incredibly amazing. It’s something that I think [has] actually created great value, because different remote areas of these countries are now able to receive products and goods that they haven’t been able to receive before. And so, that’s something that we would traditionally think of as very common now, especially in the US. But, it’s not something that they’ve had for very long. So, it’s amazing. I think that that was something that was really exciting to see from a more product standpoint.
On top of that, if you think about the medical industry in places like Cuba, for example — because I have a lot of friends that actually live in or [were] raised in Cuba — they are quite advanced from a medical perspective. Most of the funding that has gone into technical development within Cuba has been applied to the medical industry. And so, they have now been able to use AI and machine learning and apply it to different areas of research, from detecting different kinds of cancers to solving treatment problems that they wouldn’t necessarily have been able to solve before. But they are doing it in a way that uses what we would consider limited technology. They do basically what’s called AI on the edge, because a lot of the processing is just handled through your smartphone, right? And then it sends the imaging data through the cloud. They don’t have the resources to necessarily purchase these large X-ray machines, and they wouldn’t necessarily have the servers available to them to actually send all of this data and do the processing themselves, through large servers that we would normally host through Amazon or through Google Cloud services here in the US. So, it is amazing to see that type of advancement, to see people and professionals solve these problems using the same ideas, the same theories, the same types of technologies, but applying them in their community even with limited resources. And that is really something that we should be proud of and that we should speak more about.
CA: So, part of it is about supporting that work, like you’re saying, elevating that work, and enabling more people to do it, both in places like the US and elsewhere. But, it’s also about getting people in places like the US or Europe or Canada to pay attention to what’s going on in those places. So, like I said, it may be a tough question to answer, but [I’m] just curious for your thoughts about how do we do that. How do we get the people who are in these positions of power and influence in places like the US to really pay attention and be open to learning from what’s happening in Cuba or Colombia?
LM: Yeah, of course. So, a lot of that is the work that we’re doing as an organization to provide that representation and get others to pay attention. Part of it is ensuring that these people are in the same place and time. That they have the opportunity to speak in a way that allows for collaboration and exchange of ideas and, really, that opens their minds up to the fact that there is a broader world out there, right? And, that people are doing this work and driving innovation in places that you wouldn’t necessarily consider otherwise if you didn’t have exposure to it. So, for us, what comes first and what’s key is putting people in the right place to really drive those conversations. Beyond that, we really need to ensure that the researchers from the US and from Canada are paying attention. And, that they understand really what’s missing from these different places, and what they can do to make a difference.
And so, for us, after we surveyed many of our members within Central and South America, we found that one of the key areas of need that was lacking was mentorship. What traditionally happens — because these countries do not have the same types of resources and their governments do not invest as much within research and development within their countries — [is] people end up leaving. Once they get a master’s degree, once they get a PhD, the goal is to get out of the country and find a place that will actually pay them more, so that they can have a better lab, they can provide more for their family, and they can get that exposure. And, on the one hand, that’s [an] amazing opportunity for that individual, and it’s great that they were able to make it that far. But also, then, that still continues to create this vacuum of this lack of mentorship within the country itself, and basically, this brain drain that happens in that area, where all of these very highly educated individuals that are doing this wonderful work are now moving on to other places.
And so, what we have done, as well, is create this mentorship program that we offer, where we connect people from all over the world, and specifically people working at very large, very advanced companies. Our mentors include people from Google, from Facebook, from LinkedIn, from Apple, and so on. And, we connect them directly with researchers in South America and Central America so that they can talk about their work, and so that they can gain exposure to people working within the industry. But, when you’re really considering maybe the people that are not reachable — those that wouldn’t necessarily take the time to mentor or aren’t already listening and aren’t open to the conversations — I think the key there is just going to be more time and more exposure. So, the more we bring in people from different countries who show that they can do this work and have the skills and want to be in this space, the more they move up in the community, right? Others are going to have exposure to them, and they’re going to see, okay, well, actually, people like this are doing the work. And it’s the same thing, I think, for any minority group.
DH: During this pandemic, a number of companies have obviously experimented heavily with remote work, and also this idea that some people maybe feel they don’t need to be out near San Francisco, they don’t need to be in the Valley anymore, they don’t need to be exactly at the home office. And, there’s been a bunch of talk about smaller cities in the US benefiting from people either going back home or leaving. And, I wonder, can you envision that extending to beyond the US and maybe into South America and other places? How might that happen, and how might that change things?
LM: I think that, even in the US, if you go to a more remote state, or a rural environment out in the country, maybe the internet isn’t as strong, right? This happens in our country also. You know, someone on a farm in Iowa potentially wouldn’t have the opportunity to do the most advanced machine learning research if they’re sitting in that family home. But, potentially they could upgrade that home, right, to get to the point where they could do their work. I think that that is a little bit more feasible here. In South America, there are very, very remote areas that wouldn’t even have that option, right? You can’t just call up your utility company and say, “Hey, I want to install this gig fiber internet [laughter] to ensure that I can do my job.” So, I think that it’s a little trickier in those cases, and the support infrastructure has to be there first.
There are some grants coming out now, actually, to help provide more stable internet for people within more remote locations in different parts of the world, and I think that that’s something that is very valuable. There are a few organizations that are trying to help solve that problem as well, and they will also bring things like laptops and extra computers with them, so that other people in those environments can start gaining access to the internet and to technology in a way that they haven’t had previously. So, when I envision it happening, I think that it’s not something an individual can just take responsibility for; it has to be more of a community-wide endeavor. The group and the community have to come together and say, well, this is something that we want, and this is something that we’re going to work hard to achieve. They have to then find the proper resources and reach out to the right organizations to come and help them. So, I think that that’s going to be a larger barrier for people within different parts of South and Central America who are lacking that access, for sure.
DH: Laura, is there anything in particular you want to cover or speak to or share with people?
LM: Right now, there’s a lot happening in the AI and ethics space. And, for me personally, I think that it is invaluable that we continue to support the people that are doing this work, and not just the people that are living in other countries, but also the people that are within our community, right? We have very well-known researchers and engineers that are showing up to work every day, and they’re still experiencing bias. And, they’re still facing marginalization within that community. It doesn’t matter that they have a PhD or they’re well published or they’re considered a research scientist. For us, it is essential that people who come from different backgrounds are represented within artificial intelligence and machine learning, because this technology has an effect on everyone’s lives, right? And, there’s almost nothing in the world today that you can touch that isn’t going to have AI embedded in it somehow going forward into the future. Someone said [something] recently that I really appreciated: that “AI is the next internet.” The internet is basically all around us, and it’s something that you can’t live without at this point. Well, that is going to be AI within the next few years. And, it almost is now, right? Between our cell phones, between our smart toothbrushes, right? [laughter] Like anything, really, that you touch. And so, if that technology is not representative of the broader population of the world, you’re doing people a disservice, you’re really causing harm. And, that’s not okay. People need to take a step back and really think about: is this the world that I want to live in in the future? Is the impact that I want to have on society really to create something that causes harm? And, I don’t think people really want that.
I think deep down, most people have just been inconsiderate, and they haven’t really been taking into account the real potential consequences of their actions. So, that’s what I really want to leave the conversation with: a call to action for people to take some time, reflect, and really ensure that the work they’re doing helps and doesn’t harm others.
CA: Besides your own work (we’ll definitely be sharing some of these great programs that you’ve talked about), what are some other resources you would point people to, whether it’s books or other organizations or people to learn from?
LM: When I was first starting out, honestly, we really just focused on the Deep Learning book. And, this is something that is available for free online. What I really appreciate about this book is that it basically takes you from the intro-level mathematics and then allows you to build up from there, because you have to understand that the basis for all of artificial intelligence and machine learning is math, right? So, for people that appreciate that and want to go further into theory and want to publish research and investigate that space, that would be an amazing resource for them to start with. But now, if you’re math-shy — because I know a lot of people are math-shy, and they don’t necessarily want to jump into linear algebra or calculus or differential equations or anything right from the start — I would say, honestly, one of the best resources is fast.ai. I think that Rachel [Thomas] has done an amazing job putting that course together online, and their videos and resources help people get that hands-on experience. And again, just like our Demystifying AI Symposiums, [they] really help you build projects from the ground up and just get your hands dirty without thinking necessarily about the theory or thinking about the math. And, I think that that is really great.
Other than that, I would say pick a project. Like, if you’re going to do anything, you have to have a goal in mind. And then, from there, you have to break it down into small steps and work to achieve that goal. Otherwise, you’re going to get overwhelmed, or you’re going to lose interest or lose confidence in why you’re even studying this. Because the world of machine learning is so incredibly vast, and it’s changing rapidly, every single day, right? If you think about the number of papers that are being published and the different models that are coming out now, it’s like every day there’s something new. And so, if you try to just keep up with that and say, well, how am I ever going to achieve this ideal view of what AI is, or the pace in which the industry is moving, then you may get discouraged. But, I would say don’t do that. Just focus on what you want to achieve, right? What do you want to solve with artificial intelligence? How can it help you? How can it help your community? I think that that is really the key in this case. And, there is no other way to approach this problem of getting into AI, other than really thinking about your core values.
DH: Laura, thanks for joining us today.
LM: Thank you very much for having me. It’s been my pleasure.
CA: This has been a really fascinating and inspiring conversation. Thank you so much for talking with us.
DH: That’s a wrap on the interview, but the conversation continues.
CA: And, we want to hear from you. Send your questions, ideas, and comments to email@example.com.