Artificial Intelligence and Ethics

We often bring up the topic of ethics when evaluating strategies for people analytics. My post aims to push the discussion further by raising tough questions we as a society must confront as we adopt people analytics to make better decisions: How much are we willing to sacrifice? And is it worth it?

Since the turn of the millennium, there has been an overwhelming and growing interest in all things internet-related, from file sharing to big data collection and, now, the use of that data to create artificial intelligence. Perhaps it is time to pause and discuss in detail the implications of this movement (if it is indeed a movement and not a long fad), and to search our souls to answer whether we are comfortable with the consequences, both positive and negative, of artificial intelligence. I often wonder: What are the costs? Do they outweigh the benefits? In other words, does the end justify the means?

Artificial intelligence (AI) refers to computer systems capable of taking cues from their environment in order to perform tasks that would otherwise require human intelligence. AI has evolved from language translation into more complex tasks such as decision-making, and these applications have been channeled into people analytics, where companies now leverage models and algorithms to make hiring, retention, and firing decisions.

The article I am responding to[1], like many others, highlights the positive effects of AI (such as cost reduction and increased decision accuracy) and the negative ones (e.g., the death of a pedestrian during testing of an autonomous car). However, I am left struggling to answer the question: How much are we as a society willing to sacrifice in order to develop a working model?

AI algorithms take their inputs from big data collected from our imperfect world, the very world we are trying to make more perfect through those same algorithms. For example, in an effort to reduce recruitment costs and hire the best, Amazon created a hiring algorithm to spot top talent in a pool of resumes. The algorithm turned out to amplify a familiar problem: discrimination against female software engineers[2]. I understand that algorithms have to be trained to increase accuracy, but if their inputs or coding are already biased, what hope is there?
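To make the mechanism concrete, here is a minimal, hypothetical sketch (this is not Amazon's actual system; the data, the "proxy" feature, and the model are all invented for illustration) of how a classifier trained on biased historical hiring decisions simply learns to reproduce that bias:

```python
# Hypothetical illustration only: toy data and a simple classifier showing how
# a model trained on biased historical outcomes reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two features per candidate: years of experience, and a proxy for gender
# (e.g., "attended a women's college") that says nothing about ability.
experience = rng.normal(5, 2, n)
proxy = rng.integers(0, 2, n)  # 1 = proxy feature present

# Fabricated historical labels: past hiring rewarded experience but also
# penalized the proxy feature, i.e., the labels themselves encode bias.
logit = (experience - 5) - 1.5 * proxy
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

# A model trained on those labels "accurately" learns the discrimination:
# the proxy feature receives a large negative weight.
model = LogisticRegression().fit(np.column_stack([experience, proxy]), hired)
print(dict(zip(["experience", "proxy"], model.coef_[0])))
```

Nothing in this pipeline is malicious; the model is faithfully fitting the data it was given, and that is exactly the problem.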

Building an algorithm is no easy task. Organizations invest enormous sums in the hope of using this proprietary software as a competitive advantage, and IP rights have helped AI take on its reputation as a black box. No protocols or processes exist for independent parties to investigate whether the code and the inputs are free of bias. Big corporations claim to have internal teams double-checking these algorithms, but this setup raises questions about conflicts of interest.

Even if we assume good intent and no bias in the code or algorithms, who takes responsibility when bad actors use the technology for evil? Who will police this new set of risks and ensure compliance even after rules are set? Corporations (which have huge lobbying budgets) are investing heavily in people analytics, but there is no indication that regulators are making similar investments to acquire the skills needed to set proper regulations, ensure compliance, and investigate reports. No one seems to be responsible for stopping nefarious actors.

I believe we all need to pause and answer: How much are we willing to sacrifice just to get a working algorithm? As a society, are we willing to forgo the progress we have made on gender equality for the sake of an accurate model? Where should we draw the line?


[1] https://www.harvardmagazine.com/2019/01/artificial-intelligence-limitations

[2] https://www.bbc.com/news/technology-45809919


Student comments on Artificial Intelligence and Ethics

  1. It’s funny how the world works. In most cases, innovation leads while regulation follows, and perhaps rightly so. So far, it seems as though big data mining is growing at such high rates that regulators are struggling to keep up. In the places where laws are in place, the focus seems to be mainly on privacy. However, there’s this whole other issue of how the data is used. For example, while it is legally reasonable for my employer to be able to scrape data from my work emails, why should they be able to use it to detect that I’m considering other options outside the firm? If the culture is vindictive, that’s the quickest way for me to get a target on my back. That said, given the plethora of ways in which the data can be used, it’s unclear that regulations will ever be able to catch up.

  2. Interesting article. The innovation that has come with big data and machine learning algorithms has been amazing, but I am often concerned that we are inviting significant long-term risk in exchange for short-term profits. I’m reminded of a quote from Jurassic Park: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should”. These topics are complex – both in terms of technology and ethics. I think we need advocacy groups that are thinking about these issues 100% of the time, as well as support from governments to protect consumers and put guard rails around what types of data can be collected, who owns it and what it can be used for. While I think regulators have fallen behind on this in the past, there’s no time like the present to make this a priority.

  3. This is a great topic, thanks for sharing. For me, this brings up the question: what role will governments and society have to play in regulating the use of AI, especially in relation to data privacy or, more broadly, its application?

    For example, should we ban the use of AI in law enforcement? One could argue that AI could assist police departments across the country in policing areas predicted to have higher crime levels than others. Or in warfare… should we have AI-guided weapons such as drones?

    Certainly a lot of food for thought about the guardrails we as a society need to discuss and eventually place on such a powerful and potentially revolutionary technology.
