In his article, “How to Ethically Secure People Analytics,” Andy Hames does a satisfying job of presenting both the advantages and the potential pitfalls of applying people analytics within a company, but he fails to fully capture the role of ethics in the field.
Hames begins by highlighting that it is crucial for a company to understand which of its employees are thriving, which might be struggling, and which might be ready for the next step in their careers. The better you understand the people who make up your team, the better you can support them and promote a healthy, engaged, and productive working environment. With better information, you can also leverage and retain a diverse range of talent. I believe most people can get on board with this application of people analytics. People helping people!
At the same time, as with any form of data collection, there are pitfalls to be aware of. Hames provides a cogent example from The Daily Telegraph, which in 2016 tested desk sensors to monitor whether its office space was being used effectively. While the company's intentions may have been innocent, it should surprise no one that the decision was met with backlash from employees who feared they were being surveilled by their employer. As Hames points out, increased data collection (especially in the form of surveillance) can leave employees feeling coerced into behaving a certain way, which can limit creativity, collaboration, and overall team productivity.
While I support how Hames has laid out the field of people analytics, I take issue with his concluding arguments about the ethics of decision making and what he coins “pragmatic people analytics” (or ethical people analytics).
First, Hames claims that algorithms can never replace human intuition when it comes to making the right, moral decision. But the assumption that humans reliably make the right, moral decision is a far cry from the truth. There is a well-known body of research showing that judges hand down more lenient sentences after they have eaten. One study, published in the Proceedings of the National Academy of Sciences, evaluated over 1,100 judicial rulings and found a statistically significant increase in the likelihood of a favorable ruling after a food break, even when controlling for variation among judges with a fixed-effects model. If judges, who are typically viewed as exemplars of impartiality and rationality, cannot keep extraneous factors from influencing their decisions, how can we begin to imagine that others, whose roles are not defined by impartiality and rationality, can be trusted to make the right, moral decision?
It is important to note that I do not believe algorithms should replace human decision making either. In many instances, the data used to train an algorithm is inherently biased, and any algorithm trained on biased data will produce results that further perpetuate those biases. For example, suppose we wanted to identify which attributes of a person charged with a crime best predict their risk of recidivism. Our training data would be limited to those who were not detained after their crime, since there is no way to measure recidivism for someone who remains in detention. But as the example above shows, who gets detained is itself subject to bias: someone might have received a less favorable pre-lunch decision and been detained, meaning they will never appear in our training set and we can never learn how their characteristics relate to recidivism. We can therefore never truly understand the relationship between our inputs and our output.
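The selection effect described above can be sketched with a small simulation. Everything here is hypothetical and for illustration only (the feature `prior_offenses`, the detention rule, and all numbers are my own assumptions, not data from Hames's article or the PNAS study):

```python
import random

random.seed(0)

# Simulate a hypothetical population of defendants.
# 'prior_offenses' is an observed feature; 'risk' is the (assumed)
# true probability of recidivism, increasing with prior offenses.
population = []
for _ in range(10_000):
    prior_offenses = random.randint(0, 5)
    risk = 0.1 + 0.1 * prior_offenses
    # Detention decision: more prior offenses -> more likely detained,
    # plus a baseline chance standing in for extraneous factors
    # (e.g. an unfavorable pre-lunch ruling).
    detained = random.random() < 0.15 * prior_offenses + 0.1
    recidivated = random.random() < risk
    population.append((prior_offenses, detained, recidivated))

def recidivism_rate(group):
    return sum(r for _, _, r in group) / len(group)

everyone = population
# Recidivism can only ever be observed for released individuals,
# so only this subset could enter a training set.
released = [p for p in population if not p[1]]

print(f"rate in full population:  {recidivism_rate(everyone):.3f}")
print(f"rate among released only: {recidivism_rate(released):.3f}")
```

Because detention correlates with the very feature that drives recidivism, the released subset systematically under-represents high-risk individuals, and any model trained on it learns a distorted relationship between inputs and output.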
The second argument that Hames makes is that in “pragmatic people analytics,” companies can retain the trust of their employees by being transparent. While I completely agree that transparency is key, I do not agree that transparency can be equated with trust. A simple example demonstrates the point: a company alerts its employees that it is conducting a sentiment analysis of all emails sent from the sales team to external clients, in order to understand which types of email exchanges are associated with successful sales pitches. The company is being perfectly transparent, but that transparency is unlikely to instill much trust in employees.
Hames clearly and succinctly outlines the field of people analytics. In his analysis of the ethical considerations, however, his seemingly blind trust in human-led decision making and in the power of transparency is concerning.