This article is a reflection on “How to Ethically Secure People Analytics” by Andy Hames, Dec. 2021.
One by-product of the decade-long wave of enterprise digital transformation is a surge in demand for people analytics. In this era, almost every behavior leaves a digital footprint. When making hiring, retention, and even lay-off decisions, HR teams no longer rely only on traditional surveys and feedback sessions; they are increasingly eager to mine employees’ data records of day-to-day interactions. However, this appetite for data granularity often conflicts with building an inclusive culture. How should enterprises draw the line between gaining sufficient data and avoiding becoming a “Big Brother”?
History offers many lessons from enterprises that failed after crossing the line on data privacy. For example, employees at The Daily Telegraph were asked to work with individual sensors at their desks. The original intention was to collect office-utilization data to guide a decision on whether additional office space was needed. Employees, however, found the move offensive, and the sensors were removed by the end of the day.
The concept of “data-driven decision making” became prevalent as the penetration of algorithms into the enterprise world grew. TikTok, for example, relies purely on viewership-related datasets to determine which content gains more exposure. This “data liberalization” mindset has also appeared in people analytics. Many believe that data gained by “listening” to employees’ email contents, phone call records, and trip details is a better predictor of who employees really are and what they actually think and care about. This view, however, ignores the fact that the employer’s surveillance itself changes how employees behave. Quantitative measures should never be built by sacrificing qualitative, unmeasurable factors such as organizational culture.
There is also the risk of falling into the pitfall of bias. In 2018, Amazon reportedly abandoned its AI recruitment system over concerns that it created systematic bias: by analyzing historical data, the system had taught itself that male candidates were preferable.
Still, we should not neglect the fact that data analysis provides another strong reference for people decisions. The “algorithm-aided decision making” approach requires a human to be the final decision-maker, relying on deliberate human judgment to weigh quantitative input collection against qualitative organizational culture.
Don’t let “Big Data” become “Big Brother.”