This article provides an overview of the field and process of people analytics, with an eye to whether the field's rapid growth is a good (or bad) thing. The article struck a chord with me and compelled me to reflect on my own skepticism about the field after everything we have learned this semester.
My first major concern about people analytics is the power imbalance between organizations and the individuals who make them up. This imbalance runs through the entire people analytics process, beginning with the research questions that organizations choose to pursue. These questions are chosen to improve organizational efficiency, performance, and the “bottom line” (e.g., make hiring more efficient, improve sales performance). However, what is best for an organization might not be best for its individuals. A potent example is automation in the workplace. While it improves organizational efficiency, automation puts many individuals out of work (and not just in what many refer to as “low-skill” jobs (more here)), a process that ultimately increases income inequality (more here).

At the next stage of the process, data collection, this concern only worsens. Many organizations collect and “own” vast amounts of data about their employees, giving them enormous power over those individuals. Further, employee awareness of the collection and use of this data varies; in many cases, individuals do not have to explicitly know about or consent to either (more here). When it comes to the insights derived through this process, there is the question of whether and how organizations communicate with their employees. In many cases, organizations will act on the insights from a people analytics project, but do they reveal those insights to their employees? If knowledge is power, this last stage of the process grants even greater power to organizations. So, how do we police the use of people analytics to level the playing field? What rights do employees have in these processes? These are questions that must be considered as the field progresses.
My second major concern about the field of people analytics is the illusion of objectivity. I cannot deny that there is value in the use of data, but data are not as “objective” as many people believe. Decades of research have shown that humans are inherently biased; so are the algorithms humans build. While there is an argument to be made that algorithmic bias is easier to fix than human bias (more here), I worry that bias in data and algorithms is not weighted heavily enough. Humans are involved at every stage of data collection and algorithm creation. Broadly speaking, the choice of which variables to measure, and how to measure them, lies with human decision-makers. So do the decisions about how to clean, organize, and ultimately analyze the data. While procedures and tests exist to limit the intrusion of human bias into these processes, it is dangerous to treat data and algorithms as “objective.” Doing so can lead decision-makers to put too much weight on results that may be erroneous, or more complex than the data at hand can describe. This is especially concerning when the people involved are not rigorously trained in such matters, as is often the case on interdisciplinary people analytics teams. I worry that the illusion of objectivity that accompanies the world of big data is leading organizational decision-makers to blindly trust analytical results that deserve a hefty dose of skepticism.