“Big Data” or “Big Brother”?

How should companies draw the line on ethics when embracing the era of people analytics?

This article is a reflection on “How to Ethically Secure People Analytics” by Andy Hames, Dec. 2021.

One by-product of the decade-long wave of enterprise digital transformation is a surge in demand for people analytics. In this era, nearly every behavior leaves a digital footprint. When making hiring, retention, and even layoff decisions, HR teams no longer rely solely on traditional surveys and feedback sessions; they are increasingly eager to examine the data records of employees’ day-to-day interactions. However, this appetite for data granularity often conflicts with building an inclusive culture. How should enterprises draw the line between gaining sufficient data and avoiding becoming Big Brother?

History offers many cautionary lessons about enterprises that crossed the line on data privacy. For example, employees at The Daily Telegraph were asked to work with individual sensors installed at their desks. The initial intention was to collect office-utilization data to guide decisions about whether additional office space was needed. Employees, however, regarded the move as offensive, and the sensors were removed by the end of the day.

The concept of “data-driven decision making” became prevalent as algorithms penetrated the enterprise world. TikTok, for example, relies purely on viewership-related data to determine which content gains more exposure. This “data liberalization” mindset has also appeared in people analytics practice. Many believe that data gained by “listening” to employees’ email contents, phone call records, and trip details makes a better predictor of who employees really are and what they truly think and care about. However, this view ignores the fact that the employer’s surveillance itself changes how employees behave. Quantitative measures should never be built by sacrificing qualitative, unmeasurable factors like organizational culture.

There is also the risk of falling into the pitfall of bias. In 2018, Amazon reportedly abandoned its AI recruitment system over concerns that it created systematic bias: by analyzing historical hiring data, the system taught itself that male candidates were preferable.

That said, we should not neglect the fact that data analysis provides a strong additional reference for people decisions. The “algorithm-aided decision making” approach keeps humans as the final decision-makers and relies on their deliberate judgment to trade off quantitative inputs against qualitative organizational culture.

Don’t let “Big Data” become “Big Brother.”


Student comments on “Big Data” or “Big Brother”?

  1. Hi, Peizhen! First off, I love the Orwellian title of your blog post – definitely caught my attention. You also make really great points around the ethics of using data, as well as cases when privacy or baked-in bias was an issue. After the AirBnB case, I asked my friend who works there if they recalled receiving any disclosure about the scraping of email and Slack for network analyses or answering questions about who their work friends are. Safe to say, they did not recall this buried in their email log. It also made me think about the cultural differences in our attitudes when it comes to privacy and surveillance. An interviewer once asked a person living under an authoritarian government how they feel about people monitoring their data, and the person said people should have nothing to worry about if they aren’t doing anything wrong. Lots to unpack there! Thanks so much for sharing such a thought-provoking piece 🙂

  2. Thanks for the post Peizhen! I found it very interesting. As we discussed during the AirBnB case, a sense of comfort and belonging is very important to company culture. While analysis of private elements such as emails, trips, and calls can lead to informative inferences about employees, I think there is a very serious tradeoff between data collection and employee comfort. Even with the best intentions, rigorous data collection can feel like a form of spying and can corrode the trust in an organization. I think you hit the nail on the head when you said that “Quantitative measures should never been built on sacrificing qualitative and unmeasurable factors like organization culture.”

  3. Hi Peizhen,
    Thanks for sharing—I really enjoyed reading your post! As you’ve indicated, there certainly seems to be a line between gathering employee data to perform effective people analytics and implementing creepy worker surveillance systems. I had never heard of the example of individual sensors monitoring employees at The Daily Telegraph, but that seems to be a clear-cut case of a company crossing the line. Your post brought me back to our recent case study on Airbnb, specifically the portion that discusses the use of active versus passive data. Although the use of passive data is sometimes viewed as too intrusive, the benefits of passive data analysis are significant. It does not require employees to spend time filling out long surveys and involves continuous streams of data that can be used to carry out time-series analyses. However, one potential drawback of using passive data is that employees may begin altering their digital behavior if they know that it is being monitored, which would likely lead to less meaningful analysis results. Moving forward it will be interesting to see which approach—active or passive data analysis—becomes the more common practice in people analytics. As you’ve perhaps suggested, passive analysis can be a viable route as long as companies implement it in a manner that is not overly intrusive.

  4. Hi Peizhen,
    Great blog post! Really draws the comparison between data for good and its ethical implications in a way that makes the reader think: it’s hard to gather what the answer is — if there even is one.

    Perhaps the solutions will vary between companies, and employees will “self-select” into organizations whose data analytics and people analytics practices are in line with their preferred ideals. However, it could also vary dramatically by role — a fulfillment worker at an Amazon warehouse may have more data points collected about them than a CEO. Perhaps all this concern is really only for non-leadership roles. What do you think?
