RL10

  • Section 1
  • Student

Activity Feed

On April 14, 2021, RL10 commented on Microsoft Analyzed Data on Its Newly Remote Workforce:

This is a really interesting post, Carla. The blurring of boundaries between work and non-work during the pandemic is something I’ve been hearing a lot about anecdotally, so it’s really interesting to read a more analytical take. I like your suggestion of treating certain times as sacred, and I think your concern about an enforcement mechanism is well founded. This points to a more general question of how best to deliver data-driven behavioral nudges. When they’re delivered directly through technology, they can be seen as annoying; when delivered by a manager or otherwise filtered through a human intermediary, they risk being seen as intrusive. I wonder which delivery method is actually more effective and engenders higher levels of both employee well-being and organizational effectiveness.

This is a super interesting post! I’m curious how you think companies should balance transparency with the desire to keep modifying metrics as they develop. Should companies notify employees when they change how data is used? I’m especially curious in the context of a company like Amazon that is under heightened external scrutiny. Opening the “black box” of an algorithm risks exposing a company to criticism and liability over how it uses its data, but I think you’re totally right that some amount of transparency is critical to building trust. I’m also struggling a bit with how much privacy and tracking workers “should” be willing to give up in exchange for better pay and benefits. It’s clearly the argument that prevailed in this case, but I think there are real justice concerns about how much society “should” allow and what responsibility companies have to notify and explain things to their employees.

This is a really interesting post! I was especially struck by your discussion of quality, and I think your points of caution are really well articulated. It makes me wonder whether this tool is better thought of as a complement to managers rather than a replacement or standalone tool. For example, this kind of sentiment analysis could be positioned as a starting point for a discussion between managers and employees, or as a way for shy or new managers to begin to get a sense of their teams. I also wonder if it could be useful for surfacing toxic teams led by high performers (the Uber example comes to mind) and thus forcing accountability within organizations. That said, this would come with its own set of training requirements, so I wonder how it would impact the positioning and value proposition of the tool.