Paula Álvarez


Activity Feed

On April 15, 2020, Paula Álvarez commented on Employee Monitoring at Barclays:

Cool article, Jared!

It seems like a number of things went very wrong for Barclays. Whether or not I believe in the product (and I don’t think I do, but I’d like to know more), it sounds like this was fundamentally an implementation issue. There is a framework on how to shape a service culture that we used in MSO (Managing Service Operations) last semester, and I’ve come to find it very useful for analyzing situations like the one you’ve written about.

The framework has three components: clarity, signaling, and consistency. Clarity refers to transparency about an organization’s beliefs and goals. Those beliefs must be signaled to everyone within the organization. And the actions taken need to be aligned with those clear goals and beliefs. The first two seem to be the problem here, and the third remains an open question (though my hypothesis is that it was also part of the problem).

1) Clarity: there might’ve been a clear goal behind the implementation of this employee monitoring initiative, but leadership failed to communicate it effectively to employees. This lack of clarity from above leaves employees wondering and hypothesizing about what the goal is.

2) Signaling: implementing employee monitoring without a clearly stated objective sent the wrong message to employees, who became anxious and felt invaded.

3) Consistency: even if Barclays had had clear objectives and had properly signaled them to employees when implementing this initiative, there would still be another question to answer: are those goals consistent with Barclays’ culture and broader set of values? If the answer is no, then even with clarity and signaling, this probably would’ve failed anyway.

On April 15, 2020, Paula Álvarez commented on Locked in by Algorithms?:

Thank you for sharing that article, Aurora! It’s so eloquently written, and I think it captures my sentiments after writing this post. The notion of transparency he talks about is, I think, the critical aspect here, and it’s what I had in mind when I wrote about the need for checks and balances and some kind of auditing body at the very end of the post.

Thanks again!

On April 13, 2020, Paula Álvarez commented on Data Transparency-Privacy Tradeoff During a Pandemic:

Rocio, thank you for the thoughtful breakdown of how the vital COVID-19 contact tracing effort affects our thinking about acceptable limits on data privacy. I have been thinking quite a bit about this issue. It can be tempting to write off the danger of relinquishing protections on privacy in favor of improved surveillance and data access; that danger can feel like a concern for the future as we navigate the immediate challenges of the pandemic. But you highlighted some key concerns with a short-sighted approach: the risk of long-term consequences from relaxing privacy safeguards, and the history of data abuses and breaches that should make us all wary of handing over our data.

I see another reason that safeguarding data privacy must be a central pillar in the adoption of contact tracing: the crucial need to maintain public trust*. Containing the spread of the virus will be a challenge that requires overwhelming buy-in from the populace. Many countries are talking about implementing a software development kit or an app like Singapore’s TraceTogether, which you mentioned, to track cases and interrupt chains of infection by notifying people when they have recently come into contact with someone who tested positive and suggesting that they isolate (PEPP-PT in Europe seems to be a great example: https://www.politico.eu/article/europe-cracks-code-for-coronavirus-warning-app/).

The simple fact is that for a tool like this to be effective, a majority of the population has to use it (the article above predicts at least 40-60%). Many of the most vulnerable older citizens among us do not have smartphones, or do not carry them everywhere, so we’re starting from behind. Undocumented immigrants or members of marginalized communities might be particularly reluctant to report location and health data. The general public must feel secure that participating in this global effort isn’t going to put them at risk in other ways.

The good news is that this reasoning implies that public safety and data privacy actually align in this case! The better we protect privacy, the more people buy in, and the safer we all are.

—————

*Specific actions for keeping public trust: 1) anonymizing all personal identifiers, 2) saving only epidemiologically relevant proximity history, and 3) erasing data once it is no longer useful for contact tracing.
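As a rough illustration only, and not how TraceTogether, PEPP-PT, or any real contact-tracing app is actually built, a minimal sketch of those three actions could look like the following. The retention window, the contact-duration threshold, and every name in the snippet are hypothetical assumptions.

```python
# Hypothetical sketch of the three trust-preserving actions above; not the
# implementation of TraceTogether, PEPP-PT, or any real contact-tracing app.
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

RETENTION = timedelta(days=14)    # assumed retention window
MIN_CONTACT_MINUTES = 15          # assumed "epidemiologically relevant" contact duration


def anonymize(device_id: str, salt: str) -> str:
    """1) Replace personal identifiers with a salted one-way hash."""
    return hashlib.sha256((salt + device_id).encode()).hexdigest()


@dataclass
class ProximityEvent:
    other_token: str          # anonymized token of the nearby device
    started_at: datetime
    duration_minutes: float


@dataclass
class ContactLog:
    events: List[ProximityEvent] = field(default_factory=list)

    def record(self, event: ProximityEvent) -> None:
        """2) Store only epidemiologically relevant proximity history."""
        if event.duration_minutes >= MIN_CONTACT_MINUTES:
            self.events.append(event)

    def purge_expired(self, now: datetime) -> None:
        """3) Erase data once it is older than the retention window."""
        cutoff = now - RETENTION
        self.events = [e for e in self.events if e.started_at >= cutoff]
```

The point of the sketch is simply that each of the three actions maps onto a small, auditable piece of logic, which is exactly the kind of thing the auditing body mentioned above could verify.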

On April 11, 2020, Paula Álvarez commented on Wisdom of the Crowd: Interviewing Your Network with Searchlight.ai:

Really interesting, John! Thanks for sharing!

A couple of reflections from my end:

i) I agree with the efficiency argument in favor of Searchlight over reference calls. It has the potential to save HR money and time. However, I’m not sure I agree that it filters out less serious applicants. You point out that “by giving applicants the ability to retain references on the platform, it helps applicants continually add to their profile over time”. If that’s the case and I’m understanding correctly, once I put in the upfront work to get my references on the platform, wouldn’t it take very little incremental effort for me to apply to other jobs using those exact same references?

ii) The second thing I wanted to touch on is the idea that the platform might help mitigate unconscious biases. On their website, Searchlight says:

“Counteract prestige bias with a more equitable hiring practice and objective reference data. Using Searchlight, 80% of our partners have hired more top performers from underrepresented backgrounds.”

There are two things that influence whether your algorithm effectively eliminates or perpetuates biases: (1) the bias in the data you input, and (2) the design of the algorithm itself. Regarding (1), I can see how a well-defined survey might be effective in collecting data in an objective way (e.g., it’s well studied that we are more likely to use certain adjectives when recommending a woman versus a man, and the survey can be designed to mitigate that). If you manage to gather less biased data, your algorithm is less likely to produce biased results. I would like to know more about how the algorithm addresses (2).

iii) Unconscious bias is just a subset of bias, and the algorithm might by design perpetuate other biases. For example, I wonder whether Searchlight weighs references from a manager at an SMB the same way it weighs those from a manager at a big tech firm. Furthermore, I worry that its results might disproportionately benefit those with larger networks. Without Searchlight, a recruiter might be willing to do a few calls; with Searchlight, it can gather as many reference points as possible for the same amount of effort. Hence, the disadvantage of a candidate who has worked for a few years at a small company, relative to one who has worked at a Google or a McKinsey and has had many different teams and managers, might be exacerbated.