jlaydon

Activity Feed

Thank you for the interesting read, Meg! I couldn’t agree more with the main areas of concern you highlighted (privacy, transparency, and respect). Beyond cost-cutting and safety, I pushed myself to think of another way management could justify leveraging “scientific management” like this. Doing so reminded me of a book we had to read in Leadership and Happiness, ‘Against Empathy: The Case for Rational Compassion.’ I think the author might argue that terminating under-performing workers is the “compassionate” thing to do, since the role is simply not a good fit: the employee would be better off, and presumably happier, at a different employer. Obviously, one has to make some serious assumptions and jump through some hoops to reach this conclusion, but I suppose it is one way to rationalize “scientific management.” Weighing the pros and cons, I feel like Jeff Bezos needs to take a page from LCA’s playbook and outline Amazon’s legal, ethical, and economic responsibilities. I would hope that after doing so, he would realize that using this technology to automate terminations is a terrible call.

On April 15, 2020, jlaydon commented on Building Bridges with AI:

Thanks for sharing! In general, I believe the more data and opinions one can collect before developing a strategy, the better. However, as you pointed out, there are several issues to consider before drawing insights from a dataset. In this particular situation, I agree with you that my two main concerns are access to the opportunity to participate in the survey and participation bias. Given the potential impact of the conclusions drawn from this data, I would also like to see demographic data tied to opinions. I feel like doing this could help mitigate the risk of a political group cherry-picking a sample and projecting the results onto the whole population, as in the sketch below.
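To make that last point concrete, here is a minimal, hypothetical Python sketch of how demographic data attached to survey responses can expose a skewed sample and correct for it with post-stratification weighting. The demographic categories, population shares, and responses below are all invented for illustration; this is one common technique, not a claim about how the survey in the post was actually analyzed.

```python
from collections import Counter

# Hypothetical survey responses: (age_group, opinion_score on a 1-5 scale).
responses = [
    ("18-29", 4), ("18-29", 5), ("18-29", 4),
    ("30-49", 3), ("30-49", 2),
    ("50+", 1),
]

# Assumed population shares for the same age groups (e.g., from a census).
population_share = {"18-29": 0.25, "30-49": 0.40, "50+": 0.35}

counts = Counter(group for group, _ in responses)
n = len(responses)

# Post-stratification weight per group: population share / sample share.
weights = {g: population_share[g] / (counts[g] / n) for g in counts}

# Compare sample composition to the population; a large gap in any group
# is exactly what a cherry-picked sample would show.
for g, w in sorted(weights.items()):
    sample_share = counts[g] / n
    print(f"{g}: sample {sample_share:.0%} vs population "
          f"{population_share[g]:.0%} -> weight {w:.2f}")

# Weighted mean opinion, correcting for the skewed sample.
weighted_mean = sum(weights[g] * score for g, score in responses) / sum(
    weights[g] for g, _ in responses
)
print(f"Weighted mean opinion: {weighted_mean:.2f}")
```

In this toy example the youngest group makes up half the sample but only a quarter of the population, so its opinions get down-weighted; the demographic breakdown makes the skew visible before anyone projects the results onto the whole population.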

On April 15, 2020, jlaydon commented on Your Instagram Told Me So…:

Thanks for sharing, Jade! Super interesting topic, especially since we will presumably see social media engagement increase under quarantine. I wonder how institutions would choose to implement such a strategy. For example, would they require students or employees to grant access to their profiles so they could run their algorithms? And if an algorithm is eventually developed that accurately predicts serious issues like depression, is it an employer’s responsibility to identify those warning signs? Or should the social media platforms themselves take on that responsibility?