Last year I had an interview through the new hiring platform HireVue, a “pre-employment assessments platform [that] uses AI and validated IO psychology”, according to its website. Even though I finally made it through the process to meet the real interviewers and eventually got an offer, the whole experience did not sit well with me. I had a gut feeling that this new platform, which is supposedly free of human biases, might in fact discriminate against non-conventional interviewees.
This article (https://www.wired.co.uk/article/ai-hiring-bias-disabled-people) explores this argument in more depth, and I agree with the author that we should treat the widespread use of AI in the recruitment process with caution, because it might discriminate against disabled people.
Firstly, HireVue scores the performance of the interviewee based on one-way video footage by analyzing a variety of data points that recruiters do not usually assess consciously, such as micro movements of facial expressions, speech patterns, choice of words, etc. The employer may define which factors are most important for the job, and the platform will then assess the interviewees and suggest the best-fitting candidate based on those criteria. I see three potential problems with this process: the scoring system itself is rigid and might be unfair to outliers, the one-way video might negatively affect the performance of the interviewees, and the myriad of data points might skew the analysis away from what is truly important.
Secondly, according to the article, HireVue claims not only to simplify the recruiting process but also to control for unconscious human bias. However, AI might have trouble recognizing atypical speech patterns, such as those used by deaf people and people on the autism spectrum. Moreover, our class's recent experience with Quantified Communications exposed the limits of an algorithmic approach to analyzing the speech of non-native English speakers. I am one of them, and even though some people might consider my heavy Russian accent a peculiar strength, the report I got from Quantified Communications left me confused, with low scores on speech quality and clarity. I cannot imagine how it must feel for people with non-standard speech patterns to be benchmarked against an average speaker drawn from a sample that does not represent the whole population.
Finally, an AI platform was not taught compassion; emotional intelligence cannot be embedded in digital code. Even though humans are undeniably biased in whom we are inclined to like and dislike, there is a positive side to those biases: the open-mindedness, kindness, trust, and credence an interviewer is likely to extend to someone she sees face-to-face. In the Arts of Communication class we learned that the humility a person shows to an audience takes that audience on an emotional journey along with the speaker and ultimately makes the person more likeable. Ultimately, we are not hiring machines to do the work, so why should these candidates be judged by a machine?
Given all this, I believe recruiters should be aware of the possible negative consequences that simplifying and digitalizing the recruiting process might produce. There are possible mitigations, such as giving eligible interviewees the option to skip the human-less digital assessment altogether, or applying the one-way video process and digital assessment only to highly standardized jobs that require mass hiring, such as call centers.