A humanoid robot stands side-on with its hand out; two people, a man and a woman in business attire, stand on its palm.

The Downside of AI Use During the Recruitment Process

An article by Harriette Morgan

Artificial Intelligence (AI) as we know it today is a misnomer. It’s not the AI we’ve seen in movies like I, Robot or 2001: A Space Odyssey.

Our technology is intelligent. It does have the ability to acquire and apply knowledge and skills, just as humans do. What it is not doing is “thinking” in the way that we do. AI creates outputs or solutions by combining its existing knowledge. It does not have the capacity for creativity, though that’s not to say it doesn’t produce results outside of its programming. Typically, AI makes its decisions based on the data it is trained on. When that data is biased, so are the results, especially when that data does not reflect the standards of today.

For the purposes of this blog, bias is defined as a judgement, inclination, or prejudice against a person or group in a way that is unfair.

AI is, very simply, the development of smart systems that can carry out tasks that typically require human intelligence. The technology learns by using algorithms to find patterns in the input it receives. That input is massive amounts of data drawn from reputable websites, databases, and research papers. With systems like ChatGPT, the data also comes from user interaction. Each input gives the algorithm more variables to utilise.
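To make that concrete, here’s a tiny, invented sketch in Python of what “finding patterns” amounts to: counting which words co-occur with which outcomes in the training data. The CVs and labels below are made up for illustration.

```python
from collections import Counter

# Invented training data: the "patterns" an algorithm learns are just
# statistical associations between words and labels in its input.
training_data = [
    ("led a project team", "hire"),
    ("managed budgets and stakeholders", "hire"),
    ("gap in employment history", "reject"),
    ("career break for health reasons", "reject"),
]

# Count how often each word appears alongside each label.
word_counts = {"hire": Counter(), "reject": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

def score(text):
    """Score new input by how often its words co-occurred with each label."""
    words = text.lower().split()
    return {label: sum(counts[w] for w in words)
            for label, counts in word_counts.items()}

# A candidate who mentions a career break inherits the learned association.
print(score("returning after a career break"))  # "reject" outscores "hire"
```

Notice that the “pattern” is nothing more than association. If the training labels were biased, the scores will be too.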

The problem with all of this is that data input is coming from humans. We make the websites, we create the databases, and we author the research papers. Therefore, AI is inherently infused with human biases, be they conscious or unconscious. By using a biased system, the same prejudicial failures are going to occur but at a much more efficient pace.

AI creators have tried to translate everyday human concepts, such as fairness, into mathematical and measurable terms. The intention is to minimise prejudice against marginalised communities. However, disability and accessibility are quite often excluded from the conversation.

Current measures to remove or reduce bias in algorithms tend to compress the variance for disabled people. Variance, in this context, means the measure of how far data points are spread out from the average. Disabled and neurodiverse people are outliers to the “norm” of society. By compressing or flattening out disabled data points, these algorithms are ignoring the variability, or in DEI terms the diversity, of disabled people.
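As a toy illustration with invented numbers: variance measures spread, and compressing it pulls the outlier (the data point most likely to represent a disabled person’s different way of doing things) back toward the average.

```python
# Invented scores for a group where one data point is an outlier,
# e.g. a disabled candidate who completed a task a different way.
scores = [50, 52, 48, 51, 90]

mean = sum(scores) / len(scores)
variance = sum((x - mean) ** 2 for x in scores) / len(scores)
print(f"mean={mean:.1f}, variance={variance:.1f}")

# "Compressing" variance pulls every point toward the mean, so the
# outlier's distinctiveness (its diversity) is flattened away.
keep = 0.3  # retain only 30% of each point's distance from the mean
flattened = [mean + keep * (x - mean) for x in scores]
flat_var = sum((x - mean) ** 2 for x in flattened) / len(flattened)
print(f"flattened variance={flat_var:.1f}")  # far smaller: the outlier is gone
```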

There has been talk recently about the integration of AI into the recruitment process. Both One News (episode airing on Sunday 26th November, 2023) and Breakfast have discussed the use of AI in recruitment.

As Julian Lambert said during his interview on Breakfast, the recruitment process has four stages: Attraction, Engagement, Selection, and Assessment.

The use of AI to filter out CVs without the “right” keywords or the “required” amount of experience is going to disadvantage people. Likewise, using AI to watch recruitment videos and filter out the “bad” ones is not going to advantage disabled people. AI often misinterprets the facial expressions of some disabled people, and speech recognition AI struggles to decipher the words of others. The use of AI in the recruitment process does not create an objective environment. This reinforces the reality that AI perpetuates bias against disabled people.
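Here is a hypothetical sketch of the kind of keyword filter described above (the keywords are invented): any CV that phrases the same experience differently is discarded, however capable the candidate.

```python
# A hypothetical keyword filter; the required keywords are invented.
REQUIRED_KEYWORDS = {"stakeholder management", "agile", "5+ years"}

def passes_filter(cv_text: str) -> bool:
    """Reject any CV that lacks even one of the exact required phrases."""
    text = cv_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

cv_a = "5+ years in agile delivery and stakeholder management."
cv_b = "Long experience coordinating clients and teams on iterative projects."

print(passes_filter(cv_a))  # True: matches the exact keywords
print(passes_filter(cv_b))  # False: same skills, different vocabulary
```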

AI takes out the “human factor”: the creative thinking that can identify why Candidate X is a viable option despite not using the preferred language or keywords, or not having the “right” amount of experience.

Those of us in the disabled community who have limited job experience are never going to be able to break into an industry where AI is the first step in candidate engagement or selection.

There are claims that AI algorithms are more efficient and less biased than humans; thus, the aim of AI in recruitment is to reduce the conscious and unconscious biases that humans have. The difficulty is that if the concept of bias is ill-defined to the algorithm, the AI can replicate that bias or even magnify it. One example: back in 2018, Amazon created an AI tool for its recruitment process. It had to be discarded because it showed bias against women, despite efforts to rectify that. The discussion around gender bias, as well as racial and age prejudice, has been around since before AI’s role in recruitment. But disability has barely entered the conversation. Disabled people are, so often, underrepresented in discussions about AI.

Why disabled people are discriminated against by algorithms depends on how the humans writing the algorithm define disability; human bias affects the algorithm from its inception. Most of society currently sits within the medical model of disability, which views people as disabled because of their diagnosis or their impairment, and that view then impacts the algorithm.

Employers who are unaware of the benefits that disabled people bring to an organisation can create a bias against disabled candidates, as they believe that “non-disability” is essential for productivity. This all bleeds into the creation of algorithms that reflect entrenched biases against disabled people.

AI algorithms struggle to process disabilities, especially in terms of emotional analysis of faces or recognising voice and speech patterns, because those disabilities are removed from, or never inserted into, the AI’s dataset. In fact, AI algorithms that assess sentiment in natural language tend to rate the mere mention of disability as negative or even toxic.
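Here is a deliberately naive, invented sketch of how that happens: if disability terms co-occurred with negative contexts in the training data, the learned weights end up penalising the words themselves, not the context.

```python
# A naive lexicon-based sentiment scorer. The weights are invented, but
# they mimic the failure described above: disability terms that co-occur
# with negative contexts in training absorb a negative weight themselves.
LEARNED_WEIGHTS = {
    "great": 2, "experienced": 1,
    "problem": -2, "struggles": -1,
    "disabled": -1,  # the bias: the word scores negative regardless of context
}

def sentiment(text: str) -> int:
    return sum(LEARNED_WEIGHTS.get(word, 0) for word in text.lower().split())

print(sentiment("an experienced candidate"))           # +1
print(sentiment("an experienced disabled candidate"))  # 0, dragged down by "disabled"
```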

By not using datasets which represent the full spectrum of humanity, disability included, the AI, by its nature, cannot and never will be fair. This is not to say that disability data is not being gathered by AI algorithms. There is a disproportionate amount of disability data being collected alongside datasets around addiction, homelessness, and violence. The AI then finds the pattern and correlates disability with addiction, homelessness, and violence.
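A toy illustration, with invented counts, of how that correlation forms: when disability is recorded far more often in datasets about social harms than anywhere else, a naive co-occurrence statistic “learns” exactly the spurious association described above.

```python
# Invented counts illustrating skewed data collection: disability is
# recorded heavily in datasets about social harms and sparsely elsewhere.
records = {
    "addiction":    {"total": 1000, "mentions_disability": 400},
    "homelessness": {"total": 1000, "mentions_disability": 350},
    "employment":   {"total": 1000, "mentions_disability": 30},  # under-collected
}

for topic, r in records.items():
    rate = r["mentions_disability"] / r["total"]
    print(f"{topic}: disability appears in {rate:.0%} of records")
# The model sees disability mostly next to harm, and correlates accordingly.
```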

There are a variety of ways to try to remove bias from algorithms, such as removing names from applications or basing applicant selection on traits or characteristics. However, it can be challenging to recognise and then mitigate the algorithmic bias.
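As a minimal sketch of the first approach (the field names and the application itself are invented): strip the candidate’s name before the selection algorithm sees it.

```python
import re

# A hypothetical anonymisation step: remove the candidate's name from an
# application before any selection algorithm processes it.
application = {
    "name": "Jordan Smith",
    "summary": "Jordan Smith has eight years of service design experience.",
    "skills": ["service design", "research", "facilitation"],
}

def redact_name(app: dict) -> dict:
    redacted = dict(app)
    redacted["summary"] = re.sub(re.escape(app["name"]), "[REDACTED]", app["summary"])
    redacted["name"] = "[REDACTED]"
    return redacted

print(redact_name(application))
```

Note how little this actually removes: any bias encoded in the remaining fields, such as gaps in experience or “non-standard” phrasing, survives intact, which is why mitigation is so hard.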

All this impacts the recruitment process for disabled people because non-disabled people are considered the norm by AI. Therefore, disabled people are automatically viewed as “not-normal”. “Not-normal” people do not get past AI algorithms, particularly those that are not set up to learn about the different types of humans and how those differences present.

AI use in New Zealand’s recruitment sector is in its early stages, and it needs to be monitored. The AI must have supervision. Yes, it can make decisions, but there needs to be transparency: a human must scrutinise those decisions and reverse any that have been made out of bias. This, of course, means that the human needs to be aware of their own biases.
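In code terms, that supervision loop might look something like this sketch (all names and fields are hypothetical): every automated decision carries its reasons, and a human reviewer can overturn a biased rejection.

```python
from dataclasses import dataclass, field

# A sketch of a human-in-the-loop review step. Every decision records its
# reasons (transparency), and a reviewer can reverse a biased rejection.
@dataclass
class Decision:
    candidate_id: str
    outcome: str                              # "advance" or "reject"
    reasons: list = field(default_factory=list)
    human_reviewed: bool = False

def human_review(decision: Decision, overturn: bool) -> Decision:
    decision.human_reviewed = True
    if decision.outcome == "reject" and overturn:
        decision.outcome = "advance"          # the human reverses the AI
        decision.reasons.append("overturned on human review")
    return decision

d = Decision("c-042", "reject", ["missing keyword: agile"])
print(human_review(d, overturn=True))
```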

It would be remiss not to mention that prejudiced recruitment methods against disabled people existed long before AI was even a thought. Disability discrimination pre-dates technological advances in machine learning. Attempts to “fix” the algorithm can often end up reinforcing negative views around disability, deepening ableism, and forcing the solution onto disabled people rather than the AI. This becomes a vicious cycle which perpetuates the medical-model mindset and prevents disabled people from fully reaching their potential in the workforce, and beyond.

This is not to say that AI is bad. This blog is simply pointing out that its use in the early stages of the recruitment process is not fair. Disability is ill-defined at the creation of the algorithm because society sits largely within the medical model of disability. Add to that the fact that disability data is removed from, or never entered into, the AI’s dataset.

Understanding how AI is programmed, and by whom, can help us shift the way AI algorithms are conceived in the future. If we begin to centre disability in algorithmic design, we will increase our knowledge of AI bias. We can also better understand ourselves as a society and why these biases exist in the first place. Learning how social structures, norms, and institutions impact, create, and foster prejudice can only allow us to develop and grow as a whole society.