
What will a robot make of your résumé? The bias problem with using AI in job recruitment




Summary

Our guest authors shed light on the growing influence of AI in job recruitment and its potential to perpetuate biases. Their research reveals the subtle and overt ways AI can heighten biases in the hiring process.

The artificial intelligence (AI) revolution has begun, spreading to almost every facet of people’s professional and personal lives – including job recruitment.

While artists fear copyright breaches or simply being replaced, business and management are becoming increasingly aware of the possibilities of greater efficiencies in areas as diverse as supply chain management, customer service, product development and human resources (HR) management.

Soon all business areas and operations will be under pressure to adopt AI in some form or another. But the very nature of AI – and the data behind its processes and outputs – mean human biases are being embedded in the technology.


Our research looked at the use of AI in recruitment and hiring – a field that has already widely adopted AI to automate the screening of résumés and to rate video interviews by job applicants.

AI in recruitment promises greater objectivity and efficiency during the hiring process by eliminating human biases and enhancing fairness and consistency in decision making.

But our research shows AI can subtly – and at times overtly – heighten biases. And the involvement of HR professionals may worsen rather than alleviate these effects. This challenges our belief that human oversight can contain and moderate AI.

Magnifying human bias

Although one of the reasons for using AI in recruitment is that it is meant to be more objective and consistent, multiple studies have found the technology is, in fact, very likely to be biased. This happens because AI learns from the datasets used to train it. If the data is flawed, the AI will be too.

Biases in data can be made worse by the human-created algorithms supporting AI, which often contain human biases in their design.


In interviews with 22 HR professionals, we identified two common biases in hiring: “stereotype bias” and “similar-to-me bias”.

Stereotype bias occurs when decisions are influenced by stereotypes about certain groups, such as preferring candidates of the same gender, leading to gender inequality.

“Similar-to-me” bias happens when recruiters favour candidates who share similar backgrounds or interests to them.

These biases, which can significantly affect the fairness of the hiring process, are embedded in the historical hiring data that is then used to train AI systems. The result is biased AI.

So, if past hiring practices favoured certain demographics, the AI will continue to do so. Mitigating these biases is challenging because algorithms can infer personal attributes, even when they are hidden, from other correlated information.

For example, in countries with different lengths of military service for men and women, an AI might deduce gender based on service duration.
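To make this concrete, here is a minimal, purely illustrative Python sketch (not drawn from the study's data; all numbers and features are hypothetical). It shows how a screening model trained on biased historical decisions can reproduce a gender bias through the service-duration proxy, even though gender itself is never given to the model.

```python
# Illustrative sketch: a screening model trained without an explicit "gender"
# column can still learn it through a correlated proxy, such as the length of
# compulsory military service. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical population: men serve about 24 months, women about 12 months.
gender = rng.integers(0, 2, n)                 # 0 = woman, 1 = man (never shown to the model)
service_months = np.where(gender == 1,
                          rng.normal(24, 2, n),
                          rng.normal(12, 2, n))
experience_years = rng.normal(5, 2, n)         # an unrelated résumé feature

# Biased historical labels: past recruiters systematically favoured men.
hired = (0.6 * gender + 0.1 * experience_years + rng.normal(0, 0.5, n)) > 0.8

# Gender is deliberately excluded from the training features.
X = np.column_stack([service_months, experience_years])
model = LogisticRegression().fit(X, hired)

# The model still reproduces the bias, because service duration acts as a proxy:
# candidates with longer service (mostly men) receive higher scores.
print("score for 24-month service:", model.predict_proba([[24, 5]])[0, 1])
print("score for 12-month service:", model.predict_proba([[12, 5]])[0, 1])
```

In this toy setup, simply dropping the gender column does not remove the bias; the model recovers it from the correlated feature, which is why mitigation requires auditing the data and the features themselves.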

This persistence of bias underscores the need for careful planning and monitoring to ensure fairness in both human and AI-driven recruitment processes.

Can humans help?

As well as HR professionals, we also interviewed 17 AI developers. We wanted to investigate how an AI recruitment system could be developed that would mitigate rather than exacerbate hiring bias.

Based on the interviews, we developed a model in which HR professionals and AI developers exchange information back and forth, questioning preconceptions as they examine datasets and develop algorithms.

However, our findings reveal that the difficulty in implementing such a model lies in the educational, professional and demographic differences that exist between HR professionals and AI developers.

These differences impede effective communication, cooperation and even the ability to understand each other. While HR professionals are traditionally trained in people management and organisational behaviour, AI developers are skilled in data science and technology.

These different backgrounds can lead to misunderstandings and misalignment when working together. This is particularly a problem in smaller countries such as New Zealand, where resources are limited and professional networks are less diverse.

Connecting HR and AI

If companies and the HR profession want to address the issue of bias in AI-based recruitment, several changes need to be made.

Firstly, the implementation of a structured training programme for HR professionals focused on information system development and AI is crucial. This training should cover the fundamentals of AI, the identification of biases in AI systems, and strategies for mitigating these biases.

Additionally, fostering better collaboration between HR professionals and AI developers is important. Companies should look to create teams that include both HR and AI specialists. Such teams can help bridge the communication gap and better align their efforts.

Moreover, developing culturally relevant datasets is vital for reducing biases in AI systems. HR professionals and AI developers need to work together to ensure the data used in AI-driven recruitment processes are diverse and representative of different demographic groups. This will help create more equitable hiring practices.

Lastly, countries need guidelines and ethical standards for the use of AI in recruitment that can help build trust and ensure fairness. Organisations should implement policies that promote transparency and accountability in AI-driven decision-making processes.

By taking these steps, we can create a more inclusive and fair recruitment system that leverages the strengths of both HR professionals and AI developers.

