Overcoming AI Bias in Recruitment: Strategies for Fair Hiring Practices

After sales and marketing, AI is here to transform the corporate workforce.

The AI recruitment market is poised for significant growth, according to a Market Research Future report. By 2030, the market is projected to reach 42.3 million USD, reflecting a CAGR of 6.9% from 2020. 

Despite the growing trend towards using AI in the recruitment process, companies cannot risk leaving all recruiting decisions to AI yet. 

The glaring problem? AI bias in hiring.

How AI Bias Shows Up in Recruitment

Companies are actively using AI tools to quickly screen resumes, conduct interviews, and analyse candidate-recruiter interactions. The resulting insights connect job-seekers and employers in a smarter way, leading to higher job satisfaction and lower turnover.

Sounds like a win-win situation for everyone, right?

Not quite. 

Real-world incidents indicate that while AI can find great talent, recruiters should watch out for bias sneaking in. Take the example of Amazon’s (now scrapped) AI hiring tool from 2015, which discriminated against women candidates. Or the infamous COMPAS risk-assessment algorithm, which reinforced racial prejudice in criminal sentencing.

Thankfully, artificial intelligence algorithms hold no opinions or feelings of their own; any bias they exhibit is learned from the data and design choices behind them. That makes AI bias a preventable problem.

Keep reading this article to learn how you can combat AI bias and ensure algorithmic fairness in your recruitment process. 

Where Does AI Bias Originate From? 

The root cause of AI bias in hiring lies in the data that is used to train the AI tool. 

Human bias can seep into the AI system at any stage, such as: 

  • Data collection: If the collected data is not examined for bias, it may result in unfair outcomes. For instance, suppose the data used for training an AI system for facial recognition only reflects a particular race. The AI system may struggle to identify people of other races and fail to reflect diversity. 


  • Data annotation: Data annotation is the process of labelling data to train AI systems. If annotators interpret the same label differently, it creates room for bias. For example, annotators who have mostly seen pictures of female nurses may be more likely to label any picture of a woman as a nurse.
    Sometimes, instructions given to annotators might also be unclear or biased, leading them to label data inconsistently or inaccurately.


  • Training the model: If the model architecture is not designed to handle diverse inputs, the model may produce biased outputs. For example, a resume screening AI might prioritise keywords like “MBA” or “Ivy League” education, potentially overlooking candidates without those specific markers, even though they are qualified for the job. 


  • Deployment: Many AI systems continue to learn after deployment. If a system does not receive diverse data once it is live, bias can creep in over time.

Now that you know how an algorithm can become biased, here is what you can do about it. 

3 Ways Companies Can Mitigate AI Bias

#1 Use Regular Audits to Reveal Bias

Monitor your AI system to check if certain groups are consistently being screened out or ranked lower.

For example, the Amazon AI recruitment tool ranked candidates with phrases like “Captain of the women’s chess club” lower for leadership positions than male candidates with similar qualifications. This bias was revealed through an audit. Amazon’s training data included resumes mostly from male leaders, so the model prioritised keywords traditionally associated with male leadership styles. 

Similarly, it is vital to audit all AI tools regularly to identify bias and investigate its cause. Companies can then take corrective steps, such as retraining the model on a more balanced dataset or adjusting it to recognise a wider range of leadership qualities.
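One common audit check is the "four-fifths rule" the EEOC uses as a rule of thumb: compare each group's selection rate against the highest group's rate, and investigate if the ratio falls below 0.8. A minimal sketch (group names and outcome data are illustrative, not from any real tool):

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute the selection rate per group from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Ratio of the lowest selection rate to the highest.
    Under the four-fifths rule of thumb, a ratio below 0.8
    suggests possible adverse impact worth investigating."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: (group, passed_screen)
outcomes = [("A", True)] * 40 + [("A", False)] * 60 \
         + [("B", True)] * 20 + [("B", False)] * 80

rates = selection_rates(outcomes)
print(rates)                        # {'A': 0.4, 'B': 0.2}
print(adverse_impact_ratio(rates))  # 0.5 -> below 0.8, flag for review
```

A ratio below the threshold does not prove bias on its own, but it tells auditors exactly where to dig deeper.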

#2 Data Diversity

The training data should represent the population you are recruiting from. Include resumes, job descriptions, and performance data from various genders, ethnicities, ages, and educational backgrounds. 

Take the previous example of a resume-screening AI tool which prioritised keywords related to “Ivy League” universities and pushed down qualified candidates from non-elite institutions. To mitigate this, the training data can include resumes from a range of universities, including state schools, online programs, and boot camps. 
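One simple way to spot this kind of skew is to compare each group's share of the training data against its share of the real applicant pool. A minimal sketch (the university-tier categories and counts are hypothetical):

```python
from collections import Counter

def representation_gap(training_groups, applicant_pool_groups):
    """Compare group shares in training data vs. the applicant pool.
    Returns each group's share difference (training - pool); large
    negative values mean the group is under-represented in training."""
    def shares(groups):
        counts = Counter(groups)
        total = sum(counts.values())
        return {g: counts[g] / total for g in counts}
    train, pool = shares(training_groups), shares(applicant_pool_groups)
    return {g: round(train.get(g, 0.0) - pool[g], 2) for g in pool}

# Hypothetical data: university tier listed on each resume
training = ["ivy"] * 70 + ["state"] * 25 + ["bootcamp"] * 5
pool     = ["ivy"] * 20 + ["state"] * 60 + ["bootcamp"] * 20

print(representation_gap(training, pool))
# {'ivy': 0.5, 'state': -0.35, 'bootcamp': -0.15}
```

Here the training set heavily over-represents elite universities relative to the people actually applying, which is exactly the condition that produced the biased keyword weighting above.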

#3 Explainability and Human Supervision of AI 

Use an AI tool designed to explain the basis of its decisions and to flag rejected or borderline applications. With human oversight working alongside the AI, recruiters can review flagged applications and close calls to ensure qualified candidates aren’t unfairly filtered out.

Suppose your AI recruitment tool flags a candidate for a software developer position because their resume lacks a specific programming language listed in the job description. However, the resume shows extensive experience with a different but equally relevant language. The AI provides an explanation: “Candidate lacks keyword: Python.” Now, a human recruiter can identify this as a potential bias and make an informed decision based on the candidate’s overall skillset, not just the absence of a specific keyword.
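The screening logic above can be sketched as three buckets: clear matches advance, clear mismatches are rejected, and close calls are routed to a human with an explanation attached. All names, keywords, and skill sets below are illustrative assumptions, not any vendor's actual API:

```python
def screen_with_oversight(candidates, required_keyword, related_skills):
    """Auto-advance clear matches; route close calls to a human
    reviewer with an explanation instead of auto-rejecting them."""
    advanced, needs_review, rejected = [], [], []
    for name, skills in candidates:
        if required_keyword in skills:
            advanced.append(name)
        elif skills & related_skills:
            # Related experience: explain the gap and flag for a recruiter.
            needs_review.append(
                (name, f"lacks keyword: {required_keyword}; "
                       f"has related: {', '.join(sorted(skills & related_skills))}"))
        else:
            rejected.append(name)
    return advanced, needs_review, rejected

candidates = [
    ("Ana",   {"python", "sql"}),
    ("Bilal", {"ruby", "go"}),    # related languages, no Python
    ("Chen",  {"marketing"}),
]
adv, review, rej = screen_with_oversight(
    candidates, "python", related_skills={"ruby", "go", "java"})
print(adv)     # ['Ana']
print(review)  # [('Bilal', 'lacks keyword: python; has related: go, ruby')]
print(rej)     # ['Chen']
```

The key design choice is that the middle bucket never auto-rejects: the explanation string gives the recruiter exactly what the model saw, so the final call stays human.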

Mitigating bias and adopting fair hiring practices help companies build diverse teams of qualified candidates. Beyond good practice, hiring teams may soon be legally required to take measures against AI bias and comply with emerging regulatory frameworks.

Let us dig deeper into the legal and compliance issues around AI and hiring. 

AI in Recruitment: Laws, Compliance, and Future Implications

In May 2022, the US EEOC (Equal Employment Opportunity Commission) sued iTutorGroup, alleging that its AI-based recruitment system automatically rejected older applicants. The case was settled for 365,000 USD, to be distributed among the rejected applicants.

Around the globe, job-seekers and companies alike fear more such scenarios. In response, governments are introducing laws and guidance to protect citizens. In the UK, the use of AI in employment falls under the Equality Act 2010, which prohibits discrimination in hiring. In the US, the EEOC has made clear that a company can be held liable for discrimination if its recruiting software produces biased outcomes.

If you have adopted AI in your recruitment processes or are planning to do so, here is what the EEOC has suggested in their technical assistance document: 

  • Employers must evaluate potential bias in their tools, especially against protected classes.
  • Employers are advised to regularly test AI hiring tools to ensure their reliability and accuracy. 
  • Training data must be comprehensive, relevant and representative of the diverse applicant pool. Additionally, adequate data management practices should be established to maintain data privacy and security.
  • The use of AI tools should not violate any federal laws or regulations. 

Hire Better With Reliable AI Recruitment

With Techademy’s interviewer-as-a-service, you can cut down hiring time, eliminate bottlenecks, and find the right fit for the job – even from within the company. 

With rigorous evaluation and unbiased assessments, we help you find the right fit for any role. 

Deliver employee satisfaction, improve performance, reduce employee churn and let Techademy do the heavy lifting for you. Visit Techademy’s website to learn more about our services and book a demo.
