How to Reduce the Effects of AI Bias in Hiring


By Zac Amos, Features Editor at ReHack

Zac Amos serves as the Features Editor at ReHack, where he covers AI, big data, and automation. He is especially interested in how technological advancements can be applied to the HR sector.

For more of his work, follow him on Twitter or LinkedIn.


Artificial intelligence (AI) can streamline the hiring process, making it easier for recruiting teams to acquire new talent. While AI can support better decision-making and reduce hiring bias, it can also carry the same biases as the people who built it and the data used to train it. So how can companies overcome this challenge?


What is AI bias?

There are numerous biases in the hiring, management and firing processes. Many are unconscious or subtle — for example, hiring slightly fewer women overall or letting go of older employees too soon.

About 50% of people think their race, gender or ethnicity has made it harder to land a job. Some companies have implemented AI in their talent acquisition (TA) functions to help make decisions without factoring in these protected classes.

The issue is that AI doesn’t work that way. It’s only as good as the data set programmers use to train it, and any errors or inherent biases will be reflected in the AI’s output. These aren’t emotional biases, but flaws in the data and code that lead to unwanted outcomes. Several common problems create biases in AI.

Data May Reflect Hidden Societal Biases

Looking up the word “beautiful” on Google reveals mostly photos of white women. The algorithm was trained on a specific data set that contained these types of images. The search engine doesn’t have racial preferences, but the samples it draws its results from were created by people who do.

Algorithms Can Influence Their Own Data

An algorithm can also influence the data it receives. As certain photos rise to the top of a search engine’s results, more people click on them, creating a positive feedback loop in which the algorithm promotes them even more. In this way, AI can magnify its own biases.
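A toy simulation makes that dynamic concrete. The numbers and exposure split below are purely hypothetical, but they show how a tiny initial gap between two equally good items can snowball once ranking drives exposure and exposure drives clicks.

```python
import random

# Purely hypothetical sketch: two equally good photos start with a small
# popularity gap. Each round the current leader gets 70% of the exposure
# (an assumed split), so it collects more clicks and pulls further ahead.
clicks = {"photo_a": 51, "photo_b": 49}

for _ in range(10):
    leader, trailer = sorted(clicks, key=clicks.get, reverse=True)
    clicks[leader] += sum(random.random() < 0.10 for _ in range(700))   # 700 impressions
    clicks[trailer] += sum(random.random() < 0.10 for _ in range(300))  # 300 impressions

print(clicks)  # the 2-click gap typically grows into a large, self-reinforcing lead
```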

Not Everything Is Quantifiable

It’s hard to quantify certain features when creating training data. For example, how do programmers quantify good writing? Writing assistance software often looks for proper grammar, correctly spelled words and sentence length, but it has trouble detecting nuances of human speech, such as rhyming and idioms.

People Can Manipulate Training Sets

Bad actors can purposely corrupt training data. Tay, an artificial intelligence chatbot released by Microsoft through Twitter in 2016, was only online for a few hours before people taught it to post inflammatory content. It spewed violent, racist and sexist misinformation, and Microsoft was forced to take it down a mere 16 hours after its launch. AI that learns from open or public input often falls victim to this issue.

Unbalanced Data Affects the Output

Data scientists use the phrase “garbage in, garbage out” to explain that flawed input data produces flawed output. Programmers may inadvertently train AI on information that doesn’t match real-world distributions. For example, facial recognition software often has trouble recognizing the faces of people of color because the original training sets mostly contained photos of white people.
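One practical guard is to profile the training data before any model is built. The sketch below is a minimal example, assuming a hypothetical applicant file with a self-reported demographic column; the 5% threshold is an arbitrary illustration, not a standard.

```python
import pandas as pd

# Hypothetical training file and column names, for illustration only.
train = pd.read_csv("applicants_train.csv")

# Share of each demographic group in the training data.
group_share = train["ethnicity"].value_counts(normalize=True)
print(group_share)

# Flag groups that fall below an assumed 5% threshold; groups this sparse
# are likely to be modeled poorly, with their errors hidden in aggregate metrics.
underrepresented = group_share[group_share < 0.05]
if not underrepresented.empty:
    print("Underrepresented groups:", list(underrepresented.index))
```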

Data sets can also contain correlated features the AI unintentionally associates with a specific prediction or hidden category.

For example, suppose programmers never give the AI examples of female truck drivers. The software may then link the “male” and “truck driver” categories together simply because it never sees a counterexample, creating a bias against women and potentially concluding they should not be hired as truck drivers based on past patterns.
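A quick way to catch this kind of learned association is to compare the model’s recommendation rate across a protected attribute for the same role. The sketch below assumes a hypothetical file of scored candidates; the file and column names are illustrative.

```python
import pandas as pd

# Hypothetical output of a screening model: one row per candidate with the
# role applied for, a gender column and a 0/1 "recommended" flag.
scored = pd.read_csv("scored_candidates.csv")

# Recommendation rate by gender for one role; a large gap suggests the model
# has linked the role to gender rather than to job-relevant signals.
rate_by_gender = (
    scored[scored["role"] == "truck_driver"]
    .groupby("gender")["recommended"]
    .mean()
)
print(rate_by_gender)
```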

Why AI bias is a challenge in hiring

Talent teams are committed to treating candidates fairly in the hiring process. But, with significant workloads, many teams have turned to AI and automation software to help them sort through resumes or job applications.

Before COVID-19, the average job opening received 250 applications. Yet applicant flow for many roles has increased. For example, some entry-level jobs have received thousands of candidates, with one receiving an overwhelming 4,228 applications.

Many hiring teams use AI programs, so this software must be unbiased. Whether it is can mean the difference between automatically discarding an application and hiring the most qualified candidate.

The AI recruitment industry is worth over $500 million, and recruiting teams use it for everything from predicting job performance to assessing facial expressions during a video interview.


However, many applicants report being rejected by this kind of software because they have foreign-sounding names or use certain words in their resumes. Names and word choices aren’t protected classes themselves, but they often indicate race, gender or age.

In 2018, Amazon scrapped a recruiting tool that automatically penalized resumes that included the word “women’s,” as in “women’s studies” or “women’s university.” That’s despite the fact that organizations in the top quartile for gender diversity are 25% more likely to make above-average profits than those in the lowest quartile.

Reducing the effects of AI bias in hiring

How can well-meaning recruiting teams avoid these types of bias when using AI in their hiring process?

Double-Check AI Predictions

First, it’s important not to take AI predictions at face value. Algorithms do their best to make good forecasts, but they can get it wrong.

Someone should review each AI suggestion and decide whether to accept it, veto it or examine it further. One body of research suggested a 50% chance of AI automating all jobs within 120 years, but it failed to account for nuances like checking for bias.
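In practice, “not at face value” can be as simple as routing the model’s less confident calls to a person. The sketch below is a minimal, hypothetical triage function; the threshold and the model interface are assumptions, not a prescription.

```python
# Assumed interface: model(application) returns a score between 0 and 1.
REVIEW_THRESHOLD = 0.85  # arbitrary cutoff for illustration; tune to your own risk tolerance

def triage(applications, model):
    """Split applications into an auto-advance queue and a human-review queue."""
    advance, review = [], []
    for app in applications:
        score = model(app)
        if score >= REVIEW_THRESHOLD:
            advance.append(app)   # still worth periodic spot checks
        else:
            review.append(app)    # a recruiter accepts, vetoes or digs deeper
    return advance, review
```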

Report Biases Immediately

Recruiting teams should report any biases they notice in AI software. Programmers can often patch the AI to correct it.

Seek Transparency

Programmers should strive to provide transparency in their AI algorithms — in other words, allowing users to see which types of data the software was trained on. This process can be challenging because of hidden, hard-to-interpret layers, but it’s still better than hiding the information altogether. Talent acquisition teams should specifically look for transparent AI software.

Collect Better Data

It’s also a good idea to review an algorithm’s recommendations across protected classes to check for inherent bias. Programmers can build more balanced AI by including training data that better represents protected classes, such as racial minorities.
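One common way to run that review is a selection-rate audit based on the four-fifths rule of thumb: if any group is selected at less than 80% of the rate of the most-selected group, the result deserves a closer look. The sketch below assumes a hypothetical results file; it is a screening check, not a legal determination.

```python
import pandas as pd

# Hypothetical screening output: one row per candidate with a demographic
# group label and a 0/1 flag for whether the model advanced them.
results = pd.read_csv("screening_results.csv")

rates = results.groupby("group")["selected"].mean()   # selection rate per group
impact_ratios = rates / rates.max()                    # ratio to the highest rate
print(impact_ratios)

flagged = impact_ratios[impact_ratios < 0.8]           # four-fifths rule of thumb
if not flagged.empty:
    print("Groups below the four-fifths threshold:", list(flagged.index))
```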

Collecting this type of data is a double-edged sword — people in protected classes may not want to hand over their personal data to train an algorithm. It can feel like a violation of privacy or invoke fears that it will be used to target them. However, collecting information on protected classes is crucial to reduce future bias.

Get Different Perspectives

Having a sociologist or psychologist on the team is valuable when leveraging new AI software. They may notice biases in training sets and offer advice on correcting them.

Ask Questions

Programmers should perform a few final checks before releasing new AI software to the public. Does the data match the overall goals? Does the AI include the right features? Is the sample size large enough, and does it contain any biases?

There may eventually be a standardized process to vet new AI software before launching it. Until then, programmers must double-check their work.

Improve Diversity, Equity and Inclusion

Almost 50% of recruiters say job seekers are inquiring about diversity and inclusion more than they did in 2021. Companies should seek to create a culture of diversity, equity and inclusion (DEI) beyond just improving their AI use. For example, 43% of businesses said they were removing bias from the workplace by eliminating discriminatory language from their job listings.


Look to create balance with AI in recruiting

AI is simply a tool that does what it was designed to do. Training it with biased data leads to skewed results.

Recruiting teams must scrutinize any software they use to hire new employees. Above all, it’s always best to have a real person make the final decisions — because if a company wants to hire human beings, it should treat them as such.

Download our Automation and AI in Recruiting report today to learn more about the risks and rewards of leveraging artificial intelligence and automated workflows for talent acquisition.