The Ultimate Rundown of AI Bias in Recruitment
April 2, 2020
Synne Linden
If you’re a recruiter or HR professional, and you’re on top of current trends in your field, you’ve undoubtedly heard of AI tools for recruitment. Perhaps your company has even implemented some of them.
Artificial intelligence holds incredible potential for saving recruiters time, building candidate databases and streamlining talent acquisition. Many service providers also advertise that AI tools remove human bias from the hiring process. That claim is highly debated, though, and AI bias remains one of the greyest areas in the design and development of AI-powered software. In this post we'll guide you through the ins and outs of bias in AI recruiting.
What is bias, anyway?

AI bias comes from human bias. It’s as simple as that. That’s why, in order to understand how AI bias impacts recruitment tools, we need to know a bit about how human bias is formed. The more you know about your own subjectivity, the more your efforts to prevent it will pay off.

How is our bias formed?

Cognitive bias is ultimately developed by our brains to help us solve problems, whether it’s the simple matter of securing food or the more complicated challenge of deciding who to hire for a job. These biases can be divided into two main categories: innate bias and learned bias. An innate bias is something you’re born with. It could be a color preference, a tendency toward action or inaction in certain situations, or a baby’s preference for one flavor over another.

Learned bias, on the other hand, is bias that’s formed through the course of our life, as a response to experiences we have, or information we’re given that in turn informs our decisions. This can manifest as something big — like sexism or racism — or something small — like preferring red mailboxes to green ones because of the one you had as a kid.

There’s a huge list of different types of cognitive bias, but they all have one thing in common: they’re not, objectively speaking, rational.

Bias and how it affects recruitment

The best tool we have against cognitive bias is awareness. The vast majority of your bias operates subconsciously, which means that unless you actively seek it out, it will sneak into your routine and affect your decisions. This is also where human bias begins to have an impact on AI systems for recruitment.

Putting artificial intelligence aside for a moment, though, bias is one of HR’s biggest challenges globally. Because this subjectivity is so often subconscious, discrimination at all levels and stages of an organization tends to occur subtly. Let’s say that the recruiter in charge of hiring for a new position has a bias against chequered shirts. It doesn’t really matter why they have this bias, but it’s there, and it can become an irrational and guiding component in the recruitment process.

AI tools in recruitment today

Before we take a closer look at exactly how AI bias is formed and affects recruitment tools, here’s a snapshot of how artificial intelligence has made its way into the human resources field.

How AI recruitment tools can help human resources departments

The main problem that recruiting systems solve with artificial intelligence is freeing up time. The pool of potential candidates for any given job has grown substantially over the last few years. Today, job seekers can come from anywhere in the world, and recruiters are finding themselves with bigger and bigger piles of resumes to go through. AI-powered software can process this information far faster than any human could.

The more sophisticated these systems get, the better they perform. For the recruiter, that means it becomes possible to rely on the AI to provide them with solid candidate shortlists. Instead of skimming through hundreds of applications, HR can focus on the more complicated tasks of hiring, onboarding and employee relations.

Which AI recruitment tools are out there today?

New AI recruitment tools are popping up everywhere. There are already some big players in the game, and many of them claim to remove human bias from the recruiting equation. At present, the main application of artificial intelligence in the field is candidate screening, at different stages of the hiring process.

Candidate screening can be both interactive and analytical when it’s AI-powered. Analytical candidate screening is when the software goes through enormous datasets (like, say, 5000 resumes) and produces a list of candidates that correspond to the job’s criteria. This functionality frees up a considerable amount of time for human recruiters.
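To make the analytical variant concrete, here is a minimal sketch of rule-based resume screening. It is not any particular vendor's method, and the criteria, field names and data are all hypothetical:

```python
# Toy analytical screening: filter a resume set down to candidates
# that match a job's criteria. All criteria and data are hypothetical.
REQUIRED_SKILLS = {"python", "sql"}
MIN_YEARS = 3

def matches(resume: dict) -> bool:
    """True if the resume meets every hard requirement."""
    return (REQUIRED_SKILLS <= set(resume["skills"])
            and resume["years_experience"] >= MIN_YEARS)

resumes = [
    {"name": "A", "skills": ["python", "sql", "excel"], "years_experience": 4},
    {"name": "B", "skills": ["java"], "years_experience": 6},
]
shortlist = [r["name"] for r in resumes if matches(r)]
print(shortlist)  # -> ['A']
```

Real systems layer ranking and language models on top, but even this simple filter shows where the time savings come from: the machine applies the criteria to 5,000 resumes as easily as to two.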

Interactive candidate screening is when applicants interact with the AI system directly. This can take the form of tests, games or problem-solving tasks, or of video and chat interviews. Good AI systems are powerful enough to adapt the level of testing to the individual and to weigh the results together against the requirements of a specific role.

Although some AI service providers offer interviews conducted by an actual physical robot, most platforms are based on either video or chat interviews. These systems, video interviews in particular, have come under a lot of scrutiny for being unethical; we'll take a closer look at why further down in this guide. In practice, these interviews mean that an AI-powered system can screen candidates in the early stages of the hiring process, using preset questions.

Furthermore, some vendors offer facial analysis as part of the video interview, where facial movements, eye contact and other subtleties are analysed and used to determine a candidate’s suitability for a job. The analysis is often based on datasets from existing employees, and so sometimes perpetuates an existing company bias.

What’s fair in a recruitment context?

The question of AI bias in recruitment pops up when you start to look at the concept of fairness. Recruiters are supposed to judge each candidate they meet objectively, and base their evaluation on the position they need to fill. This brings us back to human bias, which tends to manifest at a subconscious level: no matter how hard a person tries to be objective, complete objectivity is almost impossible. Adding to the difficulty, what seems fair to one person isn't necessarily fair to the next.

This creates some serious problems when you're designing and developing a system driven by artificial intelligence. Like a person, software processes information: it takes input, analyzes it, and compiles an output, which in the case of recruitment is a candidate shortlist or ranking report.

The EU recently presented a white paper on legislation for high-risk AI applications, a category that includes AI used for recruitment. The proposal comes as a response to the increased, and largely unregulated, implementation of AI-powered software for recruitment. It outlines five key components, all of which attempt to stipulate fairness in AI recruitment:

  • The training data needs to be good enough to prevent discrimination
  • The developers need to keep track of the data sets used to train the AI
  • The end user should always be informed when they’re dealing with an AI
  • The AI system needs to be good enough to produce reliable results
  • Humans have to have oversight over the AI system, and final say about the decisions it makes


What is artificial intelligence and AI bias?

As simply and generally put as possible, artificial intelligence refers to a machine that can learn, and act on the basis of what it has learned. AIs essentially do this in the same way that humans do: by processing the information that's given to them. This also means that all artificial intelligence systems depend on high-quality training data in order to function optimally.

How does artificial intelligence learn recruitment?

In the world of AI, input is everything. If you want good results, you need to provide the software with good training data: enough solid, high-quality information for it to perform according to the parameters you've set up for it. From a recruitment point of view, that means the following (see the sketch after this list):

  1. The AI needs enough information about which candidate characteristics are valuable, and which aren’t (input, or training data)
  2. You need to provide the AI with information about what a good candidate is (also input or training data)
  3. The AI needs to know what the purpose of the computation is (the parameters, or framework for the processing)
  4. You need to tell the AI which outcome you want, i.e. which kinds of candidates you’re looking for (the objective of the processing)
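As a rough illustration of how these four ingredients fit together, here is a minimal sketch using scikit-learn. Everything in it is hypothetical: the features, the labels and the model choice are stand-ins, not a description of any real recruitment product.

```python
# A minimal sketch of the four ingredients above. All feature names,
# values and labels are hypothetical.
from sklearn.linear_model import LogisticRegression

# 1 & 2: training data, i.e. candidate characteristics plus a label
# encoding what the organization considered a good candidate in the past
X_train = [
    [5, 1, 3],  # [years_experience, has_degree, num_prior_roles]
    [1, 0, 1],
    [8, 1, 5],
    [2, 1, 2],
]
y_train = [1, 0, 1, 0]  # 1 = past "good candidate", 0 = not

# 3: the parameters/framework, here the choice of model and its settings
model = LogisticRegression()

# 4: the objective: learn to flag new applicants that resemble past hires
model.fit(X_train, y_train)
print(model.predict([[4, 1, 2]]))  # score a new, unseen applicant
```

Notice that every step encodes a human judgment: which features to collect, who counts as a "good candidate", and what the model should optimize for.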

The process of providing this data is where AI bias tends to be inflicted in recruitment tools.

What is AI bias?

AI bias is when a recruitment tool produces a skewed output. Artificial intelligence isn’t just supposed to save recruiters time — it’s also supposed to help them evaluate candidates objectively. The problem is, human bias can easily be transferred to the software if you are not careful. It can happen at several stages and in many different ways, but ultimately, the AI that was supposed to be objective won’t be if it learns the same subjectivity that humans struggle with.

Here are some examples of AI bias gone wrong, both in recruitment contexts and in general:

The Amazon recruitment scandal

Amazon was an early adopter of artificial intelligence for recruitment, and initially its system seemed flawless. The company made recruitment considerably more efficient by training on historical data about existing employees. The trouble was that Amazon's workforce already carried a serious gender bias. Eventually it was discovered that the program discriminated against female applicants, a direct result of bias in the training data.

AI bias by Google Ads

In 2015, three researchers from Carnegie Mellon University in Pittsburgh published a study of Google's ad privacy settings and found that the AI system behind them discriminated against users. The paper identified several issues, but the main one was that men were shown ads for coaching services for high-paying jobs far more often than women were.

Gender and skin tone in AI facial recognition

More recently, another study looked at AI bias in facial recognition, focusing primarily on discrimination by gender and skin color. The findings showed clear accuracy differences between men and women, as well as between lighter and darker skin tones. In every classifier tested, women with darker skin tones fared worst, with error rates of up to 35%.

All of these AI biases can be traced back to flaws in design, development or training data.

The ways AI bias in recruitment can be explained

There are a number of ways that AI bias can manifest. It can happen at different stages of system development, and depend on several different factors. There are two main areas to watch out for, though: the training data itself (i.e. the information you provide the AI with) and the framework you tell the AI to process within (i.e. the rules and objectives you put in place for the candidate computations).

Not enough training data

If you know a thing or two about statistics, you’ll know that a data sample needs to be of a certain size in order to produce reliable results. Let’s say you’re in charge of HR and recruitment in a 20-person business, and you’d like to use a recruitment tool powered by artificial intelligence. Because you’re really happy with the current team, you want to base the training data on existing employees.

Based on the task you give the AI system (like ‘Find me a perfect next candidate, based on the people already working here’), it will start to look for patterns in the training data that answer this question. Problem is, with just 20 people to process information from, the results won’t be dependable. Any patterns that the AI finds can’t be confirmed with enough samples. Let’s say that three of your 20-person team got their degrees in London. This is probably a coincidence, but it’s still a pattern. When the sample size is so small, it might mean that the AI will begin to favor new candidates with a degree from a London university.
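You can see the small-sample problem in a few lines of code. This is a purely hypothetical simulation: twenty equally skilled employees, three of whom happen to hold London degrees.

```python
# Hypothetical illustration: with only 20 employees, group averages are
# dominated by noise, so coincidental attributes can look like real signals.
import random

random.seed(42)
performance = [random.gauss(0, 1) for _ in range(20)]  # equally skilled staff

london = performance[:3]   # the 3 colleagues with London degrees
others = performance[3:]   # everyone else

# These two averages will often differ noticeably, purely by chance;
# an AI trained on this sample may read the gap as a hiring signal.
print(sum(london) / len(london))
print(sum(others) / len(others))
```

With thousands of employees the coincidence would wash out; with twenty, the model has no way to tell signal from noise.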

Bias in the training data itself

At the very start of this guide, we told you that AI bias comes from human bias. This correlation becomes particularly noticeable when you look at AI bias in recruitment tools as a result of biased training data. The Amazon example above is a good one. Even if you have a good sample size (like, say, the entire staff at Amazon), existing bias will still cause AI bias to arise. That’s because the computations the system does will be based on an already skewed employee and recruitment culture in the company.

Feed an AI-powered recruitment tool a sample size of 1000 employees in which 800 employees are men, and that tool will favor male candidates in your next hiring process. An AI looks for patterns in the input it’s provided, and produces output based on the information it’s given. When basing training data on current employees, it’s important to be aware of the existing recruitment subjectivity in the organization.
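Here is a synthetic sketch of that 800/200 scenario. The data is fabricated so that men and women are equally skilled, but past hiring decisions favored men; a model trained on those decisions duly learns gender as a "predictive" feature.

```python
# Synthetic data: equal skill across genders, but historically biased
# hiring decisions. The trained model inherits the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
gender = np.array([1] * 800 + [0] * 200)   # 1 = male, 0 = female
skill = rng.normal(0, 1, 1000)             # equal skill by design
noise = rng.normal(0, 0.5, 1000)
hired = (skill + 0.8 * gender + noise > 0.8).astype(int)  # biased history

X = np.column_stack([gender, skill])
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # a large positive weight on gender exposes the bias
```

Nothing in the pipeline is malicious; the model simply reproduces the patterns in its input.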

When the training data collection goes wrong

Let’s say you’re an HR executive who believes that people with a postgraduate degree are better suited for management positions than people with an undergraduate degree. Because of this assumption, you decide to base all training data for the next round of AI-powered hiring on employees with a postgrad. This goes to show how AI bias can also come from subjective training data collection.

Moving forward with this example, the data collection would be severely flawed, and so the AI would end up producing unreliable results. That’s because only data that confirmed your beliefs would be used in the hiring process. And potentially, the ensuing AI bias would discard a perfect management candidate — because they didn’t hold a postgraduate degree.
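In code, this kind of collection bias is just a filter applied before training. A deliberately simplified, hypothetical sketch:

```python
# Hypothetical sketch: filtering the training data on a belief
# ("only postgrads make good managers") bakes that belief into the model.
all_employees = [
    {"degree": "postgrad",  "good_manager": True},
    {"degree": "undergrad", "good_manager": True},   # never seen by the AI
    {"degree": "postgrad",  "good_manager": False},
]

training_data = [e for e in all_employees if e["degree"] == "postgrad"]
# No surviving example shows an undergrad succeeding as a manager,
# so the model cannot learn that such candidates exist.
print(len(training_data))  # 2 of 3 records remain; the rest are invisible
```

The model itself can be flawless; the damage was done before training even started.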

Programming the ranking system

An AI will produce output according to its input, and according to what you've told it to look for. This latter component is important at the design and development stages of creating artificial intelligence software for recruitment. When ranking candidates, the AI follows the parameters that have been set for it. If you've told it to rank by level of education, it'll do that. If you've told it to rank by level of experience, it'll do that.

AI bias often arises when the framework it computes within is subjective. Basically, this means that you’re weighting factors inaccurately, or at least not objectively. This, in turn, produces a ranking and candidate shortlist that’s skewed. Setting up the ranking system requires programmers and developers to consider a whole range of social, cultural and educational data points. When they don’t, it means the AI might miss some really good individuals in the hiring process.
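A minimal sketch of such a parameter-driven ranking, with entirely hypothetical weights, shows exactly where the subjectivity enters:

```python
# Toy candidate ranking. The weights are hypothetical; choosing them is
# precisely where a designer's subjectivity enters the system.
WEIGHTS = {"education": 0.7, "experience": 0.2, "skills_match": 0.1}

def score(candidate: dict) -> float:
    """Weighted sum of normalized (0..1) candidate attributes."""
    return sum(WEIGHTS[k] * candidate[k] for k in WEIGHTS)

candidates = [
    {"name": "A", "education": 1.0, "experience": 0.2, "skills_match": 0.4},
    {"name": "B", "education": 0.4, "experience": 0.9, "skills_match": 0.9},
]
shortlist = sorted(candidates, key=score, reverse=True)
print([c["name"] for c in shortlist])  # -> ['A', 'B']
```

Over-weighting education puts candidate A ahead of B, even though B is the better practical fit; nothing in the math flags this as a problem.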

AI bias from patterns that don’t make sense

AI systems have strong analytical ability. This is not to be mistaken for evaluative ability, which in many ways is what separates AI bias from human bias. Our bias is often subconscious, but when we're made aware of it, we have the capacity to evaluate it and ultimately do something about it. An AI is locked to the parameters of its processing: it can learn, but it can't learn independently of the framework it's been programmed to follow.

In practice, this means that an AI can find patterns that are hugely biased. If it found a pattern that indicated women were best suited for secretary positions, for example, it wouldn’t automatically stop itself to critique this result. What’s important to remember, though, is that biased output can always be traced back to either skewed input or poor design. An AI can’t produce a calculated bias on its own.

The desired outcome — whatever the cost

Because AIs lack the emotive and evaluative abilities that humans have, they will always chase their desired outcomes. And if moderating components aren't incorporated into their design, an AI will do this whatever the cost. Sometimes, removing emotion and evaluation backfires, producing a single-minded, predatory pursuit of the goal that gives way to AI bias.

Let's say a company wants to attract more applicants for the positions it publishes. An AI system is put in place to screen for all individuals who could be prospective candidates. Eventually, the algorithm discovers that it attracts more applicants when it targets people who have been unemployed for a long time. The AI finds hundreds of potential candidates, but none of them meet the minimum requirements of the job post.

Subtleties like these, whether built into an AI system or arising unexpectedly, can easily lead to incorrect outputs. This is why it's so important to be aware not just of the human bias that can be transferred into these systems, but also of the AI bias that can be generated within them.

The recruitment industry’s take on AI bias

AI holds incredible potential in the field of recruitment. In order to make it a viable and trustworthy solution for recruiters and HR departments, though, its learned bias needs to be addressed. Vendors can no longer hide behind the fact that humans also have biases (in many cases much worse than the AI's). Removing as much bias as possible needs to happen for AI recruiting tools to keep expanding beyond the early adopters.

Because so many tools exist in the market already, this is a hot topic, with a range of different opinions and approaches. Here we list some of them:

The AI bias debate

Right now, AI in recruitment is a widely debated topic and opinions range from hugely positive to near-apocalyptic. What the vast majority of both tech and other communities agree on, though, is that AI bias is something that needs to be addressed and recognized in further developments.

The main focus here is looking for innovative solutions that can be added at every stage of the design process. Even though bias is an important factor to consider in AI for recruitment, pretty much every HR department across the globe agrees that hiring processes need streamlining, and AI offers a viable solution to that problem. Generally, both sides of the fence also agree that there's no easy fix for AI bias, and that we need to keep looking for ways to reduce it, the same way we do with human bias.

How the industry is responding to AI bias

On the cautious side

As this guide has shown you, factoring bias into the development of AI tools is difficult. It’s a complicated process, and there’s no simple or single answer. On the cautious side of the fence are people and research communities that believe using too much artificial intelligence will disadvantage candidates, and blindside recruiters and HR professionals.

In 2018, Upturn published a report on how hiring algorithms, by taking human evaluation out of the equation, can end up discriminating unfairly. In a conversation with Business Insider, Aaron Rieke from the organization stressed his concern that the rapid development of AI for recruitment could lead to tools that don't take AI bias sufficiently into account.

Similarly, the documentary Coded Bias grew out of Joy Buolamwini's work at the MIT Media Lab. The project was driven by Buolamwini's MIT thesis, which uncovered serious flaws in facial recognition software. The AI bias she discovered came from poor training data that lacked sufficient representation of darker skin tones, particularly among women.

AI recruitment tool juggernaut HireVue has also come under sharp criticism recently, from research centers and rights groups alike. The Electronic Privacy Information Center filed a complaint against HireVue with the Federal Trade Commission, and petitioned the commission to set the standard for what’s considered fair trade practices in AI tools.

On the optimistic side

On the other side of the fence are those organizations, companies and individuals who argue that AI tools do in fact do a much better job than humans when it comes to bias. Their main argument holds that so long as developers are mindful of the potential for bias during the design stages, it won’t arise in the recruitment tools.

Eric Sydell is the executive VP of innovation at Modern Hire, and is tackling the challenge of AI bias head on. He doesn’t believe that artificial intelligence in recruitment should be banned, or even regulated too stringently. Modern Hire has, however, incorporated a Code of Ethics into their practice, which informs businesses on how to use AI recruitment technology in an ethical way.

Hilton is one of HireVue’s biggest customers — and one of the technology’s most prominent advocates. The AI recruitment software has decreased their hiring time considerably, and Hilton stresses that candidates also have a more enjoyable recruitment experience.

HireVue itself is also taking on the challenge of AI bias in its further software development. The main focus is on redesigning the technology in ways that recognize how human bias can introduce subjectivity into AI computations. HireVue particularly stresses diversity and fairness as key values in its development process moving forward.

How will service providers deal with AI bias in the future?

AI recruitment tools are here to stay, which means vendors are working relentlessly to identify the areas where their software needs to improve. As the examples above have shown, AI bias is clearly one of them: it's self-perpetuating, and it often evolves inside the AI's processing black box, which makes it difficult to address. Here are some potential solutions that we'll likely see in efforts to limit bias in AI-powered software for recruitment.

Stricter self-evaluation practices

Chances are, AI development will be coupled with more comprehensive practices for human evaluation at the problem-framing, design and programming stages. This can be something as simple as a bias checklist, or as involved as running the design plans through software that searches specifically for bias.
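One self-evaluation check that vendors could automate is the "four-fifths rule", a rough test for adverse impact used in US hiring guidance. The group names and counts below are hypothetical:

```python
# Sketch of an automated adverse-impact check (the US "four-fifths rule"):
# flag the system when one group's selection rate falls below 80% of
# another's. All group names and counts are hypothetical.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

rates = {
    "group_a": selection_rate(selected=50, applicants=100),
    "group_b": selection_rate(selected=25, applicants=100),
}

ratio = min(rates.values()) / max(rates.values())
if ratio < 0.8:
    print(f"Possible adverse impact: selection-rate ratio = {ratio:.2f}")
```

A check like this doesn't explain why a disparity exists, but it turns a vague worry about bias into a number someone has to account for.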

Transparent screening

One of the biggest concerns raised by AI critics is the lack of transparency in many existing solutions. It's not immediately clear to end users that they're dealing with a machine and not a person; they don't have control over how their personal data is used; and, worst of all, they're not guaranteed access to their results and so get no chance to improve. Transparent screening will most probably become bigger and bigger within AI for recruitment, both to ensure trust and to abide by upcoming legislation.

AI design for AI bias

On the programming side of things, it's likely that AI recruitment tools will incorporate more and more frameworks that specifically identify and flag bias. In combination with self-evaluation practices, this will make it easier for developers to see which parameters and input conditions need to be in place to make the system as airtight as possible. The challenge remains striking the right balance between ice-cold computation and empathetic evaluation.
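What might such a bias-flagging framework look like in its simplest form? One building block is a proxy check: warn whenever an input feature correlates strongly with a protected attribute. The data and threshold here are hypothetical:

```python
# Sketch of one automated bias flag: warn when an input feature could act
# as a proxy for a protected attribute. Data and threshold are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
gender = rng.integers(0, 2, 500).astype(float)          # protected attribute
years_at_company = 2 * gender + rng.normal(5, 1, 500)   # correlated feature

corr = np.corrcoef(gender, years_at_company)[0, 1]
if abs(corr) > 0.3:  # the cutoff is an illustrative design choice
    print(f"Flag: feature may proxy for gender (r = {corr:.2f})")
```

Production frameworks go much further (counterfactual tests, per-group error rates), but the principle is the same: make the bias measurable so it can be designed out.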

A shift away from the ‘AI removes bias’ discourse

As we've already mentioned, there's growing acceptance of the fact that artificial intelligence is a whole lot more than an objective machine. It's likely that service providers, as part of their preventative measures, will begin to openly acknowledge AI bias. AI adoption rests largely on trust in the software's reliability and ethics, which means vendors will probably have to solve AI bias out in the open, rather than conceal the problem in the shadows.

Regulations, laws and stipulated frameworks

AI for recruitment purposes is considered high-risk by the EU, and there is also a growing consensus in the US that the technology needs laws to abide by in order to ensure fairness. When and if current suggestions become law, service providers won’t just have their own aspirations of ethics to strive for — they may be fined for not following the rules. Because of this, future software powered by artificial intelligence will likely be more transparent and end user-oriented.

AI tools for recruitment hold incredible potential for human resources. And because these tools were born of tech innovation in the first place, it's likely that developers, program designers and service providers will face the AI bias challenge in that same spirit.
