Struggling to Get a Job? Artificial Intelligence Could Be the Reason Why

Applying for jobs is hard work. We all know the drill—you polish your cover letter, spruce up an old resume and summon the courage to click send. After that, your career is in the hands of the hiring manager.

Except that isn't always the case. In many instances, it isn't an HR professional who tosses your application aside; it is artificial intelligence acting as the barrier to entry. While this isn't a problem in itself, since AI can reduce workloads by rapidly filtering applicants, the issue is that within these systems lies the possibility of bias.

It is illegal in the U.S. for employers to discriminate against a job applicant because of their race, color, sex, religion, disability, national origin, age (40 or older) or genetic information. However, these AI hiring tools are often inadvertently doing just that, and there are no federal laws in the U.S. specifically regulating how they are used.

Kevin Butler is CEO at Centigy, a company that assesses human resources AI software. He told Newsweek there are "over a thousand AI HR tools on the market right now. Lots of organizations are using them, the uptake is pretty high."

These programs range from screening resumes and rating competency to using facial recognition and analyzing a person's voice, body language or even, controversially, their emotions.

But how can a computer system, devoid of feeling or social prejudices, discriminate?

Well, one recent example of HR AI software that may have displayed racial bias was a facial identification tool that Uber used to run security checks on its drivers.

The software, created by Microsoft, has a record of failing to recognize darker-skinned faces: a 2018 study recorded a 20.8 percent failure rate for women with darker skin, compared with 0 percent for white men.

Fourteen Uber couriers lost their jobs after the new security measure was adopted last April, and claimed this was because the AI failed to recognize their darker skin.

An Uber spokesperson told Newsweek that while "there is always room for improvement" the company ensures the software is "fair and important" with additional human reviews.

Dr. Robert Elliot Smith, chief of science and technology at Centigy, explained to Newsweek that these problems can arise because "many facial recognition software programs have been trained primarily with white faces, oftentimes with white male faces."

He said: "When you've trained a program on data, it will automatically absorb whatever biases existed in the data." Issues around racial bias also arise because hiring AI is often trained on historical data, and classifications of race have evolved over time.
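Smith's point is easy to demonstrate in miniature. The sketch below is purely illustrative and uses invented synthetic data rather than any vendor's real system: a model trained on past hiring decisions that favored one group reproduces that preference even when the protected attribute is withheld, because a correlated proxy feature leaks the same information.

```python
# Illustrative sketch only: synthetic data, not any real vendor's system.
# A model trained on biased historical decisions reproduces that bias,
# even though `group` is never given to it directly, because a correlated
# proxy feature (a hypothetical zip-code flag here) leaks the same signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)             # 0 or 1; the protected attribute
skill = rng.normal(0, 1, n)               # identically distributed in both groups
proxy = (group + rng.normal(0, 0.3, n) > 0.5).astype(float)  # correlated proxy

# Biased historical labels: past recruiters favored group 0 at equal skill.
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, n) > 1.0).astype(int)

# The model never sees `group`, only skill and the proxy feature.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for g in (0, 1):
    print(f"predicted hire rate, group {g}: {pred[group == g].mean():.2%}")
# Group 0 comes out well ahead despite identical skill distributions.
```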

Smith, who wrote the book Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All, said: "When I was a kid, growing up in Alabama, certainly you were asked to record your personal data but if you were asked about race it would be white or Black. They might have had 'other,' or Native American perhaps.

A stock image of a Black woman at her laptop, with computer graphics overlaid. AI hiring tools are currently unregulated, which means they are open to biases. Getty Images

"Well America is a much more diverse place than that now, and categories have expanded. If you think about when 50 years ago there was segregation of races and a lot of people didn't identify as mixed, it just didn't exist when people were segregated."

While many hiring tools may have built-in racial biases because of such blind spots, historical data can also produce gender-based discrimination.

Smith explained: "A lot of those programs, if you would say 'the doctor came into the room', they would insert the pronoun 'he' because the vast majority of data would be that men are doctors.

"If it is trained on historical data it can very subtly and very undetectably absorb the language biases that are from the past. There are tons of examples of gender biases in computer programming."

This is problematic when a company is using a tool to seek out candidates, as the tool may favor men. In 2018 Amazon scrapped plans for its own AI hiring system after Reuters reported that it had "taught itself that male candidates were preferable" because men more often had tech-industry experience on their resumes.

An Amazon spokesperson told Newsweek it had trialed the system but due to its failings it was "never used by Amazon recruiters to evaluate candidates."
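Whatever happened inside Amazon's system, the failure mode Reuters described is simple to recreate in miniature. In the sketch below, the four resumes and their labels are invented for illustration: a text classifier trained on biased historical outcomes learns to penalize a gendered token, and inspecting the learned weights is one basic way an audit can surface that.

```python
# Illustrative sketch of how a resume screener can "teach itself" a
# gendered signal, and how inspecting the model can expose it. The
# resumes and labels are invented; Amazon's actual system is not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, java developer",          # historically hired
    "java developer, open source contributor",        # historically hired
    "captain of women's chess club, java developer",  # historically rejected
    "women's coding society lead, java developer",    # historically rejected
]
hired = [1, 1, 0, 0]  # biased historical outcomes

vec = TfidfVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Rank tokens by learned weight; strongly negative ones are being penalized.
weights = sorted(zip(vec.get_feature_names_out(), model.coef_[0]),
                 key=lambda t: t[1])
print(weights[:3])  # expect the token "women" to surface with a negative weight
```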

However, there have also been examples of AI systems discriminating even before someone applies for a job. Back in 2015 the MIT Technology Review reported that Google's ad algorithms meant men were more likely than women to be shown advertisements for better-paid and more senior roles.

AI hiring tools are often inadvertently discriminating against a job applicant because of their race, sex, age or religion. iStock

For people who are transgender, Butler explained, AI hiring tools can also be unfair. He said: "I was talking to a representative from the trans community a few weeks ago. She struggles to get a job in some organizations because she can't pass the background check. The system is trying to find her name, and can only go back so far, because her name was changed a few years ago … You're just not hired because it is too difficult."

However, one of the most contentious uses of an AI hiring tool pertains not to what a person is, but to how a person feels. In January HireVue, one of the industry leaders in the hiring-tech sector, scrapped a widely criticized feature that used candidates' facial expressions from video interviews as a factor its algorithms considered.

HireVue told Newsweek its decision to remove the visual analysis was based on its own research, and that it had recently been independently audited twice with no bias issues found.

Smith explained how analyzing facial expressions could be discriminatory. "Take if you are a person who is a minority part of neurodiversity, you are a person who emotionally reacts differently, a person on the spectrum, for instance," he said. "You are going to be discriminated against based on emotion detection because the way that you encode emotion and react to emotion is entirely different from the average person."

But there is good news: these inadvertent biases will not go legally unchecked for much longer. The European Union has proposed regulation governing artificial intelligence, which covers the HR sector. Within a year, companies using these hiring tools will need access to employment attorneys who understand technology and AI, as well as the EU's privacy and tech regulations.

In America, too, regulation is beginning to gain traction. In February 2020 New York City introduced a bill regulating the use of AI in hiring, compensation and other HR decisions. If enacted, it will prohibit the sale of AI hiring technology unless it has been audited for bias or has passed certain anti-bias tests in the year prior to its sale.
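What such a bias audit might actually compute is not spelled out in the bill, but a common screen for disparate impact is the EEOC's "four-fifths rule": no group's selection rate should fall below 80 percent of the most-selected group's rate. A generic sketch:

```python
# Generic four-fifths-rule check on a tool's screening outcomes.
# The NYC bill's exact audit requirements are not specified here,
# so this is a sketch of one standard disparate-impact test.
from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, was_selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += int(ok)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(decisions):
    rates = selection_rates(decisions)
    best = max(rates.values())
    # Flag any group selected at under 80% of the most-selected group's rate.
    return {g: (r / best >= 0.8) for g, r in rates.items()}

# Hypothetical screening outcomes from an AI tool:
outcomes = [("A", True)] * 50 + [("A", False)] * 50 \
         + [("B", True)] * 30 + [("B", False)] * 70
print(four_fifths_check(outcomes))  # {'A': True, 'B': False} -> group B fails
```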

However, Smith predicts that before the U.S. adopts widespread regulation, there will be legal battles. He said: "America is very anti-regulation compared to European countries. However, it is very driven by litigation—people get sued.

"There will be people out there looking for patterns of decision making in algorithms that effectively violate anti-discrimination laws. So what we may see in the future in America is before there is regulation there may be lawsuits. Those lawsuits could be a class actions."

Newsweek has contacted all the companies mentioned in this article for comment.

Update 05/20/21, 12:40 p.m. ET: This article was updated to add a comment from HireVue.