How an Amazon AI Recruiting Engine Taught Itself to Discriminate Against Women
Amazon ran into the limits of machine learning when it tried to build an AI engine to screen for top talent in a pool of job applicants.
Amazon wanted an automated process for screening job applicants, so it developed an engine that used artificial intelligence to screen for talent. What the company didn't expect was for the engine to teach itself to discriminate against women. Once it became clear that the recruiting engine was biased against women and could not reliably be fixed, it was scrapped.
The AI recruiting tool was developed by a team of computer programmers to review the resumes of job applicants. The experimental hiring machine awarded job candidates scores on a scale of one to five. It was designed to pick the best five out of every 100 resumes so that recruiters wouldn't have to spend hours vetting applications to make a pick.
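The top-five-of-100 selection described above is a standard top-k ranking step. A minimal sketch, with hypothetical resume names and randomly generated scores standing in for the model's one-to-five ratings:

```python
# Toy sketch: select the five highest-scoring resumes out of 100.
# Names and scores are hypothetical; a real model would supply the scores.
import heapq
import random

random.seed(7)
# 100 resumes, each paired with a score on a one-to-five scale.
resumes = [(f"resume_{i}", round(random.uniform(1, 5), 2)) for i in range(100)]

# heapq.nlargest returns the k items with the highest key values.
top_five = heapq.nlargest(5, resumes, key=lambda r: r[1])
print([name for name, _ in top_five])
```

The recruiter would then only ever see the five surfaced candidates, which is why any bias baked into the scores directly shapes who gets considered.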
The AI Tool Reengineered Itself to Penalize Women Based On Certain Trigger Words
Development of the recruiting engine began in 2014. But by 2015, Amazon management found that the secret tool was not rating job candidates in a gender-neutral way. Because men had submitted most of the applications over the previous 10 years, the tool learned this pattern from its training data and gave preference to male candidates.
The recruiting engine had effectively taught itself to penalize resumes containing trigger words such as "women's," as in "women's chess club captain," among other gender-based words. Applicants who graduated from all-women's colleges were also downgraded by the tool.
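The mechanism behind this is not mysterious. A minimal, hypothetical sketch (toy data, a crude log-odds word model rather than Amazon's actual system) shows how a model trained on historically skewed hiring outcomes learns a negative weight for a word like "women's" with no explicit instruction to do so:

```python
# Illustrative toy example: bias learned purely from skewed training labels.
from collections import Counter
from math import log

# Toy training set of (resume words, hired?) pairs. The skew mirrors a
# decade of mostly male hires: "women's" appears mainly in rejections.
training = [
    (["captain", "chess", "club"], True),
    (["software", "engineer"], True),
    (["software", "engineer"], True),
    (["women's", "chess", "club", "captain"], False),
    (["women's", "college", "engineer"], False),
    (["sales", "associate"], False),
]

def word_weights(data, smoothing=1.0):
    """Log-odds of 'hired' per word, with add-one smoothing."""
    hired, rejected = Counter(), Counter()
    for words, label in data:
        (hired if label else rejected).update(words)
    vocab = set(hired) | set(rejected)
    return {w: log((hired[w] + smoothing) / (rejected[w] + smoothing))
            for w in vocab}

weights = word_weights(training)

def score(words):
    """Sum of learned word weights; higher means 'more hireable'."""
    return sum(weights.get(w, 0.0) for w in words)

# "women's" gets a negative weight purely from the skewed data, so an
# otherwise identical resume scores lower when it contains the word.
print(weights["women's"] < 0)
print(score(["women's", "chess", "club", "captain"])
      < score(["chess", "club", "captain"]))
```

The model is not told anything about gender; it simply reproduces the correlations in its training history, which is exactly the failure mode reported here.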
Amazon's programming team edited the AI powering the recruiting engine to neutralize the gender trigger words and other potentially biased signals. However, the company was never fully confident the engine wouldn't find other ways of sorting candidates that proved discriminatory. So Amazon disbanded the AI team and retired the engine.
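Amazon's lack of confidence is well founded: filtering explicit trigger words does not remove bias that correlated "proxy" features still carry. A hypothetical sketch (toy data; "smithville" is an invented stand-in for an all-women's college name) illustrates the problem:

```python
# Toy example: blocking an explicit trigger word leaves proxy bias intact.
from collections import Counter
from math import log

# "smithville" co-occurs with the blocked word in the historical data,
# so it absorbs the same penalty.
training = [
    (["software", "engineer", "stateu"], True),
    (["software", "engineer", "stateu"], True),
    (["women's", "college", "smithville", "engineer"], False),
    (["women's", "college", "smithville", "engineer"], False),
]

BLOCKLIST = {"women's"}  # the explicit trigger word is filtered out

def word_weights(data, smoothing=1.0):
    """Log-odds of 'hired' per word, ignoring blocklisted words."""
    hired, rejected = Counter(), Counter()
    for words, label in data:
        kept = [w for w in words if w not in BLOCKLIST]
        (hired if label else rejected).update(kept)
    vocab = set(hired) | set(rejected)
    return {w: log((hired[w] + smoothing) / (rejected[w] + smoothing))
            for w in vocab}

weights = word_weights(training)

print("women's" in weights)        # False: trigger word was filtered out
print(weights["smithville"] < 0)   # True: the proxy penalty remains
```

This is why editing out individual words could never guarantee a fair engine: any feature correlated with gender in the training history can quietly take over the discriminatory role.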
Reuters reported that Amazon said it looked at the recommendations made by the AI engine but never solely relied on that data for screening talent.
The Limits of Machine Learning
The failed Amazon project underscores the limitations of machine learning. The retail company is not the only major organization to deploy artificial intelligence in hiring; Hilton Worldwide Holdings Inc. and Goldman Sachs Group Inc. have said they are doing the same. A 2017 CareerBuilder survey found that 55 percent of U.S. human resource managers expect to rely on AI for employee recruitment in the coming years.
However, machine learning specialists such as Nihar Shah, who teaches machine learning at Carnegie Mellon University, told Reuters that AI tools are not infallible. He said it is best not to rely entirely on AI tools, since they are subject to unforeseen and inexplicable glitches. Unfortunately, gender bias was not the only problem found with Amazon's recruitment tool: it sometimes recommended unqualified candidates, some of whom were hired in error.
“I certainly would not trust any AI system today to make a hiring decision on its own,” said John Jersin, vice president of Talent Solutions at LinkedIn. “The technology is just not ready yet.”
But Amazon is not done yet. The company has set up another AI team to develop an automated employment screening tool, this time with an emphasis on diversity. Meanwhile, a watered-down version of the first failed tool is used for basic tasks, such as removing duplicate candidate profiles from the company's database.