After a year of tragically high unemployment, 2021 is expected to bring a major hiring boom. According to the U.S. Bureau of Labor Statistics, the number of job openings reached a series high of 8.1 million on the last day of March. Although the April jobs report came in lower than expected, many experts forecast that unemployment will continue to decline as vaccination efforts ramp up and states reopen.

By Rena Nigam

During the pandemic, HR professionals were forced to move to an entirely digital job-search process, which naturally increased reliance on hiring technology. However, as we continue to embrace new technology that makes hiring more efficient, it is important to remember that this technology is a double-edged sword. Two and a half years of building hiring technology has shown us that technology can’t do it all. It’s an invaluable tool, but you cannot eliminate the human element of hiring.


These days, 88% of companies use technology in their hiring process, according to Mercer, largely to help narrow the applicant pool. Nearly all of these tools are intended to automate rejection; that is, to filter out segments of candidates based on certain qualities or the lack thereof (missing a key requirement, wrong location, etc.). The issue is that with many of these tools, certain groups and demographics are disproportionately filtered out at this early stage of the job search. What happens when technology isn’t rejecting applicants for lacking certain skills, but for having certain names? Or because they attended a women’s college? Here are a few key areas to watch, and how to combat bias within them.

  1. Online job ads are the new newspaper ad, but not as straightforward, and in some cases they may be biased. Companies aren’t legally allowed to target job advertisements based on protected characteristics, but studies show that large social media companies may be more likely to show specific kinds of job ads to particular cohorts. While these algorithms may be optimizing for engagement, they still perpetuate harmful patterns. For example, a study found that when Google believed job seekers were men, the algorithm was more likely to show them ads for high-paying jobs. So while an employer isn’t permitted to advertise based on race, age or gender, the platforms they use to boost jobs on social media, job boards and other online tools may skew delivery anyway. As a company, keep a close eye on these metrics. Nearly all social media platforms will show you the gender breakdown of your audience when you boost a post. If you notice a pattern of bias, consider changing the way you boost the ads: choose new media, interests, locations, etc.
  2. Facial recognition software has been shown to struggle to identify people of color, yet there is currently no accountability process for how this AI is trained. Video interview tools have become so integrated into the hiring process that colleges have begun offering classes instructing students how to conduct themselves on video. While companies often assure job seekers that a human will review the interview recording, many popular video interviewing tools claim their technology can judge candidates on language, facial expressions, enthusiasm and more. Numerous studies have shown that facial and voice recognition technologies perform worse for underrepresented groups; voice recognition, for instance, is notoriously poor at understanding women’s voices and non-native accents. These issues could be mitigated by increasing the diversity of the data sets the technology is trained on. Yet while a company can sell and use software to make hiring decisions, nobody is checking whether the training data includes enough women’s faces, foreign accents, individuals with disabilities or differing dialects. If a tool can’t understand a candidate’s accent, how can it judge their fitness for a job? Consider where in your search process these tools are being used. Are certain candidates doing well? Are others being filtered out? Measurement and accountability are key for any technology; a sketch of this kind of per-stage measurement follows this list.
  3. AI grading systems are perhaps the most dangerous for amplifying bias through technology in hiring. Amazon is the canonical example. In 2015, Amazon’s machine-learning specialists built a resume-grading program to identify strong candidates. They trained it on historical data about successful candidates at the company, a heavily male-dominated pool, and in doing so unknowingly taught the program to discriminate against women. It began to favor traditionally male language patterns (the use of “power” words like “founded” and “led” over “helped” or “was part of a team”), treated women’s colleges as less desirable education and marked the words “woman” or “women’s” against the candidate. “Basketball team captain” was good, but “women’s basketball team captain” was bad. As soon as the company noticed the issue, it scrapped the tool. But the episode points to a larger problem: while candidates don’t state their gender, race, age or other personal markers on their resumes, simple proxies for these attributes exist, and training a tool on data in which one group is overrepresented will disadvantage the others; a toy illustration of this failure mode follows this list. One study found a resume-screening tool favoring candidates named Jared who played lacrosse in high school: again, proxies for race and gender. If you use an AI-based hiring tool, it’s imperative that you track candidate demographic breakdowns as they move through the system, and as they exit it.
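To make that failure mode concrete, here is a toy sketch, using entirely hypothetical data rather than any vendor’s real system, of how a classifier trained on skewed historical hiring outcomes learns a negative weight for a proxy token like “women’s,” even though the token says nothing about candidate quality:

```python
# Toy illustration of bias amplification: hypothetical data, not any
# vendor's real system. We train a classifier on "historical" hiring
# outcomes from a male-dominated pool and inspect what it learns.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Two resume features: mentions of "led" and of "women's" (e.g.,
# "women's basketball team captain"). Both are hypothetical proxies.
led = rng.integers(0, 2, n)
womens = rng.integers(0, 2, n)

# Skewed historical labels: in this pool, resumes mentioning "women's"
# were rarely marked as hires, regardless of actual qualifications.
hired = ((led == 1) & (womens == 0)).astype(int)

model = LogisticRegression().fit(np.column_stack([led, womens]), hired)
print(dict(zip(["led", "women's"], model.coef_[0].round(2))))
# The learned weight on "women's" comes out strongly negative: the model
# has encoded the historical skew, not candidate quality.
```

A real resume grader is far more complex, but the mechanism is the same: whatever correlates with past outcomes gets rewarded, including proxies for protected attributes.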
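And here is a minimal sketch of the per-stage measurement recommended above, assuming you can export per-stage candidate counts broken down by self-reported demographic group (the stage and group names here are placeholders). It flags any stage where a group’s pass-through rate falls below four-fifths of the best group’s rate, echoing the EEOC’s 80% adverse-impact guideline:

```python
# Minimal funnel-monitoring sketch. Assumes a hypothetical export of
# per-stage counts: (stage, group, candidates_in, candidates_passed).
from collections import defaultdict

funnel = [
    ("resume_screen",   "group_a", 400, 200),
    ("resume_screen",   "group_b", 300,  90),
    ("video_interview", "group_a", 200, 100),
    ("video_interview", "group_b",  90,  40),
]

def selection_rates(rows):
    """Pass-through rate per stage and demographic group."""
    rates = defaultdict(dict)
    for stage, group, came_in, passed in rows:
        rates[stage][group] = passed / came_in
    return rates

def flag_adverse_impact(rates, threshold=0.8):
    """Flag any group whose rate is below 80% of the best group's rate."""
    for stage, by_group in rates.items():
        best = max(by_group.values())
        for group, rate in sorted(by_group.items()):
            ratio = rate / best
            status = "FLAG" if ratio < threshold else "ok"
            print(f"{stage:>16}  {group}: rate={rate:.2f} ratio={ratio:.2f} {status}")

flag_adverse_impact(selection_rates(funnel))
```

Running this against real exports, at every stage where a tool filters candidates, turns “keep track of demographic breakdowns” from a slogan into a recurring report.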

As we grapple with the potentially devastating effects of letting machines make important decisions for us, it’s imperative that we understand how these algorithms were built and the implications of how they’re being used. The industry needs a larger accountability process and shared standards of measurement for hiring technology. Technology is a tool we can use to hire, but it cannot and should not replace human judgment in the hiring process.
