Is Your Business Ready for EEOC’s Scrutiny of AI Hiring Technology?

The use of artificial intelligence (AI) in the recruiting and hiring process has grown rapidly in recent years. As businesses seek to lower hiring costs and reduce potential discrimination claims, many have turned to AI as an efficient solution for handling functions such as locating talent, screening applicants, performing skills-based tests, and even administering certain phases of the pre-hire interview process.

However, as the adoption of these AI tools has risen, so too have concerns about the risks of introducing this emerging technology into the workplace. Those concerns prompted guidance issued by the Equal Employment Opportunity Commission (EEOC) in May 2023, which outlines the implications and potential violations employers should consider when using AI-assisted technology to find their next great employee.

Here’s a closer look at what employers need to consider as the EEOC continues to monitor AI hiring technology and its growing presence in the workplace.

Disparate Impact

While automating various aspects of hiring (and post-hire performance management) processes can eliminate the potential for intentional discrimination, this isn’t the only type of discrimination federal and state antidiscrimination laws prohibit. Discrimination can also occur when employers use tests or selection procedures that unintentionally disproportionately exclude individuals based on one or more protected characteristics.

This is known as “disparate impact” or “adverse impact” discrimination.

The protected characteristics that could form the basis for disparate impact discrimination come from several federal antidiscrimination laws. Title VII of the Civil Rights Act of 1964 protects against discrimination based on race, color, national origin, religion, and sex, as well as sex-related factors such as pregnancy, sexual orientation, and gender identity.

The Americans with Disabilities Act (ADA) prohibits discrimination based on actual, perceived, or historical disability, and the Age Discrimination in Employment Act (ADEA) protects individuals aged 40 or older from discrimination.

In the case of AI, if the tool a business uses inadvertently screens out individuals with physical or mental disabilities (e.g., by assessing candidates based on their keystrokes, thereby excluding individuals who can’t type because of a disability) or poses questions that may be more familiar to one race, sex, or cultural group than another, this could result in disparate impact discrimination.

EEOC Guidance

The EEOC’s May 2023 guidance confirms that rooting out AI-based discrimination is among its top strategic priorities.

The guidance also confirms that when such discrimination occurs, the agency will hold the employer, not the AI vendor, responsible. That means the employer could be liable for the same types of damages available for intentional discrimination, including back pay, front pay, emotional distress and other compensatory damages, and attorneys’ fees.

Issues to Consider Before Implementing AI Tools

Because of the risks involved, you should consult with employment counsel before implementing AI tools in your hiring and performance management processes. Although not an exhaustive list, the following are strategies counsel can use to help your business mitigate risk.

Question your AI vendor about the diversity and anti-bias mechanisms it builds into its products. Many vendors claim their AI tools actually foster diversity. By selecting vendors that prioritize diversity and by asking them to explain how their products achieve this goal, you can decrease the likelihood your AI solutions will yield a discrimination finding.

Make sure you understand what your AI product measures and how it measures it. As noted above, measuring typing speed or keystrokes or using culturally biased hypotheticals can increase the likelihood an AI tool will be deemed discriminatory. By questioning vendors in advance about the specific measuring tools built into the AI product, you can more easily distinguish between helpful and potentially costly AI.

Ask for your AI vendor’s performance statistics. Determining whether an AI-based technology causes a disparate impact involves a complex statistical analysis. While not applied in every case, one rule of thumb the EEOC uses in assessing potential disparate impact is the “four-fifths rule.” This rule compares the percentage of candidates from one protected classification (e.g., men) who are hired, promoted, or otherwise selected through the AI technology with the percentage of candidates selected from other protected classifications (e.g., women).

If the percentage of women chosen, when divided by the percentage of men chosen, is less than 80% (or four-fifths), this can indicate discrimination occurred.

Although even a passing score of 80% or more doesn’t necessarily protect you from liability, you should learn whether your AI vendors have analyzed their products using the four-fifths rule and other statistical and practical analyses and what the results of those analyses show.
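
To make the arithmetic concrete, here’s a minimal sketch of the four-fifths calculation in Python. The function name and the selection counts are hypothetical illustrations, and a real disparate impact analysis would involve far more than this single ratio.

```python
# A minimal sketch of the four-fifths calculation, assuming you can pull
# selection counts for two groups from your applicant-tracking system.
# The function name and all counts below are hypothetical.

def four_fifths_check(selected_a: int, applicants_a: int,
                      selected_b: int, applicants_b: int) -> bool:
    """Return True if the lower selection rate is at least four-fifths
    (80%) of the higher selection rate."""
    rate_a = selected_a / applicants_a
    rate_b = selected_b / applicants_b
    return min(rate_a, rate_b) / max(rate_a, rate_b) >= 0.8

# Example: 48 of 80 men selected (60%) vs. 12 of 40 women selected (30%).
# 30% / 60% = 0.5, which falls below the 0.8 threshold.
print(four_fifths_check(48, 80, 12, 40))  # False -- warrants closer review
```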

Test your company’s AI results annually. Just as you should question your AI vendors about their statistical findings before implementing an AI hiring solution, you should also self-monitor after the AI product is implemented. At least annually, consider running your own internal statistical analyses to determine whether the AI product yields fair, nondiscriminatory results.
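
As one illustration of what such a self-audit might look like, the hypothetical sketch below computes each group’s selection rate from internal hiring records and flags any group whose rate falls below four-fifths of the highest group’s rate. The group labels and counts are placeholders, not real data, and this is no substitute for a counsel-guided statistical analysis.

```python
# A hypothetical annual self-audit sketch: compute each group's selection
# rate and flag any group whose rate falls below four-fifths of the
# highest group's rate. Group labels and counts are placeholders.

selections = {
    "group_a": (90, 150),   # (selected, total applicants)
    "group_b": (40, 100),
    "group_c": (55, 110),
}

rates = {group: sel / total for group, (sel, total) in selections.items()}
benchmark = max(rates.values())

for group, rate in rates.items():
    status = "REVIEW" if rate < 0.8 * benchmark else "ok"
    print(f"{group}: selection rate {rate:.0%} ({status})")
```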

Offer accommodations to disabled individuals. When candidates disclose that they have a physical or mental disability that prevents (or limits) their participation in AI-driven processes, you should work with them to determine whether there’s another hiring or performance management process, or a reasonable accommodation, that can be used instead of the AI.

When in doubt, seek indemnification. Because the AI vendor is ultimately in the best position to design AI tools to avoid both intentional and unintentional discrimination, consider building into the vendor agreement indemnity language that protects your business in case the vendor fails to design its AI in a manner that prevents intentional and/or unintended bias.

Shannon S. Pierce is an attorney with Fennemore Law in Reno, Nevada. You can reach her at spierce@fennemorelaw.com.
