Are Technological Advancements and AI Creating Instances of Disability Discrimination?

The ADA and the Impact of Using AI in Employment-Related Decisions

The Equal Employment Opportunity Commission (EEOC) recently issued a technical assistance document (TA) addressing how existing ADA requirements may apply to the use of artificial intelligence (AI) in employment-related decision-making. The TA discusses how these technologically advanced tools may disadvantage job applicants and employees with disabilities. According to the TA, “When this occurs, employers may risk violating federal Equal Employment Opportunity (“EEO”) laws that protect individuals with disabilities.”

For employers, the use of AI is becoming more prevalent because of its business advantages. It streamlines both hiring and employee evaluation, improving efficiency and accuracy while reducing costs. Employers understand the value automation provides in freeing up employees’ time for other matters. In an effort to increase automation, many have expanded their use of a wide variety of technological tools, including AI, for hiring, monitoring work performance, and determining salaries and raises.

Some of the more common AI tools include “chatbots” that ask job candidates about their qualifications, video interviewing software that evaluates facial expressions and speech patterns, testing software, and algorithm-based resume scanners.

Are the Cons Outweighing the Pros?

Despite the benefits, there are inherent problems in using AI, including the potential for bias against those with disabilities. Relying on AI to make hiring decisions or monitor employee performance could adversely impact and discriminate against individuals with disabilities, because the software cannot adequately take into account disability status or an individual’s need for accommodation. For example, if an algorithm detects a gap in employment, it may dismiss the candidate simply because of that gap, with no opportunity for the individual to explain that an injury or illness caused a disability that must now be accommodated under the ADA.

Ultimately, AI makes inferences based on background information, and the EEOC is questioning whether this process will screen out candidates based on traits related to disabilities even when they have no impact on the candidate’s ability to perform the job.
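To make this concrete, here is a purely hypothetical sketch of the kind of naive screening rule the EEOC is concerned about. The function names, the six-month threshold, and the sample work history are all illustrative assumptions, not taken from any real screening product; the point is only that a hard-coded rule rejects the candidate with no chance to explain a disability-related gap.

```python
from datetime import date

def months_between(start: date, end: date) -> int:
    """Whole months between two dates."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def has_long_gap(jobs: list[tuple[date, date]], max_gap_months: int = 6) -> bool:
    """Hypothetical screening rule: flag any candidate whose history
    contains a gap longer than the threshold between consecutive jobs.
    `jobs` is a list of (start, end) dates, sorted by start date."""
    for (_, prev_end), (next_start, _) in zip(jobs, jobs[1:]):
        if months_between(prev_end, next_start) > max_gap_months:
            return True
    return False

# A candidate with a 10-month gap (which could reflect medical leave for
# a disability) is screened out automatically, with no opportunity to
# explain the gap or request an accommodation.
history = [(date(2015, 1, 1), date(2019, 3, 1)),
           (date(2020, 1, 1), date(2022, 5, 1))]
print(has_long_gap(history))  # True -> candidate rejected
```

Nothing in this rule asks *why* the gap exists, which is exactly the inference-without-context problem the TA describes.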

We’ve seen discrimination issues result from the use of AI before, namely with respect to race and gender. For example, a manufacturer posted a job and excluded certain zip codes because they were not on its delivery route. That could be a completely legitimate business decision. But if the excluded zip codes cover an area with a large minority population, the company could find itself in trouble with the EEOC.

The TA cautions against the same type of behavior with respect to those with disabilities. While developers have taken steps to reduce bias, to date those efforts have focused on race and gender, not disability status. In a question-and-answer format, the TA details various instances of the use of algorithmic decision-making tools with regard to the ADA in general, reasonable accommodations, the screening out of qualified individuals with disabilities, and medical examinations.

Specifically, the TA lists three situations in which an employer’s use of AI could potentially violate the ADA:

  1. Failure to comply with the duty to provide reasonable accommodations to applicants and employees due to algorithmic evaluations.
  2. Use of algorithmic decision-making tools that screen out (intentionally or unintentionally) qualified candidates and discriminate against those with disabilities who are able to do the job with reasonable accommodations.
  3. Use of tools that fail to comply with the ADA’s restrictions on disability-related inquiries.

Recommended Practices for Employers

The TA also provides recommended practices to help employers maintain ADA compliance when using AI decision-making tools. They include:

  • To help ensure they provide reasonable accommodations:
    • Train staff to recognize and promptly process requests for reasonable accommodation.
    • Develop alternate means of rating job applicants or employees when the current evaluation process is not available or has been shown to create a disadvantage to someone.
    • Instruct any third party administering an algorithmic decision-making tool on their behalf to promptly forward all requests for accommodation or enter into an agreement for the third party to provide reasonable accommodations.
  • To help to minimize the chances that they will create a disadvantage to individuals with disabilities:
    • Use algorithmic decision-making tools that have been designed to be accessible to individuals with disabilities.
    • Inform all job applicants and employees that reasonable accommodations are available and provide clear instructions for requesting such accommodations.
    • Clearly describe the traits the algorithm is designed to assess, the method by which those traits are assessed, and the variables and factors that may affect the rating.
  • To help minimize the chances that algorithmic decision-making tools will assign poor ratings to individuals who are able to perform the essential functions of the job, with a reasonable accommodation:
    • Ensure that the algorithmic decision-making tools only measure abilities or qualifications that are truly necessary for the job.
    • Ensure that such abilities and qualifications are measured directly, rather than by way of characteristics or scores that are correlated with those abilities or qualifications.
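The last recommendation, measuring abilities directly rather than through correlated characteristics, can be illustrated with a hypothetical sketch. The scenario, function names, and thresholds below are all invented for illustration: a data-entry role screened on raw typing speed (a proxy that can penalize an applicant using assistive technology) versus on accuracy in the actual task.

```python
def proxy_score(words_per_minute: float) -> bool:
    """Proxy screen: pass/fail on raw typing speed. This correlated
    characteristic can penalize an applicant who uses assistive
    technology yet produces fully accurate work."""
    return words_per_minute >= 60

def direct_score(entries_correct: int, entries_total: int,
                 min_accuracy: float = 0.98) -> bool:
    """Direct screen: measure the ability the job actually requires,
    here accuracy on a sample data-entry task."""
    return entries_total > 0 and entries_correct / entries_total >= min_accuracy

# A hypothetical applicant using a screen reader: slower raw speed,
# but nearly perfect on the task itself.
print(proxy_score(35))         # False -> screened out by the proxy
print(direct_score(199, 200))  # True  -> passes the direct measure
```

The proxy and the direct measure disagree on the same applicant, which is how a correlated score can screen out someone who can perform the essential functions of the job.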

The EEOC urges employers to confirm that the tool does not ask questions likely to elicit information about a disability or seek information about an individual’s physical or mental impairment or health unless such inquiries are related to a request for reasonable accommodation. They also shouldn’t rely on AI data that is not relevant to whether a candidate can perform essential job functions. Nor should they rely on a third party’s representation that an AI tool is unbiased. In fact, the EEOC specifically states that this reliance will not protect the employer from liability for discrimination that resulted from using the AI tool.

With regard to accommodations, employers should also ensure that they provide the same accommodations whether or not AI is used. For example, if they offer more time to complete a pre-employment evaluation or provide an audio option for the visually impaired, they must offer it in both scenarios and ensure there is an interactive dialogue in all circumstances.

Other best practices include informing candidates and employees of the evaluation process and explaining the factors being considered, adding human checks to review AI analysis before a final determination is made, and incorporating methods so candidates and employees can request reasonable accommodation directly with the employer.

While many employers will continue to use AI tools to save time and money, few, if any, truly understand how the behind-the-scenes algorithms work. That makes sense; most of us are not technology experts. Instead, employers rely on the software itself and merely scratch the surface of what they feel they need to know. However, employers must shift that perspective and take a serious look at the unintended consequences of the software they use, or face serious repercussions from the EEOC.


At Hire Image, we work with employers to understand the EEOC’s guidance as it relates to their hiring practices. Please contact us if you have any questions about the laws and rules governing background checks and drug screening.
