As artificial intelligence (AI) continues to proliferate in the applicant screening and employee selection industry, federal contractors have been left with little to no guidance from the Office of Federal Contract Compliance Programs (OFCCP) regarding their compliance obligations. President Biden issued Executive Order 14110 in late 2023, which called for U.S. government agencies, including the Department of Labor, to publish guidance for federal contractors on the use of AI as it relates to nondiscrimination in employment decisions. On April 30, 2024, the OFCCP released that guidance along with a landing page devoted to the use of AI systems.
The federal contractor community welcomes this direction from the agency. Back in 2023, the OFCCP added item 21 to the current compliance review itemized scheduling letter, which requests, among other things, information about any artificial intelligence, machine learning, automated systems, or other technology-based selection procedures. With no federal guidance or regulation, contractors have only been able to speculate about what the agency intends to do with that information. Moreover, the agency had not clearly defined what it considers “artificial intelligence,” even as state- and local-level regulations like NYC Local Law 144 took a firmer stance and set out what constitutes an “automated employment decision tool (AEDT).”
This new guidance from the OFCCP includes a brief list of FAQs that address some important questions for employers and provides what they call “promising practices” for the use of AI. The FAQs and recommendations largely confirm what we have known about AI tools in selection processes: they need to be closely monitored, and when they result in disparate impact, they need to be shown to be sufficiently job-related. Overall, the agency’s guidelines give employers a good foundation for meeting their compliance obligations when using AI systems.
Below is a recap of the key takeaways from the guidance:
Defining AI and Automated Tools
The agency defined what they mean by artificial intelligence, algorithms, and automated systems. They have aligned with NYC LL144 and provide a good amount of clarity about what they mean by the above terms (see FAQs #1-3). Broadly speaking, any algorithmic process or software used to automate workflow involving decisions about applicants or employees will be closely scrutinized.
Compliance Obligations and Liability for Contractors Using AI
The agency described the EEO compliance obligations for contractors, including record maintenance (e.g., types of resume searches), reasonable accommodations for applicants with disabilities, and providing information about their AI systems. There is still some gray area here in what information, specifically, the agency would request. Vendors hardly ever allow access to the algorithms in their products, so should the agency request that information, there may be pushback from vendors. The current Mobley v. Workday case should help clear this up, as the court’s decision will inform whether vendors have any liability when their tool results in disparate impact for one of their customers (see FAQ #4). The agency further explains that, currently, the contractor (not the vendor) is responsible for defending their use of any third-party products and services, such as personality assessments and, of course, AI tools (see FAQ #9). As the agency puts it, “employers cannot escape liability for the adverse impact of discriminatory screenings conducted by a third party.” This is true for the time being.
Potential Risks in Using AI for Selection Decisions
The agency offers helpful examples of the risks associated with using AI for hiring decisions, in case contractors weren’t already aware (see FAQ #5).
How Contractors Can Fulfill Their Compliance Obligations
As it relates to compliance in using AI systems, the agency rightly points contractors to the most current and relevant source of federal guidance on the use of selection procedures of any kind: the Uniform Guidelines on Employee Selection Procedures (UGESP). For contractors, these are part of the federal regulations. Despite many calls to abandon the UGESP as outdated or no longer relevant, they remain the standards that courts defer to when evaluating the use of selection procedures. Employers using AI (or any selection procedure) need to be mindful of the technical requirements for using selection tools that result in disparate impact, not just of violations of the 4/5ths (80%) rule. The agency WILL use statistical tests to evaluate the severity of the disparity in selection rates, not just the 4/5ths rule you see in NYC LL144.

Why does this matter? AI is most relevant for high-velocity applicant flows, where tens of thousands of applicants are being screened. Statistical power in these settings is high, and even a 5-percentage-point difference in selection rates can produce 4, 5, or 6+ standard deviation (SD) differences. Are employers dead in the water in this situation? Not necessarily. They can provide evidence of job-relatedness via a criterion-related validation study. The UGESP lays out the technical requirements for such a study; however, contractors may find those requirements challenging to meet due to the logistical complexities of AI systems (e.g., parameter drift). See FAQ #7 for the agency’s stance on the UGESP and their continued relevance.
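To make the statistics concrete, here is a minimal sketch in Python of the two checks in play: the 4/5ths impact ratio and the two-proportion z-test behind the SD figures cited above. All counts are hypothetical (5,000 applicants per group, 50% vs. 45% selection rates); nothing here comes from the guidance itself.

```python
# Minimal sketch: the 4/5ths rule vs. a two-proportion z-test on the same
# (hypothetical) data from a high-volume AI screening step.
from math import sqrt

def impact_ratio(rate_low: float, rate_high: float) -> float:
    """4/5ths (80%) rule: ratio of the lower selection rate to the higher."""
    return rate_low / rate_high

def two_proportion_z(sel_a: int, n_a: int, sel_b: int, n_b: int) -> float:
    """Two-proportion z-test; |z| is the 'standard deviation difference'
    commonly cited by regulators and courts."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# 5,000 applicants per group; 50% vs. 45% selected
print(f"impact ratio: {impact_ratio(0.45, 0.50):.2f}")                      # 0.90
print(f"z-statistic:  {two_proportion_z(2500, 5000, 2250, 5000):.1f} SDs")  # ~5.0
```

A screen like this comfortably passes the 80% rule (ratio of 0.90) yet shows a roughly 5 SD disparity, which is exactly the scenario where a criterion-related validation study would be needed to defend the tool.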
Promising Practices for Using AI in the EEO Context
Finally, the agency provides several great pieces of advice to contractors for fulfilling their EEO obligations. These promising practices fall into four main areas:
1. Providing Notice to Applicants, Employees and Representatives – the agency encourages employers to inform people who will be subject to a process involving AI about how their data will be captured, how it will be used, and how the AI system may contribute to hiring or rejection decisions.
2. How Contractors Should be Using AI Systems – employers need to keep humans involved in their AI systems, including routine monitoring of the system for disparate impact (see the monitoring sketch after this list). Contractors need to train the staff who will be working with AI systems and, in general, set up an internal governance program to monitor and manage those systems.
3. Considerations in Working with AI Vendors – we applaud the agency here for the list of considerations. We have been suggesting many of these same points to our clients and find that several AI vendors struggle to address them. The main questions contractors need to ask boil down to:
- the data used for training (and any potential bias in those data)
- the validity of the algorithm in predicting job performance
- the explainability of the algorithm
4. Practices Related to Accessibility and Disability Inclusion – without careful design, AI tools can veer off-course and discriminate against individuals with disabilities. AI tools involving pattern recognition in speech or visual cues (e.g., eye contact) are especially prone to this. Vendors must pay attention to these considerations, and to how their tools are built, to keep applicants and employees safe from discrimination.
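On the monitoring point in item 2, here is a minimal sketch of a recurring disparate-impact check, again in Python. The group names and counts are hypothetical (in practice they would come from an ATS export or the screening tool’s logs), and the thresholds mirror the 4/5ths rule and the ~2 SD convention discussed above.

```python
# Minimal sketch of a recurring disparate-impact monitor over hypothetical
# selection counts by group; flags 4/5ths-rule and ~2 SD disparities.
from math import sqrt

def monitor(counts: dict[str, tuple[int, int]]) -> None:
    """counts maps group -> (selected, total applicants).

    Flags any group whose selection rate falls below 80% of the
    highest-rate group's (4/5ths rule) or sits more than ~2 SDs
    below it (two-proportion z-test).
    """
    rates = {g: s / t for g, (s, t) in counts.items()}
    ref = max(rates, key=rates.get)          # highest-rate group as reference
    ref_sel, ref_tot = counts[ref]
    for group, (sel, tot) in counts.items():
        if group == ref:
            continue
        ratio = rates[group] / rates[ref]
        pooled = (sel + ref_sel) / (tot + ref_tot)
        se = sqrt(pooled * (1 - pooled) * (1 / tot + 1 / ref_tot))
        z = (rates[ref] - rates[group]) / se
        if ratio < 0.80 or z > 2.0:
            print(f"FLAG {group}: ratio={ratio:.2f}, z={z:.1f} SDs vs. {ref}")

# Hypothetical monthly snapshot from an AI screening step
monitor({"Group A": (900, 2000), "Group B": (760, 2000)})
# -> FLAG Group B: ratio=0.84, z=4.5 SDs vs. Group A
```

Run on a schedule and logged, a check like this is one concrete way to implement the human oversight and internal governance the agency’s promising practices describe.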
This set of guidelines is the first direction from the OFCCP to contractors using AI systems, and the agency generally hit the mark. The main takeaways for contractors are to fully understand any AI tools they are using (training data, explainability, disparate impact), be prepared to complete a validation study showing the job-relatedness of their AI system per UGESP standards, and create an internal governance program that regularly monitors and maintains their AI systems to meet compliance obligations.
For more information and assistance in this area, stay tuned to our webinars or reach out to Brian Marentette, Ph.D., our Director of People Insights and resident expert in testing and validation services.