Adverse Impact (AI) analysis between two groups is simple and straightforward. Historically, Whites were considered the majority compared to all other groups, but today best practices dictate a closer look at all race groups. AI investigations become significantly more complex when there are more than two groups to analyze, as is the case with individual race groups (e.g., Asian, African-American, Hispanic, White). Specifically, when there are more than two groups, how does one identify the "Reference" (Advantaged) group, to which all other groups will be compared?
Many EEO analysts reference the federal Uniform Guidelines on Employee Selection Procedures (Guidelines) for direction: the group with the highest selection rate that also comprises a minimum of 2% of the applicant pool may be defined as the Advantaged group. While this sounds intuitively appealing, what if this methodology results in misleading findings? This is concerning because the courts have treated the Guidelines with deference, and the Guidelines carry substantial weight in shaping EEO practice. Can the Guidelines method really produce misleading findings?
Unfortunately, in this case, the answer is, "yes." The Guidelines' method for identifying the Advantaged group in individual race analyses may lead to false negative findings – a conclusion of no adverse impact when there is, in fact, adverse impact. A simple example will help to explain this not-so-uncommon phenomenon.
In the example below, 100 applicants competed for 36 positions. Applying the Guidelines method, NAT (Native American) applicants would be defined as the Advantaged group, because they meet the two prongs defined in the Guidelines:
NAT applicants had the highest SR (Selection Rate), at 67%
NAT applicants (3 of 100) comprised over 2% of the applicant pool
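The two-prong selection of the Advantaged group can be sketched in a few lines of code. The NAT and WHT counts below come from the example in the text; the ASN, BLK, and HSP counts are illustrative assumptions chosen only so the totals match the stated 100 applicants and 36 selections.

```python
# Sketch of the Guidelines two-prong rule for picking the Advantaged
# (Reference) group: highest selection rate among groups that make up
# at least 2% of the applicant pool.
# NAT (2 of 3) and WHT (26 of 50) come from the example in the text;
# ASN, BLK, and HSP counts are illustrative assumptions that sum to
# the stated totals (100 applicants, 36 selections).
applicants = {"ASN": 16, "BLK": 15, "HSP": 16, "NAT": 3, "WHT": 50}
selected   = {"ASN": 3,  "BLK": 2,  "HSP": 3,  "NAT": 2,  "WHT": 26}

total_applicants = sum(applicants.values())  # 100
selection_rate = {g: selected[g] / applicants[g] for g in applicants}

# Prong 2: keep only groups with at least 2% of all applicants.
eligible = [g for g in applicants if applicants[g] / total_applicants >= 0.02]
# Prong 1: among those, take the group with the highest selection rate.
advantaged = max(eligible, key=lambda g: selection_rate[g])
print(advantaged)  # NAT (67% SR, on only 3 applicants)
```

Note that the rule lands on NAT on the strength of a 67% rate computed from just three people.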
If NAT is used as the Advantaged group for AI analysis, no AI is observed for ASN, BLK, HSP, or WHT applicants. See the Fisher's Exact Test statistics, FET (ADV=NAT). However, if WHT is defined as the Advantaged group, significant AI is observed for ASN, BLK, and HSP applicants (see FET ADV=WHT).
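The contrast between the two reference-group choices can be reproduced with `scipy.stats.fisher_exact`. The NAT (2 of 3) and WHT (26 of 50) counts are from the text; BLK's 2 of 15 (a 13% SR) is an assumed count consistent with the rates quoted.

```python
from scipy.stats import fisher_exact

def fet_p(sel_a, n_a, sel_b, n_b):
    """Two-sided Fisher's Exact Test p-value comparing two selection rates.

    The 2x2 table is [[a_selected, a_rejected], [b_selected, b_rejected]].
    """
    table = [[sel_a, n_a - sel_a], [sel_b, n_b - sel_b]]
    _, p = fisher_exact(table)
    return p

# Reference = NAT (per the Guidelines): a tiny reference group, no signal.
p_nat_vs_blk = fet_p(2, 3, 2, 15)    # 67% vs. 13%, n = 3 vs. 15
# Reference = WHT: a large reference group, clear signal.
p_wht_vs_blk = fet_p(26, 50, 2, 15)  # 52% vs. 13%, n = 50 vs. 15

print(p_nat_vs_blk)  # above .05: no AI detected with NAT as reference
print(p_wht_vs_blk)  # below .05: significant AI with WHT as reference
```

The same disadvantaged group (BLK) passes the test under one reference group and fails it under the other.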
When Guidelines methods are applied, no AI is found, but clearly there is AI against ASN, BLK, and HSP applicants when WHT is defined as the Advantaged group. These are surprising findings. What is the explanation behind this?
The answer is fairly straightforward when one understands: 1) what is required to obtain statistically significant AI, and 2) the mechanics underlying the Guidelines method for identifying the Advantaged group:
Statistical methods, such as the FET, need two (2) ingredients to obtain statistically significant AI:
Effect Size: The size of the difference in SR between groups. In this example, the difference in SR between NAT (67%) and WHT (52%) applicants is small, so the Effect Size is small. The difference in SR between NAT (67%) and BLK (13%) is large, so the Effect Size for that comparison is large. Larger Effect Sizes provide more statistical power.
Sample Size: The headcounts of the groups under consideration. In this example, there are only three (3) NAT applicants, while there are 50 WHT applicants. Larger Sample Sizes provide more statistical power.
The Guidelines' method for identifying the Advantaged group relies primarily on Effect Size (the group with the highest SR) but largely ignores Sample Size (the group need only comprise 2% of the total applicant pool).
In this example, NAT has a slightly higher SR than WHT applicants (67% v. 52%), but a far smaller Sample Size (3 NAT v. 50 WHT). In fact, with a Sample Size of only three (3) NATs, it is nearly impossible to obtain statistically significant AI. In contrast, a large Sample Size of 50 (WHT) with a slightly lower SR provides plenty of statistical power to find statistically significant AI (if it exists).
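The interplay of the two ingredients can be isolated by holding the Effect Size fixed and scaling only the Sample Size. The small table mirrors the NAT/BLK figures (2 of 3 vs. an assumed 2 of 15); the 10x-scaled table is hypothetical, with identical selection rates.

```python
from scipy.stats import fisher_exact

# Same Effect Size (67% vs. 13% selection rates), two Sample Sizes.
small = [[2, 1], [2, 13]]      # n = 3 vs. 15 (mirrors the NAT/BLK figures)
large = [[20, 10], [20, 130]]  # n = 30 vs. 150: same rates, 10x headcount

_, p_small = fisher_exact(small)
_, p_large = fisher_exact(large)

print(p_small)  # above .05: not significant despite the 54-point SR gap
print(p_large)  # far below .05: same gap, now easily detected
```

With only three applicants in one cell, even a very large rate difference cannot reach significance; multiplying the headcounts by ten makes the identical difference decisive.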
Given this, what is the recommended method for proactive AI analysis when there are more than two groups?
Option 1 is to conduct a Power analysis to identify the Advantaged group. Power analyses are complex and require significant technical background to conduct.
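One accessible form of Option 1 is a Monte Carlo sketch: simulate applicant outcomes under assumed true selection rates and count how often the FET flags the gap at alpha = .05. The true rates and the BLK headcount (15) below are assumptions based on the figures in the text, not a definitive power methodology.

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)

def fet_power(sr_ref, n_ref, sr_grp, n_grp, sims=2000, alpha=0.05):
    """Estimate, by simulation, the chance the FET detects a real SR gap."""
    hits = 0
    for _ in range(sims):
        sel_ref = rng.binomial(n_ref, sr_ref)   # simulated reference selections
        sel_grp = rng.binomial(n_grp, sr_grp)   # simulated comparison selections
        table = [[sel_ref, n_ref - sel_ref], [sel_grp, n_grp - sel_grp]]
        _, p = fisher_exact(table)
        hits += p < alpha
    return hits / sims

# Reference group of 3 (NAT-sized) vs. reference group of 50 (WHT-sized),
# each compared against an assumed 13%-SR group of 15.
power_nat = fet_power(0.67, 3, 0.13, 15)
power_wht = fet_power(0.52, 50, 0.13, 15)
print(power_nat, power_wht)  # the small reference group has far less power
```

The simulation makes the Sample Size problem concrete: a three-person reference group simply cannot generate enough evidence, no matter how high its rate.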
Option 2 is to simply analyze all possible combinations of SR comparisons for AI. While this may seem complicated, it is not, and the body of results is very helpful in proactive AI investigations.
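Option 2 amounts to a small loop over group pairs. As before, the NAT and WHT counts are from the text, while the ASN, BLK, and HSP counts are illustrative assumptions that sum to the stated 100 applicants and 36 selections.

```python
from itertools import combinations
from scipy.stats import fisher_exact

# Option 2: run the FET on every pairwise selection-rate comparison.
applicants = {"ASN": 16, "BLK": 15, "HSP": 16, "NAT": 3, "WHT": 50}
selected   = {"ASN": 3,  "BLK": 2,  "HSP": 3,  "NAT": 2,  "WHT": 26}

results = {}
for a, b in combinations(sorted(applicants), 2):  # 10 pairs for 5 groups
    table = [[selected[a], applicants[a] - selected[a]],
             [selected[b], applicants[b] - selected[b]]]
    _, p = fisher_exact(table)
    results[(a, b)] = p

# Report, flagging significant comparisons.
for (a, b), p in sorted(results.items(), key=lambda kv: kv[1]):
    flag = "*" if p < 0.05 else " "
    print(f"{a} vs {b}: p = {p:.3f} {flag}")
```

With five groups there are only ten comparisons, and the full matrix makes the pattern of AI visible regardless of which group the Guidelines rule would have nominated as the reference.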
In practice, Option 2 is recommended for almost all proactive AI investigations. Analyzing all combinations may seem laborious, but the effort is rewarded with a very clear picture of the pattern of AI among the different subgroups, which is the goal of proactive AI investigations. As AI methods evolve and advance, it is important for EEO analysts to adapt accordingly. With greater access to more powerful computers and easier-to-use software, AI investigations can and should be more comprehensive and exhaustive. This is the next step in AI investigations.