February 09, 2026
Two Lawsuits Expose AI Accountability Gaps in Hiring: What TA Leaders Need to Know
The accountability question TA leaders can’t ignore: If an AI tool rejects a candidate, who’s responsible? The vendor that built the algorithm or the organization that deployed it?
Two major lawsuits are forcing this question into the open. The cases against Workday and Eightfold AI challenge how AI hiring tools operate and, more importantly, how organizations using them might be held accountable. For Talent Acquisition leaders navigating increased legal scrutiny, understanding these cases isn’t just about risk management. It’s about understanding the governance obligations that come with AI adoption.
Why These Cases Matter Now: The Scale of AI in Hiring
Before examining the lawsuits, consider where the industry stands: A plurality of the Fortune 500 use applicant tracking systems with AI capabilities. LinkedIn reports that 93% of Talent Acquisition professionals are increasing AI use in 2026 to meet hiring goals. What was experimental just a couple of years ago is now infrastructure.
But here’s the tension: regulations built for human decision-makers are now colliding with automated systems that operate with little transparency. This infrastructure is about to be stress-tested in court, and the outcomes could reshape how TA teams and AI hiring tools operate.
Understanding the Two Accountability Gaps
Gap One: Can Vendors Be Held Liable for Discriminatory Outcomes?
The Workday Case (Mobley v. Workday) sets up the possibility that AI vendors can face liability for discriminatory outcomes, challenging the defense that an AI tool merely implements employer-defined search criteria.
Derek Mobley, an African American job applicant over 40 with disabilities, applied to more than 100 positions through Workday’s platform over seven years. He was rejected every time, often within minutes. His lawsuit alleges that Workday’s AI-powered screening tools discriminate based on race, age, and disability in violation of federal anti-discrimination laws.
In July 2024, Judge Rita Lin ruled that Workday could be held liable as an “agent” of employers, determining that the software actively participates in hiring decisions by evaluating and recommending candidates rather than merely executing employer-defined criteria. In May 2025, the case received preliminary certification as a nationwide collective action under the Age Discrimination in Employment Act, potentially representing millions of applicants over 40.
The implication for TA: Vendor liability doesn’t eliminate employer liability. If anything, it intensifies the need for organizations to understand what their tools actually do and whether they produce discriminatory outcomes. “The vendor handles compliance” may not be a viable defense.
Gap Two: Do Candidates Have the Right to Know They’re Being Scored?
The Eightfold case raises a different but equally critical question: Are AI-generated candidate scores evaluative assessments that require disclosure, access, and dispute rights?
On January 20, 2026, job seekers Erin Kistler and Sruti Bhaumik filed a proposed class-action lawsuit against Eightfold AI. They allege that Eightfold violates the Fair Credit Reporting Act (FCRA) by creating “consumer reports” on job candidates without providing the required disclosures, access, or dispute rights.
Eightfold’s system allegedly collects data from applications, LinkedIn profiles, social media, location data, and internet activity. This information feeds into proprietary AI models that generate a “Match Score” from 0 to 5, ranking candidates by predicted job success. Employers can then use these scores to automatically filter applicants, potentially rejecting lower-scored candidates before human review.
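To make the mechanics concrete, here is a minimal, hypothetical sketch of how threshold-based auto-filtering on a 0-to-5 match score could work in principle. The names, cutoff, and structure are illustrative assumptions for the sake of the example, not Eightfold’s actual implementation.

```python
# Hypothetical illustration only; not any vendor's actual system.
# Shows how a 0-5 match score plus a cutoff can reject candidates
# before any human ever reviews them.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    match_score: float  # assumed 0.0-5.0 scale, produced by a vendor model

def auto_filter(candidates: list[Candidate], cutoff: float = 3.5):
    """Split candidates into 'advance' and 'auto-reject' buckets by score alone."""
    advance = [c for c in candidates if c.match_score >= cutoff]
    rejected = [c for c in candidates if c.match_score < cutoff]
    return advance, rejected

pool = [Candidate("A", 4.2), Candidate("B", 3.1), Candidate("C", 2.4)]
advance, rejected = auto_filter(pool)
# Candidates B and C are rejected in seconds, with no record shown to them of
# which inputs drove their scores and no human in the loop.
print([c.name for c in advance], [c.name for c in rejected])
```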
The lawsuit’s core argument is that these AI-generated scores function as evaluative assessments, akin to the consumer reports and background checks regulated by the FCRA. Former EEOC Chair Jenny Yang, representing the plaintiffs, argues that AI companies must follow longstanding consumer protection requirements. If the scores qualify as consumer reports, the suit contends, Eightfold must provide candidates with disclosure, access, and dispute mechanisms.
The plaintiffs claim none of this happens; instead, candidates receive rejections, sometimes within minutes, without knowing an AI system scored them or what data influenced that decision.
The implication for TA: Even if your organization doesn’t generate these assessments directly, using them in screening decisions could create compliance obligations. The question isn’t whether you created the algorithm; it’s whether you can explain, defend, and, if needed, correct how it’s being used in your hiring process.
Two Gaps, One Crisis: What This Means for Your Organization
These cases expose complementary accountability gaps. The Workday case asks: Can vendors be held liable for algorithmic disparate impact? The Eightfold case asks: Are candidates entitled to know they’re being scored, see the data used, and challenge inaccuracies? Together, they reveal a troubling trend: AI hiring systems wield substantial influence over employment outcomes while remaining largely invisible to the candidates they evaluate and a black box to the TA professionals who use them.
This, of course, doesn’t mean that AI tools are inherently problematic. They help organizations process applications at scale and identify qualified candidates more efficiently. But the current implementation model is legally vulnerable: algorithms make consequential decisions using data candidates don’t know about, producing assessments candidates can’t see and TA professionals may not fully understand.
The shift isn’t about whether to use AI. It’s about recognizing that AI deployment creates new governance obligations.
Critical Questions for TA Leaders
Consider these questions about your current tools:
On transparency:
- Can you explain what happens when a candidate applies? Walk through the process: What systems touch the application? What data do they access? How do they influence who advances? If you can’t map this clearly, you can’t defend it.
- Do candidates know when third-party AI systems are assessing them? Disclosure doesn’t eliminate risk, but lack of disclosure creates it. If an AI system is generating scores or rankings that influence hiring decisions, candidates may have a right to know.
- If a candidate requested their AI-generated assessment, could you provide it? Whether or not FCRA applies, this is a good test of whether you understand what your tools are doing and can explain decisions made by them.
On compliance:
- Have you conducted bias audits using your actual applicant data? Vendor-provided validation studies aren’t enough. Has the tool been tested against your candidate pool, your hiring outcomes, your organization’s demographics? If not, you don’t know whether it produces disparate impact in your context; a minimal check is sketched after this list.
- What do your vendor contracts say about compliance responsibilities? Who’s responsible for regulatory compliance where it applies? Who’s liable if the tool produces discriminatory outcomes? What access do you have to the underlying data and logic? If these questions aren’t addressed in contracts, you’re operating on assumptions.
- What human oversight exists to review or override algorithmic outputs? AI should augment human judgment, not replace it. Are final hiring decisions made by people who can understand and override algorithmic recommendations? Or are humans rubber-stamping AI outputs?
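One concrete way to start the bias-audit question above is the EEOC’s four-fifths (80%) rule of thumb: compare selection rates across demographic groups and flag any group whose rate falls below 80% of the highest-rate group. The sketch below assumes you can export, per group, how many candidates your screening tool advanced versus how many applied; it is a screening heuristic with made-up numbers, not a formal validation study or legal analysis.

```python
# Minimal adverse-impact check using the four-fifths (80%) rule of thumb.
# Assumes an export of per-group counts: group -> (advanced, applied).
# Illustrative only; flagged groups warrant closer statistical and legal review.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return the selection rate (advanced / applied) for each group."""
    return {group: advanced / applied for group, (advanced, applied) in outcomes.items()}

def four_fifths_flags(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag groups whose selection rate is below 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: (rate / top) < 0.8 for group, rate in rates.items()}

# Made-up example: group_b's rate (20%) is half of group_a's (40%),
# well below the 80% threshold, so it would be flagged for closer analysis.
example = {"group_a": (120, 300), "group_b": (40, 200)}
print(four_fifths_flags(example))  # {'group_a': False, 'group_b': True}
```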
The Path Forward
The Workday and Eightfold cases will take years to resolve. But TA leaders can’t wait for definitive rulings. Three actions matter now:
First, audit your current tools.
Map the AI systems that touch your hiring process. For each one, document what data it accesses (from applications, resumes, third-party sources), how it generates outputs (scores, rankings, recommendations), and where humans can intervene or override its rankings, recommendations, or decisions.
If you discover gaps in your understanding, that’s the audit working. Those gaps are risks.
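As a hypothetical starting point for that documentation, a simple structured inventory record per tool can make the gaps visible. The fields below mirror the questions above and are assumptions about what a useful record contains, not a compliance standard; the example values are invented.

```python
# A hypothetical per-tool audit inventory record. Field names are illustrative;
# the point is to force explicit answers (or explicit "unknown"s) for every tool
# that touches the hiring process.

from dataclasses import dataclass, field

@dataclass
class HiringToolAudit:
    tool_name: str
    vendor: str
    data_sources: list[str]           # e.g. applications, resumes, third-party data
    outputs: list[str]                # e.g. scores, rankings, recommendations
    human_override_points: list[str]  # where a person can intervene or override
    bias_audit_done: bool = False     # tested against our own applicant data?
    unknowns: list[str] = field(default_factory=list)  # gaps are risks: record them

screener = HiringToolAudit(
    tool_name="Resume screening module",
    vendor="Example Vendor",
    data_sources=["application form", "resume text"],
    outputs=["match score", "advance/reject recommendation"],
    human_override_points=["recruiter review before rejection email"],
    bias_audit_done=False,
    unknowns=["whether third-party profile data is ingested"],
)
```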
Second, push for vendor transparency.
Contracts should specify what the tool does, what data it uses, how outputs are generated, and how the vendor handles bias testing and compliance. In particular, ask for clear documentation of algorithmic logic (not proprietary details, but a functional explanation of inputs, outputs, and how they relate), on-demand access to candidate-level data for dispute resolution, specific contract language on compliance responsibilities, and the opportunity to run bias audits with your own applicant data.
If a vendor claims they can’t provide this, consider whether the promised efficiency gains are worth the potential for legal liability.
Third, maintain human decision-making.
AI should augment, not replace, human judgment in consequential employment decisions. This means a person should always make the final disposition decision. Where AI tools supply recommendations or rankings, the people using them must understand how the tool arrives at those outputs, be able to override them, and ultimately be able to explain and document why someone was hired or rejected.
The Bigger Picture
AI in hiring isn’t going away. The efficiency gains are too significant, and competitive pressure too intense. But the era of unaccountable, opaque algorithmic screening may be ending.
The lawsuits against Workday and Eightfold aren’t aberrations; they’re the beginning of a reckoning. The legal question is shifting from “does AI discriminate?” to “can we defend how AI operates?” The Talent Acquisition leaders best positioned to answer that question will be those who treat AI adoption as a governance challenge, not just a procurement decision.
The technology is moving faster than the law, but the law is catching up. The question facing TA leaders isn’t whether AI will be regulated; it’s whether your organization will be ready when it is.
Dr. Andrew Monroe leads Veris Insights’ research on AI and analytics in Talent Acquisition. This analysis is for informational purposes only and does not constitute legal advice. Organizations should consult with legal counsel regarding their specific AI hiring practices and compliance obligations.
