March 10, 2026

What a Viral AI Doomsday Essay Gets Right—And Wrong—About the Future of Talent Acquisition


At the end of February, a Substack post co-authored by independent investment research firm Citrini Research and Alap Shah drew national attention. Framed as a memo from June 2028, the piece offers a thought experiment on how rapidly advancing AI, at low marginal cost, could trigger a “Global Intelligence Crisis” that displaces most white-collar workers, setting off a cascade of layoffs, collapsing consumer spending, and producing an economic contraction unlike anything in recent memory. 

Several investment firms and economic experts have since pushed back on the piece, calling it a worst-case scenario with shaky macroeconomic foundations. They may be right. But the virality of the piece says something on its own: even a speculative thought experiment, clearly labeled as such, was enough to send people spiraling. For TA leaders, the resonance is more specific than general anxiety about AI. It maps onto something they’ve already been living: changes to headcount, the quiet disappearance of coordinator and sourcer roles, the mandate to “figure out AI” with no explicit guidance on how to do so. 

This piece isn’t an attempt to litigate Citrini’s scenario. It’s an effort to help TA leaders evaluate what in this changing narrative is actually relevant to them, and to offer our perspective on where the function is headed. That perspective is grounded in years of conversations with Heads of Talent Acquisition across every major sector of the Fortune 1000. 

Our read: Talent Acquisition is not at immediate risk of displacement on any timeline that should produce panic. But it is changing in significant ways, and the questions TA leaders engage with now — about where human judgment matters, and where AI can reasonably be slotted in — will shape what their function looks like on the other side. 

 

What Citrini Claims, and Why it Resonates for TA Leaders

Citrini and Shah’s central thesis is that AI is approaching an event horizon where it can replicate the output of most knowledge workers at a fraction of the cost. When that happens, companies will face an irresistible incentive to substitute AI for human labor, triggering a “negative feedback loop with no natural brake.” Improved AI capability prompts white-collar layoffs, displaced workers spend less, slimmer margins push firms to invest more in AI, and the cycle continues.

The piece was written for investors. But TA leaders who read it probably felt something more personal. They’ve watched coordinator and sourcer roles get eliminated as automated sourcing and scheduling tools like Findem, Fetcher, and Paradox have grown in capability. They’ve been told by their C-suites to use AI to do more with less. The Citrini thesis lends support to a fear many TA leaders have been sitting with: that by adopting AI, by automating tasks historically core to the function, they might be architecting their own displacement.

The most detailed public rebuttal came from Citadel Securities, whose macro strategist Frank Flight argued that the data don’t support the scenario. Software engineering job postings are up 11% year-over-year, daily use of generative AI at work has plateaued with “little evidence of any imminent displacement risk,” and new business formation is expanding. Analysts at Deutsche Bank, Fidelity International, Liontrust, and Evercore made similar arguments: AI adoption will follow a gradual arc (starting slowly, then accelerating, then leveling off, as every major technology wave of the past century has), not the overnight displacement the piece describes.

These are serious critiques from experts. And yet, a TA leader who read the piece wouldn’t be wrong to feel like there was a kernel of truth in it. The macro argument may be wrong about the economy. But does that mean it’s wrong about TA? 

 

What’s Actually Happening in TA Functions

We track AI adoption in TA closely through member conversations, employer benchmarking, and roundtables with Heads of Talent Acquisition at companies across the Fortune 1000. What we see is less dramatic than the Citrini scenario, but less reassuring than the Citadel rebuttal. 

Only at the very far end of the spectrum do we see companies implementing an entirely AI-mediated hiring process. One global professional services firm in our network has built a high-volume entry-level process for their India operations that runs almost entirely without human involvement, and is actively pressure-testing whether they need the final fifteen-minute human conversation at all. That is genuinely frontier. It is also not representative of where most Fortune 1000 TA teams are operating.

For most members, AI adoption in TA looks like AI-assisted job description writing, automated candidate screening, chatbots handling scheduling and early-stage candidate communication, and interview intelligence tools that surface patterns across hiring conversations. Some of this comes from purpose-built vendors; some from internal GPTs and homegrown automations. The efficiency gains in throughput, time-to-fill, and recruiter capacity are real. But the scale of change falls well short of what it would take to fundamentally reshape the function, let alone eliminate it.

Sourcing automation is another common use case, but a more contested one. Tools like LinkedIn Hiring Assistant, Findem, HireEZ, SeekOut, and Fetcher have made AI-driven sourcing genuinely viable. Some leaders in our network have begun pulling back on sourcing headcount as a result. One estimated a 75% reduction in his sourcing team following the adoption of an AI sourcing agent. But others are going the opposite direction, leaning more heavily on human sourcers precisely because the application avalanche of AI-assisted candidate applications flooding their pipelines has made inbound volume unmanageable without human judgment to triage it.

What almost no one in our network is doing yet is deploying AI across the full hiring lifecycle in a way that meaningfully reduces the need for experienced recruiters in professional-level roles. Legal exposure remains a significant brake. Concerns about bias, and about unpacking the “black box” of opaque AI algorithms at scale, haven’t been resolved. Active lawsuits like Mobley v. Workday (alleging AI screening tools discriminate based on race, age, and disability) and Kistler v. Eightfold (arguing AI hiring platforms must comply with consumer data protection laws) suggest the legal framework around automated hiring decisions is still very much being written. Until organizations can automate decision-making without liability, full AI deployment is unlikely.

 

What Happens When AI Leads the Hiring Process

Most organizations in our network haven’t crossed the line from AI-assisted to AI-led hiring. But the question of what happens when they do is worth taking seriously.

The most rigorous data point we have comes from a study by researchers at the University of Chicago’s Booth School of Business and Erasmus University Rotterdam that ran more than 70,000 applicants through AI and human-led interviews for entry-level customer service roles. When given the choice, nearly 80% of candidates chose the AI interviewer. The AI-led process produced 12% more job offers, 18% more job starters, and 16% higher 30-day retention. Women in particular reported feeling less judged and more able to express themselves.

The results make a real case that AI could outperform human interviewers in some contexts, but the details are worth examining before drawing broad conclusions. The roles were entry-level and high-volume, the context is specific to one labor market (the Philippines), and even among candidates who chose the AI interviewer, many found it significantly less natural than human conversation. In other words, the conditions under which AI-led hiring outperformed a human process may not generalize to the hiring contexts that matter most to large US employers. 

But the study raises a question worth considering: if AI can outperform humans on measurable hiring outcomes for certain roles and populations, what does that actually tell us about where human recruiters are genuinely needed, versus where they’ve simply been the default? 

 

When the Human Touch is an Asset, Not a Liability

Even if AI can do the job, candidates are drawing clear boundaries around where they want it to. Veris Insights’ survey of experienced professionals suggests that interview scheduling is largely a non-issue: 60% of candidates are “somewhat” or “very comfortable” with AI handling it. But the numbers shift meaningfully when AI moves into evaluation. Only 44% of candidates are comfortable with AI evaluating resumes or cover letters, and still fewer (39%) are comfortable with AI evaluating pre-recorded video interviews. The line candidates are drawing is between AI that moves the process along and AI that makes decisions.

Thoughtful organizations are defining explicit human touchpoints that AI doesn’t touch. One panelist at a recent Veris Insights executive roundtable described theirs: closing the loop with candidates personally, providing context around decisions, maintaining consistent communication throughout the process. Those moments, they argue, carry outsized weight. A candidate who doesn’t get the job but feels respected throughout the process is still a future applicant, a potential referral, someone who leaves a Glassdoor review that reflects well on the company. 

There’s also a dimension that doesn’t appear in any survey: what a fully AI-mediated candidate experience signals about your culture. If every interaction, from the application acknowledgment to the screening conversation to the offer, runs through automation, candidates notice. When the labor market eventually thaws, the brand equity being built or eroded right now will matter. 

In a world where AI can conceivably handle almost anything in the recruiting process, the real strategic work is deciding where to draw lines, and being able to defend those lines when the C-suite asks you why. In a recent Veris Insights executive panel, Fred Roeneker, VP of Global Talent Acquisition at Procter & Gamble, articulated the essential question: “It’s no longer ‘Where do we put AI in our process?’ It’s increasingly, ‘Where do we choose not to put AI?’”

 

Questions Every Head of TA Should Have Answers To 

That may be the right question. But in most organizations, TA leaders don’t get to answer it alone. 

The C-suite is already forming its own views. CHROs and CFOs are reading the same headlines and asking pointed questions about AI’s implications for headcount, meaning the window for TA leaders to shape that conversation is narrowing. 

The strongest TA leaders won’t wait for that conversation. They’ll initiate it, armed with clear answers to questions like:  

  • For which roles and stages does a human touchpoint meaningfully change the candidate’s experience or decision? 
  • Where in the process are we exposed — legally, reputationally — if an AI tool makes a wrong or biased decision? 
  • Are there parts of our process where we’ve defaulted to AI because it was available, rather than because it was the right call? 
  • Are there stages where AI would genuinely serve candidates and hiring managers better than our current human-led process? 
  • What does our balance of AI and human interaction signal about our culture, and is that the signal we want to send? 

 

These questions don’t have easy answers, but they will get answered, with or without TA at the table. The leaders who show up to that conversation first will have far more influence over how it goes.

 

Where the Function is Actually Headed

Citrini would have you believe that AI will eliminate most of Talent Acquisition within the next two years. In our view, that’s unlikely. But the honest answer is that no one knows with certainty how the function will change. A few scenarios strike us as plausible: 

  • The function gets smaller. Coordinator and sourcer roles become largely automated, tasks historically offshored or outsourced get absorbed by AI agents, and the remaining work splits into two tracks: more data- and strategy-oriented roles on one end, more relationship-oriented roles for high-touch processes on the other. 
  • TA merges with adjacent functions and responsibilities become more distributed. The boundaries between TA and L&D, Internal Mobility, Total Rewards, and Workforce Planning have already softened considerably. Moderna’s decision to merge HR and IT under a single leader to manage both human and digital labor is one example of a more radical shift in that direction.

 

What Citrini’s framing misses, and what much of the media coverage has missed too, is that capability doesn’t equal inevitability. Just because AI can do something doesn’t mean it should. Those are decisions organizations will need to make intentionally in the coming years, and they will reflect each organization’s values, cost structures, risk tolerance, and, ultimately, what it believes the hiring process is actually for.

What the evidence suggests is that the function isn’t on the verge of collapse. But it is changing in ways that are already visible and will become more so, and the shape of that change is still, at least in part, up for grabs.
