AI Did The Interview. The Results Shocked Everyone

In one of the largest randomized experiments ever conducted on artificial intelligence in the workplace, a team led by Brian Jabarian, economist at the University of Chicago Booth School of Business and its Roman Family Center for Decision Research, set out to answer a radical question: 

Can AI not just analyze human behavior, but interact with it, shape it, and outperform people at tasks we thought only humans could do?

The task in question: conducting job interviews.

Working with PSG Global Solutions, a subsidiary of outsourcing giant Teleperformance, Jabarian and his co-authors randomized more than 70,000 job applicants across three conditions. Some were interviewed by human recruiters. Others by an AI voice agent. A third group was given a choice between the two.

The results are stunning.

12% MORE JOB OFFERS. 18% MORE STARTS. 17% HIGHER RETENTION.

Applicants who interviewed with the AI were 12% more likely to receive a job offer, 18% more likely to actually start the job, and 17% more likely to stay on the job for at least 30 days.

Equally astonishing: when given a choice, 78% of applicants chose the AI.

“That number is what saddens me most as a human being,” Jabarian, lead author of the study, tells Poets&Quants. “We’re talking about a high-stakes interaction — one that has real consequences for someone’s livelihood. And people preferred talking to a machine.”

A NEW ROLE FOR AI: TALKING TO HUMANS, NOT JUST ANALYZING THEM

What makes this study unique isn’t just the scale — it’s the shift in what AI is doing.

“For decades, AI has been used to support human decision-making: sorting resumes, scoring assessments, analyzing patterns,” Jabarian explains. “But this is a new paradigm. The AI wasn’t processing data — it was generating it by talking directly to people.”

The experiment focused on customer service roles across 43 global clients — half of them Fortune 500 companies — in industries ranging from healthcare and tech to retail and logistics. The candidates, based in the Philippines, were randomized at the interview stage. All were evaluated by human recruiters, who remained blind to the treatment condition.

“We wanted to know whether the source of the interview — AI or human — affected what information was extracted, how it was evaluated, and ultimately, who got hired.”
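
The researchers’ actual assignment code isn’t public, but the design they describe, three arms with recruiters kept blind to the assignment, can be sketched in a few lines. The snippet below is purely illustrative: the arm labels, function names, and seed are assumptions, not the study’s implementation.

```python
import random

# The three study conditions described in the article (labels are assumptions).
ARMS = ["human_interview", "ai_interview", "applicant_choice"]

def assign_arm(applicant_id: str, seed: str = "prereg-2024") -> str:
    """Deterministically assign an applicant to one of the three arms.

    Illustrative only: the real study pre-registered its protocol and
    randomized at the interview stage; this just shows the shape of the design.
    """
    rng = random.Random(f"{seed}:{applicant_id}")  # reproducible per-applicant draw
    return rng.choice(ARMS)

def blind_for_recruiter(record: dict) -> dict:
    """Strip the treatment label so the recruiter never sees which arm it was."""
    return {k: v for k, v in record.items() if k != "arm"}

if __name__ == "__main__":
    record = {"applicant_id": "A-00042", "arm": assign_arm("A-00042")}
    print(record["arm"])                # e.g. 'ai_interview'
    print(blind_for_recruiter(record))  # the recruiter sees the file, not the arm
```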

THE POWER OF CHOICE — AND THE PSYCHOLOGY BEHIND IT

One of the most dramatic findings wasn’t about hiring metrics at all. It was candidate preference.

“When applicants were told, ‘You can choose who interviews you — AI or human’ — nearly 80% chose the AI,” Jabarian says. “That’s not just a convenience issue. It’s a psychological signal.”

In follow-ups, many candidates — especially women — reported feeling less judged, less anxious, and more able to express themselves with the AI. Even though there was no evidence of discrimination by human recruiters, the perception of bias shaped behavior.

“It’s not irrational,” Jabarian says. “Historically, job interviews have been sites of discrimination, especially for women and minority candidates. Even if you know the recruiter is fair, there’s anxiety baked into the interaction. The AI, by contrast, feels neutral — and that changes how people perform.”

NOT JUST FAIRER — FASTER

Speed also played a major role in applicant preference.

“These are candidates who want to start working tomorrow,” Jabarian says. “They’re not looking for lengthy back-and-forth. The AI is always available, and when it says, ‘You can start tomorrow,’ that’s a huge win.”

And it’s not just candidates benefiting from speed. For employers, a few percentage points of improvement — across tens of thousands of applicants — add up to millions in savings and productivity.

“In high-volume markets like customer service,” Jabarian notes, “even modest improvements have massive economic value.”
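
To see why, a quick back-of-envelope calculation helps. None of the figures below come from the study; the baseline rate and cost per hire are placeholder assumptions chosen only to show how a modest relative lift scales at this volume.

```python
# Hypothetical back-of-envelope; only the 18% lift is taken from the study.
applicants = 70_000             # roughly the scale of the experiment
baseline_start_rate = 0.10      # assumed share of applicants who start a job
relative_lift = 0.18            # the study's reported 18% relative increase in starts
cost_avoided_per_start = 3_000  # assumed sourcing/training cost avoided per extra start (USD)

extra_starts = applicants * baseline_start_rate * relative_lift
print(f"Extra starts: {extra_starts:,.0f}")                                 # 1,260
print(f"Approximate value: ${extra_starts * cost_avoided_per_start:,.0f}")  # $3,780,000
```

Swap in your own baseline and cost figures; the point is only that small relative gains compound quickly at this scale.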

WHY RANDOMIZATION MATTERS

Jabarian is emphatic on one point: randomization is non-negotiable.

“Too many firms adopt AI and then retroactively try to prove its value through surveys or correlations. That’s not science. If you haven’t randomized your AI adoption, you simply don’t know whether it caused the effect you’re seeing.”

This study used pre-registered protocols and strict randomization at the interview level. Candidates were told at the start of the interview that a human recruiter would review their file and make the hiring decision. All data — transcripts, scores, and outcomes — were shared transparently with the research team, under a formal agreement that prioritized scientific integrity over corporate messaging.

“AI vendors love flashy numbers,” Jabarian says. “We wanted to know: does this actually work in the real world?”

THE NEXT FRONTIER: PERSONALIZED AI INTERVIEWERS

Jabarian’s team isn’t just looking at who candidates talk to, but at how that conversation unfolds.

One key finding: AI voice agents elicit more of the behaviors most strongly correlated with job offers, including vocabulary richness and interactivity — the number of conversational exchanges between interviewer and candidate. Both are positive predictors of hiring, according to transcript-level analysis in the study.

“Humans are slightly better at eliciting syntactic complexity,” Jabarian says. “But on the whole, the AI shifts the interaction toward the strongest positive signals.”

Just as notably, AI interviews reduced reliance on negative predictors, such as filler words, excessive backchanneling (“uh-huh,” “I see”), or the raw number of questions asked. In short: the AI kept candidates talking — and saying the kinds of things that matter.
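
The study’s transcript analysis itself isn’t reproduced here, but the kinds of signals it describes (vocabulary richness, interactivity, filler words, backchanneling, question counts) are straightforward to approximate from an interview transcript. The sketch below is a rough illustration; the word lists and feature definitions are assumptions, not the paper’s measures.

```python
import re

# Illustrative word lists; the study's exact definitions are not public here.
FILLERS = {"um", "uh", "erm", "hmm"}
BACKCHANNELS = ("uh-huh", "i see", "mm-hmm")

def transcript_features(turns: list[dict]) -> dict:
    """Rough proxies for the predictors discussed above.

    `turns` is a list of {"speaker": "candidate" | "interviewer", "text": str}.
    Every definition here is an assumption for illustration, not the study's measure.
    """
    candidate_text = " ".join(
        t["text"].lower() for t in turns if t["speaker"] == "candidate"
    )
    tokens = re.findall(r"[a-z']+", candidate_text)

    return {
        # Vocabulary richness: unique words over total words (type-token ratio).
        "vocab_richness": len(set(tokens)) / max(len(tokens), 1),
        # Interactivity: number of speaker changes, i.e. conversational exchanges.
        "interactivity": sum(
            1 for a, b in zip(turns, turns[1:]) if a["speaker"] != b["speaker"]
        ),
        # Negative predictors: filler words and backchanneling by the candidate.
        "filler_count": sum(tokens.count(f) for f in FILLERS),
        "backchannel_count": sum(candidate_text.count(b) for b in BACKCHANNELS),
        # Raw number of questions the interviewer asked.
        "question_count": sum(
            t["text"].count("?") for t in turns if t["speaker"] == "interviewer"
        ),
    }
```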

THE TEAM BEHIND THE STUDY

Jabarian, who completed his Ph.D. in economics at the Paris School of Economics in 2023, was joined on this study by co-author Luca Henkel of Erasmus University Rotterdam. On other projects within the partnership he leads with PSG Global Solutions, he has also been joined by co-authors from Florida State University, MIT, and Stanford, along with research assistants at the University of Chicago.

Together, they’re already launching follow-up experiments on everything from hybrid AI-human interview protocols to interview prompt optimization and voice personalization.

Jabarian’s final message from the study is aimed at firms racing to adopt AI without evidence:

“Stop doing only surveys,” he says. “Stop relying only on vendor case studies. If you’re going to invest in AI, invest in randomized testing. That’s the best way to know what’s working and what’s not.”

And while the study results are promising, he’s quick to note that AI is not a magic solution.

“These aren’t 80% gains. They’re 12%, 18%, 17%. That’s not hype — that’s real, measurable improvement. And in a volume-driven hiring environment, that’s gold.”

Read the paper’s abstract here.
