Why employers are asking the wrong question about AI and hiring

VP of L&D Adam Hickman, PhD, and CEO Diana Tsai explore the question: “What if candidates use AI to answer interviews or write their applications?”

In boardrooms, HR conferences, and recruitment huddles everywhere, a familiar question keeps surfacing…

“What if candidates use AI to answer interviews or write their applications?”

It’s a fair question, especially in this moment when generative AI tools like ChatGPT, Gemini, and Claude are changing how many of us communicate. But crucially, it’s the wrong question.

The real challenge isn’t AI, but honesty. It’s a problem that predates even the first typewriter.

Sad but true! Candidate dishonesty isn’t new

Resume inflation, skill embellishment, and strategic omissions are hardly new tactics in the world of hiring.

In fact, a 2023 survey revealed that 70% of workers openly admitted to lying on their resumes. Meanwhile, 35% of candidates confessed to being dishonest at some point in the hiring process, a figure that rose to 68% during interviews.

These numbers show us, simply, that dishonesty in hiring didn’t start with AI.

AI, however, does change the scale and sophistication of the issue. It amplifies the potential for misrepresentation by giving candidates tools to craft more polished, convincing narratives at the push of a button.

But at its core, the concern over AI in hiring is not about the technology itself; it’s about authenticity. For decades, recruiters and hiring managers have relied on gut instincts, reference checks, and structured interviews to uncover the “real” candidate behind the polished résumé or rehearsed answers.

So, how do we ensure authenticity in a process that, by design, hinges on first impressions, limited interactions, and high stakes?

AI in hiring processes: Enhancement or misrepresentation?

The more important issue is how candidates are using AI.

Some use it to articulate their thoughts more clearly. Others use it to misrepresent who they are. The distinction matters. A well-formed answer improved by AI might accurately reflect the candidate’s experience. Or it might be a polished fabrication.

The presence of AI doesn’t tell us which, and that’s where many organizations falter. Some view AI use as dishonest, others interpret it as a sign of adaptability and technical fluency, while most haven’t articulated a policy at all.

Discerning intent, that gray space between enhancement and misrepresentation, is where the future of hiring lies.

Behavioral science offers insight into how employers can encourage truthfulness. A 2023 study found that when participants actively engaged with honesty pledges, such as copying the text, they were significantly more truthful than when they passively agreed.

Similarly, a 2025 megastudy published in Nature Human Behaviour found that oaths specifying the desired behavior (e.g., “I will accurately report my income”) improved honesty by 8.5 percentage points. General moral appeals, on the other hand, had little effect.

Hiring can borrow from this playbook. Asking candidates to explicitly affirm that their answers reflect their genuine experiences, not just signing a box but engaging with the pledge, can shift behavior.
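
To make the idea concrete, here is a minimal sketch of what an “active” pledge step might look like in an online application flow, in the spirit of the research above: the candidate retypes a specific pledge rather than ticking a box. The function names and pledge wording are illustrative assumptions, not drawn from any real applicant-tracking system.

```python
# Minimal sketch of an "active" honesty pledge gate. Hypothetical names and
# wording; the behavioral research cited above suggests that retyping a
# specific pledge (active engagement) nudges honesty more than a checkbox.

PLEDGE = (
    "I affirm that my answers in this application accurately "
    "reflect my own experience and skills."
)

def normalize(text: str) -> str:
    """Compare case- and whitespace-insensitively so trivial differences
    in spacing or capitalization don't block the candidate."""
    return " ".join(text.split()).lower()

def pledge_is_affirmed(typed_text: str) -> bool:
    """True only if the candidate retyped the pledge, the 'active
    engagement' the studies found shifts behavior."""
    return normalize(typed_text) == normalize(PLEDGE)

if __name__ == "__main__":
    attempt = input(f"Type this pledge to continue:\n  {PLEDGE}\n> ")
    if pledge_is_affirmed(attempt):
        print("Pledge recorded. Continuing to the application.")
    else:
        print("The text does not match the pledge. Please try again.")
```

Note that the specificity of the wording matters: the Nature Human Behaviour megastudy found that oaths naming the desired behavior outperformed general moral appeals.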

But even with pledges in place, the essential recruiter ‘phone screen’ remains indispensable. No system can replace the nuanced judgment of a skilled recruiter when it comes to assessing whether a candidate’s story feels real, misrepresented, or augmented.

The role of the recruiter is changing

AI is reshaping early-stage hiring – automating skills assessments, structured interviews, and screening for competencies – but the recruiter’s role is changing too.

A typical recruiter is now less a gatekeeper and more a truth verifier, a human polygraph of sorts. They must be trained to pick up on micro-discrepancies, to ask the follow-up questions the AI didn’t, and to sense when the energy or detail in a conversation doesn’t match the candidate’s polished submission. That edge is distinctly human: the intuition to know when something doesn’t add up.

Imagine recruiter training that embraced this shift, not just focused on interview best practices, but incorporating skills from investigative interviewing, behavioral science, and even CIA-level listening techniques. In a world of intelligent machines, the most critical skill might just be knowing when a story doesn’t ring true.

Should recruiters ‘detect’ AI use?

Of course, there’s growing interest in AI detection technologies: tools that can flag AI-generated responses through linguistic analysis or consistency checks. And yes, we can build those systems.
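
As a toy illustration of the “consistency checks” idea, one could compare a candidate’s polished written answer against a transcript of what they said in a live screen, and flag unusually low vocabulary overlap for human follow-up. The Jaccard measure and threshold below are deliberately naive stand-ins for real linguistic analysis, assumptions made purely for illustration.

```python
# Toy "consistency check": flag a large gap between a written application
# answer and a live-interview transcript. Deliberately naive; real linguistic
# analysis is far more sophisticated, and the threshold is an assumption.

def word_set(text: str) -> set[str]:
    """Lowercased unique words, with basic punctuation stripped."""
    words = (w.strip(".,!?;:\"'()").lower() for w in text.split())
    return {w for w in words if w}

def jaccard_overlap(a: str, b: str) -> float:
    """Jaccard similarity between the vocabularies of two texts."""
    sa, sb = word_set(a), word_set(b)
    return len(sa & sb) / len(sa | sb) if (sa or sb) else 1.0

def flag_for_review(written: str, spoken: str, threshold: float = 0.15) -> bool:
    """Flag when the submission and the live answer share unusually little
    vocabulary. A prompt for follow-up questions, never a verdict."""
    return jaccard_overlap(written, spoken) < threshold

if __name__ == "__main__":
    written = "I led a cross-functional migration of our data pipeline to the cloud."
    spoken = "Mostly I helped out when the team moved some reports over."
    print("Flag for follow-up:", flag_for_review(written, spoken))  # True
```

Crucially, a flag like this should trigger a conversation, not a rejection.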

But we shouldn’t conflate AI use with dishonesty. Banning AI outright risks penalizing candidates for using the same tools that companies deploy to write job descriptions, craft employer branding, and automate HR communications.

Instead of asking whether candidates are using AI, we should be asking more foundational questions about the hiring systems we’ve built:

  1. What structures actually encourage honesty from candidates? If we want truth, we need to design for it. Behavioral science shows that certain processes, like the clear, specific honesty pledges mentioned earlier, prompt more truthful responses. How can we embed these nudges into the hiring process?
  2. How do we distinguish between misrepresentation and better communication? Not every AI-assisted response is a lie. Sometimes, AI helps candidates better articulate their genuine experience, especially those who may struggle with writing or have nontraditional backgrounds. Are we prepared to evaluate intent behind the polish, rather than penalize the polish itself?
  3. How do we scale hiring without losing human discernment? AI can screen for skills, assess resumes, and even conduct structured interviews at scale. But machines can’t fully replace the nuanced judgment of a human recruiter, someone who can detect when a candidate’s story doesn’t quite add up. How do we design processes that balance efficiency with empathy, automation with intuition?

This is why the debate needs to evolve. AI use isn’t inherently deceptive. It’s the intent behind the use that matters. We need frameworks to distinguish between enhancement and misrepresentation, and human discernment is central to that process.

Adam Hickman, PhD, is the VP of L&D and Organizational Development at Partners Federal Credit Union, a Walt Disney company affiliate.

Diana Tsai is the Co-Founder & CEO of Upwage, pioneering the AI-for-good movement.
