84%

 of recruiters have reported encountering some form of candidate fraud. (Gartner)

1 in 4

According to research from Gartner, one in four candidates will be fake by 2028.


As talent acquisition strategist Rhona Barnett-Pierce put it in an episode of her Throw Out the Playbook podcast: “We’ve officially entered the bot versus bot era of hiring, where fake profiles, AI-generated resumes, and even stand-ins for interviews are becoming the norm.

“We’re talking about full-blown candidate catfishing. It’s not just recruiters using AI to screen applicants — it’s candidates using AI to outsmart recruiters.”

The good news? You can stay ahead — without losing sight of the human at the center of every hire.

The growing scope of candidate fraud

People have been padding their resumes since hiring began. Rewind ten years, though, and the problem was mostly a matter of a few exaggerated skills and embellished work histories.

But in 2025, candidate fraud is on a whole different scale. Our remote-first, tech-enabled working world has made it much easier to deceive across the entire hiring pipeline. Widespread access to AI tooling, combined with keyword-driven resume parsing at the screening stage, makes glaring red flags easier to miss. And interviews now increasingly take place over Zoom rather than in person.

And according to research from Gartner, this is only going to get worse: one in four candidates will be fake by 2028.

Generally speaking, these are the four key types of candidate fraud at large in today’s job market — and they all carry differing levels of risk.


AI-generated applications

In 2025, 75% of candidates say they’ve used tools like ChatGPT to write a resume or cover letter. In a tough job market, that makes sense: AI helps candidates apply faster and more often.

Used well, it’s assistive: sharpening tone, fixing grammar, tailoring experience. But as the tech gets smarter, the line between assistive and deceptive blurs. Entire applications can now be generated in seconds, making it easier to exaggerate or fabricate skills.

For recruiters, that means bigger pipelines, but not always better ones. Applications have skyrocketed, and qualifying genuine talent now takes more time.

As Greenhouse CPO Meredith Johnson put it on the Recruiting Futures podcast: “The number of applications per recruiter has almost tripled.”

Proxy and coached interviews

Even when candidates make it past the screening stage, the misrepresentation doesn’t always stop.

Proxy and coached interviews — where a third-party imposter stands in for the live interview or feeds answers to the candidate — have become increasingly common, especially in remote-first hiring.

When they happen, they don’t just cost businesses wasted time and resources — they can also expose them to serious compliance risks, especially in highly regulated industries.

Skill misrepresentation during technical assessments

Take-home tasks have long been considered a tried-and-true way to test candidates on the skills the role actually demands. Yet here, too, AI has made the process easier to game.

This has become a particular challenge in technical interviews. Candidates can now use AI to generate code, debug errors, and polish their submissions. The output may look strong, but it says little about how a candidate will actually perform on the job. The challenge for companies is knowing where to draw the line.

But rather than ignore this shift, some companies are choosing to adapt. In July 2025, for example, Meta announced that it would allow some coding job candidates to use an AI assistant as part of their interview process — shifting the focus to how candidates interact with AI.

Identity spoofing and impersonation

Hiring often runs on a “trust, but verify” basis. But one of the biggest threats a company faces is being unable to verify exactly who it has hired — when the person who shows up at the office doesn’t feel like the person from the interview.

AI is making this type of fraud much easier for candidates to pull off — and much harder for talent acquisition teams to spot. Fake identification and certification can now be generated in seconds, fooling even the most experienced eye.

But a bigger concern is the explosion of deepfakes. Pindrop’s 2025 analysis reports a 680% rise in deepfake activity in calls to retail customer centers — and more than 50% of Americans can’t tell the difference between a real video and a deepfake.

What talent teams can do to mitigate candidate fraud

Diagnosing and eliminating candidate fraud starts with awareness.

But setting proactive failsafes will help teams learn over time where the weakest links in their hiring process are, and fix the broken steps that let bad actors slip through the net.

1

Audit talent data and processes to identify broken links.

One of the first steps in spotting fraud is knowing where it’s coming from. Combing through your hiring processes and data can help you identify weak points in the pipeline that could be increasing your risk.

Start by reviewing data across your ATS and employee lifecycle to look for signals. You might not have a clear cause-effect relationship — but repeated patterns in candidate drop-offs by hiring stage, failed onboarding, early attrition, or documentation and verification issues can give you a hint that parts of your process are underperforming.
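If your ATS data can be exported, even a few lines of analysis can surface these signals. Here’s a minimal sketch, assuming a hypothetical export with one row per candidate-stage outcome — the file name, column names, and outcome labels below are illustrative, not a standard ATS schema:

```python
# Minimal sketch: surface hiring stages with unusual drop-off rates.
# Assumes a hypothetical ATS export (ats_export.csv) with columns:
#   candidate_id, stage, outcome  (e.g. "advanced", "rejected", "withdrawn", "no_show")
import pandas as pd

events = pd.read_csv("ats_export.csv")

# Share of candidates at each stage who withdrew or simply disappeared.
dropoff_by_stage = (
    events.assign(dropped=events["outcome"].isin(["withdrawn", "no_show"]))
    .groupby("stage")["dropped"]
    .mean()
    .sort_values(ascending=False)
)

print(dropoff_by_stage)  # stages at the top deserve a closer look
```

Stages with unusually high drop-off or no-show rates are a sensible place to start your process review.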

Then, map out your end-to-end processes step-by-step, and ask:

  • Are there clear points in our processes where candidate fraud could occur more easily — for example, take-home tasks, or phone interviews?
  • Can we identify clear patterns in applicant data — such as near-identical wording across resumes? (See the sketch after this list.)
  • Are there any “black box” stages in our process with no clear audit trail or documentation?
  • How early in the hiring process do we verify candidate identity? Is this a one-off step, or is reverification required at key milestones?
  • How stringent are our background check and referencing processes? 
  • How do we protect take-home tasks and assessments against proxy completion or cheating?
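
On the “similar wording” question above, here’s a minimal sketch of one way to flag near-duplicate application text for human review, using TF-IDF and cosine similarity from scikit-learn. The resume snippets, names, and 0.9 threshold are illustrative assumptions:

```python
# Minimal sketch: flag pairs of applications with suspiciously similar wording.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical plain-text resume extracts keyed by candidate.
resumes = {
    "candidate_a": "Senior engineer with 8 years of Python experience leading platform teams.",
    "candidate_b": "Senior engineer with 8 years of Python experience leading platform teams.",
    "candidate_c": "Marketing lead focused on lifecycle campaigns and brand strategy.",
}

names = list(resumes)
vectors = TfidfVectorizer(stop_words="english").fit_transform(resumes.values())
scores = cosine_similarity(vectors)

THRESHOLD = 0.9  # illustrative cut-off, not a calibrated value
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if scores[i, j] >= THRESHOLD:
            print(f"Review: {names[i]} vs {names[j]} (similarity {scores[i, j]:.2f})")
```

A high similarity score isn’t proof of fraud on its own; treat flagged pairs as prompts for a closer human look, not automatic rejections.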

2

Create policies that address fraud prevention and acceptable use of AI in your hiring process.

Candidates are going to use AI to help with their applications. But clear guidelines on what is and isn’t acceptable can help talent acquisition teams set boundaries that deter fraud. Make sure this information is accessible and simply explained — check out this example from Monzo for some inspiration.

But remember that policy has to go both ways. If you’re planning to use AI to screen candidates, evaluate tasks, or otherwise act in the hiring process, you’ll need to tell candidates exactly how you’re doing that.

As part of your policy-setting process, ask:

  • What types of AI use are we okay with? Where do we draw the line?
  • What are our policies regarding suspected fraud, and how do we document and escalate it?
  • How are we currently communicating our AI hiring policy and interview standards (like camera on/off) to candidates upfront?
  • How transparent are we about how we’re using AI to evaluate, screen, or score candidates?
  • Do our policies inadvertently exclude neurodivergent candidates, or those using assistive technologies?

3

Strip application processes back to basics — with a few curveballs.

The swift rise of candidate fraud and the avalanche of applications per role have made it tempting for organizations to make their application processes harder.

But that could be counterproductive, said Barnett-Pierce.

“Companies are leveraging AI to do all of the things that we don’t want to do or that take up too much of our time, candidates are just using it for the same purpose,” she said. “They don’t want to be filling out these long applications where you’re asking 20 million questions that are irrelevant.”

Instead of adding more friction, consider simplifying the parts of the process that matter less — and sharpening the ones that do. For example, introduce casual conversation starters in the interview that touch on the candidate’s location or interests. Dig into behavioral or hypothetical questions that tap into candidates’ insights, experience, and reasoning skills.

These unscripted moments can help weed out coached or AI-generated responses — encouraging candidates to think in real time.

4

Train hiring managers and external partners to spot fraudulent behavior.

Standardized interviewer training is a critical line of defense in any interview process — helping hiring managers interview fairly and consistently. But most haven’t been trained to spot the signs of fraud, let alone in an AI-enabled hiring landscape.

Giving managers and any external partners practical guidelines and tools that help them spot unusual behaviors or fraud ‘tells’ will help them lead interviews with more confidence:

  • Listen for odd pauses before responses, or answers that sound scripted.
  • Observe candidates’ interview environments — are there odd background noises, like off-camera talking, or is a blur or filter applied?
  • Watch candidates’ eye contact. Are their eyes tracking a second screen, as if reading?
  • Encourage managers to go off-script (as long as it does not impede fairness or consistency) and follow up if an answer feels off.

This training doesn’t just shape in-the-room behavior — managers also need to know what to document, and when to flag concerns to the talent team. Regular calibration across panels, hiring stages, and scorecards is also a must — because if one manager’s spidey senses are tingling, it could be a pattern worth watching for.

5

Tighten screening and verification controls.

The shift to remote-first hiring and the rise of consumer AI have introduced security and compliance vulnerabilities that organizations are struggling to get ahead of. In some hiring processes, this may mean adding extra layers of verification to make sure the candidate really is who they say they are.

Using tech solutions — along with a human in the loop — to verify identity and qualifications can take away some of the heavy lift for these steps:

  • Pre-interview: Talent teams and hiring managers can cross-check candidate profiles and work history against public profiles such as LinkedIn.
  • During the interview: Hiring managers and panels must check that the candidate who shows up is the one they’re expecting. Some recruiters now report screenshotting video calls as an insurance policy. But be careful about requiring cameras on — especially for neurodivergent candidates or those using assistive technology.
  • Beyond the interview: Reference checks and background checks will help teams feel confident about who they’re hiring — but beyond the basics, ask about specific responsibilities, technical projects, or performance-based feedback.

Ready to build a fraud-aware hiring process that still puts people first?

Candidate fraud isn’t slowing down any time soon — but talent teams can take steps with their processes, tech, and internal enablement to give themselves the best line of defense. 

In practical terms, this means identifying where your processes break down, setting clear guidelines around your fraud prevention efforts and AI usage, and designing systems that make fraud harder to pull off without penalizing good-faith candidates. But remember that hunting down the bad actors shouldn’t be your focus — creating a consistent, standardized hiring process and great candidate experience should be.

Embedded recruitment process outsourcing can help make that shift operational. At Talentful, our talent experts embed within our customers’ teams to help build inclusive, fraud-aware hiring processes from the ground up — fixing broken processes, flagging risks, and equipping your team with the skills to hire with confidence.