A March 2025 Gartner survey found that only 26% of candidates trust AI to evaluate them fairly. At the same time, more than half believe AI is already being used to screen their applications.
That combination is the defining tension in hiring right now.
Most organisations are already using AI, but most candidates don't trust it. And we're seeing an uptick in drop-offs, complaints, and candidates who self-select out before they've even started.
To tackle trust issues, we need to understand what's driving the distrust in the first place.
Candidate scepticism about AI isn't random. It clusters around four specific concerns.
The most common concern is that candidates can't see what AI is doing. When a human rejects you, there's at least the possibility of an explanation or a follow-up conversation. When an algorithm rejects you, it feels final and inexplicable.
The fix isn't to hide that AI is involved. In fact, it's the opposite. Be clear about what AI does in your hiring process from the get-go.
Only 8% of US job seekers believe AI makes hiring more fair, according to 2025 research. That's a striking number given that structured AI screening is one of the most effective tools available for reducing the four main types of bias in traditional hiring.
We need to address this disconnect.
AI screening can reduce affinity bias, confirmation bias, attribution bias, and recency bias, all of which are products of human inconsistency. But because candidates associate "AI" with high-profile cases of algorithmic discrimination, they assume AI makes bias worse, not better.
Changing this perception requires explaining how your AI is designed to be fair based on what questions it asks, how scoring works, and who reviews the outputs.
Hiring is personal. Candidates are making significant life decisions and they want to feel seen.
An AI screening process that offers no human touchpoint or acknowledgement that a person reviewed their application creates a candidate experience that feels transactional at best and dismissive at worst.
Make it clear where AI ends and people begin so candidates know a human is in the loop.
Why does it matter?
Candidate distrust has measurable operational consequences: candidates drop out mid-process, complain, decline offers, or self-select out before they've even applied.
Closing the trust gap doesn't require removing AI from your hiring process. Here are four ways to approach it.
There's a difference between disclosing that you use AI and being transparent about it.
While disclosure is a legal requirement in some jurisdictions, it is transparency that builds trust. Candidates need to know what the AI evaluates, how scoring works, and who reviews the outputs.
One of the most powerful things you can tell a candidate is: "Everyone gets the same questions, evaluated against the same criteria."
That's both the promise and the design principle of structured interviews. It's what separates AI screening that reduces bias from AI that merely automates existing bias.
For example, Kiku's AI interviews are built around three components: structured questions, transparent scoring, and human oversight.
This structure is the foundation of bias mitigation in high-volume hiring.
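The design principle above can be sketched in code. This is an illustrative example only, not Kiku's actual implementation: the question set, rubric criteria, and weights are hypothetical, but the point they demonstrate is real — every candidate answers the same questions and is scored against the same weighted criteria, which is what makes results comparable.

```python
# Illustrative sketch of the structured-interview principle:
# same questions, same rubric, for every candidate.

# Hypothetical question set (identical for all candidates).
QUESTIONS = [
    "Describe a time you handled a difficult customer.",
    "How do you prioritise competing deadlines?",
]

# Hypothetical rubric: criterion -> weight (weights sum to 1.0).
RUBRIC = {"relevance": 0.5, "specificity": 0.3, "clarity": 0.2}

def score_answer(criterion_scores: dict) -> float:
    """Combine per-criterion scores (0-5 scale) into one weighted score."""
    return sum(RUBRIC[c] * s for c, s in criterion_scores.items())

# Two candidates scored on identical criteria, so the numbers are comparable.
alice = score_answer({"relevance": 4, "specificity": 3, "clarity": 5})
bob = score_answer({"relevance": 5, "specificity": 2, "clarity": 3})
print(alice, bob)  # 3.9 3.7
```

Because the criteria and weights are fixed up front, a reviewer can explain any score by pointing at the rubric, which is exactly the kind of transparency candidates say they want.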
Communication is key to making the human oversight in your process visible. It doesn't need to be elaborate: a sentence in your screening invite or a line in your careers FAQ is enough.
The point is that candidates shouldn't have to wonder whether anyone looked at their application or whether it was all decided by AI.
Teams using Kiku see an increase in candidate satisfaction, partly because the process is fast and convenient, but also because it's designed to feel fair.
See how candidates experience the process.
One of the most cited sources of candidate frustration is never hearing back or receiving a generic rejection with no indication of why. AI makes it possible to give every candidate personalised feedback based on their actual interview responses.
When a candidate receives feedback referencing something they said, it signals the process was genuine. This is also one of the clearest demonstrations of how AI agents differ from simple automation tools — the agent doesn't just filter, it engages.
Most hiring teams are focused on what AI can do for them. Very few are focused on what AI transparency can do for their employer brand.
Candidates who encounter a hiring process that explains how AI is used, assures them of human oversight, and treats them as individuals respond differently. They complete the process at higher rates, accept offers more often, and speak positively about the experience regardless of the outcome.
The 74% of candidates who don't currently trust AI hiring tools aren't a lost cause either. They're an audience waiting to be convinced by how you explain AI's role, demonstrate human oversight, and deliver genuine, personalised feedback.
Remember: as AI becomes part of most hiring processes, the ability to build trust becomes your differentiator.
The distrust stems from four main concerns, outlined earlier in this article.
Most of these are addressable through transparency and process design. For practical guidance, see: The Recruiter's Guide to Explaining AI to Candidates.
Structured AI screening reduces bias by applying consistent criteria to every candidate. Whether AI makes hiring more or less fair depends entirely on how it's designed.
Poorly designed AI can automate existing bias. Well-designed AI with structured questions, transparent scoring, and human oversight is more consistent than most unstructured interview processes.
Read more: Four types of bias in screening processes.
Post-screening candidate satisfaction surveys are the most direct method. Track completion rates (candidates who start but don't finish), offer acceptance rates, and voluntary feedback.
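The metrics above are straightforward to compute from funnel data. A minimal sketch, assuming a hypothetical export where each candidate record carries simple boolean stage flags (field names are illustrative, not from any real ATS):

```python
# Hypothetical candidate funnel records; the field names are illustrative.
candidates = [
    {"started": True, "completed": True,  "offered": True,  "accepted": True},
    {"started": True, "completed": False, "offered": False, "accepted": False},
    {"started": True, "completed": True,  "offered": True,  "accepted": False},
    {"started": True, "completed": True,  "offered": False, "accepted": False},
]

def rate(numerator: int, denominator: int) -> float:
    """Return a percentage, guarding against an empty denominator."""
    return round(100 * numerator / denominator, 1) if denominator else 0.0

started = sum(c["started"] for c in candidates)
completed = sum(c["completed"] for c in candidates)
offered = sum(c["offered"] for c in candidates)
accepted = sum(c["accepted"] for c in candidates)

# Completion rate: candidates who finished screening out of those who started.
completion_rate = rate(completed, started)
# Offer acceptance rate: accepted offers out of offers made.
acceptance_rate = rate(accepted, offered)

print(f"Completion rate: {completion_rate}%")        # 75.0%
print(f"Offer acceptance rate: {acceptance_rate}%")  # 50.0%
```

Tracked over time, a drop in completion rate after introducing AI screening is often the earliest visible signal of a trust problem.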
Book a demo and we'll walk you through how Kiku's structured AI screening is designed and how to communicate it to your candidates in a way that builds confidence rather than concern.