The EU AI Act puts real obligations on the businesses that use AI tools, not just the ones that build them. Under the regulation, if your organisation deploys an AI system that informs employment decisions, you are a deployer with your own compliance responsibilities. Your vendor cannot take those on for you.
That changes how vendor evaluation should work.
Asking "is your tool compliant?" is not enough. Compliance is a shared responsibility, and you need to understand exactly how your vendor is meeting their side of it before you sign.
This article gives you the specific questions to put to any AI recruitment software vendor, what good answers look like, and what should give you pause. Take them into every vendor conversation and into any RFP you're running.
If you're new to the regulation, read our overview of the EU AI Act and what it means for frontline hiring first. For a broader look at legal risk in high-volume recruitment, our guide to legal and compliance considerations for high-volume recruitment covers the wider landscape.
The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It classifies AI systems by the risk they carry. AI tools used in employment decisions, including candidate screening, ranking, assessment, and job matching, are classified as high-risk systems and are subject to the regulation's strictest requirements.
The compliance deadline for high-risk AI systems in employment is 2 August 2026. From that date, both the providers (vendors) who build these tools and the deployers (organisations) who use them must meet specific obligations. Non-compliance carries substantial fines, and national authorities can, in some cases, withdraw non-compliant systems from the market.
The regulation applies even if your company is not based in the EU. If the AI system's output affects candidates located in the Union, the Act applies.
Question 1: How is your system classified under the EU AI Act?
AI tools used in recruitment, candidate screening, job matching, or assessment are classified as high-risk under the Act. A vendor who is unclear on this, or who argues their tool falls outside the high-risk category without a well-reasoned explanation, should raise a red flag.
Ask for documentation of their risk classification and the rationale behind it. If they've completed a formal conformity assessment, ask to see it.
Follow-up: Has this classification been reviewed by legal counsel or an external auditor? When was it last updated?
Question 2: How do you meet your obligations as a provider?
Under the Act, AI developers (providers) carry significant obligations: maintaining technical documentation, implementing risk management systems, conducting bias and accuracy testing, and registering high-risk systems with the EU database.
A credible vendor should be able to explain their compliance posture clearly. Vague answers about "working toward" compliance, or references to future certifications without current evidence, are not sufficient.
Follow-up: Can you share your technical documentation? What does your risk management process look like in practice?
Question 3: What are our obligations as a deployer, and how do you help us meet them?
This is the question most buyers forget to ask.
The Act places obligations on deployers as well as providers. You're required to conduct your own due diligence, implement appropriate human oversight, inform candidates when AI is being used, and maintain records of how the system is used in your process.
A good vendor partner will help you understand what those obligations mean in practice and provide documentation, training, and support to make them achievable. A vendor who hasn't thought about deployer obligations is one to treat carefully.
Follow-up: Do you provide compliance documentation or a deployer readiness guide? What support do you offer customers preparing for an audit?
Question 4: How do you test for bias and accuracy?
High-risk AI systems must be tested for accuracy and fairness across different demographic groups. The vendor needs to be running structured evaluations, not just stating that their model is fair.
Ask specifically about the methodology: which groups are evaluated, what thresholds are used, how frequently testing is done, and what happens when issues are found. This is one of the areas where the gap between responsible vendors and the rest is most visible.
Follow-up: Can you share results from your most recent fairness evaluation? What demographic groups were included?
Question 5: Where does human oversight sit in your system?
The Act requires that high-risk AI systems be used in a way that allows effective human oversight. No AI tool should be making final employment decisions independently.
Understand exactly where the AI's role ends and the human's role begins. A tool that presents structured, explainable outputs to a recruiter who makes the final call is very different from one that advances or rejects candidates automatically.
Follow-up: What decisions does the system make autonomously, if any? How does a recruiter override or challenge an AI-generated output?
Question 6: How do you handle candidate transparency?
Candidates have a right to know when AI is being used to assess them. Under the Act, deployers are responsible for ensuring this transparency.
Ask your vendor how they handle candidate disclosure, and what information is made available if a candidate asks how their assessment worked. Candidate transparency is also a dimension of legal defensibility. Our legal and compliance guide covers why documented, consistent candidate treatment matters beyond the AI Act specifically.
Follow-up: How does your system support candidate rights to explanation, correction, or deletion of their data?
Question 7: How will you support us in a regulatory review?
Regulators can request evidence of compliance, and the consequences of falling short are real: fines and, in some cases, withdrawal of a system from the market.
Your vendor should maintain logs of model outputs and updates, and be able to support you in the event of a regulatory review. If they can't describe what that support looks like, that's worth knowing before you're in a position where you need it.
Follow-up: Do you maintain an audit trail of how the system performs over time? What's your process when a model is updated?
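To make "audit trail" concrete when you're weighing a vendor's answer, here is a minimal sketch of what one log record for an AI-assisted screening decision might contain. It is illustrative only: the field names and structure are our assumptions for the sake of the example, not a format mandated by the EU AI Act or drawn from any vendor's actual schema.

```python
# Illustrative only: a minimal sketch of one audit-log record for an
# AI-assisted screening decision. Field names are hypothetical, not
# mandated by the EU AI Act or taken from any vendor's real schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ScreeningAuditRecord:
    candidate_id: str       # pseudonymised reference, not raw personal data
    model_version: str      # which model version produced the output
    output_score: float     # the score or ranking the system produced
    explanation: str        # human-readable rationale for the output
    human_reviewer: str     # who made the final call (human oversight)
    final_decision: str     # "advance", "reject", "hold" - made by the reviewer
    overridden: bool        # whether the reviewer departed from the AI output
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = ScreeningAuditRecord(
    candidate_id="cand-8f3a",
    model_version="screening-model-2.4.1",
    output_score=0.72,
    explanation="Matched 4 of 5 required skills; shift availability confirmed.",
    human_reviewer="recruiter-114",
    final_decision="advance",
    overridden=False,
)

# A durable, queryable trail like this - one record per decision, retained
# under your records policy - is what audit support should mean in practice.
print(json.dumps(asdict(record), indent=2))
```

A vendor who can show you something equivalent, including how records survive model updates, is in a very different position from one who can only point to application-level logs.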
Beyond the questions above, some answers in vendor conversations should give you immediate pause.
If a vendor cannot explain how their model produces a score or ranking, that's a problem. Explainability is required under the Act, not optional.
If a vendor uses facial recognition, emotion recognition, or biometric data in their assessment process, ask very direct questions about how this is classified. These technologies carry significant restrictions under the Act and several are prohibited outright.
If a vendor suggests that compliance is solely their responsibility and you don't need to worry about it, be cautious. That's not how the regulation works.
Responsible AI vendors are building compliance in from the ground up, not retrofitting it before the enforcement deadline. The principles that make a vendor trustworthy on the AI Act are the same ones that make them trustworthy as a long-term partner: transparency about how their systems work, honesty about limitations, and a genuine commitment to fair outcomes for candidates.
At Kiku, we've published our AI Manifesto to make our principles explicit. It covers how we approach human oversight, safety, candidate dignity, and our obligations as a provider under the EU AI Act. We encourage every vendor you're evaluating to be able to articulate something equivalent.
If you're assessing the broader market, our guide to the best AI recruiting tools for 2026 covers the leading platforms by use case and hiring stage, which can help you understand the landscape before you start conversations.
The August 2026 deadline gives you time to make informed decisions now rather than rushed ones later. Vendor evaluation is the right place to start.
The organisations building compliant, well-documented hiring processes today won't be scrambling next year. The ones who assume their vendor has it covered may find out otherwise.
Kiku is happy to walk through how we approach EU AI Act compliance and what we provide to support our customers' own obligations. Get in touch.