AI regulation has been on the horizon for a while. Now it's real.
The EU AI Act officially came into force in August 2024, making it the world's first comprehensive legal framework for artificial intelligence. For anyone using AI to screen, rank, or evaluate job candidates in the EU, the clock is now running. The compliance deadline for high-risk AI systems in employment is 2 August 2026.
If that date feels distant, it isn't. The preparation required is substantial, and the organisations that start early will be in a materially better position than those that don't.
This article covers what the EU AI Act is, how it classifies AI tools used in recruitment, why we think it's the right direction for the industry, and how Kiku is building to meet it. If you're evaluating vendors, read our guide to EU AI Act compliance: how to evaluate AI recruiting vendors alongside this one.
The EU AI Act (Regulation 2024/1689) was passed by the European Parliament in March 2024 and entered into force in August 2024. It establishes a risk-based regulatory framework that governs how AI systems can be developed, deployed, and used across the EU.
The Act classifies AI systems into categories based on the risk they carry. Three matter most here:
Unacceptable risk systems are banned outright. This includes AI that manipulates behaviour through subliminal techniques, social scoring systems, and most uses of real-time biometric identification in public spaces.
High-risk systems are permitted but subject to strict obligations. This is the category that covers the majority of AI tools used in employment decisions.
Minimal risk systems face little to no regulation. Most consumer AI applications fall here.
For HR, talent acquisition, and staffing businesses, the high-risk category is where attention is needed.
Yes. AI systems used in employment decisions are explicitly classified as high-risk under the Act. That covers a wide range of tools in common use today: candidate screening and CV-ranking systems, automated assessments, job-matching engines, and tools for monitoring worker performance.
If your business deploys any of these tools in the EU, or if the output of these tools affects candidates located in the EU, the regulation applies to you. This is true whether your company is based in the EU or not.
One important point: the Act applies to deployers as well as providers. If you use an AI recruiting tool, you share in the compliance responsibility. Your vendor cannot absorb your obligations on your behalf.
The core obligations for high-risk AI systems in employment are:
Risk assessments. Providers and deployers must document how their AI systems work, what risks they carry, and how those risks are managed.
Bias testing. Systems must be evaluated for demographic fairness across candidate pools. If a model produces discriminatory outcomes, that is a compliance failure, not just an ethical concern.
Human oversight. No AI tool should make final placement, rejection, or evaluation decisions without a qualified human in the loop.
Transparency. Candidates must be informed when AI is being used to assess them. The logic behind decisions must be explainable on request.
Continuous monitoring. Compliance is not a one-time certification. Systems must be monitored, updated, and re-assessed over their operational lifetime.
Technical documentation. Providers must maintain documentation sufficient to demonstrate compliance. Deployers must maintain records of their use of the system.
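To make the bias-testing obligation concrete, one widely used screening check is the adverse-impact ratio: each demographic group's selection rate divided by the rate of the most-selected group, with ratios below 0.8 (the "four-fifths rule" from US hiring practice) flagged for investigation. The sketch below is illustrative only; the group labels, data shape, and threshold are assumptions, not anything the Act prescribes.

```python
from collections import defaultdict

def selection_rates(candidates):
    """Pass rate per demographic group.
    `candidates` is a list of (group, passed) tuples."""
    totals, passed = defaultdict(int), defaultdict(int)
    for group, ok in candidates:
        totals[group] += 1
        if ok:
            passed[group] += 1
    return {g: passed[g] / totals[g] for g in totals}

def adverse_impact_ratios(candidates):
    """Each group's selection rate relative to the highest-rate group.
    Ratios below 0.8 warrant investigation under the four-fifths rule."""
    rates = selection_rates(candidates)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Example: group B passes screening at half the rate of group A.
pool = ([("A", True)] * 8 + [("A", False)] * 2
        + [("B", True)] * 4 + [("B", False)] * 6)
print(adverse_impact_ratios(pool))  # {'A': 1.0, 'B': 0.5} -> flag group B
```

A real programme would run checks like this continuously over live screening data, not once, which is exactly what the continuous-monitoring obligation above implies.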
All of this must be in place by the compliance deadline of 2 August 2026.
Kiku was built for frontline hiring. Our customers are employers filling high volumes of roles at scale. In that environment, AI is genuinely useful. It reduces time-to-hire, increases screening capacity, and makes it possible to give every applicant a fair shot regardless of when they apply.
It also carries real risk if it's designed or deployed poorly.
The blunt truth about the AI recruitment market is that not all tools are equal. Some systems make consequential decisions about people's employment without being able to explain how. Some score candidates on signals that have never been validated for job relevance. Some vendors describe their technology as "AI-powered" in ways that are more marketing than engineering.
The EU AI Act is designed to address exactly this. It forces the question that buyers should always have been asking: how does this actually work, and can you prove it's fair?
We welcome that pressure. It's better for candidates and it raises the standard for what responsible AI in hiring should look like. Our AI Manifesto sets out the principles we build to. We published it not because regulation required it, but because we think transparency about how AI should work in hiring matters.
We've been treating AI Act readiness as a product priority, not a compliance afterthought, and it's a core component of our AI Manifesto. Here's what that looks like in practice.
Transparency by design. Kiku makes clear to candidates when they're interacting with an AI system. We never try to pass the AI off as a human; candidates know from the start that they're talking to AI.
Bias monitoring. We continuously monitor our platform for bias, and every candidate goes through the same structured screening so no one is treated differently based on who they are.
Human in the loop. Kiku surfaces structured, objective information to help recruiters make better decisions. It does not replace recruiter judgement. Final hiring decisions stay with the humans running the process.
Documentation. We maintain the technical documentation that the Act requires, and we're building the processes to support our customers' own compliance obligations as deployers.
Candidate rights. Candidates can ask how they were assessed. We support correction and deletion requests. As we continue to build, we will always support the candidate's right to understand their data and the process.
If you're using AI tools in your hiring process and operating in the EU, there are concrete steps to take before August 2026.
Audit your tools. Identify which AI systems you use in your hiring process and whether they qualify as high-risk under the Act. Candidate screening, scoring, and assessment tools almost certainly do.
Review vendor contracts. Understand how compliance obligations are divided between you and your technology providers. You cannot delegate your deployer obligations to a vendor, but a good vendor will provide documentation and support to help you meet them.
Implement human oversight structures. Ensure no AI tool is making final hiring decisions without human review, and that your oversight processes are documented and demonstrable.
Prepare for candidate transparency. Build processes for informing candidates when AI is used in their assessment and for responding to requests for explanation or data deletion.
Start now. The organisations treating this as a last-minute compliance exercise will be in a difficult position. Those building the right foundations today won't.
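For the audit step, a deployer's AI-system inventory can start as a structured record per tool with a simple gap check. The fields and checks below are our suggestion for a starting point, not a format prescribed by the Act, and the tool in the example is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a deployer's AI-system inventory (illustrative fields)."""
    name: str
    vendor: str
    purpose: str                       # e.g. "CV screening"
    affects_employment_decision: bool  # likely high-risk if True
    human_reviews_output: bool
    candidates_informed: bool
    open_actions: list = field(default_factory=list)

    def likely_high_risk(self) -> bool:
        # Tools that inform or influence employment decisions are
        # explicitly covered by the Act's high-risk category.
        return self.affects_employment_decision

    def compliance_gaps(self) -> list:
        gaps = []
        if self.likely_high_risk():
            if not self.human_reviews_output:
                gaps.append("add documented human oversight")
            if not self.candidates_informed:
                gaps.append("inform candidates that AI is used")
        return gaps

# Hypothetical audit entry for a screening tool.
tool = AISystemRecord(
    name="ScreenerX", vendor="ExampleCo", purpose="CV screening",
    affects_employment_decision=True,
    human_reviews_output=True, candidates_informed=False,
)
print(tool.compliance_gaps())  # ['inform candidates that AI is used']
```

Even a spreadsheet with these columns is a workable first pass; the point is that every tool in the hiring process has an owner, a risk classification, and a documented list of open actions before August 2026.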
For a detailed framework for evaluating whether your current or prospective AI recruiting tools are AI Act ready, see our guide to EU AI Act compliance: how to evaluate AI recruiting vendors. Our broader guide to legal and compliance considerations for high-volume recruitment covers the wider legal landscape for employers hiring at scale.
When does the EU AI Act apply to recruitment software? The Act is already in force; the compliance deadline for high-risk AI systems in employment, which includes most AI recruiting tools, is 2 August 2026.
Does the EU AI Act apply to companies outside the EU? Yes. If your AI system's output affects candidates located in the EU, the regulation applies regardless of where your company is headquartered or where the technology is hosted.
What happens if a company doesn't comply? National authorities can impose fines of up to €35 million or 7% of global annual turnover for the most serious violations, with lower tiers for other breaches, and in some cases can withdraw non-compliant AI systems from the market.
Who is responsible for compliance? The vendor or the employer? Both. The Act distinguishes between providers (vendors who build AI systems) and deployers (organisations that use them). Each carries specific obligations. Using a compliant vendor does not remove your obligations as a deployer.
What AI recruiting tools are classified as high-risk? Tools used for candidate screening, CV ranking, automated assessment, job matching, and performance monitoring are all explicitly covered. If your tool informs or influences an employment decision, it is likely high-risk.
Is Kiku EU AI Act compliant? We're actively building to meet the Act's requirements as a product priority. We're happy to walk through what that means in practice for your use case. Get in touch.