Is AI safe in recruitment? A practical guide to ethical and compliant AI hiring
2026-05-13
Greg Dunbar
Is it safe to use AI in recruitment? This is a discussion I have every day with senior TA leaders of major global enterprises. Some are excited about the potential, while others have a blanket “no AI” policy.
So I thought I’d share my perspective below, because whether or not to use AI is not a binary choice. Used in the right way, it can absolutely be ethical, compliant and legally defensible. But people are right to have major concerns. AI use in recruitment is an inevitable reality, though not in the way most are viewing it today. So, let me try to break this down as I see it.
The macro reality: AI is accelerating fast (and raising real concerns)

AI development in 2026 is already moving at one hell of a pace. Experts predict that the “hyperscalers” and foundational Large Language Model (LLM) providers (OpenAI, Anthropic, Google, Meta, xAI and others) plan to spend anywhere between $600bn and $1tn in 2026 on infrastructure to support the expected explosion in AI use. This is the biggest tech infrastructure build-out in human history.

Simultaneously, the models are getting smarter, and AI is becoming increasingly accessible as native, on-device capabilities emerge with low barriers to use. And we are now well into the “agentic AI era”, where businesses and consumers can build their own agents, seemingly able to perform any task autonomously, for the cost of a cheap gym membership. Let’s be honest, all this is exciting, but also super-scary, right? Are we on the cusp of redefining the Terminator “Skynet” storyline as prophecy rather than Hollywood science fiction?

At the same time, some analysts continue to ask whether this is a huge bubble: a rush to invest in massive compute power while ROI remains elusive for many enterprise use cases, with the cost of AI “tokens” effectively subsidised and artificially suppressed, and polarised views on whether that cost will rise or fall dramatically in future. Personally, I think many of these are redundant questions. Whatever happens, there is no doubt AI is here to stay, and capabilities are improving faster than ever.

The real challenge: Knowing where AI should (and shouldn’t) be used

Despite all this progress, it has never been more important to know exactly where, why, and how to use AI, especially in the context of recruitment. There are potential landmines everywhere: legal, ethical, and in the quality of AI output. For all the advances, LLMs are still fundamentally architected to produce probabilistic results, not deterministic ones, and they remain prone to errors. This is not a bug; it is a feature of their fundamental design. There is now evidence of these generative models exhibiting deception, and in some cases even becoming capable of blackmail for self-protection. So far we have mostly had to worry about the models hallucinating (making up facts or producing inaccurate outputs), but things are getting outright sinister in some cases, with a growing cohort of industry leaders, like ex-Google CEO Eric Schmidt, calling to pause or even “pull the plug” on further development.

Unfortunately, the better the models get, the harder it becomes to spot when they have got it wrong, which could have catastrophic impacts (or at the very least undermine the efficiency promised on the label). The term “AI slop” has emerged to describe the inefficiency created by mass-generated, inaccurate output, which kills human efficiency as people try to work out what is correct and high-quality and what is not.
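
To make the “probabilistic, not deterministic” point concrete, here is a minimal, self-contained sketch in Python (a toy distribution, not a real model): generative decoding samples the next token from a probability distribution, so two runs on the same prompt can legitimately diverge, and even “greedy” decoding only hides the uncertainty rather than removing it.

```python
import random

# Toy next-token distribution for the prompt "The candidate is" --
# purely illustrative numbers, not taken from any real model.
next_token_probs = {
    "qualified": 0.45,
    "experienced": 0.30,
    "unsuitable": 0.15,
    "overqualified": 0.10,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Probabilistic decoding: pick a token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

def greedy_next_token(probs: dict[str, float]) -> str:
    """'Deterministic' decoding: always pick the single most likely token."""
    return max(probs, key=probs.get)

# Same prompt, three runs: the sampled continuation can differ every time.
for run in range(3):
    print(f"run {run + 1}: The candidate is {sample_next_token(next_token_probs)}")

# Greedy decoding is repeatable, but it only masks the underlying
# distribution -- the model still has no notion of ground truth.
print(f"greedy:  The candidate is {greedy_next_token(next_token_probs)}")
```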

And as for the inherent bias risks in how these massive models are trained: some providers have stopped denying that bias exists and admitted that LLMs are in fact prone to it after all, suggesting we should perhaps see different LLMs as having different points of view, or even different personalities. This is not necessarily a bad thing, but it has to be a consideration when deciding how to use them in an enterprise environment, and especially in a field that touches fundamental human rights, like recruitment.

AI is not one thing

At this point you’d be forgiven for wondering why someone whose job it is to sell AI technology to enterprises is sounding so negative about it. In fact I am far from negative, but with great power comes great responsibility, and we must dig deeper to understand how to use the technology responsibly, in the right places. Importantly, AI is not “one thing”. Not all AI is GenAI (LLMs or agents); there are many branches of AI with different attributes that lend themselves to different use cases. Used in the right way, AI in recruitment can be both ethical and legally compliant.

Zooming in: AI’s impact on recruitment

When it comes to recruitment, there is no doubt that AI has a huge role to play. With candidates themselves using AI and applying en masse to jobs, the recruitment supply chain has undergone a rapid, once-in-a-generation structural change that has left recruiters’ heads spinning, broken long-established processes, and forced TA leaders to seek new ways of working. Ironically, AI can play a major role in helping to reshape those processes and provide a next-generation experience for candidates.

But AI use in recruitment is rightly one of the higher-risk categories called out in regulations like the EU AI Act, because recruitment touches fundamental human rights that are already protected by employment law in most of the western world. So as TA leaders embark on their AI adoption journey, it is critical to break problems and use cases down into specifics, to be super-clear about where the risks really lie, and to understand how to avoid the bear traps.

At Hubert we believe, for example, that it is for the most part acceptable and safe to use GenAI (LLMs) to generate job descriptions, summarise a role to a candidate, orchestrate and schedule meetings, and send basic candidate communications. The risks here, to candidates or to the enterprise, are relatively low. It can also be OK to use conversational agents to gather basic information for application and screening purposes, so long as the hard qualification questions are simple pass/fail checks and you don’t break any local employment laws in the data you collect and make decisions on. There is of course a risk to employer and brand reputation if a chatbot goes off-piste with its responses or says something inappropriate.* That could be highly damaging, but you shouldn’t fall foul of the most serious legal or ethical risks.

*[Note: Hubert also mitigates this through a unique hybrid conversational model, combining proprietary Small Language Models for a higher degree of control with containerised use of third-party LLMs that sprinkle rich responses into specific questions, enhancing the experience without handing control of the conversation to the LLM.]
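
To illustrate the general pattern that note describes (and only the pattern; Hubert’s actual architecture is proprietary, and every name below is hypothetical), here is a rough Python sketch: a controlled, scripted layer owns the conversation, and a generative model is consulted only for a narrow whitelist of side questions, so control of the flow never passes to the LLM.

```python
# Illustrative sketch only -- not Hubert's implementation. The pattern:
# a controlled layer owns the interview flow, and a generative model is
# consulted only for a narrow, whitelisted set of side questions.

SCRIPT = [  # the controlled layer owns this fixed interview flow
    "Do you have the right to work in Sweden? (yes/no)",
    "How many years of Python experience do you have?",
]

ANSWERABLE_TOPICS = {"role", "team", "process"}  # whitelist for the LLM slot

def llm_enrich(question: str) -> str:
    # Placeholder for a containerised third-party LLM call (hypothetical).
    return f"[generated answer about: {question}]"

def handle_candidate_turn(candidate_input: str, step: int) -> str:
    # The candidate asked a side question: delegate ONLY if it is on-topic.
    if candidate_input.endswith("?"):
        if any(topic in candidate_input.lower() for topic in ANSWERABLE_TOPICS):
            reply = llm_enrich(candidate_input)
        else:
            reply = "Good question -- a recruiter will follow up on that."
        # Control returns to the script; the LLM never steers the flow.
        return f"{reply}\nBack to the interview: {SCRIPT[step]}"
    # Otherwise treat the input as an answer and move on (recording omitted).
    return SCRIPT[step]

print(handle_candidate_turn("What does the team look like?", 0))
print(handle_candidate_turn("Yes", 1))
```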

Where AI becomes dangerous: Assessment and decision-making

Where the rubber hits the road from a risk perspective is when it comes to assessing, scoring and ranking candidates, whether to inform recruiters’ shortlisting decisions or for any kind of automated decision-making. This is where the biggest risks lie for all parties, and it is at the crux of some of the highest-profile legal cases on this subject in the US. Using LLMs, which are effectively black boxes, to assess, score, compare and rank candidates is dangerous, no matter the guardrails and training you may provide the model via prompts. There is no guarantee of the extent to which guardrails or methodologies have been followed; results can be inaccurate or subjective; and they lack consistency, repeatability and explainability. Furthermore, the large language models themselves are continuously being updated, with new versions released regularly, adding further uncertainty about the impact on consistency.
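
A quick way to see the repeatability problem in code. The sketch below (illustrative, with invented numbers) contrasts an “LLM-as-judge” style scorer, simulated here as a noisy function, with a locked-down rubric: the same transcript gets a different score on every LLM-style call, while the deterministic rubric returns the identical result every time.

```python
import random
import statistics

transcript = "Candidate describes 5 years of Python and led a team of 3."

def llm_style_score(transcript: str) -> float:
    # Stand-in for an LLM judge at non-zero temperature: the same
    # transcript produces a different score on every call (simulated).
    return round(random.gauss(mu=7.0, sigma=1.2), 1)

def rubric_score(transcript: str) -> float:
    # Stand-in for a locked-down, deterministic rubric: same input,
    # same output, every time. (Trivial keyword rule, for illustration.)
    score = 0.0
    if "python" in transcript.lower():
        score += 5.0
    if "led a team" in transcript.lower():
        score += 2.0
    return score

llm_scores = [llm_style_score(transcript) for _ in range(5)]
print("LLM-style scores for the SAME transcript:", llm_scores)
print("spread:", round(statistics.pstdev(llm_scores), 2))

print("Deterministic rubric, five calls:",
      [rubric_score(transcript) for _ in range(5)])
```

Try defending the first set of scores in front of a regulator or a rejected candidate; that variance is exactly the consistency and explainability gap described above.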

The limits of “human-in-the-loop”

Most HR professionals are already aware that AI-powered automated decision-making is more or less a no-go. However, some vendors attempt to get around this by pointing to the fact that a human recruiter is responsible for checking the data before making decisions: human-in-the-loop (or human-on-the-loop). But any attempt by a platform provider to absolve itself of responsibility is likely to go unheard. Claiming that an AI screening system, for example, is just providing efficient insights to help a human recruiter make the final decision will most likely not be enough to satisfy regulators, especially if busy recruiters become over-dependent on those AI insights, making them a de facto proxy for an automated decision.

If you want a shocking but timely analogy for this point, check out the example where American journalist Shane Harris, National Security & Intelligence Editor at The Atlantic, asks Claude (Anthropic’s LLM) how it feels about being used to identify military targets: “That’s not the human making a decision in any meaningful sense, it is simply ratifying an algorithmic output under time pressure with incomplete information and institutional pressure to move fast.” And: “it’s not human judgement, it is automation bias with a human signature attached”. Sound familiar? I’m not comparing recruitment to a battlefield; in recruitment, lives are rarely at risk. But the decisions we take do impact human rights and society.

Regardless of whether you agree with this view of how future regulation will be interpreted, my personal opinion is that we should care about this for the sake of good ethical practice, brand equity, and the human race.

A better approach: Hybrid, responsible AI

All that said, there are ways that both candidates and companies can get enormous benefits from AI-powered hiring and screening systems without the risks laid out above. At Hubert, we have built a unique, hybrid platform comprising a constellation of AI models alongside scientifically-backed, mechanical assessment methods to ensure a high degree of ethical and legal compliance. For example, Hubert uses a proprietary Small Language Model (SLM) for large parts of the chat interface, giving customers total control over, and trust in, how the conversation will respond, and eliminating the risk of off-topic or inappropriate conversation, hallucinations, and prompt-injection attacks. This is enriched with LLMs for rich generative responses only in specific parts of the chat, to ensure a personalised “agentic” feel and a high-quality experience. Critically, however, assessment and candidate scoring is conducted by over 100 deterministic, crystallised algorithms. These algorithms lean on long-established, scientifically-proven interviewing methodologies. While trained and optimised by both human experts and AI (Machine Learning, complemented with LLMs for context), once tested for bias the algorithms are ultimately locked down.

Why does this matter? The key ingredients of responsible AI use in recruitment are accuracy, explainability, repeatability (consistency), and transparency. Hubert’s unique “glass box” approach ensures exactly that, combining probabilistic LLMs where rich communication supports a great experience with deterministic algorithms where auditability, explainability and legal defensibility are critical.
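
For illustration only (this is not Hubert’s actual scoring, just a minimal sketch of the “glass box” idea, with hypothetical criteria names), deterministic scoring might look like the following: fixed, auditable criteria weights, with every score shipped alongside the per-criterion breakdown that produced it, giving repeatability and explainability by construction.

```python
from dataclasses import dataclass

# Minimal "glass box" sketch (illustrative, not Hubert's algorithms):
# fixed, auditable criteria weights, and every score ships with the
# per-criterion breakdown that produced it.

@dataclass(frozen=True)          # frozen: the rubric is locked down
class Criterion:
    name: str
    weight: float                # fixed weight, reviewed for bias up front

RUBRIC = (
    Criterion("years_of_experience_met", 0.4),
    Criterion("required_skills_covered", 0.4),
    Criterion("structured_answer_quality", 0.2),
)

def score_candidate(signals: dict[str, float]) -> tuple[float, list[str]]:
    """Deterministic scoring: same signals in, same score and audit trail out."""
    total, audit = 0.0, []
    for c in RUBRIC:
        value = signals[c.name]          # each signal normalised to 0..1
        contribution = c.weight * value
        total += contribution
        audit.append(f"{c.name}: {value:.2f} x {c.weight} = {contribution:.2f}")
    return round(total, 2), audit

score, audit = score_candidate({
    "years_of_experience_met": 1.0,
    "required_skills_covered": 0.75,
    "structured_answer_quality": 0.5,
})
print("score:", score)
print("\n".join(audit))   # the explanation a recruiter (or regulator) can read
```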

The market problem: Noise and false choices

Many companies are yet to decide how to adopt AI in recruitment, while others have already had their fingers burned by an ever-growing list of over-promising vendors caught in a race to be first, overlooking ethics and exposing their customers to legal jeopardy. And the market is getting noisier by the minute as it becomes ever easier to vibe-code your way into being a tech start-up, with “agentic” demos that impress at first glance. Tools rushed to market soon unravel under the scrutiny of real-world use at scale, with customers reporting poor candidate experiences, inconsistencies, inaccuracies and indefensible scoring. In response, some leaders see AI adoption as a binary decision: use LLMs/GenAI, or don’t. Hubert believes this is not a binary choice. There is a middle ground that captures the genuine benefits of the technology while protecting human rights and the social fabric.

A safer path forward

Hubert is an AI screening agent offering recruiters the fastest path to the best talent, while ensuring awesome candidate experiences. We put candidate safety first, meaning customers can benefit from the promised efficiencies of AI without the risks.
