AI is moving fast in recruitment. Large Language Models (LLMs) are being positioned as tools to screen, summarise, rank, and assess candidates at scale. The promise is speed. The risk is something far more damaging: unaccountable decisions that nobody can properly defend. Accountability cannot be delegated to an LLM.
As you probably know by now, an LLM is an AI system trained on immense amounts of data, capable of understanding and generating natural language and other content, and of performing a wide range of tasks such as answering questions, writing, summarising, and translating. Since the launch of ChatGPT in late 2022, LLM-based solutions have increasingly been applied to candidate assessment, with tools designed to evaluate CVs, analyse interview transcripts, and even assess social media behaviour.
In recruitment, talent acquisition (TA) leaders are accountable - whether or not they personally reviewed a specific decision. When a candidate challenges an outcome, when legal asks how a decision was made, or when a journalist investigates systemic bias, the organisation must explain itself.
And that explanation has to be more than "the AI decided" or "that's what the model recommended."
Those answers don’t hold up legally, ethically, or reputationally.
LLMs generate outputs based on probability, not verification. They are exceptional at producing fluent, convincing text. They are not designed to provide stable, auditable reasoning chains for high-stakes decisions about people’s livelihoods. A hallucination looks exactly like a fact until you check the source - and in hiring, the damage is already done by then.
At Hubert, we see explainability as a matter of trust. As AI takes on a larger role in recruitment, the demand for transparency is growing - from candidates, recruiters, and regulators alike. That's why Hubert is placing extra emphasis on explainability in Q1: our users shouldn't just receive a result, they should understand why.
This is essential for fair decision-making, stronger candidate experiences, and an employer brand that stands up to scrutiny as the technology is examined more closely.
Candidates expect, and deserve, hiring processes that are fair, transparent, and explainable.
When a candidate feels rejected by a system that cannot clearly articulate why, trust erodes fast. And this doesn't stay private: it shows up in reviews, social posts, union conversations, and media narratives. It will shape how your organisation is perceived as an employer.
The reputational risk isn’t that AI is used. It’s that AI is used in ways no one can confidently stand behind.
Use LLMs where they add value - not where you need proof. LLMs can meaningfully improve candidate experience: clearer communication, faster responses, better guidance, multilingual support. They can help recruiters scale processes.
At Hubert, we use LLMs to manage dialogue, acknowledge responses, and support candidates through the interview journey. LLMs help guide and structure candidate conversations, ensuring every candidate receives a clear, consistent, and respectful interview experience - regardless of volume, location or language.
But they are not used to directly assess candidates. Assessment is different.
Assessment requires consistency, validity, and defensibility. It requires a methodology that can be explained to a candidate, a hiring manager, a regulator, and a court if needed. That’s not an innovation tax - it’s table stakes for responsible hiring.
Hubert’s assessment is grounded in structured interview science, not generative language.
We assess candidates using structured interview methodology: predefined, job-relevant questions scored against consistent, validated criteria.
Importantly, our view at Hubert is that today's LLMs can neither truly explain why an individual received a specific outcome nor reliably reproduce that same score. An LLM can certainly produce an elaborate, convincing, even plausible explanation that looks great at first sight. But that "explanation" would still be made up and generated post hoc; it would not be true in the sense that matters when a decision has to be defended.
Before allowing LLMs to influence who progresses and who doesn’t, ask: If this decision is challenged in 12 months’ time, can we explain it clearly, evidence it confidently, and stand behind it publicly?
If the answer is anything less than yes, the risk isn't abstract. It's reputational. And once it surfaces, it's your name, not the model's, attached to it.