The good news: there are very practical things talent acquisition leaders and recruiters can do to reduce ageism, without making hiring slower or harder. And when used responsibly, AI technology can be part of the solution.
If there’s one takeaway I’d underline, it’s this: a structured recruitment process beats an unstructured one for fairness and for quality.
That means:
When hiring lacks structure, people naturally fall back on shortcuts, and that’s where any type of bias thrives (age bias included).
We discussed this in the context of things like work gaps. A gap is not a verdict; it’s a prompt for a conversation.
If your process automatically screens people out because of a gap, graduation year, or “too much experience,” you’re likely filtering out strong candidates for the wrong reasons. And there’s a cost: you risk not hiring the best talent.
As recruiters, you should avoid strict rules that are really just stereotypes in disguise.
Structured assessment helps here. When candidates are evaluated on how they respond to job-relevant questions rather than on how their CV “looks”, you immediately reduce the influence of age-related shorthand.
AI in hiring isn’t “good” or “bad” by default. It’s more accurate to say there’s a spectrum: the good, the bad, and the ugly.
Responsible AI in hiring should mean:
At Hubert, our approach is simple: we use deterministic machine-learning models to assess responses to structured, job-related questions. Our assessment doesn’t know the candidate’s age, doesn’t evaluate appearance, and doesn’t infer anything from graduation years. It focuses on what the candidate actually says in relation to the job criteria.
So my advice is: don’t be afraid of machines, especially for high-volume hiring. Machines are great at doing repetitive tasks at scale. But you must stay in control and make sure you’re using technology that reduces bias rather than automating it.
This is the part that often gets missed in AI debates.
A tool will generally reflect:
If your criteria are vague (“high energy,” “culture fit,” “digital native”), you’ll get vague and biased outcomes. If your criteria are clear and job-relevant, you’re much more likely to get fair, defensible decisions.
Simon made an important point in the webinar: inclusivity needs to be shown, not just stated.
For TA teams, that can look like:
Candidates are trying to work out whether the process is fair. If it’s not obvious, they’ll assume the worst, especially in a market where people already feel the system is stacked against them.
If you want a quick starting point, here’s what I’d implement first:
Ageism is real, and it’s often invisible. But it’s not inevitable. With structure, accountability, and responsible use of AI, we can build hiring processes that are not only fairer but also better at identifying the best talent.
If you’d like to continue the conversation or learn more about responsible AI in screening, feel free to reach out.
On Thursday, March 5, we’re hosting a webinar on gender bias in recruitment and how to manage it in practice. The session will be held in Swedish, with English subtitles available.
Want to join us? Sign up here: