Navigating ageism in hiring: practical steps recruiters can take now
2026-03-02
Fredrik Törn
Ageism in hiring is one of those topics that can be hard to “see” until you start looking for it. In our recent webinar with Simon Bucknell, we spoke about how age bias often shows up quietly: in job descriptions that cap years of experience, in “culture fit” language, or in how quickly certain candidates are filtered out during CV screening and shortlisting.

The good news: there are practical steps talent acquisition leaders and recruiters can take to reduce ageism – without making hiring slower or harder. And when used responsibly, AI technology can be part of the solution.

1) Start with structure (it’s the simplest bias-reducer)

If there’s one takeaway I’d underline, it’s this: a structured recruitment process beats an unstructured one for fairness and for quality.

That means:

  • Define what “good” looks like for the role (skills, behaviours, must-haves vs. nice-to-haves)
  • Put those criteria into the job description
  • Use these criteria to assess every candidate consistently

When hiring lacks structure, people naturally fall back on shortcuts, and that’s where any type of bias thrives (age bias included).
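To make "consistent assessment" concrete, here is a minimal sketch of a weighted scoring rubric applied identically to every candidate. The criteria, weights, and 1–5 rating scale are illustrative assumptions, not Hubert's actual model or any specific methodology.

```python
# Illustrative sketch: criteria, weights, and scale are assumptions,
# not any vendor's real scoring model.

# Define what "good" looks like for the role, once, up front.
CRITERIA = {
    "customer_communication": 3,  # weight: must-have
    "problem_solving": 2,
    "scheduling_flexibility": 1,  # nice-to-have
}

def score_candidate(ratings: dict) -> float:
    """Apply the SAME weighted criteria to every candidate (ratings on 1-5)."""
    total_weight = sum(CRITERIA.values())
    return sum(CRITERIA[c] * ratings.get(c, 0) for c in CRITERIA) / total_weight

# Every candidate is scored on identical, job-relevant criteria --
# no CV shorthand, no graduation years.
print(score_candidate({"customer_communication": 4,
                       "problem_solving": 5,
                       "scheduling_flexibility": 3}))
```

The point is not the arithmetic, it's the discipline: the criteria are fixed before anyone is assessed, so no candidate is judged by a different yardstick.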

2) Don’t confuse “shorthand” with “signal”

We spoke about this in the context of things like work gaps. A gap is not a verdict; it's a prompt for a conversation.

If your process automatically screens people out because of a gap, graduation year, or “too much experience,” you’re likely filtering out strong candidates for the wrong reasons. And there’s a cost: you risk not hiring the best talent.

As recruiters, you should avoid strict rules that are really just stereotypes in disguise.

Structured assessment helps here. When candidates are evaluated on how they respond to job-relevant questions rather than on how their CV “looks”, you can instantly reduce the influence of age-related shorthand.

3) Use technology, but choose it deliberately

AI in hiring isn’t “good” or “bad” by default. It’s more accurate to say: there’s a spectrum. The good, the bad, and the ugly.

  • The good: structured, skills-based assessments that focus on job-relevant criteria, where irrelevant personal characteristics (age, gender, ethnicity etc.) are not part of the evaluation.
  • The bad: unstructured tools or workflows that add noise and inconsistency.
  • The ugly: dumping CVs into a generic LLM and asking it to rank candidates without defined criteria, which can raise serious privacy, compliance, and bias issues.

Responsible AI in hiring should mean:

  • Clear, predefined job criteria
  • Transparency about how candidates are assessed
  • No reliance on irrelevant personal data
  • Human oversight and final decision-making
  • Compliance with privacy regulations

At Hubert, our approach is simple: we use deterministic machine-learning models to assess responses to structured job-related questions. Our assessment doesn’t know the candidate’s age, doesn’t evaluate appearance and doesn’t infer from graduation years. It focuses on what the candidate actually says in relation to the job criteria.

So my advice is: don't be afraid of machines, especially in high-volume hiring. Machines are great at doing repetitive tasks at scale. But you must stay in control and make sure the technology you use reduces bias rather than automating it.

4) Stay in control: the machine does what you ask it to do

This is the part that often gets missed in AI debates.

A tool will generally reflect:

  • the criteria you give it
  • the data you feed it
  • the decisions you outsource to it

If your criteria are vague (“high energy,” “culture fit,” “digital native”), you’ll get vague and biased outcomes. If your criteria are clear and job-relevant, you’re much more likely to get fair, defensible decisions.

5) Make fairness visible, to candidates and hiring managers

Simon landed an important point in the webinar: inclusivity needs to be shown, not just stated.

For TA teams, that can look like:

  • reviewing job ads for age-coded language
  • publishing or at least tracking age diversity metrics internally
  • being transparent about your selection process 

Candidates are trying to understand whether the process is fair. If it’s not obvious, they’ll assume the worst, especially in a market where people already feel the system is stacked against them.
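Reviewing job ads for age-coded language can even be partly automated. Below is a minimal sketch; the phrase list is an assumption for illustration, not an authoritative taxonomy of age-coded wording, so treat any hits as prompts for human review.

```python
import re

# Illustrative phrase list -- an assumption, not a complete or
# authoritative catalogue of age-coded language.
AGE_CODED = [
    r"digital native",
    r"young and dynamic",
    r"recent graduate",
    r"\b(?:max|maximum|no more than)\s+\d+\s+years",  # experience caps
]

def flag_age_coded(job_ad: str) -> list:
    """Return the patterns that match the ad, for a human to review."""
    return [p for p in AGE_CODED if re.search(p, job_ad, re.IGNORECASE)]

ad = "We're a young and dynamic team looking for a digital native."
print(flag_age_coded(ad))
```

A check like this fits naturally into the job-ad review step, as a first pass before a person reads the ad with fresh eyes.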

A simple checklist to take back to your team

If you want a quick starting point, here’s what I’d implement first:

  • Replace “years of experience” requirements with skills and outcomes
  • Use structured screening questions tied to job criteria
  • Avoid graduation year filters and age-coded language
  • Choose responsible assessment tools like Hubert and avoid “shadow AI”
  • Audit where candidates are being filtered out and why
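The last checklist item, auditing where candidates are filtered out, can start as a very simple funnel comparison. The sketch below computes pass-through rates by age band; the field names, stages, and bands are hypothetical placeholders for whatever your ATS actually records.

```python
from collections import Counter

# Hypothetical funnel data: (furthest stage reached, age band).
# Stage names and age bands are illustrative assumptions.
candidates = [
    ("hired", "25-34"), ("interview", "25-34"), ("screened_out", "25-34"),
    ("interview", "45-54"), ("screened_out", "45-54"), ("screened_out", "45-54"),
]

def pass_rate_by_band(records, passing_stages=frozenset({"interview", "hired"})):
    """Share of candidates per age band who progressed past screening."""
    total, passed = Counter(), Counter()
    for stage, band in records:
        total[band] += 1
        if stage in passing_stages:
            passed[band] += 1
    return {band: passed[band] / total[band] for band in total}

print(pass_rate_by_band(candidates))
```

If one band consistently clears screening at a much lower rate, that is your cue to inspect the criteria and filters at that stage, not proof of bias on its own.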

Ageism is real, and it’s often invisible. But it’s not inevitable. With structure, accountability, and responsible use of AI, we can build hiring processes that are not only fairer but also better at identifying the best talent.

If you’d like to continue the conversation or learn more about responsible AI in screening, feel free to reach out.

Upcoming webinar on bias

On Thursday, March 5, we’re hosting a webinar on gender bias in recruitment and how to manage it in practice. The session will be held in Swedish, with English subtitles available.

Want to join us? Sign up here:

https://page.hubert.ai/sv/mot-en-mer-k%C3%B6nsneutral-rekrytering-l%C3%A4rdomar-fr%C3%A5n-securitas-sverige
