Historically, gender discrimination has been a significant issue in hiring processes. However, research from the Nordic countries shows that explicit gender discrimination has declined markedly in recent decades.
That doesn’t mean bias has disappeared.
According to Moa Bursell, a researcher at the Institute for Futures Studies, bias in recruitment often appears in more subtle ways. These include:
Because these biases are often invisible to decision-makers themselves, they can be difficult to detect in traditional recruitment processes.
This is where structured and data-driven approaches can help.
Securitas recruits approximately 2,000 employees per year in Sweden, drawn from roughly 45,000 applications.
Most roles are operational security positions such as:
These roles attract a large applicant pool, but the industry remains strongly male-dominated: only about 25% of applicants are women.
To manage these volumes while ensuring fairness, Securitas Sweden introduced structured digital interviews through Hubert. Candidates who met basic eligibility requirements were invited to complete an interview where they answered standardized questions about motivation, experience, and competencies.
The goal was simple: evaluate candidates based on relevant skills and potential, rather than assumptions from CVs.
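To make the idea of a standardized interview concrete, here is a minimal sketch of how such scoring could work: every candidate answers the same questions, each answer is rated on a fixed rubric, and the ratings combine into one comparable score. The question names, weights, and 0–4 scale are illustrative assumptions, not Hubert's actual model.

```python
# Hypothetical structured-interview scoring sketch. The rubric categories
# mirror the areas mentioned in the article (motivation, experience,
# competencies); the weights and 0-4 scale are invented for illustration.
RUBRIC_WEIGHTS = {
    "motivation": 0.4,
    "experience": 0.3,
    "competencies": 0.3,
}

def interview_score(ratings: dict[str, int]) -> float:
    """Weighted sum of rubric ratings (each 0-4), scaled to 0-100.

    Because every candidate is rated on the same questions with the same
    weights, scores are directly comparable across the applicant pool.
    """
    raw = sum(RUBRIC_WEIGHTS[q] * ratings[q] for q in RUBRIC_WEIGHTS)
    return round(raw / 4 * 100, 1)

# Example: strong motivation, modest experience, solid competencies.
score = interview_score({"motivation": 4, "experience": 2, "competencies": 3})
print(score)  # 77.5
```

The key design point is that the rubric, not the individual recruiter, defines what counts, which is what makes the later bias analysis possible: if scores skew by group, the cause must lie in the questions or weights.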
Shortly after implementing the new process, Securitas’ recruitment team observed a pattern in the early interview scores.
Women were on average scoring slightly lower than men in the initial interview stage.
Because improving diversity and fairness in hiring has long been a priority for Securitas, the team immediately investigated the results.
Together with Hubert and independent researchers, the team analyzed the scoring criteria in detail. The issue turned out not to be the AI itself, but the evaluation criteria used in the process.
Several factors contributed to the imbalance:
Because the security industry has historically been male-dominated, these criteria unintentionally favored candidates who already had industry experience – something Securitas quickly identified as an opportunity to refine the model.
The insights from the analysis led the Securitas team to refine the recruitment model and introduce several changes:
These changes shifted the focus from narrow industry background to skills, motivation, and potential.
The result? Within a short time, the slight difference in average score between genders leveled out.
The project highlights several important insights for organizations adopting AI in recruitment.
1. Technology is not a “set and forget” solution
AI tools must be continuously monitored and evaluated.
2. Bias often comes from criteria, not algorithms
The biggest risks may lie in how hiring requirements are defined.
3. Collaboration matters
Close cooperation between recruiters, technology providers, and researchers helped identify and fix issues quickly.
4. Human responsibility remains essential
Even with AI support, recruiters remain accountable for hiring decisions and candidate communication.
AI and automation have the potential to make recruitment more consistent, transparent, and scalable. But technology alone cannot guarantee fairness.
Organizations must combine structured processes, thoughtful criteria, and continuous evaluation.
As this project shows, when companies actively monitor outcomes and adapt their processes, technology can become a powerful tool for building a more inclusive workforce.
Watch the full webinar with Securitas and IFFS on-demand here: https://page.hubert.ai/sv/mot-en-mer-k%C3%B6nsneutral-rekrytering-l%C3%A4rdomar-fr%C3%A5n-securitas-sverige