As artificial intelligence shapes more service decisions, the risk of bias grows. We explore how leaders can reduce that risk through inclusive design, diverse data, and ongoing evaluation.
Artificial intelligence (AI) is increasingly used to support decisions in workforce development, from recruitment and service delivery to resource allocation. While these systems promise speed and scale, they can also reflect the biases in their training data. Without intervention, AI can inadvertently reinforce existing inequalities.
In a recent Capita-hosted webinar, AI and inclusion: Breaking barriers and inspiring inclusion in work and learning, experts from Microsoft, techUK and Capita discussed how bias takes root in AI systems, and what it takes to build fairer alternatives. Their message was clear: tackling bias requires leadership, transparency, and design that puts people first.
Preventing ‘dystopia’: The vicious cycle of bias in AI
Bias in AI often starts with the data. When historical data reflects social inequalities or lacks representation, those patterns can carry through into AI systems. For example, facial recognition tools trained mostly on lighter-skinned faces have been shown to misidentify people with darker skin. Similarly, if past hiring data reflects gender bias, AI models may learn to favour male candidates and reinforce exclusion.
As Nimmi Patel, Head of Skills, Talent and Diversity at techUK, noted during the webinar: “If AI models are trained on biased data, they are likely to produce discriminatory outcomes that can negatively impact marginalised communities.”
This problem is exacerbated by an emerging AI feedback loop. Patel highlighted that “more than half the content available online has been touched or entirely created by AI.” This means new models are likely to be trained on data that’s already been shaped by previous AI systems, which creates a cycle in which bias continues to compound. Without active intervention, we risk what she described as a “dystopian” scenario: biased AI shaping future data, which then produces more biased AI.
When bias leads to discrimination
Some degree of bias in AI models is expected - no dataset is perfect. But when those biases lead to consistent disadvantages for certain groups, such as lower approval rates or restricted access to services, they cross the line into discrimination.
That’s why proactive testing and monitoring matter. One approach is to simulate how a system performs across different demographics using known or synthetic data. Organisations are using tools such as IBM’s AI Fairness 360 and other open-source auditing frameworks to detect, measure and address potential disparities before deployment.
As Nimmi Patel noted in the webinar, some companies, like Wipro, are embedding “fairness checkers” into their AI development pipelines. This allows them to surface problems during training, not after damage is done.
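To make that concrete, a basic disparity check can be as simple as comparing how often a model returns a favourable outcome for each demographic group in a held-out or synthetic evaluation set. The sketch below is a minimal illustration in Python using pandas, not the workflow of any specific tool named above; the column names, sample data and 0.8 threshold are assumptions chosen for the example.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> dict:
    """Compare each group's positive-outcome rate against the most-favoured group.

    A ratio well below 1.0 (0.8 is a commonly cited rule of thumb) is a prompt
    for closer investigation, not an automatic verdict of discrimination.
    """
    rates = df.groupby(group_col)[outcome_col].mean()  # selection rate per group
    reference = rates.max()                            # rate of the most-favoured group
    return {group: rate / reference for group, rate in rates.items()}

# Hypothetical evaluation data: model decisions on a held-out or synthetic sample
decisions = pd.DataFrame({
    "gender": ["F", "F", "F", "F", "M", "M", "M", "M"],
    "shortlisted": [1, 0, 0, 0, 1, 1, 1, 0],
})

for group, ratio in disparate_impact(decisions, "gender", "shortlisted").items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: disparate impact ratio = {ratio:.2f} ({flag})")
```

A dedicated toolkit such as AI Fairness 360 offers many more metrics and mitigation techniques, but even a simple ratio like this can surface problems early in the development pipeline.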
Why diverse teams build better AI
Bias in AI is not just a data or engineering issue, it’s a people issue. When the same kinds of people build and test systems, they are more likely to overlook how those systems affect users whose lives, needs or identities differ from their own.
Only 22% of AI professionals globally are women, and that gap matters. Not because any one group holds the key, but because diverse teams bring broader perspectives, ask sharper questions, and spot risks others might miss.
As Hector Minto, Director of Commercial Accessibility at Microsoft, put it: “The people who know most whether your product works for them or not, are your DEI communities.” Involving employee networks and inclusion groups early in testing isn’t just good practice, it’s how better, fairer technology gets made.
Oversight, transparency and the role of leadership
Fairness in AI does not end at deployment. As systems operate in changing environments, organisations must continually monitor their impact. This includes reviewing outcomes to check whether particular groups are being unfairly affected - and being ready to adapt models or processes when needed.
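As an illustration of what that ongoing review might look like in practice, the sketch below recomputes outcome rates per group for each batch of live decisions and flags when the gap exceeds an agreed tolerance. The group labels, tolerance and escalation step are assumptions; in a real service they would be agreed with governance, legal and inclusion stakeholders.

```python
from collections import defaultdict

def monitor_outcome_gap(decisions, tolerance=0.10):
    """Flag when the gap in positive-outcome rates between groups exceeds
    the agreed tolerance for a batch of live decisions.

    `decisions` is an iterable of (group_label, outcome) pairs, where
    outcome is 1 for a favourable decision and 0 otherwise.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome

    rates = {group: positives[group] / totals[group] for group in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > tolerance

# Hypothetical weekly batch of decisions drawn from production logs
batch = [("group_a", 1), ("group_a", 1), ("group_a", 0),
         ("group_b", 0), ("group_b", 0), ("group_b", 1)]

rates, gap, breach = monitor_outcome_gap(batch)
print(rates, f"gap={gap:.2f}", "escalate for review" if breach else "within tolerance")
```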
Transparency plays a central role in maintaining public trust. People should be able to understand how decisions are made, especially when those decisions shape access to services or support. Clear, accessible explanations help make systems more accountable. Just as vital is providing ways for users to query or challenge outcomes. Effective feedback loops not only strengthen fairness but reveal real-world issues that may not have been visible during development.
Delivering on this vision takes more than technical fixes - it requires leadership, policy and culture. Leaders set the tone by making inclusion a strategic priority, supported by training, procurement standards and governance frameworks. As regulations such as the EU AI Act raise the bar on transparency and non-discrimination, forward-thinking organisations will not wait for compliance. They will act early to build AI that is not only compliant but also trusted.
Building AI that reflects public values
Bias in AI is a solvable problem, but it takes intention, vigilance, and inclusive design to address. When organisations lead with fairness, they not only reduce the risk of harm but unlock AI’s full potential to improve lives across society.
By designing with diverse users in mind, monitoring real-world impact, and making systems transparent and accountable, we move from theoretical fairness to meaningful change. That’s how we create AI that works for everyone - not just the majority.
Designing for inclusion from the start
The best way to reduce bias is to design for inclusion from day one. This means rethinking how systems are planned, developed and tested, with a clear understanding of who they serve and how. Key principles include:
- Representative data
Collect and curate datasets that reflect the full diversity of the population. This may involve sourcing additional input from underrepresented groups and reviewing historical data for gaps. As Minto described, organisations must create “deliberate datasets” that fill known blind spots, rather than assuming neutrality.
- Involve users early
Inclusive design starts with listening. Engaging directly with people who will use or be affected by the system helps identify risks and priorities that developers may overlook. These insights should inform the system’s features and functionality, not just validate them later.
- Build inclusive features
Small design choices can support equity and trust. This might include the ability to specify pronouns, receive explanations for decisions, or give feedback on outputs. All of these make systems more responsive and respectful.
- Test for fairness, not just function
Bias-aware testing must be part of standard quality assurance. That means assessing whether systems produce fair and equitable results across different groups and access needs. It’s not just about whether something works, it’s about who it works for; a minimal example of what such a test might look like follows this list.
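On the last point, one way to make fairness part of standard quality assurance is to express it as an automated test that runs alongside functional tests, for example with pytest. The sketch below is a self-contained illustration only; the decision rule, evaluation set and 0.2 tolerance are stand-ins for a real model interface, a curated demographically labelled sample, and a threshold agreed with governance teams.

```python
# Stand-ins for a real model and evaluation set; in practice these would be the
# deployed model's interface and a curated, demographically labelled sample.
EVALUATION_SET = [
    {"group": "group_a", "score": 0.9, "expected": 1},
    {"group": "group_a", "score": 0.2, "expected": 0},
    {"group": "group_b", "score": 0.8, "expected": 1},
    {"group": "group_b", "score": 0.3, "expected": 0},
]

def predict(score: float) -> int:
    """Placeholder decision rule standing in for the real model."""
    return 1 if score >= 0.5 else 0

def selection_rate(group: str) -> float:
    rows = [r for r in EVALUATION_SET if r["group"] == group]
    return sum(predict(r["score"]) for r in rows) / len(rows)

def test_functionality():
    """Conventional QA: does the system produce the expected answers?"""
    assert all(predict(r["score"]) == r["expected"] for r in EVALUATION_SET)

def test_fairness_across_groups():
    """Bias-aware QA: do selection rates stay within an agreed tolerance?"""
    gap = abs(selection_rate("group_a") - selection_rate("group_b"))
    assert gap <= 0.2, f"Selection-rate gap {gap:.2f} exceeds the agreed tolerance"
```

Running such checks in the same pipeline as functional tests means a fairness regression blocks a release in the same way a broken feature would.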
By embedding these practices, public sector organisations can develop systems that better reflect the people they serve.
Ready to build AI that’s fast, fair and inclusive?
Watch the full webinar with experts from Capita, Microsoft and techUK. Gain practical insights on improving data, governance and transparency to create AI that works for everyone.
Find out more about how our people and services are AI-enabled, or get in touch with the team to learn more about our services.

Justice Onwuka
Senior learning and development consultant, Capita
With global experience in L&D, Justice joined Capita in 2023 to provide value to clients across the public and private sector. He’s committed to helping organisations recognise and harness the multifaceted value of their workforce. With a passion for creating environments where employees thrive, Justice leads the way in developing bespoke, value-driven solutions. By focusing on initiatives that maximise engagement, he ensures each organisation can achieve its goals while empowering its people to reach their full potential.