Artificial intelligence is reshaping how we work, learn, and deliver services. But unless inclusion is built into its foundations, it risks amplifying inequality. Here’s how leaders can make AI work for everyone.

To explore how artificial intelligence (AI) can be designed and deployed more fairly, Capita recently hosted a webinar: "AI and inclusion: Breaking barriers and inspiring inclusion in work and learning". The session was moderated by Justice Onwuka, Senior Workforce Development Consultant at Capita, and featured panellists from Capita, Microsoft and techUK: Nimmi Patel, Tiina Stephens and Hector Minto.

When AI only reflects a slice of society

AI systems are shaped by the people and data that train them. Nimmi Patel raised concerns that biased data can lead to discriminatory outcomes, particularly when development teams are not diverse. Just 22% of AI professionals globally are women, and representation from other underrepresented backgrounds is even lower. Without diverse voices in the room, the systems being built are more likely to serve a narrow slice of society.

She also pointed to a worrying shift in priorities. A January 2025 study by resume.org, which surveyed 1,000 companies in the United States, found that one in eight organisations planned to reduce or eliminate their diversity, equity and inclusion (DEI) programmes. While this trend is thankfully less prevalent in the UK, where 71% of directors plan to continue their DEI efforts, it is something to be wary of.

“It seems counterproductive,” said Patel. “When AI is developed from just one perspective, it is not going to be a good product or provide a good service.” She also referenced Ivana Bartoletti, Vice President, Global Chief Privacy and AI Governance Officer at Wipro, who once said that “the problem is not bias, it is when bias becomes discrimination.” In other words, bias may be present in any dataset, but it becomes harmful when it leads to unfair or unequal outcomes.

From policy to practice: Bridging the inclusion gap

The panel discussed the importance of building inclusion into the way AI systems are designed, governed and delivered, not just into high-level strategies.

In the UK, public sector procurement is driving some of this change. Tiina Stephens explained, “Suppliers that demonstrate strong commitments to social value and DE&I are more likely to succeed in securing contracts…that prioritisation in procurement ensures that all suppliers, regardless of their backgrounds, have opportunities to compete fairly.”

But policies do not always translate into usable, inclusive products. Hector Minto warned that exclusion is increasingly driven by the data behind systems. “The data is now the discrimination, not the UX.” He reflected on conversations with deaf and blind users, where everyday tasks can expose the limits of systems not designed with them in mind. AI tools, for example, may not account for how someone who is deaf will interact with a voice-based interface. These oversights are not just design flaws - they risk shutting people out of services altogether. Without attention to real-world accessibility needs, exclusion becomes embedded in the technology itself.

For public sector services, the stakes are high. When systems are not inclusive, the people most in need can be left without access to information, support, or opportunity.

Proven examples of inclusive AI

Despite the risks, the panel shared examples of how AI can support inclusion when it is built around the needs of real users.

Tiina Stephens shared insights from Capita’s early rollout of Microsoft Copilot, which included participants from its accessibility and neurodiversity networks. The pilot revealed both the opportunities and challenges of implementation. Around 75% of users reported improved output, and over 60% said the tool helped them focus on higher-value tasks.

Neurodivergent colleagues highlighted specific benefits, including:

  • Using AI-generated prompts and summaries to kickstart writing tasks.
  • Reducing cognitive load when planning or organising work.
  • Feeling more in control of daily responsibilities.

The trial also surfaced some accessibility barriers. Initial training materials were too text-heavy, prompting the team to redesign them with more visuals and simplified formats to accommodate different learning preferences.

Another example came from Swindon Borough Council, which collaborated with a local group of people with learning disabilities to co-design an AI tool that makes public information more accessible. The tool converts complex documents into ‘easy read’ formats, using plain language and visual cues to support comprehension. It can also translate content into more than 70 languages and has been released as open source to support broader adoption across other organisations.

Together, these examples show that inclusive AI is not theoretical - it is already delivering value when built collaboratively and designed with real-world users in mind.

Inclusion requires awareness, reflection and collaboration

Before wrapping up, moderator Justice Onwuka invited the panel to share their final reflections on what it takes to make AI truly inclusive:

  • Use lived experiences to improve technology
    Nimmi Patel encouraged people to recognise that their experience with AI may not reflect that of others. She called for more awareness of different lived experiences, and for using that awareness to improve how technology is built and used. She also noted that inclusion is not only about boosting productivity, but about how we choose to use the time AI gives back.
  • Take time to consider human impact
    Tiina Stephens spoke about the pace and pressure of everyday work. She stressed the importance of pausing to reflect on how our work affects colleagues, clients and society, and the need to stay grounded in that impact, even when under pressure.
  • Build partnerships and find common ground
    Hector Minto reflected on the importance of partnerships. No single organisation or voice holds the full solution to inclusive AI. Real progress depends on working across sectors, sharing responsibility, and including those whose perspectives are still excluded from the conversation.

Inclusion is not a choice - it is a responsibility

Artificial intelligence is already shaping how services are delivered, how decisions are made, and who gets access to opportunities. Its potential is significant, but so are the risks when inclusion is not part of the process from the start.

For leaders, this is not just a question of ethics. It is about ensuring services are equitable, accessible, and fit for purpose. Inclusive AI does not happen by default. It requires attention to data, governance, lived experience, and delivery, at every stage and across every team.

Now is the time to ask the right questions: Who is this technology for? Who might it exclude? And how do we ensure it works better for everyone? The opportunity is real. So is the responsibility.

Want to know what inclusive AI looks like in action?

Watch the full webinar to explore real-world examples from Capita, Microsoft and techUK, and learn how AI can remove barriers and support inclusion in work, learning and everyday life.


Written by

Justice Onwuka


Senior learning and development consultant, Capita

With global experience in L&D, Justice joined Capita in 2023 to provide value to clients across the public and private sector. He’s committed to helping organisations recognise and harness the multifaceted value of their workforce. With a passion for creating environments where employees thrive, Justice leads the way in developing bespoke, value-driven solutions. By focusing on initiatives that maximise engagement, he ensures each organisation can achieve its goals while empowering its people to reach their full potential.
