Why Trust Is The Missing Ingredient In Your AI Strategy
1 April 2025
In the rush to deploy artificial intelligence, many organizations miss the crucial ingredient that determines whether AI initiatives succeed or fail. It’s not processing power, cutting-edge algorithms, or even data quality (though these certainly matter). The missing ingredient? Trust.
As I explored in a recent conversation with Leanne Allen, Partner and Head of AI at KPMG UK, the intersection between AI innovation and trust represents not merely an ethical consideration but a fundamental business imperative.
“Most of the challenges to the barriers of adoption are driven by this trust challenge,” Allen told me. “Trust can take a few different forms. The colleagues themselves, do they trust in the tools that they’re using? Some of that could be driven by their fear of their jobs. Then there’s trust with your customers and consumers. And then there’s trust with regulators.”

The Triple Trust Challenge
Organizations face a three-dimensional trust challenge when implementing AI. First, employees may resist AI tools if they fear job displacement or don't understand how to work effectively with them. Second, customers may reject AI-enabled products or services if they doubt their reliability, fairness, or data-handling practices. Third, regulatory bodies worldwide are increasingly scrutinizing AI applications for compliance with emerging standards.
This multi-faceted trust challenge explains why many AI initiatives deliver disappointing results despite substantial investment. Without trust, adoption lags, customer engagement suffers, and regulatory pressures mount.
Why Responsible AI Is Your Competitive Advantage
KPMG's Trusted AI framework, outlined in detail on their website, emphasizes that responsible implementation of AI isn't just about avoiding harm; it's about creating sustainable business value.
"The initial value is very much around productivity and efficiency gains," Allen noted. "However, although it promises all of these amazing value and amazing gains, unless people start actually using these tools and not just using them in a very infancy side, like a bit of chat here and there, but actually using them to their full potential, you're not going to drive that growth and promise that's being made."
KPMG's approach centers on ten ethical pillars: fairness, transparency, explainability, accountability, data integrity, reliability, security, safety, privacy, and sustainability. These principles guide implementation throughout the AI lifecycle, from ideation to deployment and monitoring.
The Three Waves Of AI Transformation
Allen describes three distinct waves of AI adoption that organizations typically experience:
"The first wave is very much what we call an enabler wave. It's retooling, giving you access to tools to help you do your job a bit better and faster. Wave two is then looking at the actual end-to-end processes themselves and effectively it's the redesign of that process. Wave three is reimagining. That's really thinking about even your organizational structure, going back to what is your value stream of your organization."
These waves highlight how trust must be built into AI systems from the beginning, as each successive wave involves deeper integration of AI into business processes and organizational structures.
Values-Driven AI: Aligning Technology With Corporate Principles
One of the most compelling aspects of KPMG's framework is its emphasis on aligning AI initiatives with existing corporate values.
"The values-driven approach does align to corporate values and most corporate values will have techniques like, or statements like integrity baked into them. They will have social responsibility baked into them," Allen explained.
In practice, this means establishing ethics boards or councils to review AI use cases. These boards aren't compliance teams that simply check boxes against regulations. Instead, they serve as advisors who challenge whether potential AI applications align with organizational values and consider the diversity of thought essential for responsible innovation.
"Putting in ethics boards or ethics councils in place... they're not compliance teams, so they're not there to do the job of saying yes or no, and tick a box against regulation, they're there as an advisory board and sometimes a challenge, to on the ethical side, more than anything," Allen said.
Human-Centric Design: Augmentation Over Automation
The distinction between augmenting human capabilities versus replacing them entirely represents another key aspect of building trustworthy AI.
"Anything that requires decision making is still about augmenting humans, supporting humans, providing them extra information so they can make better decisions, rather than making those decisions directly themselves," Allen emphasized. "And I think that's really the shape of what the workforce of the future is going to look like. It will free up time for more critical thinking, more value, more creative type work."
Organizations should measure whether AI truly augments human capabilities through metrics like time saved and the percentage of AI-generated decisions that humans modify, indicating genuine human oversight rather than rubber-stamping.
Overcoming Implementation Obstacles
When I asked Allen about the most common obstacles organizations face when implementing ethical AI principles, her answer was illuminating:
"The first one is the framework. Do you have your ethical principles clearly defined and how you communicate those? Then there's an element of a higher-level operating model. Then it's going to come back down to education. One of the biggest obstacles is still a lack of education and understanding."
She also emphasized the persistent challenge of poor data infrastructure: "Fundamentally what hinders the acceleration here is the foundational elements. So, infrastructure and data, right? And the quality of the data and access to the data."
Building A Global Consensus
Looking toward the future, Allen identified a significant collective challenge humanity needs to address to ensure AI benefits society as a whole:
"I think the first one is the global lens and we need, in my opinion, a level of consistency of standards or regulation across jurisdictions. And at the moment, I think we're possibly going in the other direction," Allen observed. "Data doesn't have boundaries, right? So the challenge is AI doesn't have boundaries. We have boundaries as countries. And I think that's going to stifle the amount of innovation that can happen, or countries will develop AI in silos."
The Path Forward: Trust By Design
Building trust in AI systems isn't an afterthought; it must be designed in from the beginning. This "trust by design" approach involves embedding control points throughout the AI lifecycle to ensure systems align with both regulatory requirements and ethical principles.
Organizations that succeed in this space will avoid potential ethical crises and gain a competitive advantage through higher adoption rates, greater stakeholder confidence, and more sustainable innovation.
The promise of AI remains extraordinary, but its full potential will only be realized when paired with the human element of trust. As Allen aptly summarized: "We believe in the transformative power of AI. And that it can only reach its full potential when it is paired with human expertise and ingenuity."
For businesses looking to implement AI successfully, this means going beyond the technical aspects to address the human dimensions of trust throughout the organization and its broader ecosystem of stakeholders. Only then will AI truly deliver on its transformative potential.
To find out more about how KPMG is helping to make the difference with AI, visit their webpage: You can with AI - KPMG UK.
#KPMGPartner
Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity.
He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations.
He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.
Bernard’s latest book is ‘Generative AI in Practice’.