AI Trust Paradox: Why Confidence Is Rising Faster Than Readiness
28 January 2026
Trust in AI is rising, and many organizations are mistaking that confidence for readiness.
Informatica’s CDO Insights 2026 report lands at a moment when AI has crossed a psychological threshold inside organizations. The conversation has shifted from “Should we try this?” to “How fast can we roll it out?” That change brings energy and momentum, but also a new kind of risk, because confidence can spread faster than capability.
I explored this in a recent podcast conversation with Nathan Turajski, Senior Director for Product Marketing at Informatica, and what stood out was the growing gap between enthusiasm and operational maturity. As Turajski put it, “That experimentation is now truly turned into more of an operational push to operationalize and scale AI, make it productive, realize value.”
When AI goes mainstream inside a company, accountability can get blurry fast, especially when outputs move straight into workflows.

AI Adoption Has Hit A Tipping Point, Agentic AI Is Right Behind It
The topline numbers in the report are striking. Sixty-nine percent of organizations have incorporated generative AI into business practices, and another 25 percent expect to do so in the next 12 months. That is pretty much everyone. Agentic AI is following close behind, with 47 percent already adopting it, and another 31 percent planning adoption in the next 12 months.
Turajski emphasized how quickly the market moved from hype to implementation intent, saying, “Organizations took that and they’re moving really fast.”
The report also shows where leaders want AI to land first. Customer experience and loyalty is the most cited use case category for the next 12 months, followed closely by business intelligence and decision-making, compliance and ESG requirements, employee collaboration and workflows, and post-sale customer support.
This signals that AI is being pushed into customer-facing decisions and regulated processes, as well as internal workflow choices. When AI sits at the heart of decisions that affect customers, employees, or compliance outcomes, the quality of the underlying data becomes a board-level issue, whether or not anyone labels it that way.
The AI Trust Paradox: Why Confidence Can Become A Liability
One of the most interesting findings in the report is what Informatica calls the AI trust paradox. Employee trust in AI and the data behind it is rising, while readiness, governance visibility, and literacy are not keeping pace.
Sixty-five percent of data leaders say most or almost all employees trust the data being used for AI. In organizations that have adopted agentic AI, that number climbs to 74 percent. On the surface, that sounds like progress. In practice, it can be a warning light, because trust is being reported alongside persistent reliability concerns and widespread skills gaps.
The report shows why this is paradoxical. Seventy-five percent of data leaders believe their workforce needs upskilling in data literacy, and 74 percent say employees need upskilling in AI literacy to responsibly use AI outputs in day-to-day operations. At the same time, 76 percent say their company’s AI visibility and governance have not completely kept pace with employees’ use of AI.
Turajski captured the dynamic behind the numbers. “We have to separate the hype and what the workforce believes is the promise of AI and the opportunity, from the readiness to actually achieve the results that they’re expecting.” He also pointed to the risk of “unbridled enthusiasm” and people using tools “without any sort of oversight or governance.”
There is a practical reason this happens. Generative AI often begins with small, low-stakes tasks: rewriting an email, summarizing notes, drafting content. It can feel dependable in that context, which encourages people to generalize that trust into higher-stakes scenarios. Yet the stakes change dramatically when AI is asked to make decisions, trigger actions, or operate as an autonomous system inside workflows. That is where productive skepticism needs to be designed into how AI is used.
Data Quality Is Still The Bottleneck, And Agents Raise The Stakes
If the trust paradox is the cultural risk, data quality is the operational constraint. The report makes that clear: Fifty-seven percent of organizations that have adopted or plan to adopt generative AI cite data reliability as a key barrier to moving projects from pilot to production, essentially unchanged from last year.
More than half are very or extremely concerned about pilots moving forward without addressing reliability problems uncovered by earlier initiatives. At the same time, 61 percent say better data, including higher quality and completeness, is making it easier to transition generative AI pilots into production compared to a year ago.
Agentic AI raises the stakes further. For companies adopting or planning to adopt agentic AI, the most common challenge in migrating agents into production is data quality and retrieval concerns, cited by 50 percent. Security concerns, observability concerns, lack of expertise, and lack of safety guardrails also feature prominently.
Turajski summarized the core risk in a single line: “Without a governed data foundation, these autonomous agents can create inaccurate customer outcomes at a massive scale.” That phrase, “massive scale,” is the part leaders should sit with. When an organization scales a traditional process, errors show up one case at a time. When an organization scales an agent, errors can propagate instantly across many customers, many decisions, and many systems.
The Skills Gap Is Becoming The Real Scaling Constraint
The technology is moving fast, and most organizations can buy tools. The harder work is enabling people to use AI responsibly and effectively.
This is where the report is blunt: 75 percent of leaders say the workforce needs data literacy upskilling, and 74 percent say AI literacy upskilling is needed. For agentic AI specifically, 42 percent cite lack of agentic AI expertise as a barrier to getting agents into production.
Turajski framed the human challenge in practical terms. “The technology part is actually pretty easy,” he said, adding that it is “that people side” that becomes the constraint when workforces are not trained to use tools and follow processes.
There is also a leadership challenge hiding inside this. When boards and executives feel competitive pressure, there is a temptation to reward speed over discipline. In reality, literacy and governance are what turn speed into sustainable advantage. AI literacy is not a training perk; it is operational risk management.
Unstructured Data Is The Next Governance Frontier
One of the most important shifts in the report is the rise of unstructured data governance. For many organizations, the most valuable context sits in documents, emails, transcripts, PDFs, chat logs, and other forms of unstructured content. AI makes it possible to use that content, and it also makes it necessary to govern it.
The report shows that 38 percent of data leaders cite the quality and governance of unstructured data as a top challenge for AI success over the next 24 months. It also appears among the top data-related challenges for the next 12 to 24 months.
Turajski described this as a shift that “breaks down” traditional governance models built around structured systems, while opening a major opportunity to make AI smarter and more context-aware.
The implication is clear. The organizations that treat unstructured governance as a first-class capability will move faster with less drama, because they will spend less time cleaning up avoidable problems.
Vendor Sprawl And Hidden Complexity Can Erode ROI
There is one more risk worth calling out, because it is quieter than a security incident and more common than a model failure. Complexity.
The report finds that data leaders believe they will need an average of seven vendors to support data management priorities in 2026, and eight vendors on average for AI management priorities. Those adopting generative AI or agentic AI expect even more vendor partners. Improving data trust is the most common reason cited for partnering with multiple vendors.
Turajski warned that “using more vendors has the potential to add complexity and ultimately reduce scalability,” and that too many tools can “stall the ROI on their AI investments.”
This is where organizations need a clear operating model for AI governance, data governance, and tool strategy. Vendor variety can bring innovation, and it can also bring fragmentation. The goal should be a manageable architecture that supports auditability, security, and consistent policy enforcement, while still enabling teams to move.
What Smart Organizations Will Prioritize In 2026
The report suggests that many organizations already know what needs attention. Eighty-six percent plan to increase investments in data management in the year ahead. The top needs driving these increases include improving data privacy and security, improving data and AI governance, and improving data literacy and AI fluency.
The logic behind these priorities is simple. If AI is becoming embedded across customer experience, compliance, and internal operations, then data trust, governance visibility, and AI and data literacy become the foundation for competitive advantage. AI maturity is not about chasing the newest model. It is about creating the conditions for AI to be deployed widely, safely, and repeatedly, with outcomes leaders can defend.
Or, to borrow Turajski’s practical framing, organizations need “controls” and “guardrails” that curb “blind optimism,” so innovation does not turn into uncontrolled risk.
Click here to download Informatica’s CDO Insights 2026 report.
#InformaticaPartner #Sponsored
Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity.
He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations.
He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.
Bernard’s latest book is ‘Generative AI in Practice’.