The 4 Myths Holding Back The AI Revolution, According To Nokia Bell Labs
14 October 2025
Picture a future where artificial intelligence doesn’t just think faster but thinks better, where responsible development isn’t a constraint but a competitive advantage, and where the value of your data matters more than the volume. According to Dr. Sean Kennedy, Vice President and head of Nokia Bell Labs’ Artificial Intelligence Research Lab, we’re not there yet, and four persistent myths are standing in our way.
In my role as Futurist in Residence for Nokia, I recently sat down with Kennedy to discuss these myths, and what emerged was a fascinating challenge to conventional wisdom about where AI is heading and what really matters in getting there.

Why Uncertainty Beats Regulation As Innovation's Real Enemy
Ask most tech leaders what's slowing AI development, and regulation usually tops the list. Kennedy argues we're looking at the wrong culprit. "It's not to say that poorly designed regulation isn't going to stifle innovation," he acknowledges, "but what is really slowing us down is that people are uncertain how to move forward to best build responsible technologies."
The evidence backs him up. Research by Anu Bradford at Columbia University comparing innovation in the United States and the European Union revealed that the problem wasn't regulation itself, Kennedy explains, but rather "uncertainty around market fragmentation, uncertainty around limited venture capital, uncertainty around strict bankruptcy laws and uncertainty around immigration policies." When companies are uncertain about the rules of the game, they simply refuse to play.
Nokia's response has been to create its own framework, built on six pillars of responsible AI: fairness, reliability, privacy, transparency, sustainability, and accountability. Rather than waiting for perfect regulatory clarity, they've given their teams the tools to move forward with confidence. Kennedy describes one particularly clever solution, an AI risk assessment tool that uses a conversational interface to help developers understand the implications of their work. "It's not a tool from the top down, like a governance tool; rather, it's a tool that you would give to key AI stakeholders to be able to make decisions about the technology," he notes.
This strikes me as one of those truths that seems obvious in hindsight yet eludes most organizations in practice. The companies winning at AI aren't the ones paralyzed by regulatory ambiguity; they're the ones who've decided what they stand for and built guardrails accordingly.
The AGI Scaling Myth And The Case For Thinking Slow
Kennedy's second myth challenges the widely held belief that simply scaling up today's AI models will inevitably lead to artificial general intelligence (AGI). The industry has poured billions into making models bigger, feeding them more data, and giving them more computing power. Kennedy isn't convinced this path leads to AGI, as many seem to think.
Drawing on Nobel laureate Daniel Kahneman's framework of System 1 (fast, intuitive thinking) and System 2 (slow, deliberate reasoning), Kennedy argues that current large language models excel at the former but struggle with the latter. "What we've done is really scaled this fast or this system one, this fast intuitive thinking," he explains. "Right now it feels like we're still missing the deep insights and the tools that we will need to really engage and really come to reasoning."
The solution, according to Kennedy, lies in what he calls "Gen AI Plus," combining the fast intuitive capabilities of large language models with symbolic reasoners, knowledge graphs, and physical world models. Think of it as giving AI both the quick reflexes of a chess grandmaster's pattern recognition and the deep analytical power they deploy for complex positions.
This matters because the alternative, continuing to scale models indefinitely, creates its own problems. "These models are gonna be massive. They're gonna be extremely expensive, and they're gonna be in the hands of a few," Kennedy warns. The path to truly intelligent AI may require a fundamental shift in approach, not just bigger versions of what we already have.
Having spoken with both Demis Hassabis at DeepMind, who believes AGI is roughly five years away, and Yann LeCun at Meta, who argues we're much further out because we haven't cracked physical world understanding, I find myself leaning toward Kennedy's more measured view. The confidence that bigger models alone will get us there feels premature. We're missing fundamental pieces of the puzzle, and throwing more compute at the problem won't necessarily reveal them.
The Hidden Costs Of General Models
Kennedy's third myth tackles an assumption that seems almost self-evident: more general AI models create more value. The reality is messier and more troubling.
Nokia's research mapping AI's job impact across the United States revealed a troubling pattern. Major metropolitan areas will weather the transition relatively well, with displaced workers finding new opportunities. But smaller, single-industry towns face a different future. "These small rural one industry towns are going to be not only easy to automate in many ways, it's going to affect their population there," Kennedy observes. "The divide that we've seen is just only going to further widen."
Kennedy doesn’t argue against progress; he advocates for understanding not just the metrics you're optimizing for, but the broader impact technology will have. "You also are a citizen of the world, right? And so if you can build a technology in a responsible way such that it not only improves the value for your shareholders, but has not massive ill effects, I think these are the types of technologies that we should be driving towards."
The practical implication? Companies need to think beyond their immediate stakeholders and consider the societal ripples their AI systems create. I've watched too many organizations optimize for quarterly metrics while ignoring the broader consequences of their technology choices. The irony is that this short-term thinking often backfires.
Overlooked externalities have a way of becoming very expensive problems down the line, whether through regulatory penalties, reputational damage, or the simple reality that products built without considering their full impact tend to fail in complex ways. I believe that responsible AI is a prerequisite for sustainable success.
Quality Over Quantity In The Age Of Data
Kennedy's fourth myth might seem obvious, yet companies violate it constantly: more data doesn't automatically equal better AI. Walking into a big data conference in London to make this case took courage, but the evidence is clear.
"Time and time again, we find that just having oodles and oodles of data doesn't help," Kennedy states flatly. "It's having the right data, having clean data." At Nokia, they've found that spending time ensuring data quality, even using AI tools to clean existing datasets, delivers far better results than simply accumulating more information.
The company has built medium-sized language models trained on high-quality proprietary data that perform as well as much larger general models. The benefits extend beyond accuracy. "They don't hallucinate as much, they're also much, much cheaper to use because they're much smaller," Kennedy notes. They're also dramatically more sustainable, addressing growing concerns about AI's environmental footprint.
This resonates deeply with what I've observed working with organizations across industries. Too many companies are hoarding data like digital packrats, convinced that volume alone will unlock competitive advantage. What I've found instead is that the real differentiator lies in proprietary, domain-specific data that's been carefully curated and maintained. A smaller, cleaner dataset that reflects your unique operational reality will outperform a massive generic one almost every time. The companies getting this right are the ones investing in data quality from the start, treating their datasets as strategic assets that require active management, not passive accumulation.
Where This Takes Us
Looking ahead, Kennedy sees physical world models as essential for the next leap in AI capability. The work happening at Nokia Bell Labs, where they're merging large language models with digital twins of physical spaces, offers a glimpse of this future.
I find this direction compelling because it addresses AI's curious blindness to physical reality. We've built systems that can write poetry and pass bar exams, yet struggle with tasks a toddler handles intuitively, like understanding that a cup will fall if you let go of it. The integration of physical world models will be foundational to creating AI that can truly operate in our world rather than just theorize about it.
What strikes me most about these four myths is how they all point to the same underlying insight: the AI revolution is being slowed by our collective misunderstanding of what actually matters. The companies that build hybrid systems rather than just bigger models, that create certainty through principle rather than waiting for permission, and that curate data rather than merely collecting it, will be the ones who determine what the AI future actually looks like.
Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity.
He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations.
He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.
Bernard’s latest book is ‘Generative AI in Practice’.