Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling and award-winning author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has a combined following of 5 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.

Bernard’s latest books are ‘Future Skills’, ‘Generative AI in Practice’, ‘Data Strategy 3rd Ed’ and ‘AI Strategy’.


Why Picking The ‘Best’ AI Model Is The Wrong Question In 2026

17 March 2026

For years, the AI conversation has been dominated by a single question: Which model is the best? Every major release is accompanied by charts, benchmarks and bold claims, often suggesting that bigger models automatically mean better outcomes.

That way of thinking is now starting to break down.

While general-purpose large language models have reached broadly comparable performance for everyday tasks such as writing, summarization and research, real differences emerge once AI is deployed inside complex organizations. In large-scale coding projects, agentic workflows and highly specialized enterprise use cases, performance varies dramatically.

The most important question for leaders is no longer which model is best, but which combination of models best fits their business, their risks, and their goals.


Capability Profiles

As AI becomes ever more deeply ingrained in organizational DNA, the way we assess model capabilities increasingly mirrors the way we assess human talent.

After all, people are evaluated across multiple competencies, including their ability to analyze, think creatively, communicate, and make decisions, rather than individual “headline metrics” like IQ or the total value of sales generated.

Model fit, just as with human fit, will increasingly become an issue of culture, too. Employers look for people who are a good fit for their company’s risk tolerance, communication style and expectations around autonomy, and these criteria are just as relevant when choosing AI models.

Some models are better at structured reasoning, some at autonomously creating and executing action plans, while others lead the way when it comes to creativity and rapid iteration of ideas. Models that excel at structured reasoning may be suited to financial operations and analytical tasks, while the more creative ones are likely to be a natural fit for marketing, design or communication workflows.

Another factor to consider is that tools tuned for industry-specific use cases are increasingly outperforming generic, multi-purpose platforms. Legal professionals may be more inclined to trust specialist tools like Harvey, CoCounsel and Spellbook, while those working in medicine might feel they need the specialization provided by Abridge or AWS HealthScribe.

This means that the ability to profile AI models, tools and platforms for capability and suitability for specific tasks is quickly becoming an essential skill for leaders in the AI age.

Tasks, Risks And Outcomes

Exercising this judgment at scale involves understanding how to match capability to tasks, risks and outcomes.

Start by defining the task and how it supports critical business operations. A model used to triage thousands of customer support enquiries every day will have a very different capability profile from one designed to assign risk scores to financial transactions or generate boardroom-ready reports from KPIs.

There’s no “best” AI for all these tasks, and selecting the right one means assessing them against the demands of the specific workflow. Should it be optimized for speed and pattern recognition? Or deep reasoning capabilities and the ability to justify its decisions?

Risk analysis also plays an important role. For low-stakes tasks, such as creative ideation in marketing or prototyping design concepts, highly creative systems can provide richer opportunities for exploration. But it could be dangerous to use the models that excel here in higher-stakes healthcare or legal workflows.

Finally, expected outcomes are also a critical factor. Where driving operational efficiency is the goal (for example, reducing resources spent closing support tickets, or accelerating employee onboarding), then autonomous, agentic capabilities might take precedence.

Improving the accuracy of a process, such as reporting, requires models that exhibit strong reasoning and adhere to strict guardrails.

And if the goal is innovation, generating new product concepts or brainstorming new business opportunities, we should look to highly creative models capable of generating diverse ideas, exploring unconventional approaches, and rapidly iterating new concepts.

From Operator To Conductor

Considering task requirements, risk tolerance, and desired outcomes together creates a repeatable framework for selecting the right tool or model for the job. Rather than taking the latest cutting-edge models and finding things to do with them, we look at what we need to do, the acceptable margin of error, and what success looks like. Then we find models that fit the profile.
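As a rough sketch of how such a framework might look in practice, the matching step can be expressed as a weighted fit score between a task's requirements and each candidate's capability profile. All model names, capability scores and weights below are invented for illustration; they do not come from any benchmark or from the article.

```python
# Sketch of a portfolio-style model-selection step.
# All model names, capability scores and weights are illustrative assumptions.

def score_model(profile: dict, requirements: dict) -> float:
    """Weighted fit of a model's capability profile against task requirements.

    `requirements` maps a capability (e.g. "reasoning") to a weight;
    `profile` maps the same capabilities to a strength between 0 and 1.
    """
    return sum(weight * profile.get(capability, 0.0)
               for capability, weight in requirements.items())

# Hypothetical capability profiles (0 = weak, 1 = strong).
models = {
    "model_a": {"reasoning": 0.9, "creativity": 0.4, "autonomy": 0.6, "compliance": 0.9},
    "model_b": {"reasoning": 0.5, "creativity": 0.9, "autonomy": 0.7, "compliance": 0.4},
}

# A reporting task where reasoning and guardrails dominate the requirements.
finance_task = {"reasoning": 0.5, "compliance": 0.4, "creativity": 0.1}

best = max(models, key=lambda name: score_model(models[name], finance_task))
print(best)  # the reasoning- and compliance-heavy profile wins this task
```

For a creative ideation task, the weights would shift toward creativity, and the same scoring step would surface the more creative profile instead.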

The ability to do this at scale becomes essential as our organization’s level of AI maturity increases, and we evolve from operating single, all-purpose instruments to conducting an orchestra of specialized models and agentic systems.

Business functions will gravitate toward capabilities that best suit their workflows: marketing teams adopting highly flexible, creative multimodal systems, and finance or legal teams turning to models built for explainability and compliance.

Taking this portfolio-based approach has secondary benefits, too. It reduces the risk of vendor lock-in and improves resilience against the dangers of single-model failure or degradation.

Most importantly, though, it lets us think of ourselves as conductors of an agentic orchestra, where each instrument plays its own role and contributes to the success of the whole. From there, we can build AI ecosystems that are capable, responsibly governed, and optimized to hit business goals.

As AI becomes embedded across every function of the enterprise, success will depend less on choosing a single standout model and more on orchestrating the right mix of capabilities.

Leaders who treat AI selection as a strategic discipline, balancing fit, risk and outcomes, will build systems that are more resilient, more responsible and ultimately more effective.

The Internet Is Entering The AI Supercycle

In his keynote at MWC, Nokia CEO Justin Hotard framed this shift in a way that I found especially useful. He said, “Every network that has been built has been optimized around its primary workflow, the dominant workload, starting with voice, then we had data, then video and rich media, and now we’re in the AI super cycle, and the dominant workload is AI, and the network needs to evolve ultimately.”

Each phase of the internet has had its defining traffic pattern. Voice networks were built for calls. Broadband and mobile data networks were built for websites, apps and then streaming. AI brings a new dominant workload, and it places very different demands on connectivity.

According to Nokia’s report, total global Wide Area Network, or WAN, traffic, meaning the data moving across long-distance networks that connect cities, countries, data centers, businesses and cloud systems, is projected to grow roughly three to seven times by 2034, reaching between 2,277 and 4,878 exabytes per month depending on the scenario. In the moderate scenario, AI traffic reaches 921 exabytes per month by 2034 and accounts for about 30% of total global WAN traffic. AI-related traffic is also forecast to grow faster than traditional traffic, with a 23% compound annual growth rate in the moderate scenario.
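As a back-of-envelope check on those figures, compounding 23% annual growth over an assumed ten-year horizon ending in 2034 (the horizon is my assumption for illustration; the report's exact base year may differ) gives roughly an eightfold increase, implying a starting point of very roughly 116 exabytes per month behind the 921-exabyte figure:

```python
# Back-of-envelope check on the report's moderate-scenario AI traffic figures.
# The ten-year horizon ending in 2034 is my assumption for illustration.

cagr = 0.23        # 23% compound annual growth rate (moderate scenario)
years = 10         # assumed horizon
end_eb = 921       # AI traffic in exabytes per month by 2034

multiple = (1 + cagr) ** years    # total growth factor over the horizon
implied_base = end_eb / multiple  # implied starting traffic, EB/month

print(f"growth multiple: {multiple:.1f}x")                # roughly 7.9x
print(f"implied baseline: {implied_base:.0f} EB/month")   # roughly 116
```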

Those numbers are huge. The more important point, however, is what is inside them.

AI Changes The Direction Of Traffic

One of the core insights from Nokia’s research is that the old download-heavy internet model is starting to give way to a much more balanced and dynamic system. Historically, downlink traffic dominated because people were mainly consuming content. In the AI era, users are also sending far more information back into the network.

That includes prompts, images, video streams, sensor data, context from smart glasses, voice interactions and real-time signals from industrial systems. In some cases, people are engaging directly with AI assistants. In others, AI systems are working behind the scenes, calling other models, retrieving context, checking safety constraints and coordinating actions.

This is already starting to show up in everyday consumer use. Nokia’s report says that as people use AI tools to create content and rely more on AI assistants, they are sending more data back into the network, not just receiving it. At the same time, interactive AI services depend more on fast and stable connections, because even small delays can make them feel slow or unreliable. It is also becoming visible in enterprise environments, where machine vision, robotics, industrial telemetry and AI copilots are pushing more operational traffic over wide-area networks.

That means the internet of the near future will be less about passive consumption and more about continuous exchange.

Three Forces Are Reshaping Network Traffic

What I found especially helpful in Nokia’s analysis is that it breaks the traffic shift into three broad forces.

The first is the rise of more immersive and interactive digital experiences. As AI makes online services more responsive, network quality becomes more important. It is not only about speed, it is also about consistency, because these experiences depend on the network reacting quickly and smoothly.

The second is the movement of enterprise and industrial operations toward the edge. Digital twins, robotics, AI copilots, remote support and industrial automation all rely on flows of data between on-site infrastructure and distant compute resources. That makes traffic more distributed and more variable, especially when workloads move between edge and cloud.

The third is that AI is beginning to generate traffic autonomously. This may prove to be the most disruptive change of all. Nokia estimates that by 2034, 37% of total network AI traffic will be machine-generated. In other words, a growing share of internet traffic will come from systems talking to systems.

That is a major departure from the human-centered internet we are used to thinking about.

Generative, Agentic And Physical AI Will Stress Networks In Different Ways

One of the strongest themes at Nokia’s booth was that different categories of AI create different networking demands.

Generative AI tends to be uplink-heavy because users increasingly submit rich, multimodal inputs for inference. That means text, images, audio and video are moving into the network, not just results moving back out. Agentic AI introduces bursty patterns because autonomous systems can trigger waves of searches, model calls, database queries and action loops in short periods of time. Physical AI, which powers machines operating in the real world, makes low latency far more important because delays can affect how a robot, device or system senses and responds. Coordinating and controlling autonomous vehicles and drones requires real-time updates across a wide area on the ground and in the air.

This is where Hotard’s keynote added an important layer. He explained that AI traffic is “bursty, it’s dynamic,” and increasingly “token driven and not stream driven.” Traditional networks have been built around flows that are comparatively linear and predictable. AI workloads are less tidy. They are spiky and far more dependent on timing.

Hotard also argued that future networks will need to deliver “token certainty,” meaning the right token arrives on time, with the required latency, quality, security and trust.

The Inter-Data-Center Internet Becomes Far More Important

Another big lesson from Nokia’s report is that AI traffic does not simply travel from one place to another and stop there. A single AI request can move between local networks, central networks, edge systems, cloud platforms and several data centers before the final answer comes back. So one AI interaction can trigger several separate movements of data across the internet. Nokia estimates that by 2034, user-generated AI traffic could total 921 exabytes per month, while the related traffic moving between data centers could reach 3,260 exabytes per month.
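Taking those two 2034 estimates at face value, dividing one by the other gives a rough amplification factor for how much inter-data-center traffic accompanies each unit of user-generated AI traffic. Reading the ratio as a per-interaction multiplier is a simplification of my own, not a figure from the report:

```python
# Rough amplification factor between user-facing AI traffic and the
# inter-data-center traffic it triggers, using the report's 2034 estimates.
# Reading the ratio as a per-interaction multiplier is a simplification.

user_traffic_eb = 921   # user-generated AI traffic, EB/month (2034)
inter_dc_eb = 3260      # related inter-data-center traffic, EB/month (2034)

amplification = inter_dc_eb / user_traffic_eb
print(f"~{amplification:.1f} EB between data centers per EB of user AI traffic")
```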

That means the AI internet will depend heavily on data center interconnects, optical infrastructure, routing and the ability to move workloads efficiently between hyperscalers, telecom providers, enterprise environments and the edge. That is why Nokia is placing such emphasis on the shift toward what Hotard described as AI factories and new network topologies that connect them.

Why AI-Native Networks Will Matter

The big strategic implication is that adding AI tools on top of legacy networks will not be enough. Hotard made that case when he said that layering intelligence onto existing networks is “the start of the journey,” while the destination is “an architecture that’s fully AI native.” He also argued that the old siloed model of network domains has to give way to a more unified, software-defined, dynamic cross-domain approach.

That fits with Nokia’s report, which shows that future networks will need to handle more symmetric traffic, tighter latency requirements and massive growth in interconnect demand.

For telecom operators, cloud providers, enterprises and policymakers, this has immediate implications. Capacity planning has to change. Edge strategy becomes more important. Deterministic performance matters more. Security and trust need to be built into the flow of AI traffic itself. The network becomes a strategic enabler of AI adoption, not a passive utility sitting in the background.

The Next Internet Will Be Built For Intelligence

The most important lesson I took away from MWC is that AI will change the internet from the inside out.

Yes, traffic volumes will rise. Yes, AI apps will become mainstream. Yet the real shift is deeper. The internet is moving from a network built mainly to distribute information to one increasingly designed to support connected intelligence, where humans, machines, models and systems are continuously exchanging context and coordinating action.

We are still early enough to shape the architecture of the AI era. The choices being made now about connectivity, interconnect, edge computing, software-defined networking and AI-native infrastructure will determine which organizations are ready for what comes next.

The AI era will not run on yesterday’s internet. It will require networks designed for a world where intelligence is distributed, traffic is dynamic and performance has to be far more predictable than before. After what I saw and heard at MWC, I am convinced that this shift is already underway. The question is whether our networks will evolve fast enough to support the AI Supercycle.

You can read the full Nokia report here: https://www.nokia.com/asset/213660/

#Sponsored #NokiaPartnership

