James Quincey ran Coca-Cola (NYSE: KO) for nine years. He guided the company through a pandemic, a sugar reckoning, and the complete restructuring of how the world's most recognized brand reaches consumers. Last week, he handed the role to his COO, Henrique Braun.
His reason was not a board dispute. Not performance. Not age.
"In a pre-AI, a pre-gen-AI mode, we made a lot of progress," James told CNBC. "But now there's a huge new shift coming along."
Doug McMillon said nearly the same thing when he stepped down from Walmart (NYSE: WMT) three months ago. He had held the CEO role since 2014. "With what's happening with AI, I could start this next big set of transformations," McMillon told CNBC. "But I couldn't finish."
Two of the most accomplished CEOs of their generation looked at what's coming and decided someone else should drive.
The Chatbot Phase Is Ending. Get Ready for Agentic AI.
The word "agentic" is everywhere. So what exactly does it mean?
For three years, AI has been a chatty chatbot, a handy tool. You ask it a question. It gives you an answer. You decide what to do. By now, most folks know that what is powering the chatbot is an LLM, a Large Language Model. Tools like ChatGPT, Claude, and Gemini are LLMs. Think of an LLM as a brilliant analyst who hands you a report and waits for your next question. Extraordinarily capable inside domains where it has been trained. Still unreliable when asked to navigate genuinely novel situations requiring human-like judgment. The gap between those two things is where governance failures happen.
An AI agent is different. You give it a goal. It figures out the steps on its own. And if you give it access, it searches the web, opens applications, reads documents, fills out forms, places orders, sends messages, and checks its own work, without a human approving each step. An LLM is the engine. An agent is the vehicle built on top of it. The LLM advises. The agent acts.
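The distinction can be made concrete in a few lines of code. Below is a deliberately toy sketch of the agent loop described above: the `plan()` stub and the tool names are hypothetical stand-ins for a real LLM and real integrations, but the shape is the point: the agent decomposes a goal, acts without per-step approval, and checks its own work.

```python
# Toy sketch of the LLM-vs-agent distinction. plan() and the tools are
# hypothetical stand-ins; a real agent would use an LLM to plan and real
# APIs to act.

def plan(goal):
    # An LLM would generate these steps; hard-coded here for illustration.
    return ["search_web", "fill_form", "verify"]

TOOLS = {
    "search_web": lambda state: {**state, "results": ["item-a", "item-b"]},
    "fill_form":  lambda state: {**state, "submitted": True},
    "verify":     lambda state: {**state, "verified": state.get("submitted", False)},
}

def run_agent(goal):
    """Agent loop: decompose the goal, execute each step, check the result."""
    state = {"goal": goal}
    for step in plan(goal):
        state = TOOLS[step](state)  # acts without a human approving each step
    return state

result = run_agent("order a replacement part")
print(result["verified"])  # True: the agent verified its own work
```

An LLM alone stops after the first step and hands you text. The loop is what turns the engine into a vehicle, and every iteration of that loop is an action your company is responsible for.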
That is not a subtle distinction. It is the difference between buying a faster calculator and hiring an employee who never sleeps. And when that employee acts, your company is responsible for that action. Who authorized it? What were its limits? Where is the audit trail? These are not technology questions. They are fiduciary ones.
This is why I can appreciate the decisions McMillon and Quincey made. They could see the agent economy coming. They understood what it demands from a company. And they were honest enough to say the job required someone who could finish it. A massive amount of disruption is about to happen. We are at the very beginning of the agentic AI phase, and every enterprise must become an agentic enterprise.
The Numbers Behind the Shift
Matthew Prince is the CEO of Cloudflare (NASDAQ: NET), the company that manages roughly 20% of all web traffic on earth. He has data, not a forecast.
At SXSW last week, Prince told the audience that AI bot traffic will exceed human traffic online by 2027. The reason is structural: a human shopping for a digital camera might visit five websites. An AI agent completing the same task will often query a thousand times as many, hitting 5,000 pages in seconds. That is real traffic, and real server load, for every site on the receiving end.
This is what Prince calls agent traffic: the volume of web interactions generated by AI agents operating on behalf of humans, rather than humans visiting sites directly. Before the rise of generative AI, bots were responsible for only 20% of internet traffic, mostly from search engine crawlers. That number is now climbing much faster, with no sign of stopping.
Your company's digital presence, marketing spend, and customer acquisition strategy were all designed for human traffic. They are not designed for agent traffic. That is a competitive risk with no committee owner at most companies.
Then there is what Jensen Huang announced at GTC 2026. NVIDIA (NASDAQ: NVDA) launched NemoClaw, an enterprise-grade platform built on top of OpenClaw, the open-source agent framework that has become the fastest-growing open-source project in history. Huang said from the GTC stage: "For the CEOs, the question is, what’s your OpenClaw strategy? We need it. We all have a Linux strategy. We all needed to have an HTTP HTML strategy, which started the internet."
OpenClaw, created by developer Peter Steinberger, allows anyone to build and run AI agents locally, connecting them to tools, apps, and services. It went from a weekend project to 250,000 GitHub stars in weeks. Your employees are using it right now, or something like it, whether your IT team knows or not. NVIDIA’s NemoClaw adds enterprise-grade security and privacy controls, installs open models locally on company hardware, and routes cloud model calls through a privacy layer, all deployed in a single command, with Cisco, CrowdStrike, Google, and Microsoft Security working to embed guardrails into the broader enterprise security stack.
When the CEO of NVIDIA is telling every company they need an agent strategy the way they once needed an internet strategy, the agentic enterprise is not a future state. It is the current one.
What the Data Actually Says
The right response to all of this is not panic. It is precision.
Anthropic's 2026 Agentic Coding Trends Report puts the most useful number on the table. While developers now use AI in roughly 60% of their work, they report being able to fully delegate only 0 to 20% of tasks. The remaining 80% or more requires active human judgment: setup, supervision, validation, and decision-making at the moments that matter. AI is a constant collaborator. It is not an autonomous operator, at least not yet, and not without governance.
The report surfaces something else leaders should internalize. Engineers are not being replaced. They are being transformed from implementers into orchestrators, directing agents rather than writing every line themselves. That shift has a name: the role moves from doing the work to governing the work. Sound familiar? It is exactly what board directors and C-suites do. And it is exactly what is now required at every level of the enterprise, not just in engineering.
That transformation is coming for finance, legal, marketing, HR, and operations. Every function. And most boards have not yet had that conversation.
Every Enterprise Will Become an Agentic Enterprise
Here is the statement most AI coverage dances around but never says directly: this is not about one department experimenting with AI. Every function in your enterprise will become agentic. Not some of them. All of them.
Marketing. Sales. Legal. Finance. HR. Product. Operations. Every workflow, in every function, will eventually be powered by agents rather than humans executing each step manually. This is not a prediction for 2030. It is the trajectory visible in every major enterprise right now, accelerating faster than most boards have registered.
But here is what most companies are getting wrong as they charge toward this future: they are thinking about AI by department, and agents do not care about your departments.
An agent does not know or care that a purchase order touches procurement, then finance, then legal for contract review, then operations for fulfillment. It does not respect the boundary between marketing's campaign tools and product's customer data. It does not pause at the handoff between your sales CRM and your finance ERP. A human workflow has always been shaped by your org chart. An agentic workflow is shaped by the task itself, end to end, across every system it needs to touch to get the job done.
Microsoft Research published a paper on the emerging agentic economy that captures what is actually at stake here. The researchers frame what is coming as a fundamental shift in how consumers and businesses communicate, with agents on both sides interacting directly, collapsing the communication friction that today's platforms exist to manage. They describe two possible futures: an "agentic walled garden" where a few dominant platforms control how agents interact, or an open "web of agents" where any agent can connect and transact with any other, much like the World Wide Web itself. The researchers argue that which path prevails will determine whether the agentic economy democratizes economic opportunity or concentrates it further. That is not a technology question. It is a governance question, and it will be decided in boardrooms, not by engineers.
This is the internal transformation directors and executives are almost entirely missing. The conversation about agents has been happening externally: how do agents interact with your customers? But the more immediate, more governable, and more urgent question is internal: how are agents being used inside your enterprise right now, crossing system boundaries your IT team never anticipated, accessing data your legal team never reviewed, making decisions your compliance function never signed off on?
For most companies, the honest answer is: nobody knows.
The fuel powering every agentic workflow is data. Not AI models. Not agent frameworks. Data. An agent can only act as intelligently as the data it can access. And most enterprises do not have their data house in order. Their data lives in silos that mirror their org charts. Their permissions were designed for human access patterns, not agent access patterns. Their data quality was sufficient when humans were filling in the gaps with judgment and context. It is not sufficient when an agent is making decisions at machine speed with no one looking over its shoulder.
Think about what a single agentic sales workflow actually requires. The agent needs to know who the customer is, from your CRM. What they have bought before, from your transaction history. What inventory is available, from your ERP. What the current contract terms are, from your legal repository. What the pricing rules and discount authority levels are, from your finance systems. What the account owner's recent activity looks like, from your sales tools. All of that data, across all of those systems, needs to be accessible, structured, accurate, permissioned correctly, and queryable by an agent that does not pause to ask which department owns which piece.
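The paragraph above can be sketched as a data-access problem. The sketch below is hypothetical throughout: the system names, records, and grant table are illustrative. The point it demonstrates is that agent permissions must be designed explicitly, and that a single missing grant, or a single silo the agent cannot query, stalls the whole end-to-end workflow.

```python
# Hypothetical sketch of what one agentic sales workflow demands from the
# data layer: every system the agent touches must be queryable and
# correctly permissioned. All names and values here are illustrative.

SYSTEMS = {
    "crm":     {"customer": "Acme Corp"},
    "erp":     {"inventory": 42},
    "legal":   {"contract_terms": "net-30"},
    "finance": {"discount_limit": 0.10},
}

# Grants designed for *agent* access patterns, not human logins.
AGENT_GRANTS = {"sales-agent-01": {"crm", "erp", "finance"}}  # note: no "legal"

def agent_query(agent_id, system, field):
    """Every cross-system read passes through an explicit permission check."""
    if system not in AGENT_GRANTS.get(agent_id, set()):
        raise PermissionError(f"{agent_id} is not permissioned for {system}")
    return SYSTEMS[system][field]

print(agent_query("sales-agent-01", "erp", "inventory"))  # 42
# agent_query("sales-agent-01", "legal", "contract_terms") raises
# PermissionError: the workflow stalls at the silo boundary, which is
# exactly the gap most enterprises have today.
```

Whether a real deployment enforces this through IAM roles, row-level security, or an agent gateway, the governance requirement is the same: someone must have decided, on the record, which systems each agent may touch.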
Most companies cannot do this today. Which means every AI investment you are making is building on a foundation that cannot yet hold the weight.
The companies that win the agentic era will not necessarily have the best AI models. They will have the cleanest data, the most clearly permissioned systems, and the governance frameworks that ensure agents are acting within defined boundaries. Agentic marketing, agentic finance, agentic legal, agentic HR: the prerequisite for all of them is the same. Get the data right. Get the permissions right. Get the governance right. Everything else is downstream.
The Structural Precondition Nobody Is Talking About
While there is genuine excitement about the potential of OpenClaw, NemoClaw, and agentic commerce, we are at the very beginning of a massive enterprise and workforce transformation.
Every agent framework only works as a paradigm if your company's systems are agent-readable and agent-writable. Agent-readable means an AI agent can query your systems and get accurate, structured information back. Agent-writable means an agent can act within your systems: placing orders, updating records, triggering workflows. Most legacy enterprise systems are neither. That gap is not a software update. It is a multi-quarter capital commitment with governance implications at every layer.
Think about what this means. A customer asks their AI assistant: find me the best flat-screen TV under $1,000, shipping before Friday, from a brand with flexible returns. The agent evaluates structured data against explicit constraints. There is no brand story. There is no above-the-fold. There is no ad creative. Just the data. If your delivery windows are unclear, if your return schema is missing, if your inventory is not in machine-readable format, the agent skips your offer without a human ever seeing it. You lose that sale. You never know why.
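The flat-screen TV scenario above reduces to a filter over structured data. The sketch below uses hypothetical field names, but it shows the mechanism exactly: an offer missing a machine-readable field is not penalized, it is silently excluded.

```python
# Sketch of the TV example: an agent filters structured offers against
# explicit constraints. Field names are illustrative. If your feed is
# missing a field (brand B's return policy below), the agent skips your
# offer without a human ever seeing it.

offers = [
    {"brand": "A", "price": 899,  "ships_by": "Thu", "returns_days": 90},
    {"brand": "B", "price": 949,  "ships_by": "Fri"},  # no return schema
    {"brand": "C", "price": 1099, "ships_by": "Wed", "returns_days": 60},
]

def agent_pick(offers, max_price, min_returns_days):
    eligible = [
        o for o in offers
        if o["price"] <= max_price
        and o.get("returns_days", 0) >= min_returns_days  # missing data = skipped
    ]
    return min(eligible, key=lambda o: o["price"]) if eligible else None

best = agent_pick(offers, max_price=1000, min_returns_days=30)
print(best["brand"])  # "A": brand B lost the sale and will never know why
```

There is no ranking signal, no brand equity, no creative to save the incomplete listing. The return-policy field was either in the schema or it was not.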
The agents reaching your customers are connecting through a technical standard called MCP, the Model Context Protocol. Think of it as the USB port for AI agents: a standardized interface that lets an agent plug into a company's systems, databases, or software. Stripe, Salesforce, and SAP have all announced MCP servers. But an MCP server is also a doorway into your company's data. Who authorized that doorway? What data can the agent access through it? What are the security and privacy controls? Your Audit and Risk Committees should own these questions before your engineering team ships the integration.
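To make the "doorway" concrete: an MCP server advertises a list of tools an agent may call. The payload below is a simplified, abridged illustration, not a complete MCP message, and the tool names are invented, but it shows what a Risk or Audit Committee can actually interrogate: the listing itself is an inventory of what agents can read and, far more consequentially, what they can write.

```python
# Simplified illustration of an MCP-style tool listing. The tool names and
# the exact payload shape are illustrative, not a verbatim MCP message.
# Each entry is a doorway into company systems; each deserves an explicit
# authorization on record.

tool_listing = {
    "tools": [
        {
            "name": "lookup_order",  # read-only access
            "description": "Read order status from the fulfillment system",
            "inputSchema": {
                "type": "object",
                "properties": {"order_id": {"type": "string"}},
                "required": ["order_id"],
            },
        },
        {
            "name": "issue_refund",  # write access: a far higher governance bar
            "description": "Trigger a refund in the payments system",
            "inputSchema": {
                "type": "object",
                "properties": {"order_id": {"type": "string"},
                               "amount": {"type": "number"}},
                "required": ["order_id", "amount"],
            },
        },
    ]
}

# A question a committee can ask of this inventory: which tools mutate state?
write_tools = [t["name"] for t in tool_listing["tools"]
               if t["name"].startswith(("issue_", "update_", "delete_"))]
print(write_tools)  # ['issue_refund']
```

The naming-convention check at the end is a crude heuristic for illustration; in practice, write capability should be an explicit attribute reviewed before the server ships, not inferred afterward.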
The walls we spent 20 years building to keep bots out are now the things keeping our best customers out. Every product and IT team learned the same lesson over the past two decades: bots were bad. So we built CAPTCHAs. We gated APIs. We built JavaScript-heavy front ends designed to be bot-hostile. That was correct for the pre-AI era before November 2022. It is a structural liability for 2026 and beyond.
This shift is already demolishing the classic web funnel where users search, click through to a site, and convert. AI agents deliver answers directly, potentially bypassing carefully optimized landing pages entirely.
You might have the best product in the world. If it is not agent-readable, it does not exist in the agent economy.
What Happened When the Biggest Companies Tried to Live In It
Walmart did not get its agentic commerce strategy right on the first attempt. Neither did OpenAI.
When OpenAI launched Instant Checkout last fall, promising that shoppers could buy products directly inside ChatGPT, retailers lined up. Walmart made roughly 200,000 products available. Several months later, Instant Checkout turned out to be prone to errors. Onboarding merchants was arduous. OpenAI underestimated how difficult enabling real transactions was going to be.
Walmart’s response was instructive. Rather than letting an AI platform control the checkout experience, Walmart built its own commerce agent, called Sparky, and embedded it into both ChatGPT and Google Gemini. Walmart now owns the customer data, the transaction, and the post-purchase relationship. OpenAI gets distribution. Users who access Sparky through ChatGPT complete purchases at roughly 70% of the rate of those using Walmart.com directly. The difference appears to be trust: customers recognize they are interacting with Walmart’s agent, even inside another app.
Walmart's revised strategy illustrates exactly the dynamic Microsoft Research described: a company choosing not to live inside someone else's agentic walled garden, but instead owning its own agent and renting the distribution. That is the distinction that will separate winners from losers in the agentic economy. Own your agent. Rent the platform.
Then there is Disney (NYSE: DIS), which shows how quickly things can change in the AI era. In December 2025, Disney signed a three-year licensing agreement with OpenAI, agreeing to allow Sora to generate fan-inspired videos using more than 200 Disney, Marvel, Pixar, and Star Wars characters. Disney also planned to take a $1 billion stake in OpenAI.
This Tuesday, OpenAI shut down Sora entirely. Disney is now exiting the deal. Reports indicate no money actually changed hands. But the strategic distraction was real. And the lesson is real.
Sora reportedly earned around $1.4 million in revenue compared to $1.9 billion for ChatGPT over the same period. OpenAI is now prioritizing robotics and autonomous AI systems instead.
OpenAI made a rational strategic pivot. Companies do that. The board’s job is to ensure the company is not catastrophically exposed when they do. A strategic commitment tied to a single vendor’s product, with no fallback position, no vendor diversification, and no governance structure for what happens when the vendor pivots: that is not just a management failure. It is a governance failure, and it belongs to the Risk Committee as much as to the management team.
Who Audits the Agents?
Researchers at Google published a paper this week that deserves to be read in every boardroom. Written by James Evans, Benjamin Bratton, and Blaise Agüera y Arcas from Google's Paradigms of Intelligence team, it argues that the coming intelligence explosion will not be a single monolithic AI mind but something far more familiar: a society. Hundreds of billions of AI agents, interacting with eight billion humans, producing collective intelligence through coordination, disagreement, and distributed decision-making.
Their governance insight cuts directly to the question: "When AI systems are deployed in high-stakes decisions, the question of who audits the auditors becomes unavoidable." They invoke the founders of the US as the governing logic: no single concentration of intelligence, human or artificial, should regulate itself. Power must check power. In a world of AI agents, this means building oversight into the institutional architecture, not bolting it on afterward.
That is not a philosophical argument. It is a governance design specification. The US founders solved the same problem by distributing authority across branches with different mandates and checking powers. Boards have the same architecture already: Audit, Risk, Compensation, Nom/Gov, each with distinct accountability. The question is whether each committee has been assigned its specific AI oversight mandate before agents are making decisions in their domain, or after.
Anthropic's research on agentic coding found something that reinforces this point from the ground up. Even in the domain where agents are most capable, software development, developers report being able to fully delegate only 0 to 20% of tasks. The remaining 80% or more requires active human judgment, validation, and oversight. If that is true in coding, the most verifiable domain that exists, consider what the ratio looks like in legal, finance, HR, and strategy, where verifiability is far lower and the consequences of error are far harder to reverse.
The governance conclusion is the same one the Google researchers reached, and the same one the US founders reached in 1787: distributed authority, clear mandates, and structured oversight are not constraints on intelligence. They are what make intelligence trustworthy at scale.
What Your Board Committees Need to Own
The agentic enterprise does not require a new committee. It requires existing committees asking sharper questions.
Your Audit Committee needs to know: When an AI agent executes a transaction, modifies a customer record, or makes an operational decision on behalf of this company, where is the immutable audit trail? Who reviews it? What data does it access through MCP connections, and who authorized that access? Is our data architecture ready for agents to act on, or are agents roaming across systems with permissions our legal and compliance teams never reviewed?
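"Immutable audit trail" has a concrete technical meaning, and it is worth knowing what a minimal one looks like. The sketch below, with illustrative field names, hash-chains each agent action to the previous record, so altering any entry after the fact breaks every hash that follows it. This is a toy; production systems would add signatures, timestamps, and write-once storage, but the chaining idea is the core.

```python
# Minimal sketch of a hash-chained (tamper-evident) audit log for agent
# actions. Field names are illustrative; real systems add signing,
# timestamps, and append-only storage.
import hashlib
import json

def append_record(log, agent_id, action, detail):
    """Append an action record chained to the hash of the previous record."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    record = {"agent": agent_id, "action": action,
              "detail": detail, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append(record)
    return log

def verify_chain(log):
    """Recompute every hash; any edited record breaks the chain."""
    prev = "genesis"
    for r in log:
        body = {k: v for k, v in r.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev"] != prev or expected != r["hash"]:
            return False
        prev = r["hash"]
    return True

log = []
append_record(log, "agent-7", "modify_record", {"customer": "C-1042"})
append_record(log, "agent-7", "execute_txn", {"amount": 125.0})
print(verify_chain(log))       # True
log[0]["detail"]["amount"] = 0  # tampering...
print(verify_chain(log))       # ...is detectable: False
```

The Audit Committee question is then crisp: does such a chain exist for every agent with write access, and who reviews it?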
Your Risk Committee needs to know: What AI agents is this company currently operating, and what are they authorized to do without human approval at each step? What is our exposure if our primary AI vendor pivots or shuts down the way Sora just did? Are we living inside someone else's agentic walled garden, or do we own our agent and rent the distribution? Are our agents operating inside verifiable domains where their outputs can be checked, or outside them where errors compound silently?
Your Compensation Committee needs to know: Which workflows in this company are being rebuilt as agentic workflows right now? How many roles are changing in the process? Anthropic's research shows engineers are becoming orchestrators rather than implementers. That same transformation is coming for finance analysts, legal reviewers, HR coordinators, and sales operations. What is the reskilling plan?
Your Nom/Gov Committee needs to know: Does anyone on this board have direct experience governing an AI system? Not using one. Governing one. Shadow AI governance starts with OpenClaw: your employees are deploying personal agents against company systems right now. Does your board know the policy? And critically: has each committee been assigned its specific AI oversight mandate, or has governance been left as an orphaned responsibility with no clear owner?
EU AI Act enforcement begins in August 2026. Five months from now. Penalties run up to 7% of global revenue. That is not a compliance deadline for your legal team. It is a fiduciary deadline for your board.
The Time to Prepare is Now
The gap between AI’s real capability profile and the way corporate leaders currently understand it is enormous. That gap is where governance failures are born.
McMillon said he could start the AI transformation but could not finish it. Quincey said his company needed someone with the energy to pursue a completely new transformation of the enterprise. The Google researchers put it most precisely: the intelligence explosion is already here, in the centaur workflows reshaping every knowledge profession, in the recursive agent ecologies beginning to fork and collaborate at scale, and in the constitutional questions we must now begin to ask.
Every enterprise will become an agentic enterprise. Every function will become agentic. The only variable is whether your board governs that transformation or discovers, too late, that nobody did.
The question is not whether intelligence will become radically more powerful. It is whether you will build the governance infrastructure worthy of what it is becoming.