TL;DR: Bond Capital's Trends in AI Report is a Must-Read
AI is advancing faster than any prior technology wave—outpacing even the internet and mobile—with massive implications for corporate strategy, competition, and governance. The BOND report on Trends in AI underscores that AI is now a geopolitical race, an infrastructure battle, and a platform shift all at once. For board directors, the key imperatives are:
- Treat AI as a board-level priority tied to strategy, risk, and innovation—not just IT.
- Push for “AI-first” thinking across products and business models to avoid disruption.
- Scrutinize the economics of AI investments, including compute costs and ROI.
- Prioritize AI talent, upskilling, and organizational readiness to stay competitive.
- Strengthen oversight of AI risks, ethics, and regulatory exposure through formal governance.
Boards must lead decisively, ensuring their companies act with urgency, accountability, and clarity in an AI-powered world where the rules are being rewritten in real time.
As we've seen, Artificial Intelligence isn’t just another tech trend – it’s a seismic shift rewriting the rules of business at record speed. A new 340-page BOND report on AI Trends by famed former Wall Street tech analyst Mary Meeker and team reveals how the pace and scope of AI’s evolution are “unprecedented” in modern history. Having worked in tech for over 25 years, I can attest: this AI wave dwarfs anything we’ve seen before. For board directors, the implications are profound. To cut through the hype, let’s examine the most important AI trends and technologies in the report and what they mean for corporate strategy, governance, and risk oversight. This will provide you with a pragmatic game plan for your board to ensure effective AI leadership and stewardship.
Top Ten Things AI Will Likely Do in Five Years, per ChatGPT
Top Ten Things AI Will Likely Do in Ten Years, per ChatGPT
Unprecedented Speed and Scale of AI Adoption
In previous cycles – whether PC, internet, or mobile – technology adoption was rapid, but AI’s ascent is on another level. Consider the viral success of ChatGPT: it reached 100 million users in just two months, a rate of adoption no other consumer tech product in history has achieved. As of June 2025, it is estimated that ChatGPT has over 800 million weekly users, now handling 365 billion search queries annually, a milestone Google took over a decade to reach. In the U.S., AI-powered tools hit 50% user adoption in 3 years, far faster than the 6+ years it took mobile internet or the 20 years for PCs.
This breakneck growth is compounded by AI’s self-improving nature: smarter tools are helping build even smarter tools. Developers are shipping AI innovations faster, users are integrating them into daily life at record pace, and AI platforms learn and iterate in real-time. The result: AI is growing faster than the early internet ever did. Such velocity leaves little room for a “wait and see” approach. Boards must ensure their companies act with urgency, treating AI not as a curiosity or side project, but as a core strategic priority. The companies that ride this momentum will leap ahead; those that lag risk irrelevance in a world where technology cycles are measured in months, not years.
A Global Race for AI Leadership
AI’s rapid rise has ignited a fierce global competition – often described as a new “arms race” or “space race” for technological dominance. The usual boundaries between startups and incumbents, East and West, are blurring in this race. Everyone is engaged, from garage startups to tech giants and governments, in a way we’ve never seen before. This creates unprecedented opportunity for wealth creation – and the risk of wealth destruction for those on the wrong side of the disruption.
A striking example is China’s AI momentum. No longer content to play catch-up, Chinese companies are matching or even beating Western AI models in key benchmarks. Models like Alibaba Group’s Qwen and DeepSeek’s R1 are pushing the frontier, backed by a massive domestic user base and strong government support. China is investing heavily to build a sovereign AI ecosystem – from homegrown algorithms to national cloud infrastructure – explicitly to reduce reliance on U.S. technology. In effect, AI has become a cornerstone of geopolitical strategy, with echoes of a tech Cold War.
For corporate boards, this global context means competition can emerge from anywhere. A breakthrough in Beijing or Bangalore can upend a business in Boston. We must monitor not just our traditional industry rivals, but also international players and disruptive startups leveraging AI. Additionally, access to AI infrastructure is now a strategic advantage akin to access to oil in past eras. Cutting-edge AI demands immense computing power, specialized chips (GPUs, TPUs, etc.), and vast data – resources that nations and corporations alike are investing billions to secure. Whoever controls the computing capacity controls the future of AI. For boards, questions of supply chain and tech sovereignty move to the forefront:
- Do we have assured access to the AI compute and data we need?
- Are we allied with the right cloud and semiconductor partners?
- How might geopolitical tensions or export controls affect our AI plans?
These considerations now tie directly into risk oversight and strategic planning at the board level.
From AI-as-a-Feature to AI-First Innovation
Many companies have so far treated AI as an “upgrade” – a feature to bolt onto existing products or processes. But the trend is now shifting toward AI-native products built from the ground up with intelligence at the core. Think of how GitHub Copilot reimagined coding with an AI pair-programmer, or how Notion AI infuses a productivity platform with generative smarts from day one. In nearly every sector, we’re seeing AI-first startups that don’t just add AI to a workflow – they reconstruct the workflow around AI. Meeker’s report calls this the beginning of a major “platform shift” in tech. It’s happening now, not a decade from now.
This shift poses an existential strategic question for incumbents: Will you be the disruptor or the disrupted? Simply layering AI onto legacy products may not be enough when nimble entrants are designing whole new paradigms. Boards should push management to reimagine offerings through the lens of AI. In practice, this may mean incubating new AI-driven product lines, acquiring innovative startups, or partnering with AI technology providers. Notably, the competitive cycle in AI is extraordinarily fast – rivals can replicate new features in weeks or months, often by leveraging open-source models. In the report, for example, Meeker highlights how even cutting-edge AI capabilities become commoditized quickly (including via open-source and Chinese competitors) and at drastically lower cost. The takeaway for boards: innovation velocity is king. We must foster a culture that encourages rapid experimentation and learning, because if we don’t, someone else will. In our conversations with CEOs and boards, we regularly ask: “What would you build if you were starting this company today with AI at its core?” Boards should be asking their CEOs the same question – and expecting ambitious answers.
The New Economics of AI – Promise and Pitfalls
Even as AI’s strategic value is clear, its economic equation requires careful oversight. Many of today’s AI services use a freemium model – free basic usage to gain adoption, with paid tiers for premium features or API access. This has driven explosive user growth (as seen with ChatGPT’s free tier), but it carries a catch: sky-high infrastructure costs. Training advanced AI models can cost hundreds of millions of dollars, even approaching $1 billion for a single model training run. Running these models (“inference”) also racks up costs – though notably, Meeker’s data shows inference cost per million queries has plummeted 99% in two years thanks to better chips and optimized software. Still, companies offering AI at scale face immense cloud and hardware expenses. In the enterprise context, if we roll out AI features company-wide or to millions of customers, are we prepared for the ongoing cost of compute? Boards must ensure that monetization catches up with usage. Otherwise, even a product with delighted users can become a financial strain. As Meeker cautions, if revenue doesn’t eventually cover the compute bills, “things break”.
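To make this compute-cost arithmetic concrete, here is a back-of-envelope unit-economics sketch in Python. All inputs – user count, query volume, cost per thousand queries, free-to-paid conversion rate, and subscription price – are hypothetical assumptions chosen for illustration, not figures from the report:

```python
# Back-of-envelope unit economics for a hypothetical freemium AI product.
# Every input below is an illustrative assumption, not reported data.

def monthly_margin(users, queries_per_user, cost_per_1k_queries,
                   paid_share, price_per_paid_user):
    """Return (revenue, compute_cost, margin) for one month."""
    compute_cost = users * queries_per_user * cost_per_1k_queries / 1_000
    revenue = users * paid_share * price_per_paid_user
    return revenue, compute_cost, revenue - compute_cost

# Assume 1M users, 200 queries each per month, $10 per 1k queries in
# inference cost, and a 5% paid conversion at $20/month.
rev, cost, margin = monthly_margin(1_000_000, 200, 10.0, 0.05, 20.0)
print(f"revenue=${rev:,.0f} compute=${cost:,.0f} margin=${margin:,.0f}")

# Now apply the 99% inference-cost decline the report describes:
# the identical product flips from loss-making to profitable.
rev, cost, margin = monthly_margin(1_000_000, 200, 0.10, 0.05, 20.0)
print(f"revenue=${rev:,.0f} compute=${cost:,.0f} margin=${margin:,.0f}")
```

The point of the sketch: with these assumed numbers, the same product swings from a seven-figure monthly loss to a healthy margin purely on the inference-cost line, which is why boards should track cost-per-query trends as closely as they track revenue.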
Venture capital is certainly betting big on AI – we’re witnessing investors pour money onto the AI fire as fast as possible. But lavish funding doesn’t guarantee profitability. Many AI players (including some well-known startups and cloud providers) are burning cash on infrastructure and talent with uncertain payback. Board directors should interrogate AI investment plans with a balanced lens: encourage bold bets where necessary, but insist on business models and unit economics that make sense in the long run. If our company decides to integrate third-party AI (say, licensing an AI platform or using a cloud provider’s model), we need to scrutinize costs, vendor lock-in risks, and data ownership. If we develop AI in-house, we should consider starting with targeted pilot projects that deliver clear ROI before scaling up. In short, the board’s fiduciary duty now extends to understanding the cost structure of AI – ensuring that the pursuit of innovation does not outrun the sustainability of the business.
Talent, Culture, and the Future of Work
Behind every AI strategy are people – and AI is already reshaping the talent landscape inside organizations. Despite fears that “AI will take our jobs,” the more immediate reality is that jobs are changing, not disappearing overnight. AI acts increasingly as a “co-pilot” for knowledge workers – assisting writers, software developers, designers, customer service reps, and more. We see employees leveraging tools like code generators, content summarizers, and design assistants that amplify their productivity rather than replace them. Reflecting this shift, demand for AI-related skills is skyrocketing: job postings citing AI have surged 448% since 2018, even as overall job listings in some areas slow down. By 2030, virtually every professional role will require some level of comfort working with AI, even if one isn’t an AI engineer.
For boards overseeing corporate strategy, this trend mandates a human capital response. Companies need to upskill their workforce at all levels – from training front-line staff to use new AI tools, to reskilling mid-career professionals for evolving roles. At the leadership level, boards should ensure management is building a robust pipeline of AI talent and expertise. This might mean hiring specialized AI engineers and data scientists, but it also means cultivating a culture of continuous learning so that every employee can adapt to AI-driven workflows. Notably, the companies that attract and empower the best developers and AI talent tend to win; as one tech leader famously emphasized, “developers, developers, developers” are often the key to success. As directors, we should ask: do we have a plan to recruit and retain top technical talent, and to make our company a magnet for the brightest minds in AI? If not, we risk falling behind competitors who do.
Boards may also consider the organizational structure needed for effective AI adoption. Is there clear ownership of AI initiatives? Is it being run by a business leader, a technical leader, or both? Should the board form a business transformation, technology, or innovation committee to dive deeper into these issues? Ensuring the right people are in place – both on the board and in management – is crucial. Some boards are already adding directors with AI expertise (both business and technical) to provide informed oversight. In an era when “everyone wants in – and no one wants to fall behind” on AI, a company’s ability to lead will depend on its people as much as on its algorithms.
Governance, Ethics and Risk Oversight in the AI Era
Amid the fervor to adopt AI, boards carry the critical responsibility of guarding against AI’s risks – ethical, legal, and reputational. It’s a role that cannot be abdicated or left solely to the IT department. The BOND report underscores familiar yet pressing challenges: AI systems can hallucinate false information, inherit or even amplify biases, and be misused to spread misinformation. These issues can erode customer trust and invite regulatory backlash if not managed. Yet today, the governance conversation lags far behind the tech. Regulations are still catching up, and industry standards are in flux. In this gap, board leadership is needed to set the tone for responsible AI.
Pragmatically, what should directors do? First, insist on transparency around AI initiatives. Management should be able to explain how an AI model makes decisions (at least in broad terms), what data it’s trained on, and where its limitations lie. If we are deploying an AI tool in, say, hiring or lending or medical diagnostics, do we have procedures to test it for bias and accuracy? Are we keeping a human in the loop for critical decisions? Boards should ask for regular audits of AI systems, whether internal or via third-party experts, to ensure they meet ethical and compliance standards.
Second, incorporate AI risk oversight into the enterprise risk management framework. This might involve new policies – for example, guidelines on employees’ use of public generative AI (to prevent confidential data leaks or compliance breaches), or protocols for incident response if an AI system produces a harmful outcome. Just as cybersecurity became a standing item on board agendas over the past decade, AI risk and ethics should now be a recurring agenda item. This includes staying abreast of evolving laws (such as the EU AI Act or sector-specific regulations) and anticipating how they will impact the business.
Crucially, boards must champion a culture of ethical AI from the top. That means aligning AI projects with the company’s values and stakeholder expectations. If our company’s mission is to serve customers or communities, we need to ensure our AI doesn’t inadvertently harm those groups through unfair or opaque practices. As directors, we set expectations that AI should be used to enhance trust, not undermine it. The report calls for “thoughtful leadership and smarter frameworks” to keep AI’s growth in check before it breaks trust at scale – and the boardroom is exactly where such leadership must come from.
Key Takeaways for Directors
In this moment of both exponential innovation and heightened risk, board directors have a critical role to play. Here are some practical recommendations to ensure effective AI leadership and stewardship:
- Elevate AI to a Strategic Priority: Treat AI as a board-level concern and opportunity. Insist on a clear AI strategy from management that aligns with the company’s overall vision and competitive context. Regularly review progress on key AI initiatives just as you would finance or compliance reports.
- Invest in Knowledge and Talent: Ensure the board and C-suite are AI-literate. This may involve expert briefings, director education sessions, or adding board members with AI expertise. Likewise, support management in attracting top AI talent and upskilling the broader workforce, knowing that the companies with the best people will lead in innovation.
- Embrace an AI-First Mindset: Challenge your organization to move beyond pilot projects. Encourage exploration of AI-first business models and products, not just incremental features. Foster a culture where experimentation is encouraged and failure is treated as learning – the speed of iteration is crucial in the AI era.
- Scrutinize Economics and Infrastructure: Closely oversee the costs and ROI of AI investments. Ask tough questions about the sustainability of any “growth-at-all-costs” approach. Make sure the company has a plan for the computing resources and data needed to fuel AI ambitions, whether through cloud partnerships, internal capacity, or hybrid models. Be prepared to support significant investment here if justified, as compute has become the lifeblood of AI competitiveness.
- Strengthen AI Governance and Ethics: Establish or refine governance structures to manage AI risk. This could mean setting up an AI ethics committee, adopting ethical AI guidelines, and integrating AI risk into audit and risk committees’ purview. Proactively seek assurance that AI systems are monitored for biases, errors, and cyber vulnerabilities. Champion transparency and accountability in AI use, both internally and in customer-facing applications.
- Stay Ahead of Regulatory and Societal Expectations: The regulatory landscape for AI is evolving. Boards should stay informed on emerging laws and industry standards, and guide their companies to not only comply but lead by example. Engaging with policymakers or industry groups can be wise. Equally, consider the broader societal impact of your company’s AI – issues of privacy, fairness, and employment – and prepare for stakeholder scrutiny. In a world of fast-moving tech, trust and reputation are competitive advantages that the board must help safeguard.
We are at a crucial inflection point. AI’s unprecedented trajectory presents unmatched opportunities to those who are prepared and existential risks to those who are complacent. In this environment, board directors cannot afford a passive role. Effective AI stewardship requires the board to be intellectually rigorous – diving into the technology’s implications – and visionary yet pragmatic in response. By asking the right questions and empowering bold action with proper oversight, boards can ensure their organizations harness AI’s transformative power responsibly. The companies that succeed will be those whose leaders, at every level, anticipate the future and act decisively. In the end, navigating the AI revolution is not just management’s job – it is a collective leadership challenge, and the boardroom must lead from the front.