
The AI Companies Are Telling You Something. Are You Listening?


In the span of 24 hours, two of the most valuable AI companies on earth made public statements that should force every board and C-suite to recalibrate their assumptions about what is coming and how fast it will arrive.

On Monday, April 6, OpenAI published a 13-page policy paper calling for robot taxes, a national Public Wealth Fund, and containment playbooks for AI systems that can replicate themselves. On Tuesday, April 7, Anthropic announced Project Glasswing, a cybersecurity coalition with Apple, Amazon, Microsoft, Google, NVIDIA, and others, built around a new, unreleased model called Claude Mythos Preview that has already found thousands of zero-day vulnerabilities in every major operating system and web browser, including bugs that survived decades of human review and millions of automated security tests.

Read together, these two announcements tell a single story: the companies building frontier AI believe their own technology is about to change everything, and they are racing to position themselves as the adults in the room before that change arrives. Directors and executives who are still debating whether to add AI to the board agenda are debating the wrong question. The question is whether their governance infrastructure can absorb what is already here.

The Capability Curve Waits for No Company

Two years ago, Leopold Aschenbrenner, a former OpenAI researcher, published Situational Awareness: The Decade Ahead, a series of essays arguing that AGI was plausible by 2027, that an intelligence explosion could compress a decade of progress into a single year, and that the national security implications would rival the Manhattan Project. At the time, many considered it alarmist. This week’s announcements suggest the trendlines he identified are holding.

Source: Leopold Aschenbrenner

The numbers are concrete. Training compute is growing roughly 5x annually. Algorithmic efficiency improves approximately 3x per year. LLM inference costs are halving roughly every two months. Global compute capacity is doubling every seven months, according to Epoch AI. OpenAI’s compute grew from 0.2 GW in 2023 to 1.9 GW in 2025; revenue tracked in lockstep, from $2 billion ARR to over $20 billion. Anthropic’s revenue more than tripled from $9 billion at the end of 2025 to over $30 billion today, surpassing OpenAI’s $25 billion run rate. More than 1,000 enterprise customers now spend over $1 million annually on Anthropic’s Claude.
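
To see why quarter-old assumptions go stale, compound those rates. A back-of-envelope sketch, using only the figures quoted above (illustrative arithmetic, not a forecast):

```python
# Back-of-envelope compounding of the growth rates quoted above.
# The inputs are this article's stated figures; the arithmetic is illustrative.

compute_growth = 5.0   # training compute: ~5x per year
algo_efficiency = 3.0  # algorithmic efficiency: ~3x per year

# Compute and algorithmic gains compound multiplicatively.
effective_per_year = compute_growth * algo_efficiency
print(f"Effective capability growth: ~{effective_per_year:.0f}x per year")  # ~15x
print(f"Over three years: ~{effective_per_year ** 3:,.0f}x")                # ~3,375x

# Inference costs halving every two months means six halvings per year.
cost_after_one_year = 0.5 ** (12 / 2)
print(f"Inference cost after a year: ~{1 / cost_after_one_year:.0f}x cheaper")  # ~64x
```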

These are not projections. They are trailing indicators. The leading indicators are the capabilities themselves, and what Anthropic revealed on Tuesday makes the governance implications impossible to defer.

Anthropic’s Glasswing: When the Model Finds What Humans Can’t

Project Glasswing is built around Claude Mythos Preview, an unreleased frontier model that Anthropic says has reached a level of coding capability where it can surpass all but the most skilled humans at finding and exploiting software vulnerabilities. Anthropic is not releasing this model to the public. Instead, it is sharing it with a consortium of launch partners: AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks, along with more than 40 additional organizations that build or maintain critical software infrastructure. Anthropic has committed up to $100 million in usage credits and $4 million in direct donations to open-source security organizations.

Members of Project Glasswing. Source: Anthropic

The findings are specific and verifiable. Mythos Preview found a 27-year-old vulnerability in OpenBSD, one of the most security-hardened operating systems in the world, used to run firewalls and critical infrastructure. It discovered a 16-year-old vulnerability in FFmpeg, a ubiquitous video codec, in a line of code that automated testing tools had scanned five million times without catching the problem. It autonomously chained together multiple Linux kernel vulnerabilities to escalate from ordinary user access to full machine control. As The New York Times reported, cybersecurity researchers with early access to the model have characterized it as a significant security risk.

Anthropic’s chief science officer, Jared Kaplan, told the Times that these cybersecurity capabilities are not the result of specialized training. They are a byproduct of the model’s general coding and reasoning improvements. His prediction: similar capabilities will exist in other models soon.

What this means for boards: The security assumptions underpinning your entire technology stack may have just changed. Systems that were “mostly secure because it took a lot of human effort to attack them,” as Anthropic’s Logan Graham put it, may not be secure in a world where AI can methodically catalog every weakness in your infrastructure. Elia Zaitsev, CTO of CrowdStrike, stated that the window between a vulnerability being discovered and being exploited has collapsed from months to minutes. Every audit, risk, and technology committee should be asking: has our cybersecurity posture been stress-tested against AI-augmented attack capabilities?

OpenAI’s New Deal: Policy Paper as Pre-IPO Positioning

One day before Anthropic demonstrated what frontier AI can already do to critical infrastructure, OpenAI published Industrial Policy for the Intelligence Age, a sweeping set of proposals for how governments should prepare for superintelligence. Sam Altman told Axios that the disruption ahead requires a new social contract comparable to the Progressive Era and the New Deal.

The Proposals

- Shift the tax base from payroll to capital gains and corporate income.
- Create a nationally managed Public Wealth Fund seeded partly by AI companies.
- Incentivize 32-hour workweek pilots at full pay.
- Float a robot tax on automated labor.
- Treat AI access as a right comparable to electricity.
- Create portable benefits that follow workers across employers.
- Build auto-triggering safety nets that scale up when AI displacement crosses preset economic thresholds and phase out when conditions stabilize (a minimal sketch of this trigger mechanism follows below).
- Establish auditing regimes for frontier models, incident-reporting mechanisms modeled on aviation safety, and containment playbooks for AI systems that cannot be recalled.

Source: OpenAI, https://openai.com/index/industrial-policy-for-the-intelligence-age/
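
Of these, the auto-triggering safety net is the most mechanically specific proposal. Here is a minimal sketch of how such a trigger could operate; the indicator, the 5% and 15% thresholds, and the linear ramp are all hypothetical, since the paper specifies none of them:

```python
# Hypothetical sketch of an auto-triggering safety net: benefits scale up
# once AI-driven displacement crosses a preset threshold and phase back
# out when conditions stabilize. All thresholds here are invented.

def benefit_scale(ai_displacement_rate: float,
                  trigger: float = 0.05,   # hypothetical activation point (5%)
                  ceiling: float = 0.15) -> float:
    """Benefit multiplier in [0, 1]: dormant below the trigger, ramping
    linearly to full benefits at the ceiling, and automatically phasing
    out again as the rate falls back below the trigger."""
    if ai_displacement_rate <= trigger:
        return 0.0
    return min(1.0, (ai_displacement_rate - trigger) / (ceiling - trigger))

print(benefit_scale(0.10))  # 0.5 -> half-scale benefits at 10% displacement
print(benefit_scale(0.03))  # 0.0 -> program dormant below the trigger
```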

OpenAI closed a $122 billion funding round on March 31 at an $852 billion valuation. It generates $2 billion in monthly revenue, is not yet profitable, projects $17 billion in cash burn for 2026, and is preparing for a Q4 IPO targeting a $1 trillion valuation. The paper was released the same day The New Yorker published an investigation raising questions about Altman’s trustworthiness on safety. OpenAI’s Leading the Future PAC has lobbied against the very AI safety legislation the paper now implicitly endorses, including New York’s RAISE Act and California’s SB 53. The paper contains no binding commitments from OpenAI itself. Every proposal is addressed to governments and policymakers.

The paper’s most consequential passage acknowledges scenarios where dangerous AI systems “cannot be easily recalled” because they are autonomous and capable of replicating themselves. This is a risk disclosure, not a policy proposal. When the company building the technology tells you in a public document that its product may become uncontrollable and that the response should resemble pandemic containment, your risk committee needs to engage.

Two Companies, Two Strategies, One Governance Lesson

The juxtaposition of these two announcements is the story directors and executives should focus on.

OpenAI’s approach: publish ambitious policy proposals addressed to other people, commit to nothing binding, and position the company as the responsible actor ahead of an IPO. The paper proposes a Public Wealth Fund without pledging to fund it, recommends Public Benefit Corporation structures without adopting one, and calls for safety regulation while its PAC lobbies against safety bills.

Anthropic’s approach: withhold a powerful model from general release, form a coalition anchored by the largest technology and security companies, commit $100 million in credits and $4 million in direct funding, and publish specific, verifiable vulnerability findings. Anthropic also refused Pentagon demands that conflicted with its usage policies, a decision a federal judge upheld, and its $30 billion annualized revenue now exceeds OpenAI’s despite a valuation less than half as large ($380 billion vs. $852 billion).

Neither company is a charity. Both are pursuing commercial strategies that happen to align with safety narratives. But the governance signals are different in kind. One company is telling governments what to do. The other is showing what it will and will not do. For boards evaluating AI vendor relationships as governance decisions, the distinction matters.

The Regulatory Clock Is Already Running

While OpenAI proposes voluntary frameworks for the U.S., mandatory obligations are arriving elsewhere. The EU AI Act begins enforcement in August 2026, four months from now. It imposes requirements on deployers, not just developers: risk assessments, transparency, human oversight, conformity assessments for high-risk systems, and penalties up to 7% of global revenue.

The board readiness data is stark. ISS reports that only 8% of companies have disclosed board-level AI oversight, 9% have formal AI policies, and just 16% have directors with AI-relevant skills. Only 4% have two or more AI-skilled directors. NACD data shows 95% of directors believe AI impacts their business, but only 28% discuss it regularly and 14% discuss it at every meeting. Deloitte finds roughly 50% say AI is not on the board agenda at all.

Anthropic’s Glasswing announcement makes the urgency personal. If your company runs software built on the Linux kernel, OpenBSD, FFmpeg, or any major browser, the security assumptions you made last quarter may already be outdated. If your AI governance framework does not account for AI-augmented cyber threats, the EU AI Act’s risk assessment requirements will force the conversation whether you are ready or not.

Six Actions for the Next Board or Leadership Meeting

1. Stress-test your cybersecurity posture against AI-augmented threats. Anthropic’s Glasswing findings demonstrate that AI models can now find vulnerabilities that decades of human review and millions of automated scans missed. Ask your CISO whether your current penetration testing and vulnerability management programs account for this new capability class. If they do not, prioritize an assessment.

2. Assess AI vendor concentration risk. If any single AI provider represents more than 20% of your AI spend, the regulatory and operational surface area around that relationship is expanding rapidly. Use AI governance frameworks to quantify exposure across six dimensions: AI Vendor Concentration, Substitution Difficulty, Operational Dependency Depth, Data Lock-In Exposure, Regulatory Surface Area, and Agentic Readiness. (A minimal scoring sketch follows after this list.)

3. Model the payroll-tax scenario. OpenAI’s paper warns that AI could erode the tax base funding Social Security, Medicaid, and SNAP. If your workforce strategy depends on headcount-based tax assumptions, your audit committee should model the impact of a shift from labor-based to capital-based taxation on your cost structure, benefit commitments, and long-term financial planning. (A worked example follows after this list.)

4. Benchmark against the EU AI Act, not against industry wish lists. The EU’s requirements are mandatory, enforceable, and arriving in August 2026. Build for compliance obligations that exist. ISS data shows the gap between what regulators require and what boards have built is already a liability.

5. Evaluate AI vendors’ actions, not just their papers. The delta between what a company publishes and what its PAC funds, between what it withholds from release and what it ships, and between what it commits to and what it recommends for others is the most reliable governance signal available. Track it systematically across your vendor portfolio.

6. Update your board skills matrix for AI governance. Only 4% of boards have two or more directors with AI skills. Spencer Stuart reports 80% publish skills matrices, but few have added AI governance as a tracked competency. If your board cannot evaluate whether your company’s AI deployment practices, vendor relationships, and compliance posture are adequate, you have a composition gap that no policy paper or product announcement will fix.
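
On item 2, a minimal scoring sketch across the six dimensions named above. The 1-to-5 scale, the equal weighting, and the example ratings are all hypothetical; substitute your own rubric and weights:

```python
# Illustrative vendor-exposure score across six dimensions (1 = low risk,
# 5 = high risk). Equal weights and the example ratings are hypothetical.

AI_EXPOSURE_DIMENSIONS = (
    "AI Vendor Concentration",
    "Substitution Difficulty",
    "Operational Dependency Depth",
    "Data Lock-In Exposure",
    "Regulatory Surface Area",
    "Agentic Readiness",
)

def vendor_exposure_score(ratings: dict[str, int]) -> float:
    """Average the 1-5 ratings into a single exposure score."""
    missing = set(AI_EXPOSURE_DIMENSIONS) - ratings.keys()
    if missing:
        raise ValueError(f"unrated dimensions: {sorted(missing)}")
    return sum(ratings[d] for d in AI_EXPOSURE_DIMENSIONS) / len(AI_EXPOSURE_DIMENSIONS)

# Example: one provider holds 35% of AI spend and would be hard to replace.
score = vendor_exposure_score({
    "AI Vendor Concentration": 5,
    "Substitution Difficulty": 4,
    "Operational Dependency Depth": 4,
    "Data Lock-In Exposure": 3,
    "Regulatory Surface Area": 4,
    "Agentic Readiness": 2,
})
print(f"Exposure: {score:.2f} / 5")  # 3.67 -> worth board-level attention
```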
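
On item 3, a worked what-if. Every rate and dollar figure below is invented for illustration; the point is the shape of the calculation, not the numbers:

```python
# Hypothetical what-if: payroll tax phased out, replaced by a heavier
# capital-side levy, as OpenAI's paper floats. All inputs are invented.

payroll = 800_000_000         # annual payroll ($)
pre_tax_profit = 500_000_000  # annual pre-tax profit ($)

payroll_tax_rate = 0.0765     # illustrative employer-side payroll tax
corporate_rate_delta = 0.04   # hypothetical 4-point corporate rate increase

payroll_tax_relief = payroll * payroll_tax_rate            # cost that goes away
added_capital_tax = pre_tax_profit * corporate_rate_delta  # cost that appears

net = added_capital_tax - payroll_tax_relief
print(f"Payroll tax relief:     ${payroll_tax_relief / 1e6:,.1f}M")  # $61.2M
print(f"Added capital-side tax: ${added_capital_tax / 1e6:,.1f}M")   # $20.0M
print(f"Net annual impact:      {net / 1e6:+,.1f}M")                 # -41.2M
```

The sign of the net impact flips with the ratio of payroll to pre-tax profit, which is exactly the sensitivity worth putting in front of the audit committee.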

The Bottom Line

This was the week when the two most powerful AI companies made the governance case in different ways. OpenAI said the disruption is so severe that we need a new social contract, then addressed its proposals to everyone except itself. Anthropic said the capabilities are so powerful that it is withholding its best model from general release and spending $100 million to help the world patch its software before attackers get access.

Both are positioning. Both are also telling you something real. The capability curve that Aschenbrenner described in Situational Awareness is not slowing down. It is accelerating. OpenAI crossed $2 billion in monthly revenue on a technology that did not exist three years ago. Anthropic more than tripled its revenue in four months. A model that is not even publicly released just found vulnerabilities in software that has been audited for 27 years.

Source: Leopold Aschenbrenner

The governance gap, the distance between what AI can do today and the governance infrastructure boards have built to oversee it, is the largest in modern corporate history. Every month that gap widens, the cost of closing it increases. The EU AI Act arrives in four months. The IPOs of both companies are expected before year-end. The next generation of models is already in training.

The company that warned you about the flood is selling the boats. The company that found the holes in your hull just told you exactly where they are. Both messages point in the same direction: check your governance now, because the water is already rising.

Steven Wolfe Pereira

Steven Wolfe Pereira is Founder & CEO of Alpha, an AI governance intelligence company serving boards and executives. Former C-suite executive at Datalogix / Oracle, Neustar, and Quantcast; board member, startup advisor and Forbes contributor.
