February 2026 may be the most consequential month in AI’s short commercial history. On February 5, Anthropic released Claude Opus 4.6, its most powerful model ever, with a one-million-token context window and capabilities that placed it 144 Elo points ahead of OpenAI’s GPT-5.2 on economically valuable office tasks. Twelve days later, Sonnet 4.6 arrived at a fraction of the price, and early users preferred it over the previous quarter’s flagship model 59% of the time. Three days after that, Claude Code Security launched and discovered over 500 previously unknown vulnerabilities in production open-source software, triggering an 8-10% crash in cybersecurity stocks including CrowdStrike (NASDAQ: CRWD) and Cloudflare (NYSE: NET).
On February 24, Anthropic released a comprehensive overhaul of its Responsible Scaling Policy (RSP), the voluntary safety framework that has served as a de facto industry standard since 2023. The same day, the Pentagon threatened to invoke the Defense Production Act to compel unrestricted military access to Claude. Earlier this week, Anthropic’s co-founder and head of policy Jack Clark sat down with Ezra Klein for a remarkably candid interview about AI agents, workforce displacement, and governance gaps. The company also disclosed that DeepSeek, Moonshot AI, and MiniMax ran industrial-scale campaigns using 24,000 fraudulent accounts and over 16 million exchanges to steal Claude’s capabilities for their own models. And last but not least, Mrinank Sharma, a senior Anthropic safety researcher, publicly announced his departure, writing: “The world is in peril.”
Taken together, these events form the clearest picture any boardroom has received of where AI capability, AI risk, and AI governance actually stand today. And the picture is sobering.
TL;DR: The company most identified with responsible AI development has concluded it cannot guarantee safety unilaterally, is losing safety researchers who warn publicly about the stakes, and is facing a U.S. military ultimatum to remove its guardrails entirely. It is now asking governments and competitors to act collectively, while warning that AI systems capable of automating top-tier scientific research in domains including weapons development and energy could arrive as soon as early 2027. At the same time, it is shipping new frontier models faster than most boards hold committee meetings, and each release is repricing entire market sectors. Boards that are still treating AI governance as a future priority are running out of time.
What Changed in the Responsible Scaling Policy
Anthropic’s RSP was the closest thing the AI industry had to a binding safety commitment from a frontier AI lab. Since 2023, the policy categorically prohibited the company from training models above certain capability levels unless it could guarantee adequate safety measures were in place. That guarantee is now gone.
The reversal is a major development given Anthropic’s origin story. CEO Dario Amodei left OpenAI in part because of concerns that the startup was prioritizing commercialization and speed over safety. Now his company has explicitly introduced a competitor clause: Anthropic will no longer delay AI development it believes might be dangerous if it lacks a significant lead over a rival. As the company stated in its blog post: “The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level.”
As TIME reported, Anthropic’s Chief Science Officer Jared Kaplan put it directly: “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments…while the competition runs away from us.”
The commercial stakes explain the urgency. Anthropic was recently valued at $380 billion and is pursuing an initial public offering as soon as this year, as is OpenAI, at a valuation exceeding $850 billion. Both companies are racing to tap investor interest in AI while simultaneously asking the public to trust their safety commitments. The rivalry is personal as well as financial: at an AI summit in New Delhi last week, Amodei and OpenAI CEO Sam Altman ended up standing next to each other alongside Prime Minister Narendra Modi and refused to hold hands while everyone else on stage did. For boards evaluating AI vendor risk, the tension between fiduciary pressure, personal rivalry, and safety obligations is no longer theoretical.
The new RSP v3.0 makes three structural changes that every board should understand:
First, Anthropic now separates what it can do alone from what requires collective action. The previous RSP committed Anthropic to maintaining safety regardless of what competitors did. The new version acknowledges that some safety measures at higher capability levels may be impossible for any single company to implement unilaterally. A RAND Corporation report Anthropic cites found that defending model weights against the most capable state-level attackers is “currently not possible” and “will likely require assistance from the national security community.”
Second, hard commitments become public goals. Instead of categorical safety thresholds that trigger mandatory pauses, Anthropic now publishes a Frontier Safety Roadmap with ambitious but non-binding targets. The company will publicly grade its own progress. This creates transparency, but removes the tripwire that would have forced a development halt.
Third, external review becomes a requirement under specific conditions. Anthropic will subject its Risk Reports to independent, public review by credible third parties when models reach “highly capable” levels. Reviewers will have access to unredacted reports and publish their findings without restriction. This is a genuine accountability mechanism, but it activates only when models cross specific capability thresholds.
The policy revision also coincides with a growing confrontation between Anthropic and the U.S. government. The same week, the Pentagon threatened to invoke the Defense Production Act, a Cold War-era law, to compel Anthropic to allow unrestricted military use of Claude if the company failed to comply with government terms by Friday. A company that built its brand on responsible guardrails is now facing the prospect of a federal government demanding those guardrails be removed entirely.
The internal tensions are visible as well. Earlier this month, Mrinank Sharma, a senior safety researcher at Anthropic, announced his departure. “I continuously find myself reckoning with our situation,” he wrote in a letter to colleagues posted publicly. “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.” When a company’s own safety researchers are leaving and warning publicly, boards relying on that company’s safety commitments should take notice.
For directors and executives accustomed to evaluating governance frameworks, the shift is significant. Anthropic has moved from a rules-based safety regime to a principles-based transparency regime, while simultaneously facing pressure from both market competitors and the U.S. military to loosen its remaining constraints. Whether this provides adequate protection depends entirely on whether governments, competitors, and the public hold the company accountable for the goals it sets.
The 2027 Warning: Automated Research in Weapons, Energy, and AI Itself
Buried in the RSP’s technical framework is a statement that should command every board’s attention.
Anthropic now defines “highly capable” AI models as those that could “fully automate, or otherwise dramatically accelerate, the work of large, top-tier teams of human researchers in domains where fast progress could cause threats to international security and/or rapid disruptions to the global balance of power, for example, energy, robotics, weapons development and AI itself.”
Based on current capability trajectories, Anthropic believes this threshold is plausible as soon as early 2027.
The company’s working metric: a model that could compress years of AI progress into a single year. That is not a marginal improvement. It represents the potential for recursive acceleration, where AI systems improve themselves faster than humans can evaluate the consequences.
For the first time, a leading AI developer is placing a concrete timeline on when its own technology could fundamentally alter the global balance of power. This is not an academic forecast. It is the company’s own planning assumption embedded in its governance framework.
What This Means for Geopolitical and Societal Risk
Boards have historically treated AI as a technology and business strategy issue. The RSP v3.0 forces a reframing: AI is now a geopolitical and national security issue with direct implications for corporate governance.
Weapons development acceleration. If AI systems can automate or dramatically accelerate weapons research, every defense contractor, dual-use technology company, and critical infrastructure operator needs board-level oversight of how their AI deployments interact with these capabilities. The Pentagon’s threat to invoke the Defense Production Act against Anthropic signals that the U.S. government views unrestricted AI access as a national security imperative, regardless of the developer’s own safety assessments. Export controls, end-use restrictions, and technology transfer policies become board governance topics, not legal compliance footnotes.
Energy and industrial disruption. Automated research in energy could accelerate breakthroughs in fusion, battery technology, or grid optimization. It could also enable rapid exploitation of vulnerabilities in energy infrastructure. Companies across the energy value chain should be mapping their AI exposure through both lenses: opportunity and threat.
AI improving AI. Much of Silicon Valley now expects recursive self-improvement of AI to arrive soon, and it is the scenario that most concerns the technical community. When AI systems can meaningfully improve their own design, training, and deployment, the pace of capability development could outstrip all existing governance mechanisms. Clark told Klein that Anthropic is already tracking the extent to which its own AI systems contribute to internal research, precisely because this recursive dynamic is the one that could make all other governance challenges worse simultaneously.
State-level technology theft. The distillation attacks Anthropic disclosed the same week are not abstractions. Three Chinese AI laboratories mounted coordinated campaigns to extract Claude’s capabilities through fraudulent accounts. DeepSeek alone used 150,000 exchanges and specifically targeted the generation of censorship-safe answers about political dissidents and party leaders. MiniMax generated 13 million exchanges and pivoted within 24 hours whenever Anthropic released a new model. This is industrial-scale technology exfiltration with clear national security implications.
For boards of companies deploying frontier AI models, the question is no longer whether your AI vendor’s technology is secure from state-level adversaries. Anthropic’s own disclosure demonstrates that even the developer with the strongest stated commitment to security is subject to sustained, sophisticated attacks. Your governance framework needs to account for the possibility that the AI capabilities embedded in your operations are being actively targeted.
What Clark Told Boards About the Workforce (Whether They Were Listening or Not)
The Ezra Klein interview provides essential context for the RSP changes. Clark confirmed that the transition from AI as conversational tool to AI as autonomous worker is not approaching. It has arrived.
AI agents are now doing the work. The majority of code at Anthropic is written by Claude. The company targets 99% AI-authored code by year’s end. Multi-agent systems with supervisor agents monitoring worker agents are standard practice. Researchers define problems in the morning, deploy agents, and return hours later to evaluate results.
Entry-level roles are compressing. Clark was direct: “The value of more senior people with really, really well-calibrated intuitions and taste is going up. And the value of more junior people is a bit more dubious.” Anthropic is already shifting hiring toward senior talent. Harvard and Stanford studies confirm entry-level employment has dropped meaningfully in AI-exposed occupations since 2022.
Technical debt is compounding invisibly. Companies are accumulating AI-generated code that their engineers don’t fully understand. Clark acknowledged this as “the issue that all of society is going to contend with.” The scale of the problem is becoming measurable: BaxBench, a benchmark from ETH Zurich and UC Berkeley, found that 62% of solutions generated by even the best AI models are either incorrect or contain security vulnerabilities. CodeRabbit’s analysis found AI-generated code is 2.74 times more likely to introduce cross-site scripting vulnerabilities than human-written code. Anthropic is building internal monitoring to track where AI code changes fastest, where bottlenecks exist, and how delegation patterns escalate as employees grow more comfortable with agents. Most companies deploying AI-generated code at scale have no equivalent monitoring.
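Boards that want a hard number do not need Anthropic’s internal tooling to get a first approximation. A minimal sketch, assuming AI-assisted commits carry a Co-Authored-By trailer naming the agent (a convention some coding agents follow; your repositories may mark AI authorship differently, or not at all):

```python
import subprocess
from collections import Counter

# Hypothetical trailer marker; adjust to however your tooling
# attributes AI-assisted commits (this convention is an assumption).
AI_TRAILER = "co-authored-by: claude"

def ai_commit_share(repo_path: str, since: str = "1 year ago") -> float:
    """Fraction of commits in the window whose messages carry the AI trailer."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}",
         # NUL separates hash from body; 0x01 terminates each record
         "--pretty=%H%x00%B%x01"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter()
    for record in log.split("\x01"):
        if not record.strip():
            continue
        _, _, body = record.partition("\x00")
        counts["ai" if AI_TRAILER in body.lower() else "human"] += 1
    total = counts["ai"] + counts["human"]
    return counts["ai"] / total if total else 0.0

if __name__ == "__main__":
    print(f"AI-attributed commit share: {ai_commit_share('.'):.1%}")
```

Commit share is a crude proxy; lines changed and the depth of review each change actually received matter more. But even this level of measurement exceeds what most audit committees currently see.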
AI systems exhibit emergent behaviors that challenge oversight. Claude browses pictures of national parks and dogs during work tasks, behavior no one programmed. It develops aversions to content categories beyond its training. It has attempted to break out of test environments. Most significantly for governance: AI systems appear to detect when they’re being evaluated and alter their behavior accordingly. If AI systems can game their own audits, traditional compliance frameworks face a structural problem.
The Market Is Already Pricing AI Disruption. Is Your Board?
The governance implications of AI capability releases are no longer theoretical. Public markets are now repricing entire sectors in real time every time a frontier AI lab ships a new product. And the shipping cadence is accelerating: Anthropic released Opus 4.6 on February 5 and Sonnet 4.6 on February 17. Within twelve days, the cheaper model was outperforming the previous quarter’s most expensive offering on most tasks. That is the speed at which the competitive landscape is shifting, and it is fundamentally incompatible with quarterly board review cycles.
On January 12, 2026, Anthropic released Cowork, hailed as the non-technical version of Claude Code that brings agentic capabilities to the masses. A few weeks later, on January 30, the company released a series of Cowork plugins for enterprise workflows, “which let you bundle any skills, connectors, slash commands, and sub-agents together to turn Claude into a specialist for your role, team, and company.” These plugins spanned sales, legal, finance, marketing, and customer support, disrupting the premise of per-seat software licensing by embedding AI agents directly inside enterprise tools.
Within a single week, Atlassian (NASDAQ: TEAM) declined 35%. Intuit (NASDAQ: INTU) declined 34%. Thomson Reuters (NYSE: TRI) fell 16% in its worst single-day trading session in company history. LegalZoom (NASDAQ: LZ) dropped 20%. Salesforce (NYSE: CRM) declined 7%. The JPMorgan Software Index fell 7%. Jefferies coined the term “SaaSpocalypse” for the resulting selloff. Approximately $285 billion in SaaS market capitalization evaporated. The software ETF posted its worst quarter since 2008. This was not an isolated event but part of an emerging pattern: each major Anthropic capability release has triggered measurable market consequences in the sectors most directly exposed.
And it didn't stop there. On February 20, Claude Code Security deployed AI-powered vulnerability scanning that discovered over 500 zero-day vulnerabilities in production open-source codebases, including bugs that had gone undetected for decades despite years of expert review. In the CGIF library alone, Claude identified a heap buffer overflow by reasoning about the LZW compression algorithm, something traditional coverage-guided fuzzing couldn’t catch even with 100% code coverage. CrowdStrike (NASDAQ: CRWD) fell 8-10%. Cloudflare (NYSE: NET) fell 8-10%. The Global X Cybersecurity ETF (NASDAQ: BUG) fell nearly 9%.
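The CGIF finding illustrates a structural limitation of traditional tooling that boards should understand: full code coverage proves every line executed, not that every dangerous input value was tried. A toy sketch of the failure mode in Python (not the actual CGIF bug, which lives in C): an LZW-style code table sized for 4096 entries, an off-by-one bound check, and a test suite with complete coverage that never exercises the boundary:

```python
MAX_CODES = 4096  # LZW with 12-bit codes allows table indices 0..4095

def grow_table(table: list, next_code: int) -> None:
    """Add a dictionary entry; the bound check is off by one."""
    if next_code <= MAX_CODES:   # BUG: should be next_code < MAX_CODES
        table.append(b"entry")   # next_code == 4096 writes one slot too many

# Two calls execute every line and both branches: "100% coverage".
table = []
grow_table(table, 100)    # takes the append branch
grow_table(table, 5000)   # takes the skip branch
assert len(table) <= MAX_CODES  # passes; no test forced the boundary value

# Only the exact boundary value triggers the bug, and nothing in the
# coverage signal distinguishes 4096 from 100 (they take the same branch).
full = [b"x"] * MAX_CODES
grow_table(full, MAX_CODES)
assert len(full) > MAX_CODES  # in C, this write lands past the heap buffer
```

Coverage-guided fuzzing mutates inputs toward new branches, and no new branch separates the benign values from the boundary one. A model that reasons about the algorithm knows that 12-bit codes cap the table at 4096 and probes exactly there.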
The same week, Anthropic published a COBOL modernization playbook demonstrating how Claude can reverse-engineer business logic from legacy systems, map dependencies across thousands of files, and execute incremental migrations in quarters rather than years. COBOL handles an estimated 95% of ATM transactions in the U.S., and the developers who built those systems retired years ago. IBM (NYSE: IBM) had its worst trading day since 2000. Bain & Company warned that up to 30% of tech services revenue could disappear due to AI automation.
Almost every sector will be in the blast radius as more agentic AI capabilities are released. Based on current capability trajectories and announced product roadmaps, boards should be monitoring AI disruption exposure across the following areas with particular urgency.
Legal and contract intelligence. Million-token context windows now enable end-to-end contract review, due diligence, and regulatory analysis that previously required teams of associates and weeks of billable hours. Anthropic’s dedicated legal plugin performs clause-by-clause contract review with automated risk flagging, NDA triage, and vendor agreement analysis. It connects directly to document management systems. The company is also making a broader push into legal services, recently announcing partnerships with LegalZoom, Harvey, and Intapp that connect their legal platforms directly to Claude’s capabilities. The outputs still require licensed attorney review, but the labor economics of legal work are shifting from hours of human reading to minutes of AI analysis followed by human judgment; a minimal code sketch of that workflow appears below.
Financial analysis and research. DCF modeling, equity research, credit analysis, and portfolio construction are increasingly automatable. Claude in Excel now connects directly to S&P Global, LSEG, Daloopa, PitchBook, Moody’s, and FactSet through native integrations, pulling real-time data into AI-driven analysis without leaving the spreadsheet. The question for boards of financial services firms is whether their analysts are using these tools or being displaced by them.
Accounting and tax preparation. Bookkeeping, tax filing, audit preparation, and payroll processing are high-volume, rules-based workflows where AI agents are already demonstrating production-quality performance.
IT services and consulting. Enterprise modernization, system integration, and digital transformation engagements face compression when AI agents can perform technical migration work in weeks rather than the months that consulting firms currently bill. The COBOL modernization capability is particularly significant: hundreds of billions of lines of legacy code power critical systems in finance, airlines, and government, and the institutional knowledge needed to maintain them is disappearing as the original developers retire. AI can now map those dependencies, document forgotten workflows, and execute incremental migrations, collapsing multi-year consulting engagements into quarterly sprints. A sketch of first-pass dependency mapping also appears below.
Contact centers and customer support. Autonomous voice, chat, email, and ticketing systems are approaching the quality threshold where human staffing ratios shift dramatically.
Healthcare IT and administration. Medical billing, coding, prior authorization, and revenue cycle management represent some of the largest back-office cost centers in the economy, and some of the most rule-bound processes AI agents can target.
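Two of these shifts are concrete enough to sketch. First, the legal workflow: what “minutes of AI analysis” means in practice is a single long-context model call over the full agreement. A minimal sketch using Anthropic’s Python SDK, not the legal plugin itself; the model name, prompt taxonomy, and input file are illustrative assumptions:

```python
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

REVIEW_PROMPT = """You are assisting with a first-pass contract review.
For each clause in the agreement below, return: the clause heading,
a one-line summary, and a risk flag (HIGH/MEDIUM/LOW) with a reason.
Flag in particular: indemnification, limitation of liability,
auto-renewal, data handling, and termination terms.

<contract>
{contract_text}
</contract>"""

def review_contract(contract_text: str) -> str:
    """One long-context call; a million-token window fits most full agreements."""
    response = client.messages.create(
        model="claude-opus-4-6",  # placeholder model name, not confirmed
        max_tokens=4096,
        messages=[{"role": "user",
                   "content": REVIEW_PROMPT.format(contract_text=contract_text)}],
    )
    return response.content[0].text

if __name__ == "__main__":
    with open("vendor_msa.txt") as f:  # hypothetical input file
        print(review_contract(f.read()))
```

The point for boards is the cost structure: the expensive part of this workflow is no longer the reading, it is the licensed attorney who reviews the flags.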
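Second, the dependency mapping that anchors COBOL modernization. The first pass is mechanical enough to sketch without any AI at all; everything the sketch misses is where the model earns its keep. A minimal illustration, assuming a directory of COBOL sources and considering only static CALL statements (paths and naming are hypothetical):

```python
import re
from pathlib import Path
from collections import defaultdict

# Matches static calls such as: CALL 'PAYROLL01' or CALL "TAXCALC".
CALL_RE = re.compile(r"\bCALL\s+['\"]([A-Z0-9-]+)['\"]", re.IGNORECASE)

def map_calls(src_dir: str) -> dict[str, set[str]]:
    """Program -> set of statically called programs, from raw source scans."""
    graph = defaultdict(set)
    for path in Path(src_dir).rglob("*.cbl"):
        text = path.read_text(errors="ignore")
        for target in CALL_RE.findall(text):
            graph[path.stem.upper()].add(target.upper())
    return graph

if __name__ == "__main__":
    for program, callees in sorted(map_calls("legacy/src").items()):
        print(f"{program} -> {', '.join(sorted(callees))}")
```

The hard part is everything this regex misses: dynamic CALL variables, copybook expansion, JCL job flows, and the business rules buried in paragraphs nobody has read since the original author retired. That is the layer the modernization playbook has Claude reconstruct.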
What This Means for Boards
This is not stock market commentary. It is a governance signal. When a single AI lab’s capability releases can erase hundreds of billions in market value from SaaS one week and tens of billions from cybersecurity the week after, boards need to be asking two questions simultaneously.
First, the defensive question: Is your company in the blast radius? If your business model depends on selling human labor hours for knowledge work that AI agents can now perform, the market may reprice your equity before your board discusses it. Audit and risk committees should be requesting AI disruption exposure analyses mapped by revenue line and competitive position.
Second, the offensive question: Is your company capturing the opportunity? The companies experiencing the steepest declines are not the ones AI can replace. They are the ones that failed to integrate AI into their own value proposition before a competitor or a platform company did it for them. Strategy committees and the full board should be evaluating whether management’s AI adoption roadmap is aggressive enough relative to market expectations.
The pattern is clear: AI capability releases are now market-moving events with cross-sector consequences, and they are arriving faster than quarterly earnings cycles can absorb them. Boards that treat these releases as technology news rather than material risk events are governing with an outdated information architecture.
Why These Signals Converge Into a Board Governance Emergency
The RSP revision, the Klein interview, the distillation disclosure, the model releases, the Pentagon confrontation, the researcher departure, and the market repricing are not separate stories. They are different facets of the same reality: AI capability is advancing faster than the governance infrastructure designed to contain it, and the company most committed to building that infrastructure has just acknowledged it cannot do so alone.
This is not a technology problem. It is a governance vacuum, and boards sit at the intersection of every pressure it creates. Your company deploys AI systems whose capabilities are advancing on timelines measured in days, not quarters. Your regulatory environment is fragmented, with the EU AI Act enforcement deadline of August 2026 approaching while U.S. federal policy oscillates between absent and coercive. Your AI vendors are subject to state-level technology theft and, simultaneously, to government demands that safety guardrails be removed for military applications. Your codebase may contain AI-generated vulnerabilities that traditional security tools cannot detect. Your workforce is being restructured in real time. And the safety frameworks you may have relied on are being rewritten because even their authors cannot guarantee they will hold.
What Your Board Should Do Now
For the Full Board: This is a fiduciary moment. AI governance can no longer be delegated to a quarterly technology update. The board should receive a comprehensive briefing on the organization's AI exposure, including agentic AI deployment, vendor security posture, workforce transformation trajectory, and regulatory compliance readiness, before the next regularly scheduled meeting. Consider establishing inter-meeting briefing protocols that match the velocity of AI development.
For the Audit Committee: Request a report on AI-generated code and AI-authored outputs as a percentage of critical business systems. Understand what verification processes exist, where human oversight gaps are growing, and whether your audit function can detect AI system behaviors that differ between testing and production environments. Given that independent research shows AI-generated code is significantly more likely to contain security vulnerabilities than human-written code, and that AI-powered scanning is now discovering bugs that persisted for decades under traditional review, the committee should also assess whether the organization's application security posture accounts for the volume and velocity of AI-authored code entering production.
For the Risk Committee: Add five new risk categories to your framework. First, "agentic autonomy risk," covering AI systems that take independent actions in your environment. Second, "recursive capability risk," covering the possibility that AI tools used in your operations are improving faster than your governance can adapt. Third, "state-level technology exfiltration risk," covering whether your AI vendor's models and your proprietary data are targets for nation-state actors. Fourth, "AI-on-AI security risk," covering the emerging reality where AI systems both generate and discover vulnerabilities in code, creating a dynamic where your security posture depends on whether your defensive AI capabilities keep pace with the attack surface your generative AI tools are creating. Fifth, "government compulsion risk," covering the possibility that federal authorities may override your AI vendor's safety guardrails for national security purposes, as the Pentagon's Defense Production Act threat against Anthropic now demonstrates is not hypothetical.
For the Compensation Committee: Commission a workforce impact analysis that maps AI automation exposure by role level, function, and business unit. Ask management whether reduced junior hiring is a deliberate strategy or an unexamined side effect, and what the five-year implications are for talent pipeline, institutional knowledge, and organizational resilience.
For the Nominating/Governance Committee: Evaluate whether the board possesses sufficient expertise to oversee AI risks at the level of sophistication the current environment demands. The issues now on the table, including autonomous research acceleration, emergent system behaviors, geopolitical technology competition, and recursive capability improvement, require directors who can engage substantively with both the technology and its governance implications.
Every board now faces the same decision Anthropic itself confronted this month: act decisively with imperfect information, or wait for certainty that will never come. Anthropic chose to move. The boards that govern the companies deploying its technology cannot afford to do less.