Anthropic just released a redesigned Claude Code desktop app: a more polished integrated terminal with a side chat that runs parallel to an active agent without disrupting it, drag-and-drop panes, a rebuilt diff viewer for large changesets, and three view modes so developers can control how much of the agent's activity they see. The framing from Anthropic was direct: "many things in flight, and you in the orchestrator seat."
Welcome to the "AI superapp" - where the coding terminal (Claude Code), the chat interface (Claude), and the non-technical agentic interface (Claude CoWork) are all integrated into one seamless interface.
Anthropic also launched Routines, which lets Claude Code automations run on a schedule, via API call, or in response to a GitHub event, all on cloud infrastructure with no laptop required. Nightly bug triage. Automated PR review. Deploy verification. Alert triage that correlates a Datadog page with recent deployments and drafts a fix before on-call opens the ticket. The coding agent is now a background process.
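Routines are configured inside the product, so there is no public config format to reproduce here, but the shape of work a nightly triage job does is easy to sketch. The snippet below is a minimal illustration using the Anthropic Python SDK; the issue source, the model name, and the triage instructions are placeholders, not the Routines API.

```python
# Sketch only: Routines is configured inside Claude Code, not via this code.
# This just illustrates what a nightly bug-triage job does under the hood.
# The issue source, model name, and instructions are hypothetical placeholders.
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def fetch_open_issues() -> list[dict]:
    """Placeholder: pull open issues from whatever tracker the team uses."""
    return [
        {"id": "BUG-101", "title": "Crash on empty config file", "body": "..."},
        {"id": "BUG-102", "title": "Slow cold start on ARM builds", "body": "..."},
    ]


def nightly_triage() -> str:
    issues = fetch_open_issues()
    prompt = (
        "Rank these open issues by likely user impact and effort to fix, "
        "and suggest which one to tackle first:\n\n"
        + json.dumps(issues, indent=2)
    )
    response = client.messages.create(
        model="claude-sonnet-4-5",      # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text     # triage summary to post or file


if __name__ == "__main__":
    print(nightly_triage())
```

The point of Routines is that this loop runs on Anthropic's infrastructure on a schedule or in response to an event, rather than as a script someone remembers to run.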
It's safe to say that Anthropic has been setting the pace for AI advances. For the past six months, Anthropic has been building toward a specific architecture: a unified desktop environment where chat, autonomous task execution for non-technical users, and coding all live in one place. Claude.ai for conversation. Cowork for professionals who want AI to act on their behalf without writing code. Claude Code for developers. The latest update tightens that system considerably. The app switcher has moved from a modal at the top of the screen to a persistent sidebar with three icons, keeping all three surfaces accessible from one interface at all times. Sessions are now grouped by project, filterable by status and environment, so developers managing multiple repositories and parallel agents can see everything in flight without losing context. The side chat feature, triggered by Command + semicolon, lets you ask a question while an agent is mid-task without feeding extra context back into the main thread and redirecting the work already underway. That is a specific, considered solution to a specific workflow problem that anyone who has used an agentic coding tool for more than an hour has encountered.
It was only a matter of time before the competition adopted some of their tactics.
It's this superapp architecture that triggered OpenAI's "code red" last month. OpenAI President Greg Brockman is personally overseeing a product consolidation into a single desktop superapp, combining ChatGPT, Codex, and its Atlas browser, after leadership concluded the company was losing enterprise and developer customers to Anthropic's integrated approach. The internal memo was blunt: too many apps, too much fragmentation, not enough focus.
So it should be no surprise that Alphabet Inc.'s Google (NASDAQ: GOOGL) launched Google Gemini for Mac and shipped Skills for Chrome (the Mac app is available at https://gemini.google/mac/).
Google's answer is structurally different, and worth understanding carefully on its own terms.
Gemini for Mac is not a standalone AI application in the way Claude or ChatGPT are. It positions itself as a system-level assistant: Option + Space from any screen, and Gemini is available in the context of whatever you're working on. The key capability is window sharing. Instead of describing what you're looking at, you share the window and Gemini reads it directly. A document you're editing, a dataset you're analyzing, a codebase you're reviewing: the context transfer that normally requires copy-pasting or re-explaining is eliminated. The app also brings Google's full creative suite to the desktop, including image generation via Imagen and video generation via Veo, accessible without opening a browser tab.
Skills for Chrome goes further in a different direction. The feature allows users to save reusable AI prompts directly from their Gemini chat history in Chrome and then invoke them on any web page with a forward slash or a plus button. The prompt travels with the Google account, not the device or the page. A prompt you built to calculate nutritional macros from recipe sites runs on every recipe site you visit. A shopping comparison workflow runs on every product page. A document summarization prompt fires on any lengthy web content. Google found in early testing that users gravitated to health and wellness, shopping comparisons, and document summarization as the primary use cases. Those are the right categories to establish habits. A workflow you run daily becomes invisible infrastructure.
Skills can also take actions, not just answer questions. With user confirmation, a Skill can send an email, add a calendar event, or interact with other Google services. That confirmation step is present today. The trajectory of where agentic AI is heading suggests it will become lighter over time as trust is established.
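Skills are built and invoked inside Chrome, so there is nothing to code, but the pattern the last two paragraphs describe is simple to sketch generically: a reusable prompt template bound to the current page, with an explicit confirmation gate before any side-effecting action runs. The Python below is a hypothetical illustration of that pattern, not Google's implementation; every name in it is made up.

```python
# Hypothetical sketch of a confirmation-gated "skill": a saved prompt template
# plus an action that only runs after the user explicitly approves it.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Skill:
    name: str
    prompt_template: str                 # reusable prompt, travels with the account
    action: Callable[[str], None]        # side effect, e.g. send email, add event


def call_model(prompt: str) -> str:
    """Placeholder: whatever model endpoint the assistant uses."""
    return f"(model output for: {prompt[:60]}...)"


def run_skill(skill: Skill, page_text: str) -> None:
    prompt = skill.prompt_template.format(page=page_text)
    draft = call_model(prompt)
    print(f"[{skill.name}] proposed action:\n{draft}")
    if input("Run this action? [y/N] ").strip().lower() == "y":  # confirmation gate
        skill.action(draft)
    else:
        print("Cancelled; nothing was sent.")


# Example: a summarize-and-email skill applied to the current page.
summarize = Skill(
    name="summarize-to-email",
    prompt_template="Summarize this page in three bullet points:\n\n{page}",
    action=lambda body: print(f"(would send email with body: {body})"),
)
run_skill(summarize, page_text="Long article text scraped from the current tab...")
```

The interesting design question is how long that confirmation gate survives once users trust a workflow they run daily.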
To lower the activation barrier further, Google is launching a Skills library alongside the feature: pre-built workflows across productivity, shopping, recipes, and budgeting that users can add with one click and customize as needed. The library means Google controls the default starting point for the AI workflows most people use. That is not a small thing. Default behaviors compound.
The distribution advantage underlying all of this is significant and worth stating plainly. Chrome has over 3 billion users. Google Workspace has roughly 3 billion users. Android runs on approximately 3 billion active devices. Gemini is already embedded across all three. Anthropic and OpenAI are building new surfaces that require users to change their behavior and adopt something new. Google is extending surfaces that are already the default working environment for a substantial share of the global knowledge workforce. The Mac app doesn't ask you to change how you work. It adds a keyboard shortcut to the workflow you already have.
That said, the capability gap on the developer side is real and significant, at least as of today.
What Anthropic shipped yesterday for professional developers has no direct equivalent in Google's current product lineup. Routines running nightly bug triage against a Linear backlog, automatically generating draft PRs for the top issue, represent a level of agentic integration into the software development lifecycle that Skills for Chrome does not approach. The ability to point a Routine at a GitHub repository, subscribe it to pull request events, and have Claude automatically flag any changes to a sensitive module with a summary posted to a Slack channel: that is operational infrastructure for engineering teams, not a productivity feature. Library porting, where every PR merged to a Python SDK automatically triggers a parallel PR to the Go SDK, is the kind of automation that saves days per week at scale and requires genuine understanding of code context to execute reliably.
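To make the PR-review example concrete: the logic such a Routine encodes is roughly "on every pull request event, check whether a sensitive path was touched, have the model summarize the change, and post it to Slack." The sketch below shows that flow in plain Python; the webhook plumbing, the sensitive-path list, the Slack webhook URL, and the model name are all hypothetical placeholders, not the Routines API.

```python
# Sketch of the logic a PR-review routine encodes; the sensitive-path list,
# Slack URL, model name, and file lookup are placeholders.
import requests
import anthropic

SENSITIVE_PATHS = ("auth/", "billing/")                      # hypothetical sensitive modules
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/..."   # placeholder webhook URL

client = anthropic.Anthropic()


def changed_files(pr: dict) -> list[str]:
    """Placeholder: in practice, fetched from the GitHub pull-request files API."""
    return pr.get("changed_files_list", [])


def handle_pull_request_event(payload: dict) -> None:
    """Called with a GitHub pull_request webhook payload."""
    pr = payload["pull_request"]
    touched = [f for f in changed_files(pr) if f.startswith(SENSITIVE_PATHS)]
    if not touched:
        return                                               # nothing sensitive, stay quiet

    summary = client.messages.create(
        model="claude-sonnet-4-5",                           # placeholder model name
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": f"Summarize the risk of this PR touching {touched}: "
                       f"{pr['title']} ({pr['html_url']})",
        }],
    ).content[0].text

    requests.post(SLACK_WEBHOOK_URL, json={
        "text": f":warning: Sensitive module touched in {pr['html_url']}\n{summary}"
    })
```

The value Anthropic is selling is that the event subscription, the execution environment, and the agent's repository context all come for free; the team only describes the policy.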
Claude Code's redesigned pane system complements this. The integrated terminal, the in-app file editor for spot edits, the expanded preview pane that handles HTML files and PDFs, the rebuilt diff viewer optimized for large changesets: these are purpose-built for how senior developers work with AI agents across multiple concurrent workstreams. The three view modes, Verbose, Normal, and Summary, give teams control over how much of the agent's decision-making they observe, which matters when you're operating agents autonomously and need to calibrate your oversight without drowning in tool-call logs.
Google Cloud Next opens April 22 in Las Vegas. Google I/O follows May 19. Both events will almost certainly bring announcements about expanded agentic capabilities within Workspace, deeper Gemini integration across Android and desktop, and new developer tools through Google Cloud. Yesterday's launches were the positioning move ahead of those reveals: establish the ambient desktop presence, get Skills into 3 billion Chrome installations, and demonstrate the Mac app before the bigger capability announcements land in front of the largest developer audience of the year.
The competitive question going into those events is whether Google's distribution advantage compounds faster than Anthropic's capability depth widens. Three billion Chrome users is a number that commands attention. So is an AI coding agent that runs autonomously overnight against your entire issue backlog and has a set of draft PRs waiting when your engineering team opens their laptops in the morning.
The race to own the AI desktop is no longer a product category discussion. It is a live competition among three well-resourced companies making simultaneous moves in the same week, targeting the same daily working surface, with meaningfully different approaches to how they get there. Google comes through distribution and ambient integration. Anthropic comes through capability depth and a tightly integrated product architecture. OpenAI is consolidating a fragmented product portfolio and betting that brand and scale can close the gap.
The model quality differences that dominated AI coverage for the past three years have narrowed enough that the surface, the workflow integration, and the switching costs are now the decisive variables. Whoever owns the environment where people work, where their documents are open, their code is running, and their daily tasks are executing, has an advantage that the next benchmark release will not easily displace.
Google moved today. Anthropic moved yesterday. What both companies reveal over the next five weeks in Las Vegas and online will define the shape of this competition for the rest of the year.