Anthropic’s Claude Code and Cowork Can Now Control Your Computer
Anthropic just handed Claude the keys to your desktop. The company announced on March 23, 2026 that its Claude Code and Claude Cowork tools can now directly control a Mac clicking, typing, navigating apps, and completing entire workflows on your behalf, even while you’re nowhere near your laptop.
For months, the AI industry has talked about agentic AI systems that don’t just answer questions but actually get things done. Most of that talk has stayed in the realm of demos and benchmarks. Anthropic’s new computer use feature makes it real, at least on macOS. Starting now, Claude can take over your screen and run through multi-step tasks the same way a colleague sitting at your desk would.
This isn’t a minor UI upgrade. It represents a genuine philosophical shift in how Claude works as an AI assistant moving from a tool that describes what to do to one that does it.
What “Computer Use” Actually Means in Practice
The feature works exactly how it sounds. Once enabled through Claude Desktop’s settings, Claude can move your cursor, click buttons, fill in forms, open apps, browse the web, and run complex software workflows all on autopilot. Think of it less like a chatbot and more like a remote worker who has been given a login to your machine.
Anthropic’s official demo drives the point home. A user running late for a meeting texts Claude: export the pitch deck as a PDF and attach it to the meeting invite. The video shows Claude opening the file, navigating the export dialog, finding the calendar app, locating the event, and attaching the PDF all without the user lifting a finger. That kind of end-to-end task completion is exactly what the agentic AI category has been promising for years.
What makes the system clever is its fallback logic. When Claude is given a task, it first checks whether it has a native connector an integration with apps like Google Calendar, Slack, or Gmail. If a connector exists, it uses that, which is faster and more reliable. But when no connector is available, Claude falls back to controlling your screen directly, navigating through UI elements the same way a human would. It always requests permission before accessing any new application, and users can halt operations at any point.
Cowork vs. Claude Code Two Doors Into the Same Engine
The computer use feature is available through two distinct products, and understanding the difference between them matters if you want to use this effectively.
Claude Code is the command-line coding agent Anthropic launched for developers. It was built to help engineers write code, run tests, submit pull requests, and manage repositories all from the terminal. The computer use upgrade extends its reach beyond the terminal, letting it interact with any application on your Mac.
Claude Cowork is the version designed for everyone else. It lives inside the Claude Desktop app and was introduced in January 2026 after Anthropic noticed something interesting: developers using Claude Code were applying it to far more than coding. They were using it to reorganize their downloads folder, build expense spreadsheets from receipt screenshots, and summarize research papers. So Anthropic built Cowork the same agentic engine, packaged for non-technical knowledge work. No terminal required.
If you’re a developer, how you use AI in your workflow may shift considerably now that Claude Code can operate any app on your system. For everyone else, Cowork brings that same power without the technical overhead. Both tools now share the same computer use capability it’s just a question of which entry point fits your workflow.
Dispatch: The Piece That Makes This Actually Useful
Computer use would be a neat trick if you had to sit in front of your Mac to trigger it. The feature that elevates it into something genuinely valuable is Dispatch, which Anthropic released just a week earlier, around March 17, 2026.
Dispatch creates a single continuous conversation thread that runs across your phone and your desktop simultaneously. You assign a task from your iPhone commuting, at a coffee shop, in a meeting and Claude picks it up on your Mac at home or in the office. When you get back to your desk, the work is done.
Anthropic’s own examples of what this looks like in practice are practical and concrete: compile a morning briefing while you’re on the train, make changes in your IDE and run tests while you’re in a meeting, or have Claude pull analytics data into a weekly report automatically every Friday. The one firm requirement is that your Mac must stay awake and the Claude Desktop app must remain open.
The combination of Dispatch and computer use is, in effect, a rudimentary remote assistant not powered by cloud screenshots but by your actual local machine, using your own apps and your own files. That distinction matters for users who’ve been hesitant about cloud-based AI agents handling sensitive files. With Cowork, conversation history is stored locally on the device, not on Anthropic’s servers.
How This Compares to the Competition
Anthropic isn’t the first to ship computer-use AI, but it may be the most mainstream company to deploy it this broadly. The competitive landscape here is worth mapping clearly.
The most obvious rival is OpenClaw, an AI agent that supports macOS, Windows, and Linux. That cross-platform reach is currently Claude’s biggest gap macOS-only support is a real limitation for anyone on a Windows machine. The Anthropic team says Windows support is coming, but hasn’t given a timeline. Perplexity Computer and Meta’s Manus are also active in the agentic space, though neither has Claude’s combination of user base and desktop app ecosystem.
What Anthropic has going for it is trust infrastructure. The company built Cowork with a permission-first model, prompt injection scanning, and a clear set of blocked categories investment platforms, trading apps, and cryptocurrency apps are off by default. It also explicitly excludes itself from regulated workloads: HIPAA, FedRAMP, and FSI environments are not supported, and the company doesn’t pretend otherwise.
Compare that to OpenAI’s approach of consolidating everything into one superapp which is broad and ambitious, but arguably less transparent about where the guardrails are. Anthropic is betting that its safety-first framing will resonate with enterprise and prosumer users who need to actually trust what the AI is doing on their machines.
The Real Risks Worth Taking Seriously
Anthropic is unusually candid about the fact that this feature isn’t ready to run wild. The company published its own warning: “Computer use is still early compared to Claude’s ability to code or interact with text. Claude can make mistakes, and while we continue to improve our safeguards, threats are constantly evolving.”
There’s also a security dimension that goes beyond Anthropic’s own warnings. When you give an AI agent full control of your machine, you introduce a new attack surface: prompt injection. A malicious actor could embed hidden instructions inside a webpage or document that Claude reads, causing it to take actions you didn’t authorize. Anthropic says it has built automatic scanning to detect prompt injection attempts, but the company acknowledges this is an evolving threat.
The broader implication is that screen-based AI agents require a different mental model than chatbots. With a chatbot, the worst-case outcome of a bad interaction is a wrong answer. With an agent that can delete files, send emails, and navigate your entire app ecosystem, the blast radius of a mistake is fundamentally different. That’s not a reason to avoid the technology, but it is a reason to use it deliberately, with a clear boundary between tasks you’re comfortable delegating and tasks you’re not.
This connects to a larger conversation the industry is having about companies that moved too fast automating workflows with AI agents and ended up regretting the loss of human oversight on sensitive processes. The lesson isn’t to avoid agents it’s to be intentional about what you hand off.
What This Means for the Way We Work
The honest assessment is that computer use in its current form is a capable but uneven tool. Tasks that map cleanly to established apps with predictable UI calendar management, PDF exports, spreadsheet updates work well. Tasks that require judgment, context, or navigating ambiguous interfaces are likely to require supervision. Anthropic is asking users to treat this as an experiment, and that framing is appropriate.
But zoom out and the direction is unmistakable. Anthropic’s own internal data was telling: Claude Code users weren’t just coding with it. They were using it for almost everything. The company built Cowork precisely because they saw users stretching the tool beyond its intended scope. Computer use is the logical endpoint of that behavior an AI that doesn’t need a dedicated API integration to get work done in any app, the same way a competent human contractor doesn’t need custom tooling to work in your environment.
That’s not a near-term reality screen navigation is slow, prone to UI changes, and brittle in ways that native integrations aren’t. But it is the trajectory. And it’s why Gemini’s task automation push and OpenAI’s superapp strategy both converge on the same destination: an AI that doesn’t augment your workflow, but runs it. Anthropic just put a working version of that on your desktop.
Who Should Enable This Right Now and Who Should Wait
The practical question is whether you should turn this on. The answer depends heavily on your use case and your risk tolerance.
If you’re a developer on Claude Max and you’re already using Claude Code daily, enabling computer use is a straightforward upgrade for tasks that currently require manual UI steps. Running tests, managing files, navigating non-API tools all of these become things you can delegate from your phone. The permission-first model and blocked app categories give you enough control to use this without feeling like you’re handing over the keys entirely.
If you’re a knowledge worker on Claude Pro, Cowork with computer use is worth experimenting with for clearly bounded tasks: weekly reporting, file organization, calendar management. Start small, watch what Claude does in real time, and expand from there. Don’t give it access to folders containing sensitive documents until you’ve built confidence in how it handles the simpler stuff.
If you’re on a Windows machine or on a Team or Enterprise plan that involves regulated data — you’re not the target user yet. Windows support is coming, but not announced. And Anthropic is explicit that regulated workloads are out of scope for this research preview. The question of which roles AI is ready to augment versus replace gets more concrete with every feature like this that ships.
What Comes Next for Claude’s Agentic Push
Anthropic’s roadmap signals that computer use is a foundation, not a ceiling. The company already shipped Claude Opus 4.6 on February 5 and Claude Sonnet 4.6 on February 17 both explicitly targeting complex agentic workflows. The timing of those releases alongside the Cowork computer use launch is not coincidental. More capable models plus broader computer access equals a significantly more powerful autonomous assistant.
Windows support is the obvious near-term unlock. Mac users represent a meaningful but non-majority slice of the knowledge worker market, and the feature’s real impact will compound when it reaches Windows x64. Anthropic has confirmed the platform is coming; the question is how long “coming” means in practice.
Beyond platform expansion, the more interesting frontier is multi-agent coordination. Cowork already supports sub-agent architectures, where Claude breaks complex work into parallel workstreams and coordinates multiple agents to complete them. As computer use matures and screen navigation becomes faster and more reliable, those multi-agent pipelines will start to look less like a research preview and more like infrastructure.
The AI agent space has been building toward this for two years. Most of that time was spent proving the concept. Anthropic’s computer use launch imperfect as it is marks the beginning of the deployment phase.
An AI researcher who spends time testing new tools, models, and emerging trends to see what actually works.