Content

OpenClaw can turn your AI agent into a password-sharing nightmare
How much do you trust your team? Would you give them your banking password? If you’re not careful with agentic AI, that’s basically what you’re doing — except the “team member” is a piece of software that can act faster than you can think, and make mistakes at machine speed.
I love the promise of agents. I’m building Notis because I genuinely believe we’re heading toward a world where software doesn’t just tell you things — it does things for you. But the moment an agent can do real work, security stops being a footnote and becomes the product.
The uncomfortable truth about agentic AI
Classic software is deterministic: you click a button, it runs the same code path every time. Agents are different. They reason over messy inputs, they interpret intent, and they decide what to do next. That’s the magic. That’s also the risk.
In practice, many agent setups end up with a single, dangerous pattern: one place where you “just give it access” so it can be helpful. A token here. An API key there. A plugin that can run code. A browser that can click buttons. A filesystem that can read everything. It feels incremental — until you look up and realize you’ve created a super-user with a memory problem.

Why OpenClaw is powerful — and why that power cuts both ways
OpenClaw is exciting because it turns an LLM into something closer to a real operator. It can connect to tools, pull context, and take actions. That’s the dream: your assistant is not just a chat window, it’s an executor.
But you can’t have execution without capability, and you can’t have capability without credentials. The moment you wire an agent to your email, your calendar, your docs, your CRM, your payments, your servers — you’re moving from “AI writes text” to “AI holds the keys.”
If you configure things perfectly, isolate the runtime, minimize permissions, and audit every action, you can run something like this safely. The problem is what happens in practice. Most people don’t treat a weekend automation project like production infrastructure. And agents don’t politely stop at the edge of what’s wise — they use whatever they can to get the job done.
The password-sharing analogy isn’t a metaphor — it’s a threat model
When you hand an agent broad credentials, you’re implicitly accepting at least three kinds of risk:
First: accidental leakage. Agents work with text. Text gets logged, copied, pasted, summarized, forwarded, stored. If sensitive material ever enters the agent’s working context, it’s surprisingly easy for it to reappear in places you didn’t intend — especially when you add plugins, tool calls, and third-party integrations.
Second: malicious inputs. The internet is a hostile place. Email is hostile. Public docs are hostile. Even internal Slack threads can become hostile if someone posts something that includes instructions meant to hijack an agent’s behavior. If your agent ingests untrusted content and has the authority to act, you’ve built the perfect bridge between “someone can send you text” and “someone can trigger actions inside your tools.”
Third: ecosystem risk. The moment you install third-party skills/extensions, you are effectively running someone else’s code in a context that has your credentials. Even if you are careful, you’re now depending on the hygiene of the entire chain: maintainers, repositories, updates, and the incentives of strangers.

What we learned building Notis for a year (so you don’t have to learn it the hard way)
One of the underrated benefits of working with a team that’s been living inside agentic AI for a while is that we’ve already hit all the sharp edges. Not in theory — in production. With real users. With real data. With real consequences.
A secure agent isn’t a single feature. It’s a pile of boring decisions that add up:
How permissions are requested. How tokens are stored. How access is scoped. How data is separated between users. How actions are logged. How you prevent one user’s context from bleeding into another. How you make the safe path the default path — because defaults are what most people actually run.
This is the part that’s easy to underestimate when you’re excited about “getting an agent running.” Security isn’t what you add when the demo works. Security is what decides whether you should ship the demo at all.
If you still want to run OpenClaw, do this (seriously)
I’m not here to fearmonger. I’m here to make the tradeoffs explicit. If you still want to run OpenClaw, the safer path looks a lot more like ops than like a hobby project.
Start with isolation. Run it in a dedicated environment (a separate machine, a VM, or a containerized setup) that does not have access to your personal files by default. Treat it like you’re running untrusted code that can make network calls.
Then enforce least privilege. Don’t give the agent a master key when a valet key would do. Use scoped OAuth where possible. Avoid long-lived API keys. Create separate service accounts with narrow permissions. Segment accounts: the agent that drafts emails should not be the same identity that can move money or access production systems.
Make access revocable and short-lived. The more “forever” a credential is, the more it becomes a liability. Prefer tokens you can rotate quickly and revoke instantly.
Finally, audit like you mean it. Keep logs of what the agent read and what it did. Review actions. Set up alerts for anything that looks abnormal. If you can’t tell what your agent did yesterday, you won’t know what it did during the five minutes it went off the rails.

The standard I use: would I give this to a teammate?
I keep coming back to a simple test: if I wouldn’t hand a teammate my banking password, my root credentials, or my production keys, why would I hand them to an agent?
Agents are going to be everywhere. The winners won’t just be the ones with the best reasoning models. They’ll be the ones who make agents safe to run in the real world — where people are busy, setups are messy, and mistakes happen.
That’s what we’re building at Notis: an assistant that actually gets work done, without accidentally turning your company’s credentials into a shared secret. Because no matter how good your team is and how much you trust them, there are still things you shouldn’t share.

