Trust is the bottleneck: how I’m building Notis to earn it (not demand it)
Trust is the part nobody wants to talk about because it doesn’t fit neatly in a benchmark chart. You can ship a faster model, connect a thousand integrations, and still watch users hesitate at the exact same moment: the second you ask for access to their real life.
Trust is the bottleneck (and it’s not a technical one)
When people evaluate an AI assistant, they rarely start by asking whether it can do something. They start by asking whether it should. The moment you touch email, calendar history, client conversations, or anything that looks like “the truth,” the product stops being a tool and becomes a relationship.
I’ve seen this pattern again and again while building Notis: users can be excited about the idea of an assistant, yet completely frozen when it comes to giving it the keys. That hesitation isn’t irrational. It’s a natural response to delegating judgment to something you don’t fully understand.

Trust isn’t a single feature. It’s a stack. It’s what users infer from your product decisions, your defaults, your tone, your transparency, and—whether we like it or not—the humans behind it.
Why SOC 2 isn’t enough
To be clear, I’m in favor of security certifications. They matter. They reduce risk. They set a baseline for how data should be handled.
But here’s the uncomfortable truth: compliance is not the same thing as trust.
Most users don’t wake up thinking, “I sure hope this company’s controls align with an audit framework.” They think, “If this goes wrong, will I regret it for years?” Enterprise users feel this even more sharply because email isn’t just a tool for them—it’s a decades-long archive of decisions, negotiations, and liabilities.
SOC 2 can tell you the company has processes. It doesn’t tell you the product will behave in a way that feels safe. It doesn’t tell you the assistant won’t surprise you. And with AI, surprise is the enemy.
Founder-led onboarding is a trust primitive
Early on, I personally onboarded the first couple hundred Notis customers. Not because it’s scalable, but because it’s effective. People don’t just want documentation; they want to know there’s a real person accountable for what happens when they plug an AI into their workflow.
That accountability changes everything. It makes privacy explanations feel like a conversation instead of a disclaimer. It lets you surface fear before it becomes churn. It gives the product a human center of gravity.
This is also why branding and “AI personality” are not superficial layers. They’re part of the safety system. If your assistant feels pushy, opaque, or overly confident, users will assume it’s hiding something—even when it isn’t.

A lot of products confuse intensity with clarity. Aggressive onboarding can convert in the short term, but it can also create a quiet, lasting feeling of being cornered. When you’re asking for high-trust permissions, the experience needs to feel like consent, not extraction.
Notion-first is privacy-by-default
One decision that keeps paying off for us is being Notion-first. It sounds like a product choice, but it’s also a trust choice.
Notion is where many founders and teams already curate what matters. It’s where they write the “clean version” of their thinking. When Notis starts from Notion, it starts from a space that’s already structured, intentional, and—crucially—less sensitive than a raw inbox.
That changes the first step of the relationship. Users can get value without handing over the most invasive dataset they own. They can test the assistant’s judgment on a surface that feels controllable. And when they do choose to connect more, it’s earned.
This is the direction I believe assistants should take: trust-building through progressive depth, not permission grabs.
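The "progressive depth" idea can be made concrete as a tiered access model: the assistant starts with the least sensitive surface and only unlocks deeper ones when the user opts in. This is an illustrative sketch, not Notis's actual permission system; the tier names and the `allowed` helper are assumptions made up for the example.

```python
from enum import IntEnum

class TrustTier(IntEnum):
    """Illustrative tiers, ordered from least to most sensitive."""
    WORKSPACE = 1  # curated, structured data (e.g. Notion pages)
    CALENDAR = 2   # schedule metadata
    INBOX = 3      # raw email: the most invasive dataset, requested last

def allowed(granted: TrustTier, required: TrustTier) -> bool:
    """An action runs only if the user has opted into that tier.

    Granting a deeper tier implies the shallower ones in this sketch;
    a real system might keep tiers fully independent instead.
    """
    return granted >= required

# A user who has only connected their workspace can't trigger inbox reads:
# allowed(TrustTier.WORKSPACE, TrustTier.INBOX) is False.
```

The point of the ordering is that the first request the product makes is also the cheapest one for the user to say yes to.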
Multi-agent architecture: intelligence is useless if it arrives late
Notis was multi-agent from the beginning. I cared more about “does it think well?” than “does it answer instantly?” In practice, that meant responses could take minutes. It worked, but it didn’t feel like an assistant. It felt like submitting a ticket.
Then the market shifted. Users began expecting speed as part of competence. And honestly, they’re right. If an assistant can’t keep up with the tempo of decisions, it can’t sit next to you while you work.
So we rebuilt around a simple principle: parallelize the thinking. Instead of agents working sequentially, they now work concurrently, and plans adjust dynamically as results come in. We also split out the user-facing layer, so the assistant keeps a consistent voice and a stable interaction style regardless of what is happening underneath.
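The latency difference between the two schemes is easy to see in miniature. This is a toy sketch with asyncio, not Notis's architecture: `run_agent` stands in for a real agent's work (an LLM call, tool use), and the agent names are invented for the example. Sequential execution costs the sum of all steps; concurrent execution costs roughly the slowest step.

```python
import asyncio

async def run_agent(name: str, task: str) -> str:
    """Stand-in for one agent's work (LLM call, tool use, retrieval)."""
    await asyncio.sleep(0.1)  # simulate per-agent latency
    return f"{name} finished: {task}"

async def sequential(tasks: list[tuple[str, str]]) -> list[str]:
    # One agent at a time: total latency is the sum of every step.
    return [await run_agent(name, task) for name, task in tasks]

async def parallel(tasks: list[tuple[str, str]]) -> list[str]:
    # All agents at once: total latency is roughly the slowest step.
    # gather() preserves input order, so downstream plan steps can
    # still consume results deterministically.
    return await asyncio.gather(*(run_agent(n, t) for n, t in tasks))
```

With three 100 ms agents, the sequential path takes about 300 ms while the parallel path stays near 100 ms, and the gap widens with every agent you add.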

The result isn’t just faster outputs. It’s a different feeling. The assistant becomes present. And presence is another ingredient of trust: if it’s always late, you stop relying on it. If it’s reliably on time, you start delegating.
Where this is going: assistant to chief of staff
The path I see is pretty clear: the assistant role is just the starting point.
An AI intern helps with tasks. An assistant helps with follow-through. An executive assistant anticipates. A chief of staff coordinates across people, priorities, and systems.
That last step is the interesting one. It implies the assistant isn’t only reacting to you; it’s orchestrating outcomes. It’s negotiating trade-offs, making sure decisions land, and keeping the team aligned. It may even mean assistants communicating with other assistants, because coordination is a network problem as much as it is an intelligence problem.
I’m not trying to fight Apple, Google, or general-purpose chat. The power is in the long tail: the weird, specific, high-leverage workflows founders discover when the assistant is flexible enough to meet them where they are.
If we get trust right—through defaults, transparency, tone, and accountable humans behind the system—then capability finally has room to matter. And that’s when the assistant stops being a demo and starts being infrastructure.


