Why Notis Can Feel “Buggy” and “Too Expensive” (And What’s Actually Going On)
It took me longer than I’d like to admit to hear churn feedback without immediately translating it into “we’re failing.” When someone cancels and says “too expensive” or “buggy,” the instinct is to go straight to pricing tweaks and bug hunts.
But after enough conversations, patterns show up. Those two reasons are real, but they’re often proxies. Most of the time, they’re symptoms of a framing problem and an expectations problem.
The two churn reasons that keep showing up
The short version is this: people cancel Notis for two "reasons," and both are frequently misunderstood.
One is price. Users compare the cost to subsidized SaaS subscriptions instead of comparing it to ROI or to what they’d pay for an executive assistant.
The other is "buggy." Users call it buggy when they ask for unsupported capabilities, run with the wrong settings, miss a permission, or give unclear instructions. When they then see refusals or hallucinations, they read that as instability.
When “buggy” really means “expectations don’t match reality”
AI products have a brutal UX problem: the interface is language, and language makes people assume the system understands anything they can describe.
So when Notis doesn’t deliver exactly what someone imagined, the easiest label is “buggy.” Sometimes it is a product issue. Often, it’s a mismatch between what the user expects and what the system is actually set up to do.

“Buggy” is often what people say when boundaries were never made explicit.
The hidden causes: capability, permissions, settings, clarity
There’s a class of failure that looks like instability from the outside but is really just constraints doing their job.
If the assistant is asked to do something it cannot do, it will either refuse or try to guess. If it lacks the right permissions, it can’t read or write where you expect. If the settings are wrong, it might behave in a way that feels inconsistent. And if the instruction is fuzzy, the output will be fuzzy too.
None of that is a moral failing by the user. It’s simply what happens when a system that can speak fluently still has hard edges.
The reframe that changes everything
I’ve started thinking of Notis less as “magic” and more as an interface to systems and constraints.
When you treat it like a teammate, delegation becomes a skill. When you treat it like a genie, every edge case becomes a disappointment.
When “too expensive” really means “wrong comparison set”
Price complaints rarely live in a vacuum. They live inside a mental comparison.
If the comparison is “another SaaS subscription,” then any meaningful price can feel high. If the comparison is “what this replaces,” the same number can feel obvious.

The moment the frame changes from subscriptions to outcomes, the conversation changes too.
ROI framing beats subscription framing
The cleanest frame is value created.
If a tool reliably creates meaningful output, reduces follow-up debt, and keeps your system organized without you doing the manual work, then the right question isn’t “is this cheaper than other apps?” It’s “is this worth it for what it returns?”
The simple version I use: paying 150 is easy to justify if the tool reliably creates 1,500 of value. Not because I want everyone to talk in spreadsheets, but because it stops the conversation from being emotional.
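That frame is just a ratio, but writing it down makes the bar concrete. Here is a minimal sketch of the idea; the 10x threshold and the numbers are illustrative assumptions lifted from the example above, not Notis pricing.

```python
# Hypothetical ROI framing: a price is "worth it" when the value it
# reliably returns clears some multiple of the cost. All numbers here
# are illustrative, not actual Notis pricing.

def roi_multiple(monthly_cost: float, monthly_value: float) -> float:
    """How many times over the tool pays for itself."""
    return monthly_value / monthly_cost

def worth_it(monthly_cost: float, monthly_value: float,
             threshold: float = 10.0) -> bool:
    """The 'pay 150 for 1,500 of value' frame: require a 10x return."""
    return roi_multiple(monthly_cost, monthly_value) >= threshold

print(worth_it(150, 1_500))  # True: clears the 10x bar
print(worth_it(150, 600))    # False: positive ROI, but below the bar
```

The point of the threshold parameter is exactly the reframe in the text: the question stops being "is this cheaper than other apps?" and becomes "does the return clear my bar?"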
The assistant frame is more honest than the SaaS frame
The other comparison that matters is to an executive assistant, or to contractor hours.
Notis isn’t a passive dashboard. It’s designed to take messy input and turn it into structured work. That’s labor. When you anchor it to labor, people tend to evaluate it in a way that matches what they actually want from it.
What I’m changing about how we set expectations
The worst version of an AI product is one that markets itself like it has no limits.
I’d rather win slower and keep trust than win fast and create churn. So the move is not to undersell. It’s to be explicit about boundaries, and to make “success modes” obvious.
That means treating onboarding like teaching delegation.
It means making it clear that “unsupported” isn’t a bug, and that a refusal is often a guardrail, not a crash.
It means reminding people that clarity isn’t just good communication. In AI, clarity is functionality.

The product isn’t just features. It’s the mental model people carry into the first week.
The point I wish I’d internalized earlier
If someone says “buggy,” don’t only hunt defects. Ask what they expected, what they tried, and what they believed the product could do.
If someone says “too expensive,” don’t only defend the price. Ask what they compared it to, what outcome they were hoping for, and what “worth it” would have looked like.
Most churn feedback is valuable. The mistake is taking the literal words at face value.
If you’re evaluating Notis, here’s how to think about it
If you want an AI tool that feels like a cheaper version of a hundred SaaS subscriptions, you’ll probably be disappointed.
If you want an assistant you can delegate to, where the goal is turning real-world input into structured work and follow-through, you’ll evaluate it on the right axis.
And if it ever feels “buggy,” the first thing I’d check is not your patience. It’s the setup, the permissions, the instruction clarity, and whether the task is in-bounds.
That’s the difference between “AI is flaky” and “AI is a system you can actually trust.”
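The check order in the previous paragraph can be sketched as a simple triage routine. This is a hypothetical illustration of the debugging order, not a Notis API; the step names and questions are my own labels for the four checks above.

```python
# A hypothetical triage order for "it feels buggy": rule out setup
# problems before concluding there is a real defect. Step names are
# illustrative, not part of any Notis API.

TRIAGE_STEPS = [
    ("capability",  "Is the task something the assistant supports at all?"),
    ("permissions", "Can it read and write where the task needs it to?"),
    ("settings",    "Are the relevant settings configured as intended?"),
    ("clarity",     "Is the instruction specific enough to act on?"),
]

def triage(answers: dict) -> str:
    """Return the first failing check, or conclude it may be a real bug."""
    for key, question in TRIAGE_STEPS:
        if not answers.get(key, False):
            return f"Check {key}: {question}"
    return "Setup looks fine; now treat it as a possible product bug."
```

Running `triage({"capability": True, "permissions": False, "settings": True, "clarity": True})` points at permissions first, which is the whole idea: "buggy" only becomes the working hypothesis after the boundaries have been ruled out.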