
Constraints are the product: what building Notis on WhatsApp taught me about AI, cost, and roadmap


Florian (Flo) Pariset

Founder of Mind the Flo

I’ve learned the hard way that the best AI products aren’t born from perfect roadmaps. They’re born from constraints you can’t ignore, user behavior you can’t fake, and a willingness to ship an “ugly” architecture if it gets you to real signal faster.

Why Notis started on WhatsApp (and why I’d do it again)

When people hear “WhatsApp-based AI assistant that integrates with Notion”, they usually assume it’s a gimmick. It isn’t. WhatsApp gives you three unfair advantages that are weirdly aligned with how humans actually use an assistant.

First, you inherit resilience. Connectivity issues, message delivery, media handling: all of that is solved at the platform level. Second, you inherit psychology. People are already conditioned to accept asynchronous replies in messaging, so the product can take a few seconds (or longer) to think without triggering the “this is broken” reflex. Third, media is native. Images, videos, and documents come in as first-class citizens, which matters if your assistant is supposed to capture reality, not just text.

Of course, constraints come with it.

WhatsApp has a 1,600-character message limit. There are no typing indicators. And if you go through Twilio, you’re stacking more limitations on top. But these constraints ended up shaping Notis into something I like: concise, structured, and biased toward doing work in Notion instead of flooding a chat.
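
To make that concrete, here’s a rough sketch of the kind of chunking you end up writing when every outgoing reply has to fit under that limit. The function name and splitting strategy are illustrative, not the actual Notis code:

```python
# Illustrative only: keep outgoing replies under WhatsApp's 1,600-character
# limit by packing paragraphs into chunks, hard-wrapping only when a single
# paragraph is itself too long. Not the actual Notis code.
WHATSAPP_LIMIT = 1600

def split_reply(text: str, limit: int = WHATSAPP_LIMIT) -> list[str]:
    chunks: list[str] = []
    current = ""
    for paragraph in text.split("\n\n"):
        candidate = f"{current}\n\n{paragraph}" if current else paragraph
        if len(candidate) <= limit:
            current = candidate
            continue
        if current:
            chunks.append(current)
            current = ""
        # A single paragraph longer than the limit gets hard-wrapped.
        while len(paragraph) > limit:
            chunks.append(paragraph[:limit])
            paragraph = paragraph[limit:]
        current = paragraph
    if current:
        chunks.append(current)
    return chunks
```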

The architecture: messy on purpose, because learning is expensive

A lot of founders try to “clean” the architecture too early. I did the opposite. I optimized for iteration speed, observability, and the ability to swap models without rewriting the product every week.

Today’s setup is essentially three layers.

Identification, then “prefrontal cortex”, then an agent

The flow starts with identification: who is speaking, what workspace, what permissions, what context should be pulled.

Then I run what I call a “prefrontal cortex.” It’s the part that decides what to do: should we write to Notion, query a database, ask a clarifying question, summarize, schedule a reminder, or just acknowledge and do nothing? That layer used to run on heavier models, and I recently upgraded it to gpt-5.2 because it’s the best trade-off I’ve found between cost and reliability for this kind of routing.
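
Here’s roughly what that routing step could look like: one cheap, structured model call before any expensive work happens. The model name, action list, and prompt below are placeholders, not the real Notis implementation:

```python
# Sketch of the routing call: a cheap, structured decision before any
# expensive work. Model name, action list, and prompt are placeholders.
import json
from openai import OpenAI

client = OpenAI()
ROUTER_MODEL = "gpt-4o-mini"   # placeholder; use whatever routing model you trust

ACTIONS = ["write_to_notion", "query_database", "ask_clarification",
           "summarize", "schedule_reminder", "acknowledge"]

def route(message: str, context: str) -> dict:
    """Decide what to do with an incoming message before running any agent."""
    response = client.chat.completions.create(
        model=ROUTER_MODEL,
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "You are a router for a personal assistant. "
                f"Reply with JSON: {{\"action\": one of {ACTIONS}, \"reason\": \"...\"}}"
            )},
            {"role": "user", "content": f"Context:\n{context}\n\nMessage:\n{message}"},
        ],
    )
    return json.loads(response.choices[0].message.content)
```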

Finally, sub-agents execute the plan. That’s where the expensive things live: Notion queries, notification logic, database sync, block conversion, and anything that touches the outside world.
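
In code terms, the hand-off can be as boring as a dispatch table from the routed action to a handler; the handler names below are hypothetical stand-ins for the real sub-agents:

```python
# Illustrative dispatch from the routing decision to sub-agents. The handlers
# are hypothetical stand-ins for the expensive operations.
from typing import Callable

def write_to_notion(payload: dict) -> str:
    return "Saved to Notion."          # placeholder sub-agent

def query_database(payload: dict) -> str:
    return "Here's what I found..."    # placeholder sub-agent

def schedule_reminder(payload: dict) -> str:
    return "Reminder set."             # placeholder sub-agent

HANDLERS: dict[str, Callable[[dict], str]] = {
    "write_to_notion": write_to_notion,
    "query_database": query_database,
    "schedule_reminder": schedule_reminder,
}

def execute(decision: dict, payload: dict) -> str:
    handler = HANDLERS.get(decision["action"])
    if handler is None:
        return "Noted."   # acknowledge and do nothing
    return handler(payload)
```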

Why serverless + a Python backend ended up being the right split

The first processing step runs serverless on Pipedream because it makes integrations and event plumbing ridiculously fast. But the “real” operations run on a Python server because the expensive parts need better control: batching, caching, rate limits, and more predictable performance.
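
As a toy example of what “better control” means in practice, assuming the Notion API as the upstream and made-up limits, a dedicated Python process lets you wrap expensive calls in caching and throttling like this:

```python
# Toy example: caching repeated reads and spacing out calls to an upstream API.
# The rate limit and the function below are assumptions, not production code.
import time
from functools import lru_cache

REQUESTS_PER_SECOND = 3   # assumed upstream limit
_last_call = 0.0

def throttled(fn):
    """Space calls out so a burst of sub-agent work doesn't trip rate limits."""
    def wrapper(*args, **kwargs):
        global _last_call
        wait = (1.0 / REQUESTS_PER_SECOND) - (time.monotonic() - _last_call)
        if wait > 0:
            time.sleep(wait)
        _last_call = time.monotonic()
        return fn(*args, **kwargs)
    return wrapper

@lru_cache(maxsize=1024)
@throttled
def fetch_notion_page(page_id: str) -> dict:
    # Hypothetical placeholder for a real Notion API call.
    return {"id": page_id, "blocks": []}
```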

Supabase sits underneath as the backend, and documents get vectorized and stored in OpenAI so the assistant can retrieve context without turning every interaction into a full-database scan. There are also a few Node.js pieces that exist for one reason: converting Notion blocks to and from markdown without losing structure.
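
The retrieval pattern itself is simple: embed the query, then search wherever the vectors live. The sketch below uses a placeholder embedding model and a toy in-memory index standing in for the real storage:

```python
# Sketch of the retrieval step: embed the query, rank stored documents by
# cosine similarity. The in-memory INDEX is a toy stand-in for the real
# vector storage, and the embedding model name is a placeholder.
import math
from openai import OpenAI

client = OpenAI()
INDEX: list[tuple[list[float], dict]] = []   # (embedding, document) pairs

def embed(text: str) -> list[float]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def add_document(doc: dict) -> None:
    INDEX.append((embed(doc["text"]), doc))

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve_context(query: str, top_k: int = 5) -> list[dict]:
    q = embed(query)
    ranked = sorted(INDEX, key=lambda pair: cosine(pair[0], q), reverse=True)
    return [doc for _, doc in ranked[:top_k]]
```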

It’s not elegant. It’s productive.

The unit economics reality check: yes, a single query can cost more than $1

Everyone loves to talk about “AI at scale” until they look at the invoice.

A single user query can cost more than $1. Some flows on heavier reasoning models have hit $5 per query. And it’s not just the chat model. The bill is a stack: vectorization, storage, speech-to-text, text-to-speech, and all the glue that makes it feel seamless.
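
To see why the bill stacks, here’s an illustrative, entirely made-up per-query breakdown; none of these rates are Notis’s actual numbers, but the shape of the sum is the point:

```python
# Entirely made-up per-query cost accounting. The rates are placeholders, not
# real vendor pricing; the point is that the bill is a sum of components.
COST_PER_UNIT = {
    "chat_tokens":  0.00001,    # $ per token of chat/reasoning (placeholder)
    "embed_tokens": 0.0000001,  # $ per token embedded (placeholder)
    "stt_seconds":  0.0001,     # $ per second transcribed (placeholder)
    "tts_chars":    0.00003,    # $ per character synthesized (placeholder)
}

def query_cost(usage: dict[str, float]) -> float:
    """Sum the per-component costs for a single user query."""
    return sum(COST_PER_UNIT[component] * amount for component, amount in usage.items())

# Example: a voice note that gets transcribed, reasoned over, and read back.
print(round(query_cost({"chat_tokens": 60_000, "embed_tokens": 8_000,
                        "stt_seconds": 45, "tts_chars": 1_200}), 2))
```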

That cost pressure forces one design principle: the assistant must do fewer, higher-leverage things. Notis shouldn’t be a toy that chats endlessly. It should be a system that captures, organizes, and executes inside your tools.

Pricing experiments: what I’m learning from the A/B test

I’m currently running a pricing A/B test.

In one version, it’s $20 per month and a $200 lifetime deal.

In another version, it’s split by capability: $10 per month for one-way sync and $20 per month for two-way sync.

Early results surprised me. The higher-priced variant converted better in the first samples: 3.18% versus 1.49%. That doesn’t mean “higher price always wins.” It usually means the framing is clearer, the value feels more serious, or the segmentation creates doubt.
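
For context, this is the kind of quick check worth running on a gap like that. Only the two conversion rates come from the test above; the sample sizes in the example are hypothetical:

```python
# Quick two-proportion z-test to sanity-check an A/B gap. The sample sizes
# below are hypothetical; only the conversion rates come from the text above.
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# e.g. 3.18% of 1,100 visitors vs 1.49% of 1,070 visitors (made-up sample sizes)
print(two_proportion_p_value(conv_a=35, n_a=1100, conv_b=16, n_b=1070))
```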

The point is: pricing is part of the product. If the user can’t instantly map cost to outcome, they won’t buy. And if they buy the wrong tier, you’ll feel it later in retention, support, and margin.

Roadmap: three axes, one product mindset

When I think about what’s next, I don’t think in features. I think in axes. That keeps the product coherent instead of turning it into a random checklist.

Axis 1: platform connections

Notion is the first deep integration. But the future isn’t “Notion only.” It’s multi-home: Notion, Obsidian, and whatever else people use to run their work.

This is where architecture becomes a strategic decision. If you want multi-platform support, you have to build a core that’s not married to one API’s quirks.
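
One common way to do that, sketched below with hypothetical names, is a thin adapter interface per destination so the core never sees a specific API’s quirks:

```python
# Sketch of a destination-agnostic core: every tool implements the same small
# interface, so nothing upstream is married to one API's quirks. All names
# here are hypothetical.
from typing import Protocol

class Workspace(Protocol):
    def create_note(self, title: str, body_markdown: str) -> str: ...
    def search(self, query: str, limit: int = 10) -> list[dict]: ...

class NotionWorkspace:
    def create_note(self, title: str, body_markdown: str) -> str:
        # Convert markdown to Notion blocks and call the Notion API here.
        return "notion-page-id"

    def search(self, query: str, limit: int = 10) -> list[dict]:
        return []

class ObsidianWorkspace:
    def create_note(self, title: str, body_markdown: str) -> str:
        # Write a .md file into the vault here.
        return "vault/inbox/note.md"

    def search(self, query: str, limit: int = 10) -> list[dict]:
        return []

def capture(workspace: Workspace, title: str, body: str) -> str:
    # The core only ever talks to the interface.
    return workspace.create_note(title, body)
```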

Axis 2: communication channels

WhatsApp works. But it’s not the only place people want an assistant. Signal, iMessage-like experiences, maybe even voice-first flows. Each channel brings its own constraints, and those constraints will change what the product becomes.

Axis 3: vertical expansion

Once you can reliably capture and organize work, the next step is acting on the world.

The most obvious vertical is calendar management.

Meeting scheduling. Restaurant booking. Reservation management. Zoom coordination. The goal isn’t to become a “calendar app.” It’s to become the layer that turns intent into outcomes.

Competition, profitability, and why I’m not optimizing for comfort

If you build on top of Notion, you have to accept the shadow of Notion AI. They can ship features that look similar. They can bundle. They can undercut.

My bet is that Notis wins by being closer to the user’s actual workflow. Messaging-first capture, proactive nudges, multi-tool context, and a willingness to integrate deeply rather than live as a sidebar.

I’m also not pretending profitability is the immediate goal. In early stages, the job is to find product-market fit and learn fast enough that the future architecture and business model become obvious. Ironically, poor retention can even “help” when you’re learning, because it keeps costs from exploding before the product is truly ready.

But that phase ends. If Notis becomes something people rely on daily, the architecture will need to evolve, the cost curve will need to bend, and hiring will become a real lever. I’m increasingly convinced that generalist profiles are the fastest way to push through this stage: people who can ship, debug, talk to users, and make the product sharper without hiding behind process.

The real takeaway: constraints are the product

Looking back, the biggest lesson isn’t about models or tools. It’s that constraints force clarity.

WhatsApp’s character limit forces brevity. The cost per query forces leverage. Multi-platform ambition forces modularity. And users force honesty.

If I keep following the constraints instead of fighting them, Notis gets better, and I make fewer decisions based on vibes and more based on reality.

Huseyin Emanet

Flo is the founder of Mind the Flo, an Agentic Studio specializing in messaging and voice agents.

Break Free From Busywork

Delegate your busywork to your AI intern and get back to what matters: building your company.
