So, three weeks ago, I was on a call with the Head of IT at a mid-size financial services firm. Fifteen minutes in, she stopped me mid-sentence.
“Jay, can I see your security architecture?”
She didn’t ask, “Do you have SOC 2?” She didn’t ask, “Are you GDPR compliant?” She wanted to see the architecture. The actual policies. The scanning pipeline. How data flows and who can touch it.
I pasted a link into the chat: execreps.ai/security.
The silence stretched for thirty seconds.
Then: “Okay. This is more than I expected.”
That page is why I’m writing this post. Building security as a first-class product feature, not a compliance afterthought, has been one of the highest-ROI investments I’ve made. And I built it while I was a team of one.
Don Norman would recognize what happened on that call. The buyer was facing a Gulf of Evaluation. She needed to answer, “Can I trust this vendor with my company’s data?” Most startups force buyers to dig through questionnaires and schedule follow-up calls to close that gap. A public security page closes it in thirty seconds.
The First RLS Policy (October 2025)
Back in October 2025, I shipped v0.5 of ExecReps. The product was barely functional. People could record themselves practicing a presentation, get AI feedback, track their scores. Maybe two dozen active users.
That release included Row Level Security across six database tables.
If you’re not a backend engineer, Row Level Security means the database itself enforces who can see what. It isn’t application logic saying ‘don’t show User A’s recordings to User B.’ It’s the database saying ‘this row does not exist for you.’ Even if someone bypasses the API, even if there’s a bug in the code, the database won’t hand over data that doesn’t belong to you.
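Here’s a minimal sketch of what that looks like in Postgres on Supabase. The table and column names are hypothetical, not ExecReps’ actual schema; auth.uid() is Supabase’s helper that returns the authenticated user’s ID from the request’s JWT.

```sql
-- Hypothetical table; names are illustrative, not the real schema.
alter table recordings enable row level security;

-- Each user can select only their own rows. If user_id doesn't match
-- the authenticated user, the row simply doesn't exist for them.
create policy "users_read_own_recordings"
  on recordings
  for select
  using (user_id = auth.uid());
```

With no policy granting broader access, a plain select on that table returns only the caller’s rows, no matter what the application layer does.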
Overkill for two dozen users? Absolutely.
The thing I understood, after fifteen years of product work, was this: security debt compounds faster than any other form of technical debt. Kahneman and Tversky’s research on loss aversion explains why. Losses are felt roughly twice as intensely as equivalent gains. You can refactor a messy UI. You can optimize a slow query. You cannot un-leak data. A breach isn’t a setback you recover from. It’s a loss your users feel viscerally. That asymmetry makes early security investment disproportionately valuable.
Six tables. Every query filtered by the authenticated user’s ID. A day and a half of work that’s paid for itself a hundred times over.
The Recursion Bug That Almost Ruined Everything
You might expect this part to be smooth. It wasn’t.
When I started building team functionality (managers seeing team data, admins needing org-wide visibility), the original RLS policies became a problem. The naive approach was policies that checked team membership by joining against the teams table. That table had its own RLS policies, which tried to check membership, which queried the teams table again.
Infinite recursion. In the database layer.
Supabase doesn’t give you a helpful error for this. The query just… dies. I spent an evening staring at Postgres logs, convinced I’d broken my approach.
The fix was security definer functions that bypass RLS for specific trusted operations, restructured team membership verification, and careful deliberation about which policies reference which tables. Sounds simple in a blog post. It had me questioning my life choices at midnight on a Thursday.
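The pattern, roughly, looks like this. A security definer function runs with its owner’s privileges, so the membership lookup inside it is not re-filtered by RLS, which is what breaks the cycle. All names here are illustrative, a sketch of the technique rather than the actual ExecReps policies.

```sql
-- Illustrative sketch: a membership check that doesn't re-trigger RLS.
create or replace function is_team_member(check_team_id uuid)
returns boolean
language sql
security definer
set search_path = public
stable
as $$
  select exists (
    select 1
    from team_members
    where team_id = check_team_id
      and user_id = auth.uid()
  );
$$;

-- The policy calls the function instead of joining the team tables
-- directly, so evaluating it never recurses into their policies.
create policy "team_reads_team_recordings"
  on recordings
  for select
  using (is_team_member(team_id));
```

Pinning search_path matters here: a security definer function that resolves table names dynamically is itself an attack surface.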
The critical part, though, was this: I found the bug during development, not in production. Nielsen’s Error Prevention heuristic (H5) says the best error message is no error message. The best data breach is one that’s structurally impossible. Because the RLS foundation already existed, I was forced to solve multi-tenant complexity before real enterprise data was at stake. If I’d waited, if I thought ‘we’ll add security later when we have enterprise customers,’ I’d have been retrofitting access control while actual company data flowed through the system.
That is exactly how breaches happen.
Building the Wall Higher (v2.4)
v2.4 is ‘Enterprise Security Hardening’. It’s the release that transformed our security story from ‘solid foundation’ to ‘genuinely enterprise-grade.’
Product teams who use the Kano Model know the categories: must-be, performance, and delighter. For enterprise buyers, security is firmly must-be. Absence kills a deal instantly, but presence alone doesn’t win one. What wins is when the implementation exceeds expectations. When a must-be quality is delivered at a delighter level, buyers notice.
On every pull request, we now run these checks:
Semgrep. This is Static Application Security Testing, scanning for vulnerability patterns like SQL injection, XSS, insecure auth patterns, and hardcoded secrets. Every PR, every time.
Gitleaks. This scans for accidentally committed secrets: API keys, tokens, passwords. Because sooner or later someone (or some AI, I’ll come back to this) pastes a token in a config file.
Dependency auditing. Every npm package is checked against known vulnerability databases. A critical CVE? The build fails.
Content-Security-Policy headers. These tell the browser exactly which domains can execute scripts, load images, or open connections. Most startups don’t implement this until past Series B.
Aikido DAST integration. This is Dynamic Application Security Testing, monitoring the running application. It probes endpoints, tests real-world attack vectors, and reports continuously.
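Wired together in GitHub Actions, the pull-request gate can look roughly like this. The action names, version pins, and flags below are assumptions for illustration, not our actual workflow file.

```yaml
# Illustrative PR security gate; action pins and flags are assumptions.
name: security
on: [pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # secret scanning wants full history, not just the diff
      - name: Semgrep (SAST)
        uses: semgrep/semgrep-action@v1
      - name: Gitleaks (secret scanning)
        uses: gitleaks/gitleaks-action@v2
      - name: Dependency audit
        run: npm audit --audit-level=critical   # any critical CVE fails the build
```

The point is less the specific tools than the shape: every check runs on every PR, and any failure blocks the merge.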
The /security page itself ties it together. A public, human-readable overview of everything above. Not a compliance PDF. A web page that an IT buyer can read, share with their team, and use to make a procurement decision. Norman’s Gulf of Evaluation and Gulf of Execution, closed simultaneously. Buyers can evaluate without scheduling a call, and act by forwarding the URL to their CISO, without needing us in the room.
The AI Engineer Problem
Something I haven’t seen discussed much is this: what happens when one of your engineers is an AI?
I use Devin, an AI software engineer, for significant portions of development. Devin is remarkably capable. Clean code, follows patterns well, handles complex refactoring. Devin doesn’t have security intuition, though. It doesn’t think ‘this input should be sanitized.’ It won’t instinctively avoid hardcoding a token in a test file.
BJ Fogg’s B=MAP model (behavior happens when Motivation, Ability, and a Prompt converge) describes the AI security problem perfectly. Devin has high ability to write code but zero motivation around security and no concept of consequence. There’s no built-in prompt for ‘check for vulnerabilities before you commit.’ The SAST pipeline becomes that prompt. Devin opens a PR, the scanners run, a vulnerability is found. The build fails. Period.
It’s not about trusting or not trusting the AI. Human engineers ship vulnerable code too. Security automation doesn’t care about intent. It enforces rules. Like guardrails on a mountain road. They aren’t there because you’re a bad driver. They’re there because the cliff doesn’t care how good you are.
Not sure where the industry lands on this in a year or two. I had a podcast guest recently who was convinced AI-written code would be more secure than human-written code within 18 months. Maybe. The scanning pipeline doesn’t care either way.
Feature Flags as a Security Mechanism
One surprise from v2.4: feature flags became a security tool.
We built an admin UI for feature flags, turning features on or off for specific users, teams, or globally. The obvious use is gradual rollouts.
Feature flags also give you a kill switch. Vulnerability in a feature? Disable it in seconds. No emergency hotfix, no midnight deployment. Toggle it off.
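Behind the admin UI, the mechanism can be as simple as a flags table: disabling a feature is one row update, not a deployment. The schema and flag name here are hypothetical.

```sql
-- Hypothetical schema behind the admin UI.
create table feature_flags (
  flag_key text primary key,
  enabled  boolean not null default true,
  scope    text not null default 'global'  -- 'global', 'team', or 'user'
);

-- The kill switch: one write, effective on the next request.
update feature_flags set enabled = false where flag_key = 'team_sharing';
```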
Norman would call this a strong affordance. The toggle looks like instant control because it is instant control. For a solo founder without a 24/7 on-call rotation, mean time to mitigation drops from ‘however long it takes to write, test, and deploy a fix’ to ‘however long it takes to click a button.’
Enterprise buyers get this immediately. “Any feature can be instantly disabled without a deployment” translates to “if something goes wrong, it gets fixed immediately.” That’s not just a product feature. That’s a purchasing decision.
Security as Moat
I think many startups miss the mark on this: they treat enterprise security as compliance theater.
The scramble for SOC 2 before the first enterprise deal. The ‘we take security seriously’ banner on a website with Access-Control-Allow-Origin: * in production. The security questionnaire answered by Googling in real-time.
Cialdini’s research on Authority explains why this fails. Authority isn’t claimed, it’s demonstrated. Telling a buyer ‘we take security seriously’ is a claim. Showing them automated scanning pipelines and row-level data isolation is a demonstration. The buyer’s brain processes those inputs completely differently.
Security is a competitive moat. Not because your competitors can’t build it, but because they won’t. It’s unglamorous. It doesn’t move metrics VCs ask about. Nobody tweets about CSP headers.
Enterprise buyers, the ones writing the big checks, care a lot. The gap between ‘we’ll get to security eventually’ and ‘here’s our security architecture, publicly documented’ is enormous in their eyes.
I’m a solo founder competing against funded teams. I shouldn’t be winning security conversations. But when the review happens, I’m not scrambling. I’m sending a link.
What This Costs
The SAST pipeline runs in GitHub Actions. A few hours to configure, almost no ongoing cost. Aikido has a generous startup tier. CSP headers: half a day. RLS policies: three to four days across iterations. The /security page: an afternoon.
All in? Maybe two weeks of engineering time over five months.
This is loss aversion math in your favor. Enterprise buyers overweight security risk. Two weeks of your work neutralizes months of their perceived risk. Compare that to losing a deal over a question you can’t answer, or spending three months rushing SOC 2 because a prospect demanded it. I wonder if this kind of upfront investment is often undervalued by founders focused solely on new feature velocity.
The Deeper Connection
I think about why I care this much about security for a communication practice tool. ExecReps isn’t handling financial data or medical records. We hold voice recordings of people practicing presentations, pitch rehearsals, feedback conversations.
That’s exactly why it matters.
Someone records themselves practicing a difficult conversation. Asking their boss for a raise, rehearsing for a board presentation, delivering bad news. That’s vulnerable. That’s someone exposing the gap between where they are and where they want to be.
Deci and Ryan’s Self-Determination Theory identifies three needs for intrinsic motivation: autonomy, competence, and relatedness. The prerequisite, though, is psychological safety. You can’t pursue competence if you’re afraid of being exposed. The practice loop that makes ExecReps work (record, get feedback, improve, repeat) only functions when users trust the container.
Executive presence should not be a privilege. The willingness to practice it, though, requires trust. Trust requires infrastructure, not just promises.
A Question for Builders
If you’re building a product right now, especially if you’re thinking ‘we’ll handle security later,’ ask yourself:
What would it mean for your customers if their data leaked tomorrow? Not legally. What would it mean for them?
The answer tells you when to invest. For many of us, it probably should have been yesterday.
The good news: ‘yesterday’ is closer than you think. A few days of intentional work now can be worth months of panic later.
Security isn’t just a phase of your product development. It’s a core feature. It’s probably worth building like one.