afterbuild/ops
§ EX-01/code-audit-specialist
AI-generated code audit specialist

AI-generated code audit specialists — a written code audit of your Lovable, Bolt, v0, or Cursor app before you hire anyone

By Hyder Shah · 11 min read · Last updated 2026-04-15
Quick verdict

An AI-generated code audit before you spend a dollar on rescue. We read the repo, run semgrep and Supabase RLS audit passes, and ship a written report covering Lovable, Bolt, v0, and Cursor findings across security, data model, auth, Stripe, deploy, tests, and architecture — every finding severity-ranked, with a rescue-vs-rewrite verdict in 48 hours.

Why AI-built apps need an audit before you hire

Founders arrive at us with the same question: 'is this salvageable?' The honest answer requires reading the code. Lovable, Bolt, v0, Cursor and Replit Agent all ship apps that look similar on the surface and differ enormously underneath. One Lovable app we audited had clean RLS and 40 indexes; the next had RLS disabled on every table and every query unbounded. The tool doesn't decide the audit outcome — the specific prompts and data model do.

The industry-wide picture explains why the audit is worth it before you hire: our 2026 vibe-coding research summarizes the AI-code vulnerability benchmark, the widely-reported Lovable/Supabase RLS disclosure, and the NIST CVE-2025-53773 (CVSS 9.6) GitHub Copilot issue. An audit is the cheapest step in the rescue funnel — it stops you paying to fix the wrong problem first, and it surfaces the severities that actually block your launch.

Source: Veracode 2025 AI Code Security Report

§ MATRIX/ai-builder / failure-mode

Which AI builder shipped your broken app?

The failure mode usually depends on the tool that shipped the code. Find your builder below, then read the matching problem page.

| AI builder | What breaks | Go to |
| --- | --- | --- |
| Lovable | RLS disabled, secrets committed, Supabase exposed publicly | Lovable audit |
| Bolt.new | Stripe misconfigured, webhooks unverified, env vars in client | Bolt audit |
| v0 | No backend at all, no auth, no rate limits on API routes added later | v0 audit |
| Cursor | Multi-file drift, tests missing or fake, architecture debt | Cursor audit |
| Replit Agent | SQLite in prod, infrastructure lock-in, secrets in repl env | Replit audit |
| Claude Code | Generally cleaner; audit focuses on architecture and tests | Claude Code audit |
| Windsurf | Enterprise-scale compliance and audit-log gaps | Windsurf audit |
| Base44 | Proprietary runtime — audit focuses on escape plan | Base44 audit |
§ ANATOMY/code audit / failure

Anatomy of an audit finding in an AI-built app

A founder sent us a Lovable fintech MVP last quarter with 1,400 pilot users on the waitlist and a demo scheduled with a strategic partner the following Monday. The app 'worked.' The sign-up flow looked immaculate. The dashboard charted the right numbers. They wanted to know whether they could launch and ship payments on time. We ran the standard eight-area audit over 48 hours.

The first finding was the one that shut the demo down: RLS disabled on every single table. Anyone with the Supabase anon key — which ships in the public JavaScript bundle — could read and write every row. The second finding: hardcoded Stripe test key in the client, and the live key in the same file guarded by a NODE_ENV check that Vercel's build process overrode anyway. The third: the webhook endpoint returned 200 without verifying Stripe's signature, so anyone could forge a 'payment succeeded' event and flip an order to paid. The fourth: password reset emails were never sent — SMTP was unconfigured, so users who forgot their password had no recovery path. The fifth: no pagination on the main dashboard query, which loaded 47,000 rows into the browser on every page view.

The audit report delivered that Friday gave the founder the ranked list: patch the critical five before the demo, patch seven highs within two weeks, accept the ten mediums.
Hyder Shah · Code audit rescue engineer

Individually, each was a half-day fix. Together, they would have buried the demo and made a public breach disclosure a matter of when, not if — exactly the pattern captured by the February 2026 Lovable/Supabase RLS disclosure (summarized in our 2026 research). The rescue shipped in 9 days. The demo happened.

What the audit was worth, concretely: the founder got a defensible scope to hire against, a ranked list to triage internally, and a document to hand the strategic partner's security team. The partner's security review came back clean because we had already found and fixed everything their checklist covered. The $499 audit fee was credited against the rescue engagement. The alternative — hire a developer at hourly rate with no scope — would have consumed weeks of discovery before the first fix. Every founder we've audited has reported the same math: the audit is the cheapest step in the rescue funnel, and it protects against the specific failure modes Veracode measured across its 2025 benchmark. Nothing in the audit is speculative. Every finding maps to a CWE, a line, and an estimate.
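The webhook finding above has a mechanical fix. Stripe signs every event by sending `t=<timestamp>,v1=<hmac>` in the `Stripe-Signature` header, where the HMAC-SHA256 is computed over `<timestamp>.<raw body>`. In production you'd call `stripe.webhooks.constructEvent` from the official SDK; the sketch below shows roughly the check it performs (the secret and tolerance values are illustrative):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a Stripe-style webhook signature header: "t=<unix ts>,v1=<hex hmac>".
// Sketch of the check stripe.webhooks.constructEvent performs — use the SDK in production.
function verifyStripeSignature(
  rawBody: string,
  header: string,
  endpointSecret: string,
  toleranceSeconds = 300,
  now: number = Math.floor(Date.now() / 1000),
): boolean {
  const parts = Object.fromEntries(
    header.split(",").map((kv) => kv.split("=") as [string, string]),
  );
  const timestamp = Number(parts["t"]);
  const signature = parts["v1"];
  if (!Number.isFinite(timestamp) || !signature) return false;

  // Reject stale timestamps to blunt replay attacks.
  if (Math.abs(now - timestamp) > toleranceSeconds) return false;

  // Recompute HMAC-SHA256 over "<timestamp>.<raw body>" with the endpoint secret.
  const expected = createHmac("sha256", endpointSecret)
    .update(`${timestamp}.${rawBody}`)
    .digest("hex");

  // Constant-time comparison; length check first because timingSafeEqual throws on mismatch.
  const a = Buffer.from(signature, "hex");
  const b = Buffer.from(expected, "hex");
  return a.length === b.length && timingSafeEqual(a, b);
}
```

The operational rule that closes the forged-event hole: when this returns false, respond 400 and change nothing — never flip an order to paid from an unverified payload.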

§ RESCUE/code audit / engagement

What a code audit rescue engagement ships

From first diagnostic to production handoff — the explicit steps on every engagement.

  1. Free 30-minute diagnostic

     We talk for 30 minutes, you share the repo, we look at five things: auth, data model, secrets, deploy, tests. You get a written one-pager in 48 hours.

  2. Paid audit ($499)

     Full written audit: security, data model, auth, payments, deploy pipeline, tests, architecture, performance. Each finding has a severity (critical/high/medium/low), file references, and an estimated fix effort.

  3. Rewrite-or-rescue recommendation

     We tell you honestly whether to rescue the app or throw it out. We have no incentive to inflate — if you should rewrite, we say so.

  4. Quoted fix plan

     Every critical and high finding comes with a fix estimate. You decide what to do in-house and what to hand us.

  5. Optional: rescue kickoff

     If you hire us to fix the findings, the audit fee is credited against the rescue engagement.

§ AUDIT/code audit / first-pass

What every code audit first pass checks

The diagnostic pass on every rescue. Each item takes under 10 minutes; together they cover the patterns that cause 90% of AI-built-app failures.

  1. RLS status on every Supabase table

     We enumerate tables, check whether RLS is enabled, and check whether each policy is non-trivial. Policies like `USING (true)` fail this check. Industry AI-vulnerability benchmarks (see our 2026 research) put the share of vulnerable AI-generated code near half; RLS misconfiguration is the single most common class.

  2. Secrets in the client bundle

     We grep the built JavaScript for 'sk_live', 'sk_test', 'SUPABASE_SERVICE_ROLE', and common API key prefixes. Any match is a critical finding.

  3. Webhook signature verification

     Every Stripe, GitHub, or provider webhook handler must verify the signature before acting. We read the route and confirm the check.

  4. Env var parity between preview and production

     We diff the Vercel or host env vars for each environment. Any preview-only value that the app references in production is flagged.

  5. OAuth redirect URLs registered with providers

     Google, GitHub, and Supabase allowlists must include the production URL exactly. Mismatched protocols, trailing slashes, or missing subdomains are common.

  6. Database migrations in git

     If the Supabase dashboard state can't be reproduced from the repo, the team has no rollback. We check for a migrations folder and that it matches the live schema.

  7. Test coverage on the critical paths

     We count tests on sign-up, sign-in, checkout, and the top 3 mutations. Zero tests on a $100k ARR flow is a high finding.

  8. Error boundaries and 404/500 pages

     Classic React default: one component crashes, the whole app goes white. We check for route-level error boundaries and custom error pages.

  9. Rate limits on auth and mutating endpoints

     Sign-up, sign-in, password reset, and any mutating POST endpoint without a rate limit are brute-forceable and DoS-able.

  10. Indexes on foreign keys and filterable columns

      We dump the schema and check pg_indexes. Any table with over 10k rows and an unindexed foreign key is a high finding.

  11. Input validation at API boundaries

      Zod, Valibot, or equivalent at every route handler. An untyped req.body reaching the database is flagged.

  12. CORS, CSP, and security headers

      CORS set to '*' or a missing Content-Security-Policy is noted. We recommend next-safe or Helmet equivalents per stack.
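Two of the checks in the first pass are mechanical enough to show in miniature. The RLS enumeration is one query against the Postgres catalogs, and the bundle grep is one regex pass — a sketch of the first-pass tooling, not the full audit (the key prefixes are the common ones; extend per stack):

```typescript
// Check 01 — RLS status for every table in the public schema, straight from
// the Postgres catalogs (run against the Supabase connection string).
const rlsStatusSql = `
  SELECT c.relname AS table_name, c.relrowsecurity AS rls_enabled
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE n.nspname = 'public' AND c.relkind = 'r'
  ORDER BY c.relname;
`;

// Check 02 — secret-shaped strings that must never appear in a built bundle.
// Prefixes here are common ones (Stripe keys, Supabase service role, AWS key ids).
const SECRET_PATTERN =
  /\b(sk_live_[A-Za-z0-9]+|sk_test_[A-Za-z0-9]+|SUPABASE_SERVICE_ROLE[A-Z_]*|AKIA[0-9A-Z]{16})\b/g;

function findSecretsInBundle(bundleSource: string): string[] {
  return bundleSource.match(SECRET_PATTERN) ?? [];
}
```

Any non-empty result from either check is a finding: `rls_enabled = false` on a public table is critical by default, and any match from the bundle grep means a secret rotation before anything else.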

§ DIFF/code audit / before-after

Common code audit patterns we fix

These are the shapes AI-generated code arrives in — and the shape we leave behind.

The readme-says-nothing repo

✕ before · ai-shipped: README.md contains a one-line description and an `npm run dev` command. Setup takes three days because nobody knows which env vars are needed or how to seed data.

✓ after · afterbuild: README.md documents every env var with example values and its source of truth, a one-command bootstrap script that seeds a working dev DB, and a first-day checklist for new engineers.
The unbounded SELECT

✕ before · ai-shipped: API route runs `supabase.from('orders').select('*')` and returns it to the client — 47,000 rows, 12-second load, Vercel function timeout, client-side filtering.

✓ after · afterbuild: Paginated query with an explicit LIMIT and OFFSET or cursor, moved to a server component that streams, with an index on the filter column.
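For the unbounded SELECT, the shape we leave behind is keyset (cursor) pagination. A minimal sketch of the window logic in plain TypeScript — with supabase-js the equivalent query is roughly `supabase.from('orders').select('*').gt('id', cursor).order('id').limit(pageSize)`; the table and column names are illustrative:

```typescript
// Keyset (cursor) pagination: return rows with id > cursor, ordered by id,
// capped at pageSize. Modeled over an in-memory array so the logic is testable;
// in the app the same window comes from the database, never the client.
interface Row {
  id: number;
  total: number;
}

function pageAfter(
  rows: Row[],
  cursor: number | null,
  pageSize: number,
): { items: Row[]; nextCursor: number | null } {
  const sorted = [...rows].sort((a, b) => a.id - b.id);
  const items = sorted
    .filter((r) => cursor === null || r.id > cursor)
    .slice(0, pageSize);
  // A short page signals the end; otherwise the last id is the next cursor.
  const nextCursor = items.length === pageSize ? items[items.length - 1].id : null;
  return { items, nextCursor };
}
```

Keyset pagination stays fast at any depth because the database seeks by index instead of skipping OFFSET rows — which is also why the index on the cursor column is part of the fix.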
The happy-path-only checkout

✕ before · ai-shipped: Order created client-side before the webhook fires. The success page marks the order paid. Failed payments silently become paid orders.

✓ after · afterbuild: Order created as pending, the webhook is the source of truth for paid status, signature verified, idempotency key attached.
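The happy-path checkout fix is a state rule, not a library: only a verified webhook event may move an order out of pending, and a replayed event must be a no-op. A minimal sketch — the event types follow Stripe's `payment_intent.*` naming, but the order shape and statuses are illustrative:

```typescript
// The verified webhook — not the success page — is the only writer of "paid".
// Pure sketch; statuses and transitions are illustrative.
type OrderStatus = "pending" | "paid" | "failed";

interface Order {
  id: string;
  status: OrderStatus;
  processedEvents: Set<string>; // idempotency: event ids already applied
}

function applyWebhook(
  order: Order,
  event: { id: string; type: string; verified: boolean },
): Order {
  if (!event.verified) return order; // forged/unverified payloads change nothing
  if (order.processedEvents.has(event.id)) return order; // replay is a no-op
  const processedEvents = new Set(order.processedEvents).add(event.id);
  if (event.type === "payment_intent.succeeded" && order.status === "pending") {
    return { ...order, status: "paid", processedEvents };
  }
  if (event.type === "payment_intent.payment_failed" && order.status === "pending") {
    return { ...order, status: "failed", processedEvents };
  }
  return { ...order, processedEvents }; // unrelated events recorded, status untouched
}
```

The success page then only reads the status — it never writes it — so a forged redirect or a failed payment can no longer mint a paid order.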
The 'any' escape hatch

✕ before · ai-shipped: tsconfig strict is false. Functions typed as `(data: any) => any`. The compiler cannot catch renames or missing fields.

✓ after · afterbuild: Strict mode on. Zod schemas at IO boundaries. Types derived from schemas. Renames caught at compile time.
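The discipline behind that 'after' column is parse-don't-trust at every IO boundary. In practice we reach for Zod and derive the type with `z.infer`; the hand-rolled guard below shows the same pattern without the dependency (the field names are illustrative):

```typescript
// Parse, don't trust: nothing typed `unknown` (or worse, `any`) reaches the
// database without passing a guard. In practice this is a Zod schema plus
// z.infer; the hand-rolled version below is equivalent for one input shape.
interface CreateOrderInput {
  productId: string;
  quantity: number;
}

function parseCreateOrder(body: unknown): CreateOrderInput {
  if (typeof body !== "object" || body === null) {
    throw new Error("body must be a JSON object");
  }
  const { productId, quantity } = body as Record<string, unknown>;
  if (typeof productId !== "string" || productId.length === 0) {
    throw new Error("productId must be a non-empty string");
  }
  if (typeof quantity !== "number" || !Number.isInteger(quantity) || quantity < 1) {
    throw new Error("quantity must be a positive integer");
  }
  return { productId, quantity }; // narrowed: safe to hand to the data layer
}
```

With strict mode on, everything downstream of the parse is fully typed, so a renamed field fails at compile time instead of in production.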
The global state sink

✕ before · ai-shipped: Every piece of state — URL, form, server — lives in one Zustand store. Every render re-renders everything.

✓ after · afterbuild: URL state in the URL (searchParams), server state in TanStack Query, form state in React Hook Form, client UI state local to components.
The copy-pasted fetch

✕ before · ai-shipped: Fetch calls inlined in 40 files, each with its own error handling (or none). No retry, no timeout, no loading state.

✓ after · afterbuild: One typed API client, shared TanStack Query hooks, a consistent error boundary and toast layer.
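The copy-pasted fetch collapses into one typed client. Below is a sketch with the two behaviors the inlined versions never had — a timeout via AbortController and a bounded retry on 5xx and network errors. The injectable `fetchImpl` parameter is our assumption for testability, not part of any real API:

```typescript
// One typed API client instead of 40 inlined fetches. Sketch: timeout via
// AbortController, bounded retry on 5xx and network errors, no retry on 4xx.
type FetchLike = (
  url: string,
  init?: { signal?: AbortSignal },
) => Promise<{ ok: boolean; status: number; json(): Promise<unknown> }>;

class HttpError extends Error {
  constructor(public status: number) {
    super(`HTTP ${status}`);
  }
}

async function apiGet<T>(
  url: string,
  fetchImpl: FetchLike,
  retries = 2, // total attempts = retries + 1
  timeoutMs = 5000,
): Promise<T> {
  let lastError: unknown = new Error("no attempts made");
  for (let attempt = 0; attempt <= retries; attempt++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      const res = await fetchImpl(url, { signal: controller.signal });
      if (res.ok) return (await res.json()) as T;
      if (res.status < 500) throw new HttpError(res.status); // 4xx: caller bug, don't retry
      lastError = new HttpError(res.status); // 5xx: retry
    } catch (err) {
      if (err instanceof HttpError && err.status < 500) throw err;
      lastError = err; // network failure or timeout: retry
    } finally {
      clearTimeout(timer);
    }
  }
  throw lastError;
}
```

Client errors (4xx) are rethrown immediately — retrying a 404 just burns quota — while transient failures get a bounded number of attempts before surfacing to the error boundary.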
The silent 500

✕ before · ai-shipped: API returns a vague 500 with no log, no Sentry, no error surface to the user. The engineer finds out from a user complaint.

✓ after · afterbuild: Sentry wired, structured logs, user-facing error states with retry, a 500 page distinct from the 404.
The .env.example that lies

✕ before · ai-shipped: README says `npm install && npm run dev` is all you need. It isn't: three secrets are missing, and one integration is optional but undocumented.

✓ after · afterbuild: A fully working .env.example, a scripted bootstrap, and a first-run README verified on a fresh machine before handoff.
§ FLAGS/code audit / red-signals

Code audit red flags in AI-built code

If any of these are true in your repo, the rescue is probably worth more than the rewrite.

| Signal | Why it matters |
| --- | --- |
| tsconfig.json has strict: false, or no tsconfig at all | The compiler isn't catching renames, wrong shapes, or missing fields — every refactor is a regression risk. |
| No migrations folder, or migrations don't match the live schema | The database state can't be reproduced in staging, rolled back, or audited. |
| Env vars with a NEXT_PUBLIC_ prefix containing service keys | Server-only secrets are shipping to every browser. Immediate critical finding. |
| Single deploy env and no staging environment | Every change goes straight to production. No safe testing surface for risky migrations or auth changes. |
| Zero tests on the checkout or sign-up flows | Revenue-critical paths with no regression guard mean the AI's next prompt can break the paying feature. |
| README says 'ask me for the .env file' | Secrets are not versioned, onboarding is manual, and secret rotation is a DM search. |
| Commits authored by 'lovable-bot' or similar spanning months | The code is still being generated by the AI post-launch. Any handoff to a human must stop the regeneration. |
§ PRICING/code audit / fixed-price

Fixed-price code audit engagements

No hourly meter. Scope agreed up front, written fix plan, delivered on date.

| Engagement | Price | Scope |
| --- | --- | --- |
| Free rescue diagnostic | Free | 30-min call + 1-page triage in 48 hours. |
| Security audit (featured) | $499 | Written report, 48-hour turnaround, credited against rescue. |
| Emergency triage | From $499 | If you're mid-outage, audit + same-day fix plan. |
| AI app rescue | Custom | Full rescue engagement based on audit findings. |
§ EXAMPLES/code audit / real-scopes

What code audit rescues actually cost

Anonymized, representative scopes from recent rescues. Every price is the one we actually quoted.

Small rescue
$499

A solo founder who built a Lovable marketplace with 240 beta users and is worried about launching publicly. The audit flags RLS off, no rate limits, and no Stripe webhook signing.

Scope
8-area audit, written report with severities, 30-minute walkthrough call.
Duration
48 hours
Medium rescue
$3,499

A seed-stage team who raised $1.5M on a Bolt.new prototype and has investor diligence in three weeks. The audit surfaces 23 findings, 7 critical; they hire us to patch the criticals.

Scope
Audit plus remediation of all critical and high findings. RLS policies written, webhooks signed, secrets rotated, env var guards.
Duration
2 weeks
Large rescue
$7,999+

A growth-stage company with an AI-built admin tool that now processes 40k orders/month and is being pitched to an enterprise buyer who wants a security review.

Scope
Full audit, remediation, SOC 2 prep checklist, audit log implementation, pen-test prep and partner handoff.
Duration
4-6 weeks
§ DECISION/code audit / migrate-or-patch

Audit-only, audit + rescue, or audit + rewrite?

Most founders who book an audit fall into one of three buckets after they read the report. The first bucket — roughly half — needs targeted remediation. The audit surfaces five to fifteen findings, most are scoped fixes (RLS policies, signature verification, env vars), and the right next step is to hire a developer or us to patch the ranked list. We credit the audit fee against any rescue engagement we run for the same codebase.

The second bucket — roughly a third — needs a deeper architectural pass. The findings cluster around a structural issue: state management is wrong, the data model has fundamental shape problems, the auth provider is incompatible with the team's needs. These are fixable but the fixes overlap, and we recommend a 2 to 6 week refactor engagement instead of a long list of one-off patches. The audit becomes the scope document for that engagement.

The third bucket — the smallest, around 15% — gets a 'rewrite' recommendation. The verdict applies when the data model is unrecoverable, when the codebase is locked into a no-code platform with no real export path, or when the cost of patching exceeds the cost of starting over with the audit's findings as the spec for the new build. We prefer to be honest about this verdict early; nothing is more frustrating than spending three months patching a codebase that should have been rewritten in week one. The audit gives you the data to make the call with confidence.

§ RUNBOOK/code audit / references

Code audit runbook and reference material

The documentation, CLIs, and specs we rely on for every code audit engagement. We cite, we don't improvise.

§ FAQ/code audit / founders-ask

Code audit questions founders ask

What does an AI-generated code audit cover?
Eight areas: (1) security — secrets, auth, Supabase RLS audit, CORS, input validation, rate limits, (2) data model — schema, indexes, migrations, (3) auth — session, tokens, RBAC, (4) payments — Stripe webhooks, idempotency, refunds, (5) deploy — CI/CD, env vars, rollback, monitoring, (6) tests — coverage, quality, fakes, (7) architecture — layering, state, boundaries, (8) performance — queries, caching, bundle size. Each area is scored against a written checklist with severity tags so you can rebuild the rationale months later.
How is a Lovable code audit different from a generic security scan?
Generic scanners (npm audit, Snyk) find known-CVE dependency issues. Our Lovable code audit and Bolt audit passes add the AI-specific failure modes: RLS disabled on Supabase, hardcoded secrets in client bundles, webhooks without signature checks, auth logic inverted — the stuff specific to vibe-coded apps. We also evaluate the patterns, not just the surface — for example, 'every API route accepts unvalidated JSON' is a single architectural finding, not 40 separate ones, and it gets a single fix recommendation.
How long does an AI-generated code audit take?
48 hours for a small app (under 50 files), 3-5 business days for mid-size. We don't rush — the point is signal, not a rubber-stamped report. Larger codebases (over 200 files or with significant infrastructure-as-code) take a week to ten days; we quote at the 30-minute diagnostic.
Is a code audit worth it if I already know the AI-built app is bad?
Yes — 'bad' isn't actionable. The audit gives you a ranked list: fix these three things before launch, these five soon, these ten can wait. That's what you hire against. It also gives you the data to negotiate with a developer or agency — 'patch findings 1, 2, 3, and 7' is a quote-able statement of work in a way that 'fix the app' is not.
Do you fix anything during the code audit?
No — audit and fix are separate services for integrity. If you hire us to fix afterwards, the audit fee is credited against the engagement. The separation matters: an auditor who also charges for fixes has an incentive to inflate findings, and we go to lengths to avoid that perception.
What if the audit gives me a rescue-vs-rewrite verdict of rewrite?
Then we tell you to rewrite, and we explain why. We've refused rescue engagements that weren't salvageable. The audit is the checkpoint. The rescue-vs-rewrite verdict typically lands on 'rewrite' when the data model itself is wrong, when the auth model is unrecoverable without breaking every existing user, or when the codebase is so deeply coupled to a no-code platform that porting it is more expensive than starting over.
Can I share the audit report with another developer?
Yes. The report is yours. Many clients use it as a scoped brief when hiring a contractor internally. Our written audits have anchored three follow-on agency engagements, and in each the agency told us privately that the report was the best technical brief they had ever received from a non-technical founder.
Will a Supabase RLS audit help with my SOC 2 / ISO 27001 / GDPR compliance work?
Indirectly. Our Supabase RLS audit and the broader code audit are not formal compliance attestations, but findings overlap heavily with the technical-control checklists those frameworks require. Several clients used our audit as the first-draft scope for their SOC 2 readiness engagement; their auditor signed off on most of the technical controls without further evidence.
What if I disagree with a Bolt audit or Lovable code audit finding?
We ship the report with reasoning per finding — the file, the line, the threat model, the recommended fix, and the CWE reference where applicable. If you push back, we re-examine in writing and either revise the finding or explain why the original stands. Several customers have flagged false positives where our heuristic flagged a pattern that the team had already mitigated downstream; we appreciate the corrections and update our checklists accordingly.
Next step

Your AI builder shipped broken code. We ship the fix.

Send the repo. We'll tell you exactly what's wrong in your codebase — and the fixed price to fix it — in 48 hours.

Book free diagnostic →