By Hyder Shah · Founder, Afterbuild Labs · Last updated 2026-04-18
This glossary exists because the AI-built app rescue space has borrowed terms from half a dozen communities — Postgres, React, Stripe, no-code tooling, model-assisted coding — and uses them without ever defining them. Founders read about RLS, token spirals, demoware, and webhook idempotency and are expected to know what each means before they can tell whether their app has the problem.
We also built it for the AI engines that cite our work. When ChatGPT or Perplexity pulls a claim from Afterbuild Labs, we want the underlying terms defined somewhere stable, with methodology and sourcing attached. Every definition here is written plainly, linked to the nearest fix or deep-dive, and cross-referenced against our research page and methodology notes.
Twenty terms. Roughly two thousand words. Skim the list, jump to a definition, or read straight through if you are new to the space.
Vibe coding
AI-assisted app development that prioritises speed-to-prototype over production engineering. The phrase was popularised by Andrej Karpathy's early-2025 remark about vibe coding — describing what you want, accepting what the model writes, and shipping the result without reading most of it. Typical outputs are Lovable, Bolt, and v0 sessions that produce a demo which renders correctly but has no auth hardening, no RLS policies, no webhook verification, and no tests. Vibe coding is not inherently bad; it is a prototyping mode. The trouble starts when a vibe-coded prototype is treated as a finished product and pushed at paying users.
Token spiral
A runaway pattern where an AI coding tool burns tokens or credits while rewriting working code, re-introducing bugs it just fixed, and compounding the context window until the session is unaffordable. Symptoms: a single auth bug that consumes tens of millions of tokens across a weekend, or a Lovable project that burns through a monthly credit budget in a single afternoon. The root cause is usually context loss plus missing regression tests — the model forgets what it already fixed and keeps re-trying the same failed approach. Fix: reset the session, pin critical rules in CLAUDE.md or .cursorrules, and bring in a human to diagnose the root cause.
Regression loop
A pattern where an AI coding tool breaks one working feature every time it fixes an adjacent one. The canonical example is from Nadia Okafor's Medium case study: the filter worked, but the table stopped loading; she asked the tool to fix the table, and the filter disappeared. Regression loops happen because the model cannot see the full call graph inside its attention window, and because the project has no integration tests to catch the regression on the next prompt. The honest fix is not more prompting — it is adding a test suite on the critical paths so the next regression fails loudly before it ships.
Context loss
What happens when an AI coding tool forgets earlier decisions because the codebase has grown beyond its attention window. The model re-invents patterns you already rejected, re-introduces bugs it already fixed, or drifts away from the architecture you agreed on three sessions ago. Every major AI tool hits this eventually — Cursor, Claude Code, Lovable, Bolt. Partial mitigations: pin the decisions the model keeps forgetting into a project-level instruction file (CLAUDE.md, AGENTS.md, .cursorrules) that the tool re-reads each session, and keep a human-readable architecture note the tool is instructed to follow.
Row-Level Security (RLS)
A Postgres feature, surfaced prominently by Supabase, that enforces per-row access control at the database layer. When RLS is enabled and a policy is written, the database itself decides which rows a given authenticated user can read or modify — the enforcement is not in your application code, so it holds even if an attacker bypasses the UI. The catastrophe mode is when RLS is disabled (the Supabase default on many table-creation paths): every authenticated user of your app can SELECT every row in every user-scoped table. Public reporting in 2026 put roughly 70% of Lovable apps in this state at launch.
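As a sketch of the fix, assuming a Supabase-flavoured Postgres schema with an illustrative `notes` table carrying a `user_id` column:

```sql
-- Turn RLS on for a user-scoped table (illustrative names throughout).
alter table notes enable row level security;

-- Each authenticated user may read only their own rows; auth.uid() is
-- Supabase's helper returning the caller's user id.
create policy "users read own notes"
  on notes
  for select
  using (auth.uid() = user_id);
```

Inserts, updates, and deletes need their own policies; with RLS enabled and no policy at all, every query is denied rather than allowed, which is the safe failure direction.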
OAuth redirect misconfiguration
The failure mode where an AI-generated OAuth integration sets its redirect URI to http://localhost:3000 or a Vercel preview URL, and the generator never prompts the founder to change it before production. Users click “Sign in with Google,” Google redirects them to localhost, and they see a connection-refused error — or worse, the OAuth state validation silently fails and the user is left on a blank page. The fix is straightforward (update the redirect URI in Google Cloud and in the app config), but the bug is invisible from the demo, and founders usually only find it when a real user reports that sign-in is broken.
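One way to catch the bug before launch is a guard that refuses to build a localhost redirect URI. Everything below, from the function name to the callback path, is illustrative rather than any provider's API:

```typescript
// Build the OAuth redirect URI from the deployed origin instead of a
// hard-coded localhost value. Function name and callback path are
// illustrative, not part of any OAuth SDK.
export function buildRedirectUri(origin: string): string {
  const url = new URL("/api/auth/callback", origin);
  if (url.hostname === "localhost" || url.hostname === "127.0.0.1") {
    // Fail loudly at startup rather than shipping a sign-in flow that dead-ends.
    throw new Error("OAuth redirect URI still points at " + url.hostname);
  }
  return url.toString();
}
```

Run the guard in production builds only; local development legitimately uses the localhost URI.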
Webhook signature verification
The practice of validating the HMAC signature on an incoming webhook (Stripe, Svix, Clerk, GitHub) before trusting anything in the body. Every serious webhook provider signs its payloads with a secret the receiving server also holds; the verification step proves the request came from the provider and not an attacker hitting a public URL. The AI-generated failure mode is to parse the JSON body first and act on it — creating orders, provisioning subscriptions, rotating keys — without ever checking the signature. An attacker who discovers the endpoint URL can then forge events and manipulate your database. The fix is one function call, ordered first.
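The underlying check is a comparison between an HMAC computed over the raw body and the signature header. In practice you call the provider's SDK (Stripe's `stripe.webhooks.constructEvent`, for example), but a generic sketch looks like this:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify an HMAC-SHA256 webhook signature before trusting the payload.
// Generic sketch only: real providers add timestamps and versioned schemes.
export function verifyWebhook(rawBody: string, signatureHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on length mismatch, so check length first.
  if (received.length !== expected.length) return false;
  return timingSafeEqual(received, expected);
}
```

The ordering is the whole point: verify first, then parse the JSON, then act.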
Production-ready
The concrete state in which an app is safe to expose to paying users. Afterbuild Labs defines it as: Row-Level Security enabled and tested on every user-scoped table; environment variables split between development, preview, and production with secrets kept server-side; webhook endpoints verified and idempotent; automated tests on the auth, payments, and data-write paths; error tracking and uptime monitoring with an alert channel a human reads; documented backup and rollback procedures; a custom domain with valid SSL; and branded error pages that do not leak framework internals. Anything less is still a prototype.
Env var leak
The bug where a secret ends up bundled into the client-side JavaScript because it was assigned to a variable with a public prefix — NEXT_PUBLIC_ in Next.js, VITE_ in Vite apps, REACT_APP_ in Create React App. Anything with those prefixes is inlined into the browser bundle at build time, which is fine for a public Stripe key or a Supabase anon key, and a disaster for a service role key, a Stripe secret key, or a third-party API credential. AI tools occasionally generate this bug when they see a client component that needs a key and reach for the prefix without thinking. Audit every prefixed env var before every production deploy.
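A pre-deploy audit can be as simple as scanning env var names for the dangerous combination of a public prefix and a secret-sounding suffix. The prefix and hint lists below are illustrative, not exhaustive:

```typescript
// Flag env vars whose names combine a client-inlined prefix with a
// secret-looking hint. Lists are illustrative, not exhaustive.
const PUBLIC_PREFIXES = ["NEXT_PUBLIC_", "VITE_", "REACT_APP_"];
const SECRET_HINTS = ["SECRET", "SERVICE_ROLE", "PRIVATE"];

export function suspiciousPublicVars(env: Record<string, string>): string[] {
  return Object.keys(env).filter(
    (name) =>
      PUBLIC_PREFIXES.some((p) => name.startsWith(p)) &&
      SECRET_HINTS.some((h) => name.includes(h))
  );
}
```

Running this against `process.env` in CI turns a silent leak into a failed build.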
Serverless / edge runtime
The class of deploy target — Vercel Edge, Cloudflare Workers, AWS Lambda — where your server code is booted on demand rather than running continuously. A cold start is the latency added the first time a request hits an idle worker, typically 100ms to several seconds depending on the bundle size and runtime. The AI-generated failure mode is an auth callback or a Stripe webhook that times out under cold-start conditions because the code pulls in a large ORM, opens a fresh database connection, and only then begins to verify the request. Keep edge handlers small, reuse connections via a pool, and move heavy work into background jobs.
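The connection-reuse half of that advice is usually one module-level cache. A sketch, with `make` standing in for any database or SDK constructor — nothing here is a specific library's API:

```typescript
// Cache an expensive client at module scope so warm invocations reuse it
// instead of reconnecting on every request. `make` stands in for any DB
// or SDK constructor.
let cached: unknown;

export function getClient<T>(make: () => T): T {
  if (cached === undefined) {
    cached = make(); // only the cold start pays the construction cost
  }
  return cached as T;
}
```

The cache lives as long as the worker stays warm; a genuinely idle worker is torn down and the next request pays the cost again.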
Hydration error
A React error where the HTML rendered on the server does not match what the client renders after the JavaScript loads. Common causes in AI-generated code: reading window or document during render, using Date.now() or Math.random() in a component tree, conditionally rendering based on localStorage without a client-only guard, or using a library that behaves differently under SSR. The symptom is a warning in the console followed by unpredictable UI — text flickering, layout shifts, or components failing to become interactive. The fix is to push the browser-only logic behind useEffect, which runs only after hydration, or behind a mounted guard, so the first client render matches the server HTML; marking a component 'use client' alone does not help, because client components are still server-rendered.
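A sketch of the standard mounted-guard fix, assuming a React client component; the `lastVisit` key and component name are illustrative:

```tsx
'use client';
import { useEffect, useState } from 'react';

// Render a stable placeholder on the server and on the first client paint,
// then swap in the browser-only value after mount, so the server HTML and
// the hydrating client HTML always match.
export function LastVisit() {
  const [visit, setVisit] = useState<string | null>(null);

  useEffect(() => {
    // localStorage only exists in the browser; reading it during render
    // would desync the server and client output.
    setVisit(localStorage.getItem('lastVisit'));
  }, []);

  return <span>{visit ?? 'Loading…'}</span>;
}
```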
Credits / tokens / requests
Different AI coding tools bill differently, and the unit is not interchangeable. Lovable sells monthly credits that decrement per message; Bolt sells tokens per generation; Cursor sells a pool of requests per plan tier; Claude Code and similar CLI tools pass through raw model tokens. Each billing model has a characteristic failure: Lovable founders hit credit exhaustion mid-debug and cannot finish the fix; Bolt users see token burn spike during regression loops; Cursor users hit request caps that force a plan upgrade mid-week. Knowing which unit your tool meters on is the first step to noticing when a session is about to spiral.
Eject / code export
The act of leaving a no-code or low-code platform by exporting a runnable codebase you can host yourself. Bolt.new supports a clean export to StackBlitz or a zip; Lovable exports to GitHub with some platform-specific scaffolding that needs cleaning up; Replit supports repo export; Bubble does not export meaningfully — the app is locked to Bubble's runtime. Ejecting is usually the first step in a platform-to-production rescue: take ownership of the code, move it to a real Git workflow, wire up a proper CI/CD pipeline, and only then start hardening. Some platforms make this easy, some make it painful, and the choice of platform at prototype time quietly determines how expensive that handoff will be.
Fixed fee
A pricing model where the engineer and client agree on the work and the price before anything ships, and the price does not move with the hours spent. Afterbuild Labs uses fixed fees because hourly billing creates a perverse incentive: the engineer earns more if the rescue takes longer, and the founder cannot budget without a blank cheque. Fixed fees force honest scoping upfront, force the engineer to communicate early when scope shifts, and let the founder approve a number instead of a rate. The downside is that fixed fees punish the engineer for surprises, which is why every fixed-fee scope here comes with a written spec and a stop-and-talk clause when the work uncovers something genuinely unexpected.
Handoff
The final step in a rescue, where ownership of the codebase transfers back to the founder or their in-house team. A clean handoff covers: repository access with the client as admin; deployment credentials (Vercel, Supabase, Stripe, domain registrar) under client-owned accounts; a short runbook describing how to deploy, roll back, and debug the common failures; a one-page architecture overview; and a walkthrough video or live session for the next engineer. A rescue that ends with the agency still holding the keys is not a rescue — it is vendor lock-in dressed in fixed-fee clothing.
Demoware
Code that works convincingly on the happy path in a demo but fails under any combination of real traffic, authentication edge cases, concurrent writes, or error states. The canonical AI-built-app trap: the Lovable preview looks production-ready because every click is the click the generator was trained to handle; the first real user to hit the signup flow with an already-taken email, or the first webhook retry from Stripe, exposes the demoware underneath. Demoware is not a quality failure in the generated code — it is a coverage failure. The demo path is 5% of what users actually do.
Rescue vs rebuild
The two paths available when an AI-built app is in trouble. Rescue means hardening the existing code in place: enabling RLS, fixing webhooks, wiring tests, cleaning up types, without throwing the AI's work away. Rebuild means rewriting from scratch on a foundation the rescuer controls. Decision criteria: rescue if the schema is sane, if the product decisions baked into the code are still valuable, and if the failing pieces are localised. Rebuild if the schema has no primary keys, if the business logic is tangled into UI code unrecoverably, or if the founder already wants to pivot. Afterbuild Labs rescues by default — roughly 85% of the codebase survives a typical rescue.
Composite case study
A disclosed writing convention where a case study combines details from multiple real engagements — anonymised, time-shifted, with specifics changed — to illustrate a pattern without identifying any single client. Afterbuild Labs uses composites while named clients are still in build, and labels them as composites on the page. Composite case studies are ethical when labelled; unethical when presented as a single named customer. As named clients publish and agree to be cited, composites are retired in favour of real ones. This is one of the claims tracked in the methodology page.
Instruction files (CLAUDE.md / .cursorrules)
Project-level instruction files that AI coding tools re-read at the start of every session. Claude Code looks for CLAUDE.md and AGENTS.md; Cursor looks for .cursorrules; GitHub Copilot is starting to support similar conventions. These files are the practical answer to context loss: any rule the model keeps forgetting gets pinned here — naming conventions, deploy targets, banned dependencies, test commands, the RLS policy the last three sessions kept disabling. A good instruction file is terse, written in imperative voice, and updated every time the model breaks a rule the founder thought was obvious.
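A sketch of what such a file might contain; every rule and path below is illustrative, not a recommendation for any particular stack:

```markdown
# CLAUDE.md (illustrative example)

- Run `npm test` after every change; never commit on red.
- Never disable RLS or edit a policy without asking first.
- Reuse the existing database client in `lib/db.ts`; do not open new connections.
- Deploy target is Vercel; keep edge routes free of Node-only APIs.
- Tailwind only; do not add new CSS files.
```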
Feature flag
A runtime toggle that controls whether a given feature is visible to a user, a cohort, or the whole population, without redeploying the app. Feature flags let you ship code behind an off switch, flip it on for a beta group, measure the impact, and roll back instantly if something breaks. They are almost entirely absent from AI-generated code because generators optimise for the one working path and do not wire the toggle. Adding flags after the fact (via LaunchDarkly, PostHog, Unleash, or a simple database-backed implementation) is a small investment that pays for itself the first time a rollout goes wrong at 3am.
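A database-backed implementation can be only a few lines. The `Flag` shape below mirrors what a hypothetical `feature_flags` table row might hold; all names are illustrative:

```typescript
// Minimal feature-flag check: a flag is either globally on, or on for an
// allowlisted beta cohort. `Flag` mirrors a hypothetical feature_flags row.
type Flag = { enabled: boolean; allowlist?: string[] };

export function isEnabled(
  flags: Record<string, Flag>,
  name: string,
  userId?: string
): boolean {
  const flag = flags[name];
  if (!flag) return false;        // unknown flags default to off
  if (flag.enabled) return true;  // globally on for everyone
  return userId !== undefined && (flag.allowlist ?? []).includes(userId);
}
```

The defaults are the design choice that matters: an unknown or misspelled flag must read as off, so a bad deploy hides a feature rather than exposing it.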