AI regression loop
Appears when: every time the agent edits adjacent code, a prompt about feature B breaks feature A, and the next prompt breaks feature C.
AI regression loop — stop Cursor, Claude Code, and Lovable from breaking working code
Your AI coding tool rewrites working code each time it edits something nearby. A prompt about the checkout page breaks the auth hook. The next prompt fixes the hook and breaks the webhook. Same pattern across Cursor, Claude Code, Lovable, and Bolt.
Revert to the last green commit, scope the next prompt to named files (@file in Cursor, a CLAUDE.md in Claude Code, "Edit only this component" in Lovable), write a failing test that captures the regression, and only then re-prompt.

Quick fix for AI regression loop
```shell
# Revert the last passing state, then scope the next prompt

git log --oneline -20                       # find the last green commit
git checkout <sha> -- path/to/broken/file   # restore one file
# or: git reset --hard <sha>                # restore the whole tree (destructive)

git bisect start
git bisect bad HEAD
git bisect good <last-known-green-sha>
# run your smoke test at each step until bisect names the first bad commit

# Then add a failing regression test BEFORE re-prompting
pnpm test -- --run path/to/regression.test.ts
# Only re-prompt once the test is failing and committed.
```

Deeper fixes when the quick fix fails
01 · Per-tool tactics: Cursor
Cursor’s regression surface is Composer with @codebase turned on. Turn it off. The default auto-context pulls files the agent will edit without naming them. Use @file explicitly — one or two files maximum per Composer run. Switch to Ask mode for anything diagnostic or exploratory; Edit mode is only for a scoped change you can describe in one sentence.
Create a .cursorrules file at the repo root. List the stable files the agent must not touch (signed webhook handlers, auth callbacks, applied migrations, env schema). Add a prohibition on weakening TypeScript types and on removing hook dependency arrays. Keep the file short — three to five hard rules is the sweet spot. Longer .cursorrules files get truncated from context and ignored. Commit before every Composer run, and if the diff touches a file you did not name, revert and re-prompt with tighter scope.
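The "revert if the diff touches a file you did not name" rule can be checked mechanically rather than by eyeballing the diff. A minimal sketch, assuming a POSIX shell and git; the allow-list and file paths are hypothetical, and the demo builds a throwaway repo so it is safe to run:

```shell
# Throwaway repo so the guard can be exercised safely (paths are made up).
repo=$(mktemp -d); cd "$repo"; git init -q
git config user.email dev@example.com; git config user.name dev
mkdir -p src/lib
echo "export const auth = 1"     > src/lib/auth.ts
echo "export const checkout = 1" > src/checkout.ts
git add -A; git commit -qm "green"

# Simulate an agent run that edits the named file AND an adjacent one.
echo "export const checkout = 2" > src/checkout.ts
echo "export const auth = 2"     > src/lib/auth.ts

ALLOWED="src/checkout.ts"   # the only file the prompt named
out_of_scope=0
for f in $(git diff --name-only); do
  case " $ALLOWED " in
    *" $f "*) ;;                                  # named in the prompt: fine
    *) echo "out-of-scope edit: $f"; out_of_scope=1 ;;
  esac
done
# If anything unnamed changed, throw the whole diff away and re-prompt tighter.
[ "$out_of_scope" -eq 1 ] && git checkout -- .
git diff --quiet && echo "tree restored to last green commit"
```

The point of reverting the whole diff, not just the out-of-scope file, is that a half-applied agent edit is rarely coherent; re-running the prompt with tighter scope is cheaper than untangling it.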
```
# .cursorrules — scope + protected files

DO NOT modify these files unless the prompt names them explicitly:
- src/lib/stripe/webhook.ts   # signed handler, covered by integration test
- src/lib/auth/callback.ts    # OAuth redirect, changes break production sign-in
- prisma/migrations/**        # applied migrations are append-only
- src/lib/env.ts              # zod schema — changes ripple across the app

DO NOT reformat files that are not part of the requested change.
DO NOT weaken TypeScript types to silence an error. Fix the logic.
DO NOT remove useEffect dependency arrays. Fix the dependency instead.
DO NOT delete code you believe is unused unless the prompt says "remove dead code".

When in doubt, ASK instead of EDIT.
```

02 · Per-tool tactics: Claude Code
Claude Code’s regression pattern is session-wide refactoring. The agent reads many files at once and treats small formatting differences as invitations to harmonise. The primary defence is a CLAUDE.md at the repo root that lists "do not modify" patterns explicitly — stable public API files, migration directories, and any file with the header // PUBLIC API — DO NOT REGENERATE.
When a session starts going sideways, use the revert command — do not try to prompt your way out. Claude Code keeps a checkpoint per tool call; rolling back two or three steps is faster than three rounds of clarification. Limit each session to a single file where the task allows. Use --no-plan only for tightly scoped tasks where the plan overhead is not worth the round-trip; for anything touching more than one file, keep the plan step so the agent commits to a file list before editing.
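Instructions-file rules are advisory; nothing enforces them at commit time. A pre-commit hook can make the protected-path list mechanical. A sketch, using the same hypothetical paths discussed in this section:

```shell
# is_protected: succeed (exit 0) when a path matches the "do not modify" list.
# The patterns below are illustrative, not a canonical list.
is_protected() {
  case "$1" in
    src/lib/stripe/webhook.ts|src/lib/auth/*|prisma/migrations/*) return 0 ;;
    *) return 1 ;;
  esac
}

# Wire-up (inside .git/hooks/pre-commit):
#   for f in $(git diff --cached --name-only); do
#     is_protected "$f" && { echo "blocked: $f is protected"; exit 1; }
#   done

is_protected prisma/migrations/0001_init.sql && echo "blocked: migration"
is_protected src/app/page.tsx || echo "ok: page.tsx is editable"
```

A hook like this catches the agent's out-of-scope edits at the same choke point it catches yours, which is exactly what you want when sessions run unattended.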
```markdown
# CLAUDE.md — repo-level agent rules

## Scope
- One file per session unless explicitly told otherwise.
- Never touch files outside the directory named in the prompt.

## Do not modify
- src/lib/stripe/webhook.ts
- src/lib/auth/**
- prisma/migrations/**
- any file with the header: // PUBLIC API — DO NOT REGENERATE

## Behaviour
- Resolve type errors by fixing logic, not by casting to any or unknown.
- Preserve hook dependency arrays. If a dep is missing, add it — do not remove the array.
- Do not reformat files you were not asked to change.
- If a test is failing, fix the code under test — do not weaken the test.
```

03 · Per-tool tactics: Lovable
Lovable’s regression surface is the broad chat prompt. A prompt like "add a filter to the product list" regenerates the whole component and often the shared layout that hosts it. The discipline is to use chat-based edits with narrow scope: "Edit only the ProductList component. Do not regenerate the page layout or the parent route." Lovable respects scope instructions more reliably than most tools do, as long as the prompt says it explicitly.
Add a comment block at the top of any component that has stabilised: // DO NOT REGENERATE — stable, covered by tests. Lovable will usually leave the file alone. Use the branch feature if your Lovable plan has it — branch before any non-trivial edit so reverts are a single click. Commit-per-prompt via the GitHub sync integration is non-negotiable; without it, you are flying blind.
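Outside Lovable's GitHub sync, the same commit-per-prompt habit is one alias away when you drive the repo yourself. A sketch in a throwaway repo; the alias name `prompt-commit` is made up:

```shell
# Throwaway repo so the alias can be demonstrated safely.
repo=$(mktemp -d); cd "$repo"; git init -q
git config user.email dev@example.com; git config user.name dev

# One commit per prompt, as a local alias (the name is hypothetical).
git config alias.prompt-commit '!f() { git add -A && git commit -qm "prompt: $1"; }; f'

echo "filter" > ProductList.tsx
git prompt-commit "add filter to ProductList"
git log -1 --format=%s   # → prompt: add filter to ProductList
```

Prefixing every message with the prompt text makes the later archaeology trivial: `git log --oneline` reads as a transcript of what you asked for and in what order.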
```ts
// PUBLIC API — DO NOT REGENERATE
// This file is covered by integration tests in tests/stripe/webhook.test.ts
// Any change must keep the exported signature: POST(req: NextRequest): Promise<Response>
// Last audited: 2026-04-15 — Hyder Shah
```

04 · Per-tool tactics: Bolt.new
Bolt.new snapshots the whole tree and replays prompts across it. Small prompts pile up, the snapshot drifts, and three components you did not touch have new props. Lock working files before iterating — Bolt exposes a per-file lock in the UI; use it on anything that has shipped.
When a regression hits, rollback to the last snapshot — do not prompt Bolt into reversing the edit. The prompt path rarely restores the exact previous state. Keep the component tree shallow: deeper trees regenerate less predictably because more files land in context. If you catch yourself two or three prompts deep with regressions mounting, stop, snapshot, and switch to a more scoped tool.
When to bring in a developer
There is a threshold past which prompting your way out of the regression loop stops being the right move. If you are seeing more than five regressions a day, if core features break and return repeatedly across sessions, if your team is quietly avoiding certain files because every edit cascades, or if the AI has rewritten the same function three or more times without converging — you are past that threshold. The loop is no longer a tooling problem; it is an architectural one.
At that point the cheapest move is a rescue engagement: a developer walks the git log with you, identifies the last architecturally sound commit, extracts the salvageable work from subsequent commits, builds the test harness that should have existed from the start, and hands back a branch you can prompt against safely. Most rescues ship in forty-eight hours to a week depending on depth. The alternative — five more days of prompt-revert-prompt with the same patterns recurring — is both more expensive and less likely to exit the loop.
Why AI-built apps hit AI regression loop
The regression loop has a predictable shape. You ship a feature, the AI gets it working, you commit. A week later a new prompt touches an adjacent file. The agent pulls the earlier file into context because they share an import. The agent "improves" both files. The first feature breaks. You prompt the agent to fix the broken feature. The agent edits the adjacent file again. The second feature breaks. At this point you are in the loop and every prompt you run deepens it because the working state is three, four, five prompts behind in git history.
The loop runs until you break the pattern mechanically: revert to a known-good commit, add a failing regression test, scope the next prompt narrowly, and ship a small diff. Most rescue work starts by walking a founder back through their git log until we find a commit that still passes the smoke tests — sometimes that is today, sometimes it is last Tuesday — and branching forward from there with stricter rules of engagement.
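The "walk the git log until a commit passes the smoke tests" step can be automated with `git bisect run`, which replays your check at each step and names the first bad commit without manual checkouts. A self-contained sketch in a throwaway repo; the `grep` stands in for a real smoke test:

```shell
repo=$(mktemp -d); cd "$repo"; git init -q
git config user.email dev@example.com; git config user.name dev

# Six commits; a "bug" line sneaks in at c4 and persists afterwards.
printf 'v0\n' > app.txt; git add app.txt; git commit -qm c0
for i in 1 2 3 4 5; do
  printf 'v%s\n' "$i" > app.txt
  [ "$i" -ge 4 ] && printf 'bug\n' >> app.txt
  git commit -qam "c$i"
done

# Smoke test: pass when the tree has no "bug" line.
# bisect run treats exit 0 as a good commit, non-zero as bad.
git bisect start HEAD HEAD~5 >/dev/null 2>&1
git bisect run sh -c '! grep -q bug app.txt' >/dev/null 2>&1
bad_msg=$(git log -1 --format=%s refs/bisect/bad)
git bisect reset >/dev/null 2>&1
echo "first bad commit: $bad_msg"   # → first bad commit: c4
```

With commit-per-prompt discipline, the first bad commit maps one-to-one to the prompt that introduced the regression, which is exactly the information you need to re-prompt with tighter scope.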
AI regression loop by AI builder
How often each AI builder ships this error and the pattern that produces it.
| Builder | Frequency | Pattern |
|---|---|---|
| Cursor | Very high | @codebase auto-context pulls files into Composer without naming them |
| Claude Code | High | Session-wide formatting harmonisation; weakens types to silence errors |
| Lovable | High | Broad prompts regenerate whole components and shared layouts |
| Bolt.new | High | Snapshot drift — small prompts mutate files across the tree |
| v0 | Medium | Regenerates entire route segments on UI tweak prompts |
| Replit Agent | Medium | Auto-runs and commits before the human can diff |
| Base44 | Low | Tight scope by design; regressions usually trace to missing test coverage |
Stop AI regression loop recurring in AI-built apps
- Commit after every AI prompt. Squash on merge if history hygiene matters — until then, the noisy log is the recovery path.
- Run your test suite before every prompt. If tests are already red, you are debugging two problems instead of one.
- Run lint and typecheck before every prompt. Baseline errors get attributed to the wrong commit otherwise.
- Scope prompts to a single file by name. "Edit src/lib/auth/callback.ts only" — not "improve auth".
- Maintain an instructions file (CLAUDE.md, .cursorrules, or AGENTS.md) with a short list of protected files and hard prohibitions.
- Add a `// PUBLIC API — DO NOT REGENERATE` header to every file that has shipped and is covered by integration tests.
- Turn on TypeScript strict mode. Regression loops thrive on implicit `any` and silent type widening.
- Add a snapshot or end-to-end smoke test for each critical flow (sign-in, checkout, webhook). Unit tests do not catch the regressions AI tools produce.
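The critical-flow smoke test does not need a framework; a curl loop is enough to run before every prompt. A sketch where the base URL and routes are hypothetical; the checks fire only when RUN_SMOKE is set, so the function can be sourced without side effects:

```shell
BASE="${BASE:-http://localhost:3000}"   # hypothetical local dev server
fail=0
check() {   # check <path> <expected-http-status>
  got=$(curl -s -o /dev/null -w '%{http_code}' "$BASE$1")
  [ "$got" = "$2" ] || { echo "FAIL $1: got $got, want $2"; fail=1; }
}

# Run only when explicitly asked, so sourcing this file does nothing.
if [ -n "$RUN_SMOKE" ]; then
  check /api/health 200
  check /sign-in 200
  check /api/stripe/webhook 405   # GET against a POST-only handler
  exit "$fail"
fi
```

Checking that the webhook rejects a GET is a cheap proxy for "the handler still exists and still routes"; a deeper check would replay a signed test payload.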
Still stuck with AI regression loop?
If the regression loop is costing you shipping velocity or revenue, a fixed-price engagement ships this week:
- More than five regressions a day across sessions
- Core features break, get fixed, break again within the same week
- The team is avoiding specific files because edits cascade unpredictably
- The AI has rewritten the same function three or more times without converging
AI regression loop questions
Why does Cursor keep breaking my working code?
How do I stop Claude Code from reformatting files it shouldn't touch?
Is there a way to lock a file so the AI won't edit it?
Should I commit after every AI prompt?
Why do my tests pass but my app breaks in production?
Can a CLAUDE.md / cursorrules file really prevent regressions?
Ship the fix. Keep the fix.
Emergency Triage restores service in 48 hours. Break the Fix Loop rebuilds CI so this error cannot ship again.
Hyder Shah leads Afterbuild Labs, shipping production rescues for apps built in Lovable, Bolt.new, Cursor, Replit, v0, and Base44. Read our rescue methodology.
AI regression loop experts
If this problem keeps coming back, you probably need ongoing expertise in the underlying stack.