ERR-772
AI regression loop

Appears when: every edit to adjacent code causes a regression — a prompt about feature B breaks feature A, and the next prompt breaks feature C.

AI regression loop — stop Cursor, Claude Code, and Lovable from breaking working code

Your AI coding tool rewrites working code each time it edits something nearby. A prompt about the checkout page breaks the auth hook. The next prompt fixes the hook and breaks the webhook. Same pattern across Cursor, Claude Code, Lovable, and Bolt.

Last updated 18 April 2026 · 12 min read · By Hyder Shah
Direct answer
Your AI keeps breaking working code because (a) prompts have no file-level scope, so the agent feels free to edit whatever is in context, (b) the test suite is not catching the regression because unit tests mock the integration seams where AI tools change behaviour most, and (c) prompt context includes code the agent decides to "clean up" even though you never asked. Immediate fix: revert to the last passing commit, add scope guardrails (explicit @file in Cursor, a CLAUDE.md in Claude Code, "Edit only this component" in Lovable), write a failing test that captures the regression, and only then re-prompt.

Quick fix for AI regression loop

recovery.sh
```bash
# Revert the last passing state, then scope the next prompt

git log --oneline -20                      # find the last green commit
git checkout <sha> -- path/to/broken/file  # restore one file
# or: git reset --hard <sha>               # restore the whole tree (destructive)

git bisect start
git bisect bad HEAD
git bisect good <last-known-green-sha>
# run your smoke test at each step until bisect names the first bad commit

# Then add a failing regression test BEFORE re-prompting
pnpm test -- --run path/to/regression.test.ts
# Only re-prompt once the test is failing and committed.
```
Revert first, bisect second, test third. Never re-prompt an AI without a failing regression test to catch the same break.

Deeper fixes when the quick fix fails

01 · Per-tool tactics: Cursor

Cursor’s regression surface is Composer with @codebase turned on. Turn it off. The default auto-context pulls files the agent will edit without naming them. Use @file explicitly — one or two files maximum per Composer run. Switch to Ask mode for anything diagnostic or exploratory; Edit mode is only for a scoped change you can describe in one sentence.

Create a .cursorrules file at the repo root. List the stable files the agent must not touch (signed webhook handlers, auth callbacks, applied migrations, env schema). Add a prohibition on weakening TypeScript types and on removing hook dependency arrays. Keep the file short — three to five hard rules is the sweet spot. Longer .cursorrules files get truncated from context and ignored. Commit before every Composer run, and if the diff touches a file you did not name, revert and re-prompt with tighter scope.

.cursorrules
```markdown
# .cursorrules — scope + protected files

DO NOT modify these files unless the prompt names them explicitly:
- src/lib/stripe/webhook.ts        # signed handler, covered by integration test
- src/lib/auth/callback.ts         # OAuth redirect, changes break production sign-in
- prisma/migrations/**             # applied migrations are append-only
- src/lib/env.ts                   # zod schema — changes ripple across the app

DO NOT reformat files that are not part of the requested change.
DO NOT weaken TypeScript types to silence an error. Fix the logic.
DO NOT remove useEffect dependency arrays. Fix the dependency instead.
DO NOT delete code you believe is unused unless the prompt says "remove dead code".

When in doubt, ASK instead of EDIT.
```
Cursor reads .cursorrules on every Composer run. Keep it short, list hard prohibitions, name protected files explicitly.
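The "if the diff touches a file you did not name, revert" rule is scriptable. A minimal sketch, assuming you pass the same files you named in the prompt (the `check_scope` name and usage are our convention, not a Cursor feature):

```shell
# check_scope FILE... — succeed only if every file changed since the last
# commit is one of the files you explicitly named in the prompt.
check_scope() {
  local status=0 changed f ok
  while IFS= read -r changed; do
    ok=false
    for f in "$@"; do
      if [ "$changed" = "$f" ]; then ok=true; fi
    done
    if [ "$ok" = false ]; then
      echo "OUT OF SCOPE: $changed" >&2
      status=1
    fi
  done < <(git diff --name-only HEAD)
  return "$status"
}

# After a Composer run scoped to the checkout page:
#   check_scope src/app/checkout/page.tsx || git checkout -- .
```

A non-zero exit means the agent strayed; revert and re-prompt with tighter scope.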

02 · Per-tool tactics: Claude Code

Claude Code’s regression pattern is session-wide refactoring. The agent reads many files at once and treats small formatting differences as invitations to harmonise. The primary defence is a CLAUDE.md at the repo root that lists "do not modify" patterns explicitly — stable public API files, migration directories, and any file with the header // PUBLIC API — DO NOT REGENERATE.

When a session starts going sideways, use the revert command — do not try to prompt your way out. Claude Code keeps a checkpoint per tool call; rolling back two or three steps is faster than three rounds of clarification. Limit each session to a single file where the task allows. Use --no-plan only for tightly scoped tasks where the plan overhead is not worth the round-trip; for anything touching more than one file, keep the plan step so the agent commits to a file list before editing.

CLAUDE.md
```markdown
# CLAUDE.md — repo-level agent rules

## Scope
- One file per session unless explicitly told otherwise.
- Never touch files outside the directory named in the prompt.

## Do not modify
- src/lib/stripe/webhook.ts
- src/lib/auth/**
- prisma/migrations/**
- any file with the header: // PUBLIC API — DO NOT REGENERATE

## Behaviour
- Resolve type errors by fixing logic, not by casting to any or unknown.
- Preserve hook dependency arrays. If a dep is missing, add it — do not remove the array.
- Do not reformat files you were not asked to change.
- If a test is failing, fix the code under test — do not weaken the test.
```
CLAUDE.md is read at session start. Keep it short and categorical — scope, do-not-modify list, behaviour rules.

03 · Per-tool tactics: Lovable

Lovable’s regression surface is the broad chat prompt. A prompt like "add a filter to the product list" regenerates the whole component and often the shared layout that hosts it. The discipline is to use chat-based edits with narrow scope: "Edit only the ProductList component. Do not regenerate the page layout or the parent route." Lovable respects scope instructions more reliably than most tools do, as long as the prompt says it explicitly.

Add a comment block at the top of any component that has stabilised: // DO NOT REGENERATE — stable, covered by tests. Lovable will usually leave the file alone. Use the branch feature if your Lovable plan has it — branch before any non-trivial edit so reverts are a single click. Commit-per-prompt via the GitHub sync integration is non-negotiable; without it, you are flying blind.

components/ProductList.tsx
```typescript
// DO NOT REGENERATE — stable, covered by tests
// Any change must keep the exported signature: ProductList(props: ProductListProps)
// Last audited: 2026-04-15 — Hyder Shah
```
Top-of-file guardrail comment. Lovable and other tools respect it when paired with a rules file that references the pattern.
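The header only helps if something checks it. A small helper can list every file in the current diff that carries the guardrail comment, so those files can be reverted before committing (the `regen_guard` name is our convention):

```shell
# regen_guard — list changed files that carry the DO NOT REGENERATE
# header, so they can be reverted before the commit lands.
regen_guard() {
  git diff --name-only HEAD | while IFS= read -r f; do
    if [ -f "$f" ] && grep -q "DO NOT REGENERATE" "$f"; then
      echo "$f"
    fi
  done
}

# Revert every guarded file the last prompt touched:
#   regen_guard | xargs -r git checkout HEAD --
```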

04 · Per-tool tactics: Bolt.new

Bolt.new snapshots the whole tree and replays prompts across it. Small prompts pile up, the snapshot drifts, and three components you did not touch have new props. Lock working files before iterating — Bolt exposes a per-file lock in the UI; use it on anything that has shipped.

When a regression hits, roll back to the last snapshot — do not prompt Bolt into reversing the edit. The prompt path rarely restores the exact previous state. Keep the component tree shallow: deeper trees regenerate less predictably because more files land in context. If you catch yourself two or three prompts deep with regressions mounting, stop, snapshot, and switch to a more scoped tool.

When to bring in a developer

There is a threshold past which prompting your way out of the regression loop stops being the right move. If you are seeing more than five regressions a day, if core features break and return repeatedly across sessions, if your team is quietly avoiding certain files because every edit cascades, or if the AI has rewritten the same function three or more times without converging — you are past that threshold. The loop is no longer a tooling problem; it is an architectural one.

At that point the cheapest move is a rescue engagement: a developer walks the git log with you, identifies the last architecturally sound commit, extracts the salvageable work from subsequent commits, builds the test harness that should have existed from the start, and hands back a branch you can prompt against safely. Most rescues ship in forty-eight hours to a week depending on depth. The alternative — five more days of prompt-revert-prompt with the same patterns recurring — is both more expensive and less likely to exit the loop.

Why AI-built apps hit AI regression loop

The regression loop has a predictable shape. You ship a feature, the AI gets it working, you commit. A week later a new prompt touches an adjacent file. The agent pulls the earlier file into context because they share an import. The agent "improves" both files. The first feature breaks. You prompt the agent to fix the broken feature. The agent edits the adjacent file again. The second feature breaks. At this point you are in the loop and every prompt you run deepens it because the working state is three, four, five prompts behind in git history.

The loop runs until you break the pattern mechanically: revert to a known-good commit, add a failing regression test, scope the next prompt narrowly, and ship a small diff. Most rescue work starts by walking a founder back through their git log until we find a commit that still passes the smoke tests — sometimes that is today, sometimes it is last Tuesday — and branching forward from there with stricter rules of engagement.

AI regression loop by AI builder

How often each AI builder ships this error and the pattern that produces it.

AI builder × AI regression loop
| Builder | Frequency | Pattern |
| --- | --- | --- |
| Cursor | Very high | @codebase auto-context pulls files into Composer without naming them |
| Claude Code | High | Session-wide formatting harmonisation; weakens types to silence errors |
| Lovable | High | Broad prompts regenerate whole components and shared layouts |
| Bolt.new | High | Snapshot drift — small prompts mutate files across the tree |
| v0 | Medium | Regenerates entire route segments on UI tweak prompts |
| Replit Agent | Medium | Auto-runs and commits before the human can diff |
| Base44 | Low | Tight scope by design; regressions usually trace to missing test coverage |


Still stuck with AI regression loop?

Emergency triage · $299 · 48h turnaround
We restore service and write the root-cause report.

If the regression loop is costing you shipping velocity or revenue, a fixed-price engagement ships this week:

  • More than five regressions a day across sessions
  • Core features break, get fixed, break again within the same week
  • The team is avoiding specific files because edits cascade unpredictably
  • The AI has rewritten the same function three or more times without converging
Start the triage →

AI regression loop questions

Why does Cursor keep breaking my working code?
Cursor rewrites code it did not need to touch because @codebase auto-context pulls related files into the prompt, and Composer treats every file in context as a candidate for edits. A prompt about the checkout page ends up rewriting the auth hook because both import the same user context. The fix is to turn off auto-context, use @file tags explicitly, and use Ask mode — not Edit — for anything diagnostic. Commit before every Composer run so a regression is a one-command revert.
How do I stop Claude Code from reformatting files it shouldn't touch?
Claude Code reformats files because the agent treats 'tidying' as part of the task and because prompts rarely specify which files are off-limits. Add a CLAUDE.md at the repo root with a 'do not modify' list: stable public API files, signed webhook handlers, migration files, generated code. Keep sessions scoped to a single file where possible, and use the revert command the moment a diff touches a file you did not ask about. A single broad prompt that says 'improve X' gives license to rewrite anything nearby.
Is there a way to lock a file so the AI won't edit it?
No AI coding tool enforces a true file lock today, but you can approximate one. Add a top-of-file comment block that reads 'DO NOT REGENERATE — stable public API, covered by regression tests' and reference it in your CLAUDE.md or .cursorrules. Bolt.new offers a per-file lock in the UI. For everything else, rely on pre-commit hooks that fail if a file in the protected list has changed without an explicit opt-in flag. The mechanical lock is git: commit before every prompt, diff after, revert if a protected file moved.
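A minimal sketch of that pre-commit hook, assuming a `.protected-files` list at the repo root (one path per line) and an `ALLOW_PROTECTED=1` opt-in variable; both conventions are ours, not a git feature. Run the function once inside the repo to install it:

```shell
# install_precommit_guard — write a pre-commit hook that blocks commits
# touching any path listed in .protected-files, unless ALLOW_PROTECTED=1.
install_precommit_guard() {
  mkdir -p .git/hooks
  cat > .git/hooks/pre-commit <<'HOOK'
#!/usr/bin/env bash
set -eu
# Explicit opt-in for intentional changes to protected files.
if [ "${ALLOW_PROTECTED:-0}" = "1" ]; then exit 0; fi
[ -f .protected-files ] || exit 0
status=0
while IFS= read -r protected; do
  # Exact-path match; extend to globs if your list needs them.
  if [ -n "$protected" ] && git diff --cached --name-only | grep -qx -- "$protected"; then
    echo "pre-commit: $protected is protected; set ALLOW_PROTECTED=1 to override" >&2
    status=1
  fi
done < .protected-files
exit "$status"
HOOK
  chmod +x .git/hooks/pre-commit
}
```

The override is then deliberate: `ALLOW_PROTECTED=1 git commit -m "intentional webhook change"`.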
Should I commit after every AI prompt?
Yes. Commit-per-prompt is the single highest-leverage discipline against regression loops. The cost is a noisy git log; the benefit is that every regression becomes a one-command revert and every bisect lands in seconds. Squash on merge if you care about history hygiene. The alternative — running five prompts and trying to untangle which one broke the payment flow — is the exact pattern that turns a 90-minute task into a two-day rescue. A terse message like 'AI: added checkout validation' is enough.
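A small alias takes the friction out of commit-per-prompt. The sketch below runs as a throwaway-repo demo so it is safe to paste anywhere; in your own repo only the `git config` line matters, and the `aip` name is an arbitrary choice.

```shell
# Throwaway-repo demo of a commit-per-prompt alias.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you

# `git aip <message>` stages everything and commits with an "AI:" prefix
# so prompt-sized commits are easy to spot (and squash) later.
git config alias.aip '!f() { git add -A && git commit -qm "AI: $*"; }; f'

echo "export const x = 1" > checkout.ts
git aip added checkout validation

git log -1 --oneline   # shows: <sha> AI: added checkout validation
```

Add `--global` to the `git config` line to set the alias once for every repo.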
Why do my tests pass but my app breaks in production?
Unit tests cover pure functions; production bugs live in the integration seams AI tools rewrite most eagerly — auth redirects, webhook signature verification, hydration boundaries, env-var loading, database RLS. Tests that mock the network never exercise those seams. Add at least one end-to-end smoke that logs in a real user, triggers a payment webhook, and reads back the expected row. The regression loop thrives on the gap between 'compiles and unit-tests pass' and 'actually works against production infra'.
Can a CLAUDE.md / .cursorrules file really prevent regressions?
It prevents the predictable ones. An instructions file cannot stop an AI from making a genuine reasoning error, but it will stop the common regression patterns: reformatting stable files, weakening TypeScript types to silence errors, deleting hook dependency arrays, switching implicit returns to explicit ones, 'cleaning up' dead code that was load-bearing. A useful rules file lists three to five hard prohibitions, names protected files explicitly, and restates the public API contracts the agent must not alter. Longer files get ignored.
Next step

Ship the fix. Keep the fix.

Emergency Triage restores service in 48 hours. Break the Fix Loop rebuilds CI so this error cannot ship again.

About the author

Hyder Shah leads Afterbuild Labs, shipping production rescues for apps built in Lovable, Bolt.new, Cursor, Replit, v0, and Base44. Read our rescue methodology.

AI regression loop experts

If this problem keeps coming back, you probably need ongoing expertise in the underlying stack.
