afterbuild/ops
ERR-266 · Performance · DB
TTFB 1.4s · 101 queries fired on a list page

Appears when: list pages with related data (tasks + author, posts + comments) take 1-8 seconds to render in production

N+1 query slow page load

One parent query plus N child queries, one per row. 100 rows at 10 ms per round trip is a full second of server-side wait before anything ships to the browser.

Last updated 17 April 2026 · 6 min read · By Hyder Shah
Direct answer
An N+1 query slow page load is caused by AI-generated code fetching related data one row at a time. Replace the loop with a single batched query: Prisma include: { author: true }, Supabase .select("*, author:users(*)"), or batch the child lookups with .in("id", ids). Query count drops from 101 to 1-2, and TTFB from ~1,400 ms to ~60 ms.

Quick fix for N+1 query

lib/db/list-tasks.ts
```typescript
// ❌ N+1 — 101 queries for 100 tasks
const tasks = await prisma.task.findMany();
for (const task of tasks) {
  task.author = await prisma.user.findUnique({
    where: { id: task.authorId },
  });
}

// ✅ Prisma include — 2 queries total
const tasks = await prisma.task.findMany({
  include: { author: true },
});

// ✅ Supabase nested select — 1 query
const { data } = await supabase
  .from("tasks")
  .select("*, author:users(id, name, avatar_url)")
  .order("created_at", { ascending: false });
```
Replace the per-item fetch with include or nested select — 101 queries collapse to 2 or 1

Deeper fixes when the quick fix fails

01 · Batch parent-child queries with .in()

lib/db/list-tasks.ts
```typescript
// Supabase: two queries, one round trip per table
const { data: tasks } = await supabase.from("tasks").select("*");
const authorIds = [...new Set(tasks!.map((t) => t.author_id))];

const { data: authors } = await supabase
  .from("users")
  .select("id, name")
  .in("id", authorIds);

const byId = new Map(authors!.map((a) => [a.id, a]));
const enriched = tasks!.map((t) => ({ ...t, author: byId.get(t.author_id) }));
```
When a nested select is not possible, batch the child fetch with .in()

02 · Add Dataloader for GraphQL resolvers

graphql/loaders.ts
```typescript
import DataLoader from "dataloader";

const userLoader = new DataLoader(async (ids: readonly string[]) => {
  const users = await prisma.user.findMany({
    where: { id: { in: ids as string[] } },
  });
  // DataLoader requires results in the same order as the requested ids
  return ids.map((id) => users.find((u) => u.id === id));
});

// In a resolver:
// author: (task) => userLoader.load(task.authorId)
```

03 · Add an index on the foreign key

supabase/migrations/index_author.sql
```sql
-- A join on an unindexed foreign key is fast for 100 rows and dead at 10,000
create index tasks_author_id_idx on public.tasks (author_id);
```

Why AI-built apps hit N+1 query

Large language models generate data access one item at a time. Asked for “a list of tasks with their author names,” the model writes the list query first and the author lookup second. The simplest stitch is a for loop or .map that fetches each author by id. The result is one parent query returning N tasks and N child queries fetching a single author each. On a page with 100 tasks you get 101 round trips — the classic N+1.

The pattern is invisible in development. Local database round trips are sub-millisecond because Postgres is on the same machine. Page loads in 200ms. In production on Vercel, each query to Supabase or Neon crosses the public internet and costs 5 to 20 milliseconds. Multiply by 100 and you have a 1 to 2 second tax on every page view. Add a slow RLS evaluation or a cold function and you hit the 8-to-10-second load times that drive users off.
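The dev-versus-production gap is pure round-trip arithmetic. A toy model, with assumed latencies of 1 ms for a local Postgres hop and 10 ms for a Vercel-to-Supabase hop (the function name and numbers are illustrative, not measured):

```typescript
// Page wait ≈ query count × per-query round trip, when queries run serially.
function pageLatencyMs(queryCount: number, roundTripMs: number): number {
  return queryCount * roundTripMs;
}

console.log(pageLatencyMs(101, 1)); // local dev: 101 ms, feels instant
console.log(pageLatencyMs(101, 10)); // production: 1010 ms before first byte
console.log(pageLatencyMs(2, 10)); // batched include: 20 ms
```

The same 101 queries cost ten times more the moment the database moves across the network, which is why the pattern only surfaces after deploy.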

Cursor, Bolt, Lovable, Base44, and Replit all produce this shape by default. GraphQL and tRPC codebases are especially prone because each resolver fetches its own dependencies. The fix is structural: replace per-item fetches with a single batched query. AI builders rarely do that, because the unoptimized version works locally and the model has no signal telling it otherwise.

Diagnose N+1 query by failure mode

Measure against a cold function in the same Vercel region as your database. Numbers reflect a 100-row list page with a single one-to-one author join.

| Shape | Queries | TTFB (100 rows) |
| --- | --- | --- |
| N+1 loop | 101 | ~1,400 ms |
| Prisma include | 2 | ~80 ms |
| Supabase nested select | 1 | ~60 ms |
| Raw SQL join | 1 | ~50 ms |

N+1 query by AI builder

How often each AI builder ships this error and the pattern that produces it.

AI builder × N+1 query
| Builder | Frequency | Pattern |
| --- | --- | --- |
| Lovable | Every list page | Generates items.map(async) for related data |
| Bolt.new | Common | Fetches each child inside the page component |
| v0 | Common | Uses Promise.all(items.map(fetch)) — still N queries |
| Cursor | Common | tRPC resolver per field; no DataLoader |
| Base44 | Sometimes | Ships unindexed foreign keys — join slows at scale |
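The Promise.all row deserves a note: wrapping the per-item fetches in Promise.all makes them concurrent, which hides latency, but the database still receives one statement per row. A toy sketch with a fake fetch function (fetchAuthor and loadPage are hypothetical names, not from any builder's output):

```typescript
// Parallel fan-out still issues one query per row.
let queries = 0;
const fetchAuthor = async (id: number) => {
  queries += 1; // stand-in for a real database round trip
  return { id, name: `user-${id}` };
};

async function loadPage(): Promise<number> {
  queries = 0;
  const items = Array.from({ length: 100 }, (_, i) => ({ authorId: i }));
  await Promise.all(items.map((t) => fetchAuthor(t.authorId)));
  return queries; // 100: better wall clock, same load on the database
}

loadPage().then((n) => console.log(n, "queries"));
```

Connection pools make this worse, not better: 100 concurrent statements can exhaust a small serverless pool even when the page feels fast.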


Still stuck with N+1 query?

Emergency triage · $299 · 48h turnaround
We restore service and write the root-cause report.
Start the triage →

N+1 query questions

What is an N+1 query and why does it kill page load?
N+1 means one initial query returns N rows and the code then fires one follow-up query per row. A list of 100 tasks with an author lookup becomes 101 database round trips. Each round trip adds 5-20ms of latency on Vercel-to-Neon or Vercel-to-Supabase. 100 hops at 10ms each is a full second before data renders. AI-generated code produces this pattern by default.
How do I detect N+1 queries in an AI-built app?
Turn on Prisma's query log with log: ['query'] on the client. Supabase: Dashboard → Logs → API Logs. Reload the page. If you see the same query repeated with only the id parameter changing, it is N+1. APM tools like Sentry and Datadog flag it automatically, but the logs are free.
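The “same query, different id” signal in those logs can also be checked mechanically. A minimal sketch that collapses literals and counts repeated statement shapes (the function names, regexes, and threshold are illustrative, not from any library):

```typescript
// Replace quoted strings and numeric literals so repeated statements
// that differ only in their parameters share a single shape.
function normalize(sql: string): string {
  return sql.replace(/'[^']*'/g, "?").replace(/\b\d+\b/g, "?");
}

// Any shape repeated past the threshold in one page load is an N+1 suspect.
function suspectShapes(log: string[], threshold = 10): Map<string, number> {
  const counts = new Map<string, number>();
  for (const q of log) {
    const shape = normalize(q);
    counts.set(shape, (counts.get(shape) ?? 0) + 1);
  }
  return new Map([...counts].filter(([, n]) => n >= threshold));
}
```

Feed it the statements from one page reload; a single shape with a count near your row count is the loop to fix.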
Is Prisma include faster than a manual join?
Prisma's include generates two queries: one for the parent, one for the children with a WHERE IN clause. 2 round trips for any N, not N+1. A raw SQL join is 1 round trip and is faster at high volume because the database streams the joined result set. For under 1,000 rows the difference is negligible. For over 10,000 rows, write the join. Prisma 5+ supports a relationJoins preview flag that collapses to a single query.
What does DataLoader do that include does not?
Dataloader batches requests within a single tick of the event loop. It is the standard solution for GraphQL resolvers where each field requests its own data and you cannot restructure the query. Dataloader collects all ids in one tick, issues a batched query, and hands the result back to each resolver. For REST or tRPC, include is simpler. For GraphQL, Dataloader is idiomatic.
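The one-tick batching trick is small enough to sketch. This is a toy reimplementation to show the mechanism, not the dataloader package itself, and it omits caching and error handling:

```typescript
// Collect every load(key) made in the current tick, then issue one
// batched fetch on the microtask queue after the tick ends.
type BatchFn<K, V> = (keys: K[]) => Promise<V[]>;

class TinyLoader<K, V> {
  private queue: { key: K; resolve: (v: V) => void }[] = [];
  constructor(private batchFn: BatchFn<K, V>) {}

  load(key: K): Promise<V> {
    return new Promise((resolve) => {
      if (this.queue.length === 0) {
        // First load this tick: schedule a single flush.
        queueMicrotask(() => this.flush());
      }
      this.queue.push({ key, resolve });
    });
  }

  private async flush() {
    const batch = this.queue;
    this.queue = [];
    // The batch function must return values in the same order as the keys.
    const values = await this.batchFn(batch.map((b) => b.key));
    batch.forEach((b, i) => b.resolve(values[i]));
  }
}

// Usage: three loads in one tick become a single batched call.
let batches = 0;
const loader = new TinyLoader<number, string>(async (ids) => {
  batches += 1;
  return ids.map((id) => `user-${id}`);
});

Promise.all([loader.load(1), loader.load(2), loader.load(3)]).then((users) => {
  console.log(users, "batches:", batches); // one batch serves all three loads
});
```

This is why DataLoader fits GraphQL: resolvers call load independently, yet the database sees one WHERE IN query per tick.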
How much does an N+1 fix cost?
Our Security Audit + Performance Pass starts at $499 and covers the most common N+1 patterns in Prisma, Drizzle, and Supabase-js code. For a single hotspot page, triage is often under $299. Full application rewrite of data access: Break the Fix Loop at $3,999 covers it end-to-end with new tests to prevent regression.
Next step

Ship the fix. Keep the fix.

Emergency Triage restores service in 48 hours. Break the Fix Loop rebuilds CI so this error cannot ship again.

About the author

Hyder Shah leads Afterbuild Labs, shipping production rescues for apps built in Lovable, Bolt.new, Cursor, Replit, v0, and Base44. Read about our rescue methodology.
