afterbuild/ops
ERR-256/Vercel · Functions
FUNCTION_INVOCATION_TIMEOUT · Task timed out after 10.00 seconds

Appears when a serverless or Edge route runs longer than the plan's wall-clock cap — 10s on Hobby, 60s default on Pro, 25s on Edge.

Vercel function timeout error

Vercel kills any function that runs past its plan cap. OpenAI calls, slow DB joins, and heavy SSR routes all hit this. Fix by streaming, queuing, or moving to a longer runtime.

Last updated 17 April 2026 · 7 min read · By Hyder Shah
Direct answer
Vercel function timeout means the handler ran past the plan's wall-clock cap. Hobby is 10s, Pro is 60s by default and 300s if configured, Edge is 25s. Fix by returning a streaming Response, offloading long work to a queue (Inngest, QStash, Trigger.dev), or extending with export const maxDuration = 60 on Pro.

Quick fix for Vercel function timeout

app/api/chat/route.ts
typescript
// app/api/chat/route.ts — stream OpenAI responses so the function stays under the timeout
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Edge runtime has a hard 25s cap but starts cold faster and streams natively
export const runtime = "edge";
// On Node runtime you can extend to 60s on Pro: export const maxDuration = 60;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const stream = await openai.chat.completions.create({
    model: "gpt-4o",
    messages,
    stream: true,
  });

  const encoder = new TextEncoder();
  const body = new ReadableStream({
    async start(controller) {
      for await (const chunk of stream) {
        const text = chunk.choices[0]?.delta?.content ?? "";
        if (text) controller.enqueue(encoder.encode(text));
      }
      controller.close();
    },
  });

  return new Response(body, {
    headers: {
      "content-type": "text/plain; charset=utf-8",
      "cache-control": "no-store",
    },
  });
}
Edge runtime streaming handler — cold-starts in under 50ms, and because the timeout clock counts time to first byte, the stream outlives the 10s Hobby cap

Deeper fixes when the quick fix fails

01 · Move PDF generation to Inngest background queue

app/api/reports/route.ts
typescript
// app/api/reports/route.ts — enqueue instead of blocking
import { inngest } from "@/lib/inngest";
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  const { reportId } = await req.json();
  // enqueue — returns in milliseconds
  const { ids } = await inngest.send({
    name: "reports/generate.pdf",
    data: { reportId },
  });
  return NextResponse.json({ jobId: ids[0] });
}

// inngest/reports.ts — the slow job runs on Inngest's infra
import { inngest } from "@/lib/inngest";

export const generateReport = inngest.createFunction(
  { id: "reports-generate-pdf", retries: 3 },
  { event: "reports/generate.pdf" },
  async ({ event, step }) => {
    const data = await step.run("fetch-data", () => fetchReportData(event.data.reportId));
    const pdf = await step.run("render-pdf", () => renderPdf(data));
    await step.run("upload", () => uploadToStorage(event.data.reportId, pdf));
    return { ok: true };
  },
);
Enqueue instead of block — route returns in ms, Inngest runs the minutes-long job separately

02 · Extend Node runtime on Pro with maxDuration

app/api/slow-report/route.ts
typescript
// app/api/slow-report/route.ts — only works on Pro plan
export const runtime = "nodejs";
export const maxDuration = 60; // max 300 on Pro

export async function GET() {
  const result = await slowQuery(); // takes up to 45s
  return Response.json(result);
}
Valid on Pro. Silently capped to 10s on Hobby — confirm plan in Vercel dashboard
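If you prefer configuration over code, Vercel also accepts a per-route cap in vercel.json — a sketch, assuming an App Router project on Pro (the path must match your actual route file):

```json
{
  "functions": {
    "app/api/slow-report/route.ts": {
      "maxDuration": 300
    }
  }
}
```

The route segment export and the vercel.json entry do the same thing; pick one so the two values cannot drift apart.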

03 · Warm Neon compute with a cron ping

app/api/cron/warm-db/route.ts
typescript
// app/api/cron/warm-db/route.ts — keep Neon compute awake during business hours
import { db } from "@/lib/db";

export const runtime = "nodejs";
export const maxDuration = 10;

export async function GET(req: Request) {
  if (req.headers.get("authorization") !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response("unauthorized", { status: 401 });
  }
  await db.execute("select 1");
  return Response.json({ ok: true });
}

// vercel.json — run every minute, 9am-6pm UTC
// {
//   "crons": [
//     { "path": "/api/cron/warm-db", "schedule": "* 9-18 * * *" }
//   ]
// }
Prevents cold Neon starts from eating your 10-second budget during peak hours

Why AI-built apps hit Vercel function timeout

Vercel runs Node and Edge functions on AWS Lambda and Cloudflare Workers respectively. Both platforms enforce a wall-clock timeout. Vercel exposes this via plan tiers: 10 seconds on Hobby, 60 seconds default on Pro (configurable up to 300 with maxDuration), 25 seconds on Edge regardless of plan. Once the cap is hit, the function process is killed and Vercel returns a 504 with FUNCTION_INVOCATION_TIMEOUT. The client sees a gateway error; no body, no partial data.

AI builders scaffold OpenAI handlers that call chat.completions.create without streaming. A short completion takes two seconds and works in preview. A real product question with tool use takes fifteen to forty seconds and times out on Hobby every time. The fix is streaming — you return a Response backed by a ReadableStream. The function returns the first byte in milliseconds; Vercel counts that as "done" for the purposes of the timeout and the client receives tokens until the model finishes.
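On the client, the streamed body is consumed incrementally rather than awaited as JSON. A minimal sketch, assuming the /api/chat route above (the readChat name and error handling are illustrative):

```typescript
// Hypothetical client for a streaming route — reads tokens as they arrive
// instead of waiting for the full completion.
async function readChat(
  messages: { role: string; content: string }[],
): Promise<string> {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ messages }),
  });
  if (!res.ok || !res.body) throw new Error(`request failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let full = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // accumulate; a real UI would append each chunk to the DOM here
    full += decoder.decode(value, { stream: true });
  }
  return full;
}
```

A UI that appends each chunk as it lands is what makes the stream feel instant even when the model takes thirty seconds to finish.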

The second source is slow database work. Cold Neon compute wakes in three to five seconds. A complex join over a few million rows can run for ten seconds. Any function that hits both a cold Neon branch and a complex query will time out on the first request and succeed on the second. The fix is to warm the database with a cron ping, add appropriate indexes, or move the query to a background job.
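When the query cannot move to a queue, failing fast beats burning the whole budget. A sketch under the assumption of a Node route on Hobby — withDeadline and slowQuery are illustrative names, not a Vercel API:

```typescript
// Hypothetical fail-fast wrapper — races a promise against a deadline so a
// cold-start query fails fast instead of consuming the whole function budget.
function withDeadline<T>(work: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`deadline of ${ms}ms exceeded`)), ms);
  });
  // whichever settles first wins; always clear the timer afterwards
  return Promise.race([work, deadline]).finally(() => clearTimeout(timer));
}

// Stand-in for the real query — replace with your DB call.
const slowQuery = async () => ({ rows: [] as unknown[] });

export async function GET() {
  try {
    // leave ~2s headroom under the 10s Hobby cap
    const result = await withDeadline(slowQuery(), 8_000);
    return Response.json(result);
  } catch {
    // fail fast with a retry hint instead of letting Vercel return a 504
    return new Response("database warming up, retry shortly", {
      status: 503,
      headers: { "retry-after": "5" },
    });
  }
}
```

A 503 with retry-after lets the client retry the now-warm database; a 504 from Vercel gives it nothing to work with.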

The third pattern is long-running work that never should have lived inside an HTTP request: PDF rendering, email blasts, image generation, data exports. None of those belong in a serverless function. Move them to Inngest, QStash, or Trigger.dev. Your API route enqueues the job and returns immediately. The queue service runs the work on dedicated infrastructure with no 60-second cap, retries on failure, and calls a webhook when done.
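The enqueue pattern needs a way for the client to learn when the job finishes. A hedged sketch of the polling side — the /api/reports/status path, the JobStatus shape, and the pollJob helper are all illustrative, assuming your queue function writes its state somewhere the status endpoint can read:

```typescript
// Hypothetical job-status shape your status endpoint would return.
type JobStatus = { state: "queued" | "running" | "done" | "failed"; url?: string };

// Poll the status endpoint until the job settles or we give up.
async function pollJob(
  jobId: string,
  { intervalMs = 2_000, maxAttempts = 60 } = {},
): Promise<JobStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(`/api/reports/status?jobId=${encodeURIComponent(jobId)}`);
    const status: JobStatus = await res.json();
    if (status.state === "done" || status.state === "failed") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`job ${jobId} did not finish within ${maxAttempts} polls`);
}
```

Polling is the simplest transport; a webhook or server-sent events endpoint works the same way once the job's state lives outside the request.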

Vercel function timeout by AI builder

How often each AI builder ships this error and the pattern that produces it.

AI builder × Vercel function timeout
Builder | Frequency | Pattern
Lovable | Every AI app | Uses chat.completions.create without stream: true
Bolt.new | Common | Generates PDFs or sends bulk emails inside an API route
v0 | Common | SSR-renders huge pages with synchronous DB calls
Cursor | Sometimes | Sets maxDuration = 60 on a Hobby plan — silently capped to 10s
Replit Agent | Rare | Opens a raw Postgres client per request — cold start adds 3-5s


Stop Vercel function timeout recurring in AI-built apps

Still stuck with Vercel function timeout?

Emergency triage · $299 · 48h turnaround
We restore service and write the root-cause report.
start the triage →

Vercel function timeout questions

What is the default Vercel function timeout?
On the Hobby plan, serverless (Node) functions time out at 10 seconds. The Pro plan raises this to 60 seconds by default and up to 300 seconds if explicitly configured. Edge functions run on a different runtime with a 25-second wall-clock limit regardless of plan. Any route whose handler exceeds the plan cap is killed and returns a 504 gateway timeout to the client.
Why does my OpenAI call always time out on Vercel?
OpenAI's gpt-4 and gpt-4o models regularly take 15 to 40 seconds to finish a long completion. A serverless function on Hobby dies at 10 seconds. Even on Pro with 60 seconds, large prompts or tool loops blow past. The fix is streaming responses. Return Response with a ReadableStream; the Vercel runtime counts time-to-first-byte not full completion, so the client receives tokens as they arrive and the function technically finished within the limit.
How do I run a job that needs to take two minutes?
Do not run it inside a Vercel function. Move the work to a background queue. Inngest, QStash, and Trigger.dev all take a webhook and run the job on their infrastructure, then call your webhook when done. Your Vercel function returns in under a second. The queue handles retries, concurrency, and long-running execution. This is the canonical pattern for PDF generation, email campaigns, and anything touching a slow third-party API.
Is the Edge runtime faster than the Node runtime on Vercel?
Edge cold starts are much faster (under 50ms vs up to 1 second for Node) and Edge has a 25-second wall-clock cap that does not depend on plan. Edge cannot use Node-only packages, filesystem access, or long-lived database connections. For read-heavy API routes with HTTP fetches and no Prisma or Node.js crypto, Edge is an easy win. For anything holding a Postgres connection, stay on Node and move the slow work to a queue.
How long does a Vercel function timeout fix take?
Thirty minutes to convert an OpenAI handler to streaming. Two hours to move a background job to Inngest or QStash and wire up status polling. A day if you need to refactor five API routes into a queue-driven architecture. Emergency Triage at $299 covers the streaming conversion plus a Playwright test that hits the longest route in CI with a realistic payload.
Next step

Ship the fix. Keep the fix.

Emergency Triage restores service in 48 hours. Break the Fix Loop rebuilds CI so this error cannot ship again.

About the author

Hyder Shah leads Afterbuild Labs, shipping production rescues for apps built in Lovable, Bolt.new, Cursor, Replit, v0, and Base44. Read our rescue methodology.
