afterbuild/ops
§ PLATFORM/mcp-developer

What breaks when you ship an MCP app

MCP developers who expose your stack to every LLM client: one MCP server, plugged into Claude Desktop, Cursor, Windsurf, ChatGPT, and Gemini. Auth handled, schemas tuned for tool-calling accuracy, transport picked to fit your deployment.

48%
AI code vulnerability rate (Veracode 2025)
6
MCP problem pages indexed
48h
Rescue diagnostic SLA
Quick verdict

MCP engagements cover the six places custom Model Context Protocol servers stall without experienced engineers: teams building separate integrations per AI client (3–5× the work), brittle tool descriptions that cause the LLM to call the wrong tool or fill the wrong arguments, MCP servers deployed without auth (or with auth that doesn't respect tenant scope), wrong transport choice for the deployment (stdio vs. HTTP vs. SSE), schema hallucinations where the LLM invents parameters, and no rate limiting so one runaway agent takes the server down. MCP is the emerging standard — Claude, Cursor, Windsurf, ChatGPT, Gemini all support it. Built right, one server serves every client.

§ FAILURES/every way it ships broken

Every way MCP ships broken code

MCP (Model Context Protocol) is how you expose tools, data, and prompts to LLM clients in a portable way — write once, every MCP-compatible AI client can call it. The failure mode isn't 'MCP is hard'; it's that production MCP servers need auth, rate limits, audit logging, and schemas that the tutorial examples skip. This page is for hiring senior MCP engineers who have shipped production servers against every one of those failure modes.

E-01✕ FAIL

Separate integrations per AI client

Teams build one connector for Claude, another for Cursor, another for ChatGPT — 3× the engineering time, 3× the maintenance. One MCP server replaces the lot. We migrate existing per-client code to a single MCP surface when that's the right shape.

E-02✕ FAIL

Brittle tool descriptions

LLM tool-calling accuracy lives in the tool description. Vague descriptions cause the model to call the wrong tool, fill the wrong argument, or ignore the tool entirely. We rewrite descriptions against real traces, add examples, and tune until the accuracy shows up in evals.
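The before/after below sketches what that rewrite looks like in practice. Tool names, fields, and descriptions are illustrative (MCP-style tool-definition dicts, not from any real server):

```python
# Hypothetical before/after for a single tool definition.

vague_tool = {
    "name": "search",
    "description": "Searches stuff.",  # model can't tell when (or how) to call this
}

sharpened_tool = {
    "name": "search_orders",
    # Says what it does, when to use it, and what a good argument looks like.
    "description": (
        "Search customer orders by free-text query. Use when the user asks "
        "about a past purchase, refund, or shipment. "
        "Example query: 'refund for order placed last Tuesday'."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "Free-text search over order history.",
            },
            "limit": {
                "type": "integer",
                "description": "Max results to return (default 10).",
            },
        },
        "required": ["query"],
        "additionalProperties": False,  # reject parameters the model invents
    },
}
```

The description carries three things the model needs: what the tool does, when to reach for it, and a concrete example argument. Evals against real traces tell you when it's tuned enough.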

E-03✕ FAIL

No auth on the MCP server

Some tutorials show stdio-transport MCP servers running unauthenticated — fine for personal Claude Desktop use; a security hole in any multi-user or production deployment. We default to API-key auth (works everywhere) and deploy OAuth 2.0 + PKCE where the client supports it (Claude Desktop, Cursor, ChatGPT Desktop).

E-04✕ FAIL

Wrong transport choice

stdio is right for desktop-local tool access. HTTP is right for server-hosted production. SSE is right when the tool streams partial results. Picking wrong locks you out of the deployment shape you need; we pick on the scoping call based on client mix and infra.

E-05✕ FAIL

Schema hallucinations

Without strict JSON Schema validation and good descriptions, the LLM invents optional parameters or passes the wrong shapes. We use Zod (Node) or Pydantic (Python) for strict schemas, validate every call, and return structured errors the model can recover from.
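In production this is Zod or Pydantic; the stdlib sketch below shows the shape of the idea — validate every call against the declared schema and hand back a structured error the model can act on, instead of a stack trace. Error names and the schema subset are illustrative:

```python
def validate_call(args: dict, schema: dict):
    """Return (ok, error). Covers a tiny subset of JSON Schema:
    allowed keys, required keys, and basic scalar types."""
    props = schema.get("properties", {})

    # Reject parameters the model invented.
    unknown = set(args) - set(props)
    if unknown and not schema.get("additionalProperties", True):
        return False, {"error": "unknown_params", "params": sorted(unknown),
                       "hint": f"Allowed parameters: {sorted(props)}"}

    # Reject calls missing required parameters.
    missing = [k for k in schema.get("required", []) if k not in args]
    if missing:
        return False, {"error": "missing_params", "params": missing}

    # Reject wrong shapes, naming the expected type so the model can retry.
    type_map = {"string": str, "integer": int,
                "number": (int, float), "boolean": bool}
    for key, value in args.items():
        expected = type_map.get(props.get(key, {}).get("type"))
        if expected and not isinstance(value, expected):
            return False, {"error": "wrong_type", "param": key,
                           "expected": props[key]["type"]}
    return True, None
```

The structured error is the recovery path: given `unknown_params` plus the allowed list, the model usually self-corrects on the next call.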

E-06✕ FAIL

No rate limiting or audit log

One buggy agent can burn through database read quotas or hammer an internal API. Production MCP needs rate limits (per user, per tool, per minute) and an audit log of every call — timestamp, user, tool, inputs, result-size. We wire both by default.

§ RESCUE/from your app to production

From your MCP app to production

The rescue path we run on every MCP engagement. Fixed price, fixed scope, no hourly surprises.

  1. 48h

    Free rescue diagnostic

    Send the repo. We audit the MCP app — auth, DB, integrations, deploy — and return a written fix plan in 48 hours.

  2. Week 1

    Triage & stop-the-bleed

    Patch the highest-impact failure modes first: the unauthenticated server, the hallucinated schema, the runaway agent. No feature work until production is safe.

  3. Week 2–3

    Hardening & test coverage

    Real migrations, signed webhooks, session management, error monitoring. Tests for every regression so the next round of AI-generated changes can't re-break them.

  4. Week 4

    Production handoff

    Deploy to a portable stack (Vercel / Fly / Railway), hand back a repo your next engineer can read, and stay on-call for 2 weeks.

§ PRICING/fixed price, fixed scope

MCP rescue pricing

Three entry points. Every engagement is fixed-fee with a written scope — no hourly surprises, no per-credit gambling.

Diagnostic
  price: Free
  turnaround: 48 hours
  scope: Written MCP audit + fix plan
  guarantee: No obligation
  Book diagnostic

Triage (most common)
  price: $299
  turnaround: 48 hours
  scope: Emergency triage for a single critical failure
  guarantee: Fix or refund
  Triage now

Rescue
  price: From $15k
  turnaround: 2–6 weeks
  scope: Full MCP rescue — auth, DB, integrations, deploy
  guarantee: Fixed price
  Start rescue
When you need us
  • You run multiple AI clients (Claude Desktop + Cursor + ChatGPT) and want one integration surface
  • Your team built a Claude-only tool-use spec and now needs it portable to other MCP clients
  • Your existing MCP server has brittle tool descriptions and LLM tool-calling accuracy is hurting the product
  • You want OAuth-scoped, rate-limited, audited MCP access — not a quick stdio-auth hack
Stack we support
  • MCP SDK (Python)
  • MCP SDK (TypeScript)
  • FastMCP
  • Claude Agent SDK tool-use patterns
  • OAuth 2.0 + PKCE
  • Zod (TS) + Pydantic (Python) schema validation
  • Docker + Vercel Functions + self-hosted deployment
  • stdio / HTTP / SSE transports
§ FAQ/founders ask

MCP questions founders ask

FAQ
What is MCP and why should we care?
Model Context Protocol (MCP) is a standard introduced by Anthropic in late 2024 for exposing tools, data, and prompts to LLM clients. Think 'REST for AI tools.' Write the server once, and every MCP-compatible client (Claude Desktop, Cursor, Windsurf, ChatGPT Desktop, Zed, OpenAI Responses API, Gemini experimental) can call it. For teams where 'which AI client our people use' is a moving target, MCP is the bet that stops you locking into one vendor.
Which MCP clients support OAuth today?
Claude Desktop, Cursor, and ChatGPT Desktop support OAuth 2.0 + PKCE for MCP servers as of Q1 2026. Windsurf and smaller editor clients typically use API-key bearer tokens. We default to API keys (compatible everywhere) and deploy OAuth when tools expose sensitive per-user data (mailbox access, per-user billing). Enterprise OAuth with SSO is a Q2 2026 roadmap item most clients are tracking.
How does this compare to Claude's native tool use?
Claude's native tool use is locked to Claude. OpenAI Assistants tools are locked to OpenAI. MCP is portable — one server, every client. If you're sure you'll only ever use one AI client, the vendor-specific path is simpler and sometimes faster. If your team expects to switch clients or use multiple, MCP is the right bet. We map that trade-off explicitly on the Day-1 scoping call.
Transport choice — stdio, HTTP, or SSE?
stdio: best for desktop-local tools the user already has installed (file system access, local git, local commands). HTTP: best for server-hosted production tools (query your Postgres, write to your CRM, call your internal API). SSE: best when a tool streams partial results (long-running searches, batch operations with progress). We pick on the scoping call based on client mix and tool latency profile.
Do we need one MCP server per domain or one big server?
One MCP server per tool domain is the clean pattern. E.g., one server for 'customer data' (query, update, export), one for 'deploy / ops' (deploy, rollback, status), one for 'billing' (invoice, refund). Each owns its auth scope, deploys independently, and can be versioned separately. MCP clients connect to multiple servers at once, so there's no drawback to splitting. A monster server with 50 tools is harder to reason about and harder to secure.
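The split-by-domain pattern shows up directly in client config. A Claude-Desktop-style config (server names, commands, and paths are illustrative) registers several stdio servers side by side:

```json
{
  "mcpServers": {
    "customer-data": {
      "command": "python",
      "args": ["/opt/mcp/customer_data_server.py"]
    },
    "deploy-ops": {
      "command": "python",
      "args": ["/opt/mcp/deploy_ops_server.py"]
    }
  }
}
```

Each server keeps its own auth scope and release cadence; the client merges their tool lists into one surface for the model.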
What does the typical engagement look like?
MCP Server Build ($5,999 / 2 weeks) is the most common — one MCP server exposing 3–8 tools from your stack, with auth, rate limits, audit logging, and a runbook. For multi-server or enterprise OAuth work, we scope a custom engagement on Day 1. For teams migrating existing Claude tool-use code to MCP, the build is often closer to 10 days.
About the author

Hyder Shah leads Afterbuild Labs, shipping production rescues for apps built in Lovable, Bolt.new, Cursor, v0, Replit Agent, Base44, Claude Code, and Windsurf — at fixed price.

Next step

Stuck on your MCP app?

Send the repo. We'll tell you what it takes to ship MCP to production — in 48 hours.

Book free diagnostic →