🩹 Vibe Code Fix

Shipping an AI-Built Prototype to Real Users

You vibe-coded a weekend project with Claude Code or Cursor. It runs locally. Your friends liked the demo. Now you want actual users on it — not just a gif on X, but a live URL that strangers can sign up for. The distance between a working prototype and a production service is where most AI-assisted projects either take off or quietly die. This page walks through what changes when you flip the "real users" switch.

The prototype review pass

Before you deploy, run your whole codebase through the Vibe Code Fix checklist. Every item is tagged critical, high, or medium severity. A prototype can ship with medium items open. It cannot ship with a single critical open. The four criticals to clear first: secrets baked into the client bundle, SQL injection, missing auth on protected routes, and silent deletions in recent AI edits. Each of these has taken down a launch inside the first hour. Each takes ten minutes to fix if you find them before strangers do.
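The SQL injection fix is always the same move: never interpolate user input into a query string; pass it as a parameter. As one illustrative sketch (the helper name and the `$1`-style placeholders assume a Postgres-flavored driver like node-postgres — adjust for your database), a tagged template can make the safe path the easy path:

```typescript
// sql.ts — a tiny tagged-template helper that turns interpolated values
// into positional parameters instead of splicing them into the string.
// The output shape { text, values } is what pg-style drivers accept.

export function sql(strings: TemplateStringsArray, ...values: unknown[]) {
  // "SELECT ... WHERE email = ${email}" becomes
  // text:   "SELECT ... WHERE email = $1"
  // values: [email]
  const text = strings
    .slice(1)
    .reduce((acc, part, i) => acc + "$" + (i + 1) + part, strings[0]);
  return { text, values };
}

// usage (hypothetical client): client.query(sql`SELECT * FROM users WHERE email = ${email}`)
```

The point is structural: the driver sends the query text and the values separately, so a malicious email like `' OR 1=1 --` is just data, never syntax.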

Env vars and the "works on my machine" tax

Your local .env has everything. Production has nothing until you set it. Every provider you use — database, auth, email, analytics, AI API — needs its key configured in your deploy platform before the build will even boot. AI assistants rarely add env validation, so the first sign of a missing var is a cryptic 500 in production. Add a Zod schema that parses process.env at startup and refuses to boot if anything is missing. That alone collapses a whole category of launch-day incidents.
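To see the shape of the fail-fast idea without pulling in a dependency yet, here is a minimal hand-rolled sketch (the Zod version is a direct translation; the variable names below are illustrative — substitute your own):

```typescript
// envCheck.ts — refuse to boot if required env vars are missing.
// Variable names are examples; list whatever your providers need.

const REQUIRED_VARS = [
  "DATABASE_URL",
  "ANTHROPIC_API_KEY",
  "SENTRY_DSN",
] as const;

// Returns the names of missing or empty vars; an empty array means safe to boot.
export function checkEnv(
  source: Record<string, string | undefined> = process.env,
): string[] {
  return REQUIRED_VARS.filter((name) => !source[name]?.trim());
}

// Call this once at startup, before anything else imports a client SDK.
export function assertEnv(
  source: Record<string, string | undefined> = process.env,
): void {
  const missing = checkEnv(source);
  if (missing.length > 0) {
    throw new Error(`Refusing to boot — missing env vars: ${missing.join(", ")}`);
  }
}
```

A Zod schema buys you more (URL and format validation, typed access), but even this much turns a cryptic runtime 500 into a one-line boot error that names the exact variable you forgot.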

Rate limits and cost guardrails

If your app calls any paid API — OpenAI, Anthropic, Stripe, a geocoding service — put a rate limit on every endpoint that triggers a call. Without one, a single user (or bot) with a loop can drain your budget before you notice. On Cloudflare Pages this is a dashboard setting. On other hosts you add middleware. Also set a hard monthly spending cap in each provider's billing console. You do not want to find out in a support ticket that your AI costs last night were four figures.
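If your host has no built-in option, the middleware version can be very small. A minimal in-memory fixed-window sketch (enough for a single-instance deploy; swap the Map for Redis or a KV store once you run more than one server — limits and key choice here are illustrative):

```typescript
// rateLimit.ts — fixed-window limiter keyed by user ID or IP.
// In-memory only: state resets on restart and is per-instance.

type Window = { count: number; resetAt: number };
const windows = new Map<string, Window>();

export function allowRequest(
  key: string,          // e.g. user ID or client IP
  limit = 20,           // max requests per window
  windowMs = 60_000,    // window length: one minute
  now = Date.now(),     // injectable for testing
): boolean {
  const w = windows.get(key);
  if (!w || now >= w.resetAt) {
    // first request in a fresh window
    windows.set(key, { count: 1, resetAt: now + windowMs });
    return true;
  }
  if (w.count >= limit) return false; // over budget: reject with a 429
  w.count += 1;
  return true;
}
```

Wire it into your API handler before the paid call, return 429 when it says no, and the worst a looping bot can do is twenty requests a minute instead of twenty thousand.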

Observability from day one

You need to know when something breaks before a user emails you. The minimum stack: error tracking (Sentry or similar) wired up to both client and server, an uptime check hitting a health endpoint every minute, and a simple structured log for every API call with user ID and timing. You do not need Grafana. You need to not be flying blind. Adding Sentry to a Next.js project is one command and fifteen lines of config.
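For the structured-log piece, one JSON line per API call is plenty. A sketch of the idea (field names are illustrative — keep whatever lets you answer "who hit what, when, and how slowly"):

```typescript
// log.ts — one JSON line per API call: greppable today, and ready to
// pipe into a real log platform later without reformatting.

export function logApiCall(fields: {
  route: string;
  userId: string | null;  // null for unauthenticated requests
  status: number;
  durationMs: number;
}): string {
  const line = JSON.stringify({ ts: new Date().toISOString(), ...fields });
  console.log(line);
  return line; // returned so callers and tests can inspect it
}
```

Call it in a `finally` block around each handler so failures get logged too. When something breaks at 2 a.m., `grep '"status":500'` over these lines beats re-deriving what happened from memory.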

The first real user

Assume they will click things in orders you never tested, paste weird input, open the app in a stale tab from last week, sign up with a plus-addressed email, and try to use it on a phone in dark mode on a bad connection. Your prototype probably handles none of this. Before launch, do one full run-through yourself as a fresh user, on a phone, on cellular, in private browsing. Every friction point you hit, they will hit ten times worse.

When to stop hardening and ship

There is a diminishing return on pre-launch polish. Clear your criticals, fix your top three highs, ship to a small audience (ten people is plenty), and watch what actually breaks. The hazards real users trip are almost never the ones you predicted in the abstract. Launch small, fix what the smoke test finds, and launch bigger. The checklist is a map, not a prison — use it to prioritize, not to delay.

