Software Rescue
May 11, 2026
From AI Prototype to Real Users: The Engineering Work Nobody Told You About
Your Cursor or Lovable MVP works. The demo went well. Real users are signing up. And then it starts: errors you can't reproduce, users seeing other users' data, the database getting slow, a bill you didn't expect. This is the work AI tools don't do for you — and that nobody told you about when you shipped the prototype.
The gap in one sentence
AI coding tools write the code that proves the idea. They don't write the code that keeps the idea running with real users, real money, and real consequences.
What “production-ready” actually means
A prototype proves an idea works for one user doing the happy path. Production is what happens when:
- Hundreds of users do things you didn't imagine
- Failures happen and you have to recover
- Money flows through the app and a single bug can lose it
- You need to ship a fix at 11pm without taking the site down
- Someone tries to break your system on purpose
None of that gets written automatically. Here is the actual work, in the rough order it usually needs to happen.
1. Move auth and security to the server
In most vibe-coded apps, the client talks directly to Supabase or Firebase. The client says “give me all users,” the database says “okay, here you go,” and row-level security (RLS) is supposed to filter the results. RLS works when it is configured perfectly. When it isn't, your data is wide open.
The production fix is an API layer that sits between the client and the database. The client never talks to the database directly. Every request goes through an endpoint that validates the session, validates the input, applies business rules, and then performs the database operation with appropriate permissions.
For Next.js + Supabase apps, this usually means Server Actions or API routes that use the Supabase client with the service role key (server-only) and explicit permission checks. RLS stays as defense-in-depth, not your only defense.
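The shape of that endpoint can be sketched as a plain function: validate session, validate input, check permissions, and only then touch the database. This is an illustrative sketch, not real Supabase API code — `Session`, `Db`, and `renameProject` are made-up names, and the database is injected so the pattern is visible on its own.

```typescript
// Server-side pattern: no database access until session, input, and
// permissions have all been checked. Names here are illustrative.

type Session = { userId: string } | null;

interface Db {
  getProjectOwner(projectId: string): string | undefined;
  setProjectName(projectId: string, name: string): void;
}

type Result = { ok: true } | { ok: false; error: string };

function renameProject(
  session: Session,
  input: { projectId: string; name: string },
  db: Db,
): Result {
  // 1. Validate the session: no authenticated user, no access.
  if (!session) return { ok: false, error: "unauthenticated" };

  // 2. Validate the input before it reaches the database.
  const name = input.name.trim();
  if (name.length === 0 || name.length > 100) {
    return { ok: false, error: "invalid name" };
  }

  // 3. Apply the business rule: only the owner may rename.
  if (db.getProjectOwner(input.projectId) !== session.userId) {
    return { ok: false, error: "forbidden" };
  }

  // 4. Only now perform the write, with server-held credentials.
  db.setProjectName(input.projectId, name);
  return { ok: true };
}
```

In a real app this function body lives inside a Server Action or API route, and `db` is the server-side Supabase client holding the service role key the browser never sees.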
2. Put backups in place — and test the restore
A backup you have never restored from is not a backup. It is a file. Real backup work:
- Daily automated backups of the database (Supabase Pro tier includes point-in-time recovery; below that you need an external backup)
- Backups stored in a different account or region (a compromised main account should not lose both)
- User-uploaded files backed up too (Supabase Storage, S3, whatever) — not just the database
- A written runbook for restoring, and one practice restore done successfully before you trust it
3. Wire up observability
You cannot fix what you cannot see. Production apps need three kinds of visibility:
- Error tracking. Every exception caught and reported to a tool (Sentry, Rollbar, Highlight) with stack trace, user context, and request data.
- Server logs. Structured logs with request IDs that you can search. Not just console output that disappears.
- Performance and uptime monitoring. Know when the site is slow, know when it's down, before users tell you.
Wire all three to alerts that reach a real person. A monitoring system nobody looks at is worse than no monitoring system — it gives false confidence.
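“Structured logs with request IDs” can be as simple as one JSON object per log line, with every line from the same request carrying the same ID. A minimal sketch (the field names and the stdout sink are illustrative — in production these lines ship to a log service):

```typescript
// One JSON object per log line, all tagged with the same request ID so a
// whole request's history can be found with a single search.
import { randomUUID } from "node:crypto";

type LogFields = Record<string, unknown>;

function makeRequestLogger(requestId: string, sink: (line: string) => void) {
  const emit = (level: string, msg: string, fields: LogFields = {}) =>
    sink(JSON.stringify({
      ts: new Date().toISOString(),
      level,
      requestId,
      msg,
      ...fields, // arbitrary structured context, e.g. userId, orderId
    }));
  return {
    info: (msg: string, fields?: LogFields) => emit("info", msg, fields),
    error: (msg: string, fields?: LogFields) => emit("error", msg, fields),
  };
}

// Usage: create one logger per incoming request.
const lines: string[] = [];
const log = makeRequestLogger(randomUUID(), (l) => lines.push(l));
log.info("checkout started", { userId: "u1", cartSize: 3 });
log.error("payment failed", { provider: "stripe", code: "card_declined" });
```

The point of the JSON shape is searchability: “show me every `error` line with this `requestId`” is one query, not a grep through interleaved console output.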
4. Take database migrations seriously
Production databases need a versioned migration history. AI tools often have you click around in the Supabase UI, which means your schema lives only in production with no history and no rollback path.
The production fix:
- Migrations as code files in your repo (Supabase CLI, Prisma, Drizzle — pick one)
- A staging or local environment where migrations are tested before production
- A documented rollback plan for any migration that touches data
- Migrations run automatically as part of deploy, not by hand
5. Add indexes before users notice slow queries
AI tools rarely add database indexes. Queries that work fine at 100 rows become unbearable at 10,000. The fix is mostly mechanical but needs to happen before users notice.
For every table, ask: which columns do queries filter or sort on? If the answer is anything beyond primary-key lookups, you probably need an index. Look at the query patterns in your app, run EXPLAIN on the slow ones, and add indexes for anything doing a full table scan.
6. Handle errors like you mean it
AI-generated code tends toward one of two error-handling failure modes: the silent swallow (the app pretends nothing happened) or the naked crash (the request 500s and shows the user a stack trace). Both are unacceptable in production.
Production error handling:
- Every error gets logged with enough context to debug
- Users see a clear explanation, not a stack trace
- Transient failures (network blips, rate limits) retry automatically
- Critical paths (payments, signups) have explicit rollback if a step fails
- Nothing crashes silently
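The retry point deserves a concrete shape. A minimal sketch of retry-with-exponential-backoff for transient failures — attempt counts and delays here are illustrative defaults, and after the last attempt the error is rethrown so it gets logged upstream rather than swallowed:

```typescript
// Retry a flaky async operation a few times with exponential backoff,
// then fail loudly instead of silently.
async function withRetry<T>(
  fn: () => Promise<T>,
  { attempts = 3, baseDelayMs = 100 } = {},
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off before the next attempt: 100ms, 200ms, 400ms, ...
      if (i < attempts - 1) {
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  // Out of attempts: surface the error so it reaches error tracking.
  throw lastError;
}
```

Wrap only genuinely transient operations (network calls, rate-limited APIs) this way; retrying a payment capture blindly is how you double-charge someone.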
7. Add rate limiting before you get rate-limit-attacked
The default vibe-coded app has no rate limiting on anything. Anyone can hit your signup endpoint thousands of times a second, brute-force login, or — and this is the expensive one — trigger your OpenAI integration in a loop until your monthly bill is in the five figures.
The fix: rate limits on signup, login, password reset, and every endpoint that hits a paid third-party API. For LLM endpoints, add a spend cap at the OpenAI / Anthropic side too, so even a broken rate limit doesn't result in catastrophic spend.
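The core of a rate limiter is small. Here is an in-memory fixed-window sketch keyed by user or IP — production setups usually back the counters with Redis or Upstash so limits survive restarts and apply across instances, but the counting logic is the same:

```typescript
// Fixed-window rate limiter: allow at most `limit` calls per key per
// window. The injectable clock exists only to make the sketch testable.
function makeRateLimiter(
  limit: number,
  windowMs: number,
  now: () => number = () => Date.now(),
) {
  const windows = new Map<string, { start: number; count: number }>();
  return function allow(key: string): boolean {
    const t = now();
    const w = windows.get(key);
    if (!w || t - w.start >= windowMs) {
      // First call in a fresh window for this key: reset the counter.
      windows.set(key, { start: t, count: 1 });
      return true;
    }
    if (w.count >= limit) return false; // over the limit: reject
    w.count++;
    return true;
  };
}
```

Apply it per endpoint with different budgets: a handful of login attempts per minute per IP, a stricter cap per user on anything that calls a paid LLM API.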
8. Move expensive work to background jobs
AI tools tend to do everything in the request handler. Send an email? Inside the request. Process a file? Inside the request. Call an LLM? Inside the request. This works at small scale and falls over the moment any of those steps gets slow.
Move them to background jobs. For Next.js, that's usually Inngest, Trigger.dev, or a Supabase Edge Function queue. The request handler creates a job and returns immediately. The user sees a status. The work happens out-of-band and can retry on failure.
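The pattern is the same regardless of which queue product you pick. An in-memory sketch of the shape (Inngest and Trigger.dev give you the durable version of this; the queue, statuses, and retry count here are illustrative): the handler enqueues and returns a job ID immediately, and a worker drains the queue out-of-band with retries.

```typescript
// Handler enqueues, worker drains: the request never waits for slow work.
type Job = { id: string; payload: unknown; attempts: number };

function makeQueue(
  worker: (payload: unknown) => Promise<void>,
  maxAttempts = 3,
) {
  const jobs: Job[] = [];
  const status = new Map<string, "queued" | "done" | "failed">();
  let nextId = 1;

  return {
    // Called from the request handler: enqueue and return immediately.
    enqueue(payload: unknown): string {
      const id = `job-${nextId++}`;
      jobs.push({ id, payload, attempts: 0 });
      status.set(id, "queued");
      return id;
    },
    // What the user polls to see "processing… / done".
    status: (id: string) => status.get(id),
    // Run by a worker process, never inside the request handler.
    async drain() {
      while (jobs.length > 0) {
        const job = jobs.shift()!;
        try {
          job.attempts++;
          await worker(job.payload);
          status.set(job.id, "done");
        } catch {
          if (job.attempts < maxAttempts) jobs.push(job); // retry later
          else status.set(job.id, "failed");
        }
      }
    },
  };
}
```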
9. Set up environments and a real deploy pipeline
Most vibe-coded apps have one environment: production. Push to main → Vercel deploys → users see the change. This works until you ship a bad migration that takes the site down.
The minimum production setup:
- A separate staging environment that mirrors production
- CI that runs tests and a build before allowing deploy
- Preview deploys for every PR (Vercel does this automatically)
- A documented rollback procedure that takes under 5 minutes
- Production deploys triggered only from a protected main branch
10. Lock down third-party API spending
If your app uses OpenAI, Anthropic, Twilio, Resend, or anything else with per-call pricing, you have a runaway-cost problem waiting to happen. The fix:
- Hard spend cap at the provider level (OpenAI org-level limit, Anthropic limit, etc.)
- Per-user or per-IP rate limit on calls
- Track spend per user in your DB so you can detect abuse
- Alerts on daily spend so you find out before the bill arrives
What this looks like in practice
For a typical Next.js + Supabase MVP that has real users, doing this work end-to-end takes 2-6 weeks of focused senior engineering. A rough order:
Week 1 — stop the bleeding
- Rotate any exposed secrets
- Audit and fix RLS
- Stand up automated backups + test restore
- Hard spend caps on paid APIs
Weeks 2-3 — production basics
- Server-side validation on every endpoint
- Error tracking + structured logs + uptime monitoring
- Migrations into version control
- Staging environment
- Rate limiting on auth + paid endpoints
Weeks 4-6 — durability
- Background jobs for slow work
- Database indexes audit
- Real error handling pass
- Deploy pipeline with CI + protected branches
- Documentation: runbooks, architecture diagram, incident playbook
The good news: an AI-built MVP with a reasonable data model usually does not need a rewrite. It needs this layer of production engineering work added on top. See our post on rescue vs rebuild for the deeper framework on when rescue is the right call.
Frequently asked questions
What is the difference between a prototype and a production app?
A prototype proves an idea works for one user on the happy path. A production app handles real users doing unexpected things at scale, with no downtime, with secure data, with proper error handling, with observability, with backups, and with the ability to ship changes safely. The work to turn a prototype into a production app is usually 2-6 weeks of focused engineering.
How long does it take to make an AI-built app production-ready?
Most AI-built MVPs take 2-6 weeks of senior engineering work to be production-ready, depending on how complex the app is and how broken the foundation is. The fastest path is to fix critical issues first (security, backups, auth) and then layer in observability, error handling, and performance work as you grow.
Do I need a real backend if my app uses Supabase or Firebase?
You need a real server layer for anything that involves business logic, third-party API calls with secret keys, payment processing, or data integrity rules that span multiple tables. Supabase and Firebase work as direct-from-client backends for simple CRUD apps, but most production apps quickly need an API layer (Next.js API routes, edge functions, or a separate backend) to enforce rules the database alone cannot.
Why does my AI-built app break with more users?
Three common reasons. First, missing database indexes — queries that returned in 10ms with 100 rows take 5 seconds with 100,000. Second, no caching — every request hits the database fresh, and you exhaust your connection pool. Third, expensive operations running synchronously in request handlers when they should be in a background job. All three are solvable but require explicit engineering work AI tools don't do for you.