The 2-week AI MVP target
Two-week AI MVPs are achievable for a specific class of product: a single core flow, one or two user types, an AI feature that is integral but not exotic, and a founder who can make decisions inside 24 hours. Outside of those constraints, two weeks becomes four to six.
The point of the two-week target is not the speed itself. It is forcing scope discipline. When you only have ten working days, every "wouldn't it be cool if…" gets cut.
The stack
- Frontend: Next.js 15 + Tailwind CSS, deployed on Vercel
- Backend: FastAPI on a DigitalOcean droplet with Docker
- Database: PostgreSQL (managed by DigitalOcean) or Supabase
- AI: Google Gemini 1.5 Flash for most tasks, Gemini 1.5 Pro for hard reasoning
- Auth: Supabase Auth or Clerk
- Analytics: PostHog from day one
- Monitoring: Sentry + Vercel Analytics
This stack is not optimal — it is fast. Every choice is calibrated for "ship in two weeks," not "scale to a million users."
Day-by-day breakdown
Day 1 (Monday): Spec lockdown
Half a day on a spec call. The other half rewriting the spec into a single document with these sections: who the user is, what the core flow is, what the AI feature does, what the data model looks like, what the API contract looks like, what counts as "done."
If the founder cannot describe the core flow in three sentences, the project does not start. I have walked away from engagements at this stage and would do it again.
Day 2 (Tuesday): Skeleton + deploy
Spin up the Next.js app, the FastAPI backend, the database. Wire them all together with one health-check endpoint that returns "hello world from a real database query." Deploy to Vercel and DigitalOcean. Get HTTPS working on a real subdomain.
By end of day, there is a public URL. It does almost nothing, but it is real.
Day 3 (Wednesday): Auth + data model
Wire up auth. Implement the core data model — usually 3 to 5 tables, never more than 8. Build the API endpoints for the core CRUD. No AI yet.
This is the day where most "fast MVP" projects start to slip. The fix is brutal scoping: if a table is not absolutely required for the core flow, it does not exist yet.
Day 4 (Thursday): Core flow without AI
Build the core user flow end-to-end without any AI. The user can log in, create the thing, view the thing, and act on the thing. The AI feature has a placeholder button that returns canned text.
This step is non-negotiable. Building the AI before the core flow always produces an AI feature that does not connect to anything.
Day 5 (Friday): AI integration
Wire up Gemini. Implement the core AI feature. Get it to a point where it works for a happy path on real input.
The trick on day 5 is to not try to handle every edge case yet. Get the happy path solid, then add edge case handling on day 9.
Days 6-7 (weekend): Buffer
I do not work weekends, but I leave them as buffer for the inevitable surprise. Half the time I do not need them. Half the time something on day 4 or 5 breaks badly enough that the buffer saves the project.
Day 8 (Monday): Polish core flow
Take the core flow and make it actually pleasant to use. Loading states, empty states, error states, success states. Every button does what the user expects.
Day 9 (Tuesday): AI edge cases + safety
Now handle the AI edge cases — empty inputs, abusive inputs, API failures, rate limits, model timeouts. Add the safety guardrails: prompt injection defenses, max token limits, content filters if relevant.
This is where the AI feature goes from "demo-able" to "production-ready."
Day 10 (Wednesday): Test + bug bash
Real users tapping at the real URL. Run a structured bug bash with the founder. Fix the bugs that block the core flow.
Day 11 (Thursday): Documentation + handoff
README, deployment runbook, environment variable list, AI prompt registry. The founder can take over from here without needing me.
Day 12 (Friday): Launch + first marketing
Marketing copy, landing page polish, analytics events, basic SEO. The MVP launches publicly.
The tradeoffs I make
- No automated tests for UI. Manual testing in week 2 is faster than building UI test infrastructure.
- Tests for the AI logic and any code touching money or user data. Non-negotiable.
- One-shot prompts, not multi-step agents. Multi-step agents are 10x harder to debug. Most AI features can be a single well-designed prompt.
- Server-rendered Next.js, not SPA. Faster to ship, better SEO, fewer bugs.
- Boring database schema. Three to five tables. JSONB columns for anything that might evolve. No premature normalisation.
Common mistakes that break the timeline
Trying to handle every edge case in week 1. Edge cases multiply scope. Handle the happy path first; come back for edges in week 2.
Picking exotic AI tooling. LangChain, LangGraph, agent frameworks — they all add cognitive overhead. Plain HTTP calls to Gemini are 10x easier to debug.
Letting the founder add features mid-project. Every new feature is a re-scope. The right answer is "yes, that goes in v2; let's keep v1 shippable."
Building auth from scratch. Supabase Auth or Clerk. No exceptions.
Hand-writing UI components. Tailwind UI or shadcn/ui. The component is not what makes the product valuable.
The deliverable
At the end of two weeks, the founder has:
- A live URL with HTTPS and a real domain
- Source code in their GitHub
- Deployment runbook
- AI prompt registry with version control
- Analytics event tracking from day one
- A go/no-go conversation about what to build next
That is enough to put in front of real users, raise a seed round, or hand to a full-time team.
Closing thought
Two-week AI MVPs are not a hack — they are a discipline. The discipline is scope clarity, stack opinion, and the willingness to delete more than you build.
If you have an MVP idea and want to know whether it fits the two-week pattern, book a free strategy call. The first 30 minutes is free, and we can usually figure it out together.