Building Production-Ready Apps Without Writing Code: A Vibe Coding Workflow


How to Build a Production-Ready App with Vibe Coding
- Write a Markdown PRD with functional requirements, technical constraints, folder structure, and acceptance criteria.
- Feed the PRD to an AI coding agent (e.g., Claude Code) and generate the full project scaffold.
- Verify TypeScript compilation with `npx tsc --noEmit` and confirm the folder structure matches the spec.
- Generate Jest tests mapped directly to each PRD acceptance criterion using supertest.
- Run tests locally and fix failures via targeted re-prompts referencing specific PRD sections.
- Containerize the app with a multi-stage Dockerfile that compiles TypeScript and produces a minimal image.
- Configure a GitHub Actions CI pipeline with lint, test, and Docker build stages.
- Deploy by connecting the repository to Vercel for automatic deployment from `main`.
Table of Contents
- Why Vibe Coding Needs a Production Workflow
- The PRD-First Approach: Writing Specs AI Can Execute
- Generating the Application with Claude Code
- Adding Automated Testing with Jest
- CI/CD Pipeline: GitHub Actions and Docker
- Deploying to Vercel
- The Production Vibe Coding Checklist
- Making Vibe Coding Reliable
Why Vibe Coding Needs a Production Workflow
Vibe coding -- the practice of using natural language prompts to generate full applications through AI coding agents -- has moved from novelty to legitimate development approach since tools like Claude Code and Cursor shipped in 2024. Yet most vibe coding workflows stop at the prototype stage. The generated code compiles, runs locally, and then sits in a repo untested and undeployed. The gap between "it works on my machine" and "it runs in production" is precisely where vibe-coded projects collapse. This article presents a structured, repeatable workflow that takes a vibe-coded application from prompt to production deployment using Claude Code as the AI agent, TypeScript as the language, Jest for automated testing, GitHub Actions for CI/CD, Docker for containerization, and Vercel for hosting.
The PRD-First Approach: Writing Specs AI Can Execute
Why a PRD Matters More Than a Prompt
Vague prompts produce vague code. Telling an AI agent to "build me a task API" leaves dozens of decisions unspecified: error handling behavior, response formats, folder conventions, dependency choices. The result is code that works in isolation but conflicts with the assumptions of any team or deployment environment it enters. A Product Requirements Document constrains the AI's output to predictable, testable outcomes. The distinction matters: chatting with AI is exploration, but instructing an AI agent with a spec is engineering. A PRD transforms the AI from a conversation partner into an executor with clear acceptance criteria.
Anatomy of a Vibe-Coding PRD
An effective vibe-coding PRD contains five essential sections: project overview, functional requirements, technical constraints, file and folder structure, and acceptance criteria. The project overview gives the AI context about purpose and scope, while functional requirements list specific endpoints, data models, and behaviors. Technical constraints lock in TypeScript strict mode, specific libraries, and Node.js version. The folder structure prevents the AI from inventing its own conventions, and acceptance criteria define what "done" looks like in testable terms.
# PRD: Task Management REST API
## Project Overview
A four-endpoint REST API for managing tasks, built with Express and TypeScript.
## Functional Requirements
- `POST /tasks` — Create a task with `title` (string, required) and `status` (enum: "pending" | "done", default: "pending"). Returns 201 with the created task.
- `GET /tasks` — List all tasks. Returns 200 with an array of tasks.
- `GET /tasks/:id` — Get a single task by ID. Returns 200 or 404 with `{ error: "Task not found" }`.
- `DELETE /tasks/:id` — Delete a task by ID. Returns 204 or 404.
## Technical Constraints
- Language: TypeScript (strict mode enabled)
- Runtime: Node.js 20
- Framework: Express 4.x
- Data store: In-memory array (no database)
- Testing: Jest with ts-jest and supertest (both required as devDependencies)
- No additional dependencies unless explicitly approved. Required devDependencies include: `jest@29`, `ts-jest@29`, `supertest`, `@types/supertest`, `eslint`, `@typescript-eslint/parser`.
## File/Folder Structure
```
src/
  index.ts        # Express app setup (exports app; does NOT call app.listen())
  server.ts       # Server start (imports app, calls app.listen())
  routes/tasks.ts # Task route handlers
  models/task.ts  # Task interface and type definitions
tests/
  tasks.test.ts   # Jest test suite
```
## Acceptance Criteria
1. `POST /tasks` with missing `title` returns 400 with `{ error: "Title is required" }`.
2. `GET /tasks/:id` with a nonexistent ID returns 404.
3. All responses use `application/json` content type.
4. TypeScript compiles with zero errors under strict mode.
5. All Jest tests pass.
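The folder spec above reserves models/task.ts for type definitions. A sketch of the shape that file needs in order to satisfy the PRD follows -- the `id` field is not named in the functional requirements but is implied by the `GET`/`DELETE /tasks/:id` endpoints:

```typescript
// models/task.ts -- sketch of the types the PRD implies.
// The `id` field is inferred from the /tasks/:id endpoints; it is
// not spelled out in the functional requirements.
export type TaskStatus = "pending" | "done";

export interface Task {
  id: string;
  title: string;
  status: TaskStatus;
}

// A well-formed task object under this interface:
export const exampleTask: Task = {
  id: "1",
  title: "Write PRD",
  status: "pending",
};
```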
Generating the Application with Claude Code
Feeding the PRD to Claude Code
The workflow is direct: load the PRD as context, then issue a generation prompt that explicitly references the spec. Rather than asking Claude Code to "build a task API," the prompt anchors every instruction to the PRD document. This prevents drift during iterative refinement. Follow-up prompts should always reference specific PRD sections rather than introducing new, unspecified requirements.
Prompt 1:
"Here is my PRD for a Task Management REST API (see PRD.md above).
Generate the full project scaffold matching the specified folder structure.
Use TypeScript in strict mode, Express 4.x, and Node.js 20.
Include a tsconfig.json, package.json with all dependencies, and
Jest test stubs mapped to each acceptance criterion in the PRD."
Prompt 2 (refinement):
"The generated routes/tasks.ts is missing the 400 validation for
POST /tasks when title is absent. Refer to Acceptance Criterion #1
in the PRD and add the validation with the exact error format specified."
Prompt 3 (refinement):
"Add a proper npm script configuration: 'build' should run tsc,
'test' should run jest, and 'start' should run the compiled output
from dist/server.js. Update package.json accordingly."
Reviewing and Validating the AI Output
Before writing a single test, verify three things immediately. First, run `npx tsc --noEmit` to confirm TypeScript compiles cleanly under strict mode. This requires a `tsconfig.json` in the project root -- verify that the AI generated one with `"strict": true` and appropriate compiler options (see the configuration section below). Second, check that every dependency in `package.json` actually exists on the npm registry at the specified version -- AI agents sometimes hallucinate plausible-sounding package names or versions that were never published. Third, compare the generated folder structure against the PRD's specification. Common failure patterns include the AI inventing helper files not in the spec, using default exports inconsistently, or adding middleware you did not request. When these appear, re-prompt by citing the exact PRD section the output violates rather than describing the fix manually.
Required Configuration Files
The scaffold must include the following configuration files. If Claude Code does not generate them, add them manually or re-prompt.
tsconfig.json:
{
  "compilerOptions": {
    "strict": true,
    "target": "ES2020",
    "module": "commonjs",
    "outDir": "dist",
    "rootDir": "src",
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist", "tests"]
}
The include and exclude fields are critical: without them, tsc compiles everything under the project root -- including test files and jest.config.ts -- into dist/, polluting the production build artifact with test code. The skipLibCheck flag prevents spurious type errors from third-party type definitions (common with ts-jest and @types/supertest).
jest.config.ts:
// jest.config.ts
import type { Config } from "jest";

const config: Config = {
  preset: "ts-jest",
  testEnvironment: "node",
  roots: ["<rootDir>/tests"],
  testMatch: ["**/*.test.ts"],
};

module.exports = config;
Important: Although this file uses .ts extension and TypeScript syntax for the Config type import, it must use module.exports instead of export default. The tsconfig.json sets "module": "commonjs", and Jest loads its config file with Node's require(). Using export default causes a SyntaxError: Unexpected token 'export' at runtime and prevents any tests from executing. The roots and testMatch fields scope test discovery to tests/ only, preventing Jest from picking up compiled .js files if dist/ exists.
package.json scripts:
{
  "scripts": {
    "build": "tsc",
    "test": "jest --ci",
    "start": "node dist/server.js",
    "lint": "eslint \"src/**/*.ts\" --max-warnings 0"
  }
}
The start script must point to dist/server.js, not dist/index.js. Per the PRD, src/index.ts exports the Express app object without calling app.listen(). Running node dist/index.js directly produces a process that imports Express, exports the app object, and exits -- no HTTP server ever binds to a port. The server.ts file imports app and calls app.listen(), which is the correct production entry point. The --ci flag on jest disables interactive watch mode and ensures deterministic behavior in CI environments.
.eslintrc.json:
The CI pipeline runs ESLint against the source files. ESLint 8+ exits with an error if no configuration file is found, which means the lint step fails on every run in a fresh clone without this file.
{
  "root": true,
  "parser": "@typescript-eslint/parser",
  "parserOptions": {
    "ecmaVersion": 2020,
    "sourceType": "module"
  },
  "rules": {}
}
This is the minimum required for the CI lint step to resolve without error. Extend rules as needed for your team's coding standards.
Adding Automated Testing with Jest
Generating Tests from Acceptance Criteria
Each acceptance criterion in the PRD maps directly to one or more Jest test cases. This is where the PRD-first approach pays off: the AI does not need to guess what to test because the spec already defines pass/fail conditions. Asking Claude Code to "generate Jest tests for each acceptance criterion in the PRD" produces tests that are traceable back to requirements rather than invented ad hoc.
Ensure src/index.ts exports the Express app object without calling app.listen() at module scope. Move app.listen() to a separate src/server.ts file, or guard it with if (require.main === module) { app.listen(3000); }. The test file imports app directly -- if listen() runs on import, Supertest will attempt to bind a port, causing EADDRINUSE errors in CI or when running tests in parallel.
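The pattern can be sketched without Express at all. The example below uses Node's built-in http module so it is self-contained; in the real project, `app` is the Express instance created in src/index.ts and the port is 3000:

```typescript
// Sketch of the export-without-listen pattern, shown with Node's
// built-in http module so the snippet is self-contained. In the real
// project, `app` is the Express instance from src/index.ts.
import { createServer } from "http";

export const app = createServer((req, res) => {
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify([]));
});

// Bind a port only when this file is the process entry point -- never
// when a test (or Supertest) imports `app`. The typeof check simply
// keeps this sketch safe if it is ever evaluated as an ES module.
if (typeof require !== "undefined" && require.main === module) {
  app.listen(Number(process.env.PORT) || 3000);
}
```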
The in-memory task store used by the route handlers is shared mutable state. Without resetting it between tests, tasks created in one test case leak into subsequent cases. This causes GET /tasks to return an ever-growing array in watch mode, and any assertion on array length or task IDs becomes order-dependent and flaky. The route module must export a resetTasks() function that clears the in-memory array, and each test must call it in a beforeEach hook.
import request from "supertest";
import app from "../src/index";
import { resetTasks } from "../src/routes/tasks";

describe("Task Management API", () => {
  beforeEach(() => {
    resetTasks();
  });

  afterAll(async () => {
    // Allow any pending handles (timers, sockets) to resolve.
    // Prevents "Jest did not exit one second after the test run
    // has completed" warnings and CI timeouts.
    await new Promise<void>((resolve) => setImmediate(resolve));
  });

  it("should return 400 when title is missing on POST /tasks", async () => {
    const response = await request(app)
      .post("/tasks")
      .send({ status: "pending" });
    expect(response.status).toBe(400);
    expect(response.body).toEqual({ error: "Title is required" });
  });

  it("should return 404 for a nonexistent task ID on GET /tasks/:id", async () => {
    const response = await request(app).get("/tasks/nonexistent-id");
    expect(response.status).toBe(404);
    expect(response.body).toEqual({ error: "Task not found" });
  });

  it("should create a task and return 201 with the task object", async () => {
    const response = await request(app)
      .post("/tasks")
      .send({ title: "Write PRD" });
    expect(response.status).toBe(201);
    expect(response.body).toHaveProperty("title", "Write PRD");
    expect(response.body).toHaveProperty("status", "pending");
  });
});
The resetTasks() import requires src/routes/tasks.ts to export a function that clears the in-memory array. A minimal implementation:
// In src/routes/tasks.ts, alongside the route handlers:
let tasks: Task[] = [];

export function resetTasks(): void {
  tasks.length = 0;
}
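Extending that idea, the whole in-memory store can be written framework-free, which makes the shared-state hazard and its reset easy to see in isolation. This is an illustrative sketch, not the generated routes/tasks.ts -- the function names are assumptions:

```typescript
// Framework-free sketch of the in-memory store (illustrative; the
// generated routes/tasks.ts wires operations like these to Express).
interface Task {
  id: string;
  title: string;
  status: "pending" | "done";
}

const tasks: Task[] = [];
let nextId = 1;

export function createTask(
  title: string,
  status: "pending" | "done" = "pending"
): Task {
  const task: Task = { id: String(nextId++), title, status };
  tasks.push(task);
  return task;
}

export function listTasks(): Task[] {
  return tasks;
}

export function getTask(id: string): Task | undefined {
  return tasks.find((t) => t.id === id);
}

export function deleteTask(id: string): boolean {
  const index = tasks.findIndex((t) => t.id === id);
  if (index === -1) return false;
  tasks.splice(index, 1);
  return true;
}

// Mutate in place (rather than reassigning) so every module holding a
// reference to the array sees the cleared state.
export function resetTasks(): void {
  tasks.length = 0;
  nextId = 1;
}
```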
Additional Test Cases
These additional tests cover the remaining CRUD operations and response format requirements from the PRD:
// tests/tasks.test.ts — additional cases within the same describe block

it("GET /tasks returns empty array on fresh store", async () => {
  const response = await request(app).get("/tasks");
  expect(response.status).toBe(200);
  expect(response.body).toEqual([]);
});

it("DELETE /tasks/:id returns 204 for existing task", async () => {
  const create = await request(app)
    .post("/tasks")
    .send({ title: "To delete" });
  const id = create.body.id;
  const del = await request(app).delete(`/tasks/${id}`);
  expect(del.status).toBe(204);
});

it("DELETE /tasks/:id returns 404 for nonexistent task", async () => {
  const response = await request(app).delete("/tasks/does-not-exist");
  expect(response.status).toBe(404);
  expect(response.body).toEqual({ error: "Task not found" });
});

it("POST /tasks sets default status to pending", async () => {
  const response = await request(app)
    .post("/tasks")
    .send({ title: "No status provided" });
  expect(response.body.status).toBe("pending");
});

it("all responses have application/json content-type", async () => {
  const response = await request(app).get("/tasks");
  expect(response.headers["content-type"]).toMatch(/application\/json/);
});
Integration Test
A full lifecycle test verifies that create, read, and delete operations work together correctly:
// tests/integration.test.ts
import request from "supertest";
import app from "../src/index";
import { resetTasks } from "../src/routes/tasks";

describe("Task CRUD Lifecycle", () => {
  beforeEach(() => {
    resetTasks();
  });

  afterAll(async () => {
    await new Promise<void>((resolve) => setImmediate(resolve));
  });

  it("full task lifecycle: create, read, delete", async () => {
    // Create
    const created = await request(app)
      .post("/tasks")
      .send({ title: "Integration task" });
    expect(created.status).toBe(201);
    const id = created.body.id;

    // Read by ID
    const fetched = await request(app).get(`/tasks/${id}`);
    expect(fetched.status).toBe(200);
    expect(fetched.body.title).toBe("Integration task");

    // Read in list
    const list = await request(app).get("/tasks");
    expect(list.body.some((t: { id: string }) => t.id === id)).toBe(true);

    // Delete
    const deleted = await request(app).delete(`/tasks/${id}`);
    expect(deleted.status).toBe(204);

    // Confirm gone
    const gone = await request(app).get(`/tasks/${id}`);
    expect(gone.status).toBe(404);
  });
});
CI/CD Pipeline: GitHub Actions and Docker
GitHub Actions Workflow for Lint, Test, and Build
AI-generated code must pass the same quality gates as human-written code. A CI pipeline with install, lint, test, and build stages catches regressions that re-prompting might silently introduce. For vibe-coded projects, this pipeline is not optional; it is the safety net that makes iterative AI generation viable across multiple prompt cycles.
name: CI Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"

      - name: Install dependencies
        run: npm ci

      - name: Lint
        run: npx eslint "src/**/*.ts" --max-warnings 0

      - name: TypeScript compilation check
        run: npx tsc --noEmit

      - name: Run Jest tests
        run: npm test

      - name: Build Docker image
        run: docker build -t task-api:${{ github.sha }} .
The Docker build step validates that the image builds correctly but does not push it to a registry. To use the image for deployment on a container-based platform, add a docker push step targeting a container registry (e.g., GitHub Container Registry, Docker Hub) and authenticate using repository secrets.
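Those extra steps might look like the following sketch, targeting GitHub Container Registry. The image name and tags are illustrative, and the job needs `permissions: packages: write` for the built-in `GITHUB_TOKEN` to push:

```yaml
# Illustrative additions for pushing the image to GHCR. Assumes the
# job has `permissions: packages: write`; image name is an example.
- name: Log in to GitHub Container Registry
  uses: docker/login-action@v3
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}

- name: Push Docker image
  run: |
    docker tag task-api:${{ github.sha }} ghcr.io/${{ github.repository_owner }}/task-api:${{ github.sha }}
    docker push ghcr.io/${{ github.repository_owner }}/task-api:${{ github.sha }}
```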
Dockerizing the Application
A multi-stage Dockerfile keeps the production image minimal by separating the build environment from the runtime. The first stage compiles TypeScript. The second stage copies only the compiled JavaScript into a slim Node.js image, then runs a fresh npm ci --omit=dev to install production dependencies.
Note: The --omit=dev flag requires npm 7 or later. node:20-alpine ships with npm 9+ and is compatible. If you change the base image, verify the npm version with npm --version.
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package*.json ./
RUN npm ci --omit=dev
EXPOSE 3000
CMD ["node", "dist/server.js"]
The CMD must invoke dist/server.js, not dist/index.js. The index.ts file exports the Express app without calling listen() -- running it directly results in a container that starts, passes the build step, but binds no port and serves zero traffic. The server.ts file imports app and calls app.listen(3000), which is the correct production entry point.
Create a .dockerignore file in the project root to prevent COPY . . from sending unnecessary files (including node_modules/, .git/, and test files) into the Docker build context:
.dockerignore:
node_modules
dist
.git
tests
*.test.ts
.env
Deploying to Vercel
Connecting the Repo and Configuring Deployment
Linking the GitHub repository to Vercel takes three steps: import the repo in the Vercel dashboard, set any required environment variables (such as API keys or database URLs), and confirm that automatic deploys trigger from the main branch. Do not set PORT -- Vercel manages port binding internally and ignores user-defined port variables.
Vercel deploys directly from the repository source using its own Node.js build pipeline -- it does not consume the Docker image built in CI. The Docker build step in GitHub Actions validates containerizability only. For Docker-based deployment, use a registry-backed service such as Railway, Render, or Cloud Run.
The GitHub Actions pipeline runs first on every push, enforcing linting, TypeScript compilation, Jest tests, and Docker build validation. On the Vercel side, deployment triggers independently from the same push to main. This means CI will still flag broken AI-generated code, though you should configure Vercel to require passing checks before deploying if you want to gate deployment on CI status.
Serverless caveats: Vercel runs Express apps as serverless functions. The default function timeout on the free tier is 10 seconds, and function bundles must stay under 50 MB compressed (as of Vercel's current free-tier documentation). The in-memory data store defined in the PRD is suitable for local development and testing only -- on serverless platforms like Vercel, state is not persisted across function invocations. Each cold start resets the task array. For persistent storage, add a database. You may also need a vercel.json with route rewrites to handle Express routing correctly in the serverless environment.
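A minimal vercel.json sketch for routing every path to a single serverless function is shown below. It assumes the Express app is re-exported from an api/index.ts entry point, a common Vercel convention -- adjust the destination to your project's layout:

```json
{
  "rewrites": [
    { "source": "/(.*)", "destination": "/api/index" }
  ]
}
```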
The Production Vibe Coding Checklist
- Write a Markdown PRD with functional requirements, technical constraints, folder structure, and acceptance criteria.
- Feed the PRD to Claude Code and generate the full project scaffold in a single pass.
- Verify TypeScript compilation with `npx tsc --noEmit` and confirm the folder structure matches the spec. Ensure `tsconfig.json` exists with `"strict": true`, `"include": ["src/**/*"]`, and `"exclude": ["node_modules", "dist", "tests"]`.
- Generate Jest tests mapped directly to PRD acceptance criteria. Ensure `jest.config.ts` exists with the `ts-jest` preset and uses `module.exports` (not `export default`).
- Run tests locally with `npm test`; fix failures via targeted re-prompts referencing specific PRD sections.
- Create a multi-stage Dockerfile that builds TypeScript and produces a minimal production image. Ensure `CMD` invokes `dist/server.js`, not `dist/index.js`. Add a `.dockerignore` file.
- Set up a GitHub Actions CI pipeline with stages for lint (ESLint), test (Jest), and Docker build, and confirm that an `.eslintrc.json` file exists in the repository root so the lint step does not fail on config resolution.
- Connect the repository to Vercel for automatic deployment from `main`. Note that Vercel deploys from repository source, not from the Docker image.
- Push to `main` and confirm a green pipeline and live deployment. Verify by running `curl https://<your-vercel-url>/tasks` and confirming a `200` response with `[]`.
- Iterate the cycle: update the PRD, re-prompt, re-test, redeploy.
Making Vibe Coding Reliable
Start the next side project with a real PRD rather than a casual prompt. That single change -- a spec the AI must follow, not a suggestion it can interpret -- is a high-impact improvement to output quality. The AI agent changes how code gets written, not whether it needs to be verified. Formal specs, automated tests, and gated CI/CD pipelines apply the same way they always have. The tooling is new. The discipline is not.