Build Your Own Lovable: AI Code Generation, Live Preview and Tool-Call Edits

Posted on: 5/14/2026 4:28:02 PM

Lovable, v0.dev, Bolt.new — in just the last 18 months, a wave of "AI app builders" has reshaped how non-technical users build software: type a sentence, watch UI appear in the browser, refine via chat, export a GitHub repo. It feels like "having a developer next to you doing what you ask". But on close inspection, Lovable isn't magic — it's a clever combination of 4 existing pieces: an LLM with tool use, a virtual file system, a runtime for preview, and a streaming UI to render each step. This guide walks you through building a minimal version — enough to actually run, enough to understand every architectural decision in real products.

  • ~700 lines of core code (chat loop + tools + preview)
  • 5 minimum tools (write/edit/read/command/list)
  • 2-3s time-to-first-preview for simple prompts
  • $0.05 average cost per turn (Sonnet, ~50k tokens)

1. What does Lovable actually do under the hood?

Strip away marketing and a Lovable-style AI app builder is a 4-step loop running inside your browser tab:

graph LR
    U["User prompt
(chat input)"] --> AGENT["Agent Loop
(LLM + tools)"]
    AGENT -->|"tool_use"| FS["Virtual
File System"]
    FS -->|"file change events"| PREVIEW["Sandbox Preview
(WebContainer)"]
    AGENT -->|"text stream"| UI["Chat UI
(thinking + diffs)"]
    PREVIEW -->|"iframe"| USER2["User"]
    UI --> USER2
    USER2 -.->|"new prompt"| U
    classDef u fill:#e94560,stroke:#fff,color:#fff
    classDef a fill:#16213e,stroke:#fff,color:#fff
    classDef sb fill:#ff9800,stroke:#fff,color:#fff
    classDef st fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    class U,USER2 u
    class AGENT a
    class PREVIEW sb
    class FS,UI st

Figure 1 — The 4-component loop of a Lovable-style app builder.

Each component has alternatives. This guide picks the simplest stack that's still "real":

| Component | Our pick | Production alternatives |
| --- | --- | --- |
| Frontend / Chat UI | Next.js 15 (App Router) + React 19 | Remix, SvelteKit, Astro |
| LLM + Tool use | Claude Sonnet 4.6 (Anthropic SDK) | GPT-5, Gemini 2.5 |
| Virtual File System | Zustand store (in-memory Map) | IndexedDB, Yjs CRDT, S3 |
| Sandbox Preview | WebContainer API (StackBlitz) | Sandpack, E2B, Vercel Sandbox |
| Streaming | Anthropic SDK streaming + Server-Sent Events | Vercel AI SDK, tRPC streaming |

2. The heart of the system — Tool schema

Lovable works because the LLM never improvises its own edit format in free text. It's equipped with a tool set that has clear schemas, and each model response is a sequence of tool calls against the virtual file system. These are the 5 minimum tools — patterns borrowed from Anthropic's Computer Use tools and Cursor's Composer:

// lib/tools.ts
import Anthropic from "@anthropic-ai/sdk";

export const TOOLS: Anthropic.Tool[] = [
  {
    name: "write_file",
    description: "CREATE new or OVERWRITE entire file. Use for new components or >50% rewrites.",
    input_schema: {
      type: "object",
      properties: {
        path: { type: "string", description: "Relative path from root, e.g. src/App.tsx" },
        content: { type: "string", description: "Full file content" },
      },
      required: ["path", "content"],
    },
  },
  {
    name: "edit_file",
    description: "Edit a portion of a file via str_replace. More efficient than write_file for small changes.",
    input_schema: {
      type: "object",
      properties: {
        path: { type: "string" },
        old_str: { type: "string", description: "EXACT text to replace (must be unique in file)" },
        new_str: { type: "string", description: "Replacement text" },
      },
      required: ["path", "old_str", "new_str"],
    },
  },
  {
    name: "read_file",
    description: "Read file content. REQUIRED before edit_file.",
    input_schema: {
      type: "object",
      properties: { path: { type: "string" } },
      required: ["path"],
    },
  },
  {
    name: "run_command",
    description: "Run a shell command in the sandbox (e.g. npm install lucide-react).",
    input_schema: {
      type: "object",
      properties: { command: { type: "string" } },
      required: ["command"],
    },
  },
  {
    name: "list_files",
    description: "List files under a directory.",
    input_schema: {
      type: "object",
      properties: { path: { type: "string", description: "e.g. src" } },
      required: ["path"],
    },
  },
];

Why split write_file and edit_file?

Lesson from Cursor and Claude Code: with only one "write" tool, the model always overwrites the entire file → high cost + risk of breaking unrelated code. Adding edit_file with a str_replace pattern lets the model do "surgical edits" — touching only what needs changing. Rule of thumb: new file → write_file; existing file → read_file first, then edit_file.
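To make the cost difference concrete, here are hypothetical tool inputs for the same one-line change (the file path and snippet are illustrative, not from the text above):

```typescript
// Changing one Tailwind class in a hypothetical src/components/Button.tsx.
// A surgical edit sends only the snippet and its replacement:
const editCall = {
  path: "src/components/Button.tsx",
  old_str: 'className="bg-red-500 text-white"',
  new_str: 'className="bg-blue-500 text-white"',
};

// A full rewrite would have to resend every line of the file:
const writeCall = {
  path: "src/components/Button.tsx",
  content: "/* ...the ENTIRE file, hundreds of tokens, just to flip one class... */",
};

// The edit payload stays tiny no matter how large the file grows.
console.log(JSON.stringify(editCall).length < 200); // true
```

The edit payload is constant-size in the length of the file, which is where the token savings come from.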

3. Virtual File System in React state

No database or disk needed. A Map<string, string> in Zustand is enough:

// store/files.ts
import { create } from "zustand";

interface FileStore {
  files: Map<string, string>;
  versions: Array<Map<string, string>>; // snapshot rollback

  write: (path: string, content: string) => void;
  edit: (path: string, oldStr: string, newStr: string) => { ok: boolean; error?: string };
  read: (path: string) => string | null;
  list: (prefix: string) => string[];
  snapshot: () => void;
  rollback: () => void;
}

export const useFiles = create<FileStore>((set, get) => ({
  files: new Map(),
  versions: [],

  write: (path, content) => set((s) => {
    const next = new Map(s.files);
    next.set(path, content);
    return { files: next };
  }),

  edit: (path, oldStr, newStr) => {
    const cur = get().files.get(path);
    if (!cur) return { ok: false, error: `File ${path} not found` };
    const idx = cur.indexOf(oldStr);
    if (idx === -1) return { ok: false, error: `old_str not found in ${path}` };
    if (cur.indexOf(oldStr, idx + 1) !== -1) {
      return { ok: false, error: `old_str appears multiple times in ${path} — need a more unique snippet` };
    }
    const updated = cur.replace(oldStr, newStr);
    get().write(path, updated);
    return { ok: true };
  },

  read: (path) => get().files.get(path) ?? null,
  list: (prefix) => [...get().files.keys()].filter((k) => k.startsWith(prefix)),

  snapshot: () => set((s) => ({ versions: [...s.versions, new Map(s.files)] })),
  rollback: () => set((s) => {
    const last = s.versions[s.versions.length - 1];
    return last ? { files: new Map(last), versions: s.versions.slice(0, -1) } : s;
  }),
}));

Why must edit check uniqueness?

If old_str matches multiple places in the file, String.replace only swaps the FIRST occurrence — not necessarily where the LLM intended. This is a maddening bug to debug. Anthropic's official str_replace_based_edit_tool enforces uniqueness too and returns a clear error so the model self-corrects — copy this pattern exactly.
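The failure mode is easy to demonstrate — with a string pattern, String.replace only touches the first match:

```typescript
const file = `const a = 1;\nconst b = 1;`;

// The model wants to change the SECOND "= 1" — but replace hits the first:
const wrong = file.replace("= 1", "= 2");
console.log(wrong); // "const a = 2;\nconst b = 1;" — edited the wrong line

// Counting occurrences first turns the silent mis-edit into a recoverable error:
const count = file.split("= 1").length - 1;
console.log(count); // 2 → reject with "old_str not unique"
```

The rejection error goes back to the model as a tool_result, and it retries with a longer, unique snippet.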

4. Live Preview with WebContainer

WebContainer runs Node.js inside WebAssembly right in the browser tab — no backend needed. It accepts a FileSystemTree, mounts it into a virtual VM, runs npm install and the dev server, then exposes it via an iframe URL.

// lib/preview.ts
import { WebContainer } from "@webcontainer/api";

let wcInstance: WebContainer | null = null;

export async function getContainer() {
  if (wcInstance) return wcInstance;
  wcInstance = await WebContainer.boot();
  return wcInstance;
}

export async function syncFiles(files: Map<string, string>) {
  const wc = await getContainer();
  const tree: Record<string, any> = {};
  for (const [path, content] of files) {
    const parts = path.split("/");
    let node = tree;
    for (let i = 0; i < parts.length - 1; i++) {
      node[parts[i]] ??= { directory: {} };
      node = node[parts[i]].directory;
    }
    node[parts[parts.length - 1]] = { file: { contents: content } };
  }
  await wc.mount(tree);
}

export async function startDevServer(): Promise<string> {
  const wc = await getContainer();
  const install = await wc.spawn("npm", ["install"]);
  if ((await install.exit) !== 0) throw new Error("npm install failed");

  // Register the listener BEFORE spawning the dev server, so a
  // server-ready event that fires immediately isn't missed.
  const ready = new Promise<string>((resolve) => {
    wc.on("server-ready", (_port, url) => resolve(url));
  });
  await wc.spawn("npm", ["run", "dev"]);
  return ready;
}

Then in React, a simple effect auto-resyncs whenever files change:

// components/Preview.tsx
const files = useFiles((s) => s.files);
const [previewUrl, setPreviewUrl] = useState<string | null>(null);

useEffect(() => {
  void syncFiles(files); // Vite dev server hot-reloads automatically
}, [files]);

useEffect(() => {
  startDevServer().then(setPreviewUrl);
}, []);

return previewUrl
  ? <iframe src={previewUrl} className="w-full h-full border-0" />
  : <div>Booting ...</div>;

WebContainer vs Sandpack vs E2B

  • WebContainer: runs entirely in the browser, free, full Node.js. Caveats: Chromium-only, requires CORS headers (COEP, COOP).
  • Sandpack (CodeSandbox): lighter but only bundles frontend (no Node runtime). Good for UI prototypes.
  • E2B/Modal: server-side, no browser limits, more powerful but costs money and adds latency.

The real Lovable uses a mix: WebContainer for fast preview, server-side build for production deploys.
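One practical gotcha worth spelling out: WebContainer only boots on a cross-origin-isolated page, which is what the COOP/COEP caveat above refers to. In Next.js that means serving both headers from your config — a minimal sketch (the header values are the ones WebContainer requires; the catch-all route pattern is an assumption you should narrow to the pages that actually boot the container):

```typescript
// next.config.ts — cross-origin isolation headers required by WebContainer
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  async headers() {
    return [
      {
        source: "/(.*)", // narrow this to the routes that embed the preview
        headers: [
          { key: "Cross-Origin-Embedder-Policy", value: "require-corp" },
          { key: "Cross-Origin-Opener-Policy", value: "same-origin" },
        ],
      },
    ];
  },
};

export default nextConfig;
```

Without these, `WebContainer.boot()` throws at runtime, which is a confusing first-run failure if you don't know to look for it.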

5. System prompt — the agent's brain

This is the most important piece, defining the builder's "personality". A good system prompt for a Lovable-clone needs:

  1. Role + scope: I'm a code generator for Vite + React + TypeScript + Tailwind apps.
  2. Conventions: file structure (src/components, src/App.tsx), naming, allowed dependencies.
  3. Tool usage rules: when to write vs edit, mandatory read_file before edit_file.
  4. Definition of Done: model must end with text describing the change (no infinite tool-call loop).

A prompt that covers all four:
export const SYSTEM_PROMPT = `You are an AI engineer building web apps for the user.

REQUIRED STACK:
- Vite + React 19 + TypeScript + Tailwind CSS v4
- No other frameworks (Next.js, Remix, ...): the preview sandbox does not support them.
- Entry files: src/main.tsx, src/App.tsx; html in index.html.

TOOL USAGE RULES:
1. Creating a NEW file → write_file
2. Editing an existing file → REQUIRED read_file first, then edit_file with str_replace
3. Need a new dependency → run_command "npm install <pkg>"
4. When done, STOP calling tools — write 1-2 sentences summarizing changes for the user

CODE RULES:
- Every component has clear TypeScript prop types
- Tailwind utility classes only, no inline styles
- No needless comments, no in-code explanations
- Each file < 200 lines, split into smaller components when needed

WHEN USER ASKS FOR A SMALL UI TWEAK:
- Touch only files that genuinely need changing
- Don't refactor unrelated code, don't create unnecessary files`;

6. Conversation loop with streaming

This is where everything fits together. A Next.js API route receives messages, calls Anthropic streaming, applies each tool_use to the virtual FS immediately, and streams each text delta back to the client.

// app/api/chat/route.ts
import Anthropic from "@anthropic-ai/sdk";
import { NextRequest } from "next/server";
import { SYSTEM_PROMPT } from "@/lib/prompt";
import { TOOLS } from "@/lib/tools";

const client = new Anthropic();

export async function POST(req: NextRequest) {
  const { messages, files } = await req.json();
  const encoder = new TextEncoder();
  const fsState = new Map<string, string>(Object.entries(files));

  const stream = new ReadableStream({
    async start(controller) {
      const send = (event: string, data: any) =>
        controller.enqueue(encoder.encode(`event: ${event}\ndata: ${JSON.stringify(data)}\n\n`));

      let convo: Anthropic.MessageParam[] = [...messages];

      // Agent loop — cap at 10 turns to prevent runaway
      for (let turn = 0; turn < 10; turn++) {
        const response = await client.messages.stream({
          model: "claude-sonnet-4-6",
          max_tokens: 8192,
          system: SYSTEM_PROMPT,
          tools: TOOLS,
          messages: convo,
        });

        const toolResults: Anthropic.ToolResultBlockParam[] = [];
        const assistantBlocks: Anthropic.ContentBlock[] = [];

        for await (const event of response) {
          if (event.type === "content_block_delta") {
            if (event.delta.type === "text_delta") {
              send("text", { delta: event.delta.text });
            }
          }
        }

        const final = await response.finalMessage();
        assistantBlocks.push(...final.content);

        // Apply tool calls to virtual FS
        for (const block of final.content) {
          if (block.type !== "tool_use") continue;
          send("tool_call", { name: block.name, input: block.input });
          const result = applyTool(block.name, block.input as any, fsState);
          send("tool_result", { name: block.name, result });
          toolResults.push({
            type: "tool_result",
            tool_use_id: block.id,
            content: typeof result === "string" ? result : JSON.stringify(result),
            is_error: typeof result === "object" && "error" in result,
          });
        }

        convo.push({ role: "assistant", content: assistantBlocks });

        if (toolResults.length === 0) {
          send("done", { files: Object.fromEntries(fsState) });
          break; // Model returned text → end of turn
        }
        convo.push({ role: "user", content: toolResults });
      }
      controller.close();
    },
  });

  return new Response(stream, {
    headers: { "Content-Type": "text/event-stream", "Cache-Control": "no-cache" },
  });
}

function applyTool(name: string, input: any, fs: Map<string, string>) {
  switch (name) {
    case "write_file": fs.set(input.path, input.content); return `OK wrote ${input.path}`;
    case "edit_file": {
      const cur = fs.get(input.path);
      if (!cur) return { error: `File ${input.path} not found` };
      if (!cur.includes(input.old_str)) return { error: `old_str not found` };
      if (cur.split(input.old_str).length > 2) return { error: `old_str not unique` };
      fs.set(input.path, cur.replace(input.old_str, input.new_str));
      return `OK edited ${input.path}`;
    }
    case "read_file": return fs.get(input.path) ?? { error: "File not found" };
    case "list_files": return [...fs.keys()].filter((k) => k.startsWith(input.path));
    case "run_command": return `Sandbox will exec next: ${input.command}`;
    default: return { error: "Unknown tool" };
  }
}

3 subtle points in the loop

  1. Apply on tool_use immediately: don't wait for end-of-turn. The frontend sees new files instantly → preview hot-reloads while the model is still "talking".
  2. Return informative errors: "old_str not unique" lets the model self-correct, no human intervention needed.
  3. Cap at 10 turns: blocks runaway loops when the model gets stuck (e.g. keeps calling read_file without ever editing).

7. Rendering the streaming UI

The client subscribes to SSE and updates the store. Instead of dumping plain text, each tool call renders as its own block — exactly like the real Lovable:

// hooks/useChat.ts
export function useChat() {
  const [messages, setMessages] = useState<Msg[]>([]);
  const filesStore = useFiles();

  async function send(prompt: string) {
    setMessages((m) => [...m, { role: "user", text: prompt }, { role: "assistant", parts: [] }]);
    filesStore.snapshot();

    const res = await fetch("/api/chat", {
      method: "POST",
      body: JSON.stringify({
        messages: messages.concat({ role: "user", content: prompt }),
        files: Object.fromEntries(filesStore.files),
      }),
    });
    const reader = res.body!.getReader();
    const decoder = new TextDecoder();

    let buf = "";
    while (true) {
      const { value, done } = await reader.read();
      if (done) break;
      buf += decoder.decode(value, { stream: true });
      const events = buf.split("\n\n");
      buf = events.pop() || "";
      for (const ev of events) {
        const [eline, dline] = ev.split("\n");
        const eventName = eline.replace("event: ", "");
        const data = JSON.parse(dline.replace("data: ", ""));
        handleEvent(eventName, data);
      }
    }
  }
  // ...
}

In the React component, render each part by type:

// components/Message.tsx
function MessagePart({ part }: { part: Part }) {
  if (part.type === "text") return <p>{part.text}</p>;
  if (part.type === "tool_call") {
    const icons: Record<string, string> = {
      write_file: "📝", edit_file: "✏️", read_file: "👀",
      run_command: "⚡", list_files: "📁",
    };
    return (
      <div className="border rounded-md px-3 py-2 my-1 bg-gray-50 text-sm font-mono">
        {icons[part.name]} {part.name}({describe(part.input)})
      </div>
    );
  }
  if (part.type === "tool_result" && part.error) {
    return <div className="text-red-600 text-xs">Error: {part.error}</div>;
  }
  return null;
}

8. Snapshot & rollback — Lovable's "Revert" feature

Each time the user submits a new prompt, call filesStore.snapshot() first. When the user clicks "Revert", rollback() restores the prior version. Simple but this is the feature that retains users — because the AI getting it wrong is routine.

sequenceDiagram
    participant U as User
    participant Chat as Chat Loop
    participant FS as File Store
    participant Prev as Preview
    U->>Chat: "Make the button blue"
    Chat->>FS: snapshot() — push state v3
    Chat->>FS: edit_file(Button.tsx, "bg-red", "bg-blue")
    FS->>Prev: hot reload
    Prev-->>U: blue button
    U->>Chat: "That's broken, revert"
    Chat->>FS: rollback() — pop to v3
    FS->>Prev: hot reload
    Prev-->>U: red button restored

Figure 2 — A simple snapshot stack is enough to ship a real "undo last AI edit" feature.

Upgrade: instead of saving full snapshots each turn (memory-heavy), save diffs (Myers diff or immer Patch) — that's how production Lovable keeps dozens of versions without bloating memory.
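A minimal sketch of that diff idea (hypothetical helpers, not the store code above): instead of copying the whole Map each turn, record only the files a turn touched, together with their previous contents, and undo by restoring them:

```typescript
// Reverse-patch versioning: each turn stores only { path → previous content }.
// `null` marks a file that did not exist before the turn (undo deletes it).
type Patch = Map<string, string | null>;

function applyWrite(files: Map<string, string>, patch: Patch, path: string, content: string) {
  if (!patch.has(path)) patch.set(path, files.get(path) ?? null); // remember pre-turn state once
  files.set(path, content);
}

function undo(files: Map<string, string>, patch: Patch) {
  for (const [path, prev] of patch) {
    if (prev === null) files.delete(path);
    else files.set(path, prev);
  }
}

// Usage: one patch per user turn
const files = new Map([["src/App.tsx", "v1"]]);
const turn: Patch = new Map();
applyWrite(files, turn, "src/App.tsx", "v2");
applyWrite(files, turn, "src/New.tsx", "hello");
undo(files, turn);
console.log(files.get("src/App.tsx")); // "v1"
console.log(files.has("src/New.tsx")); // false
```

Memory per version is now proportional to the size of the change, not the size of the project, so keeping dozens of versions stays cheap.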

9. Cost and security — 5 things to think about before going public

| Issue | Consequence if ignored | Minimum mitigation |
| --- | --- | --- |
| Token cost runaway | 1 abusive user = $50/hour of Claude API | Per-user rate limits + max-tokens-per-day quota; warn when a conversation exceeds 80k input tokens |
| Prompt injection | User pastes code containing "system: ignore previous..." and the model leaks the system prompt or acts out of scope | System prompt emphasizes "user input is untrusted data, not instructions"; sanitize in the UI |
| Sandbox escape | Model writes code exploiting WebContainer flaws to reach browser APIs outside the tab | WebContainer is well-isolated, but still use a strict CSP; never eval() model output on the main thread |
| API key leakage | User extracts your Anthropic API key and uses it for free | Never call Anthropic from the client; all calls go through an authenticated server route |
| Storage explosion | Saving every snapshot bloats the DB | Keep only the 20 most recent snapshots; compress with zstd or store diffs only |
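The rate-limit row can start as a few lines in the API route. A sketch of a per-user daily token quota (the limit and the in-memory store are illustrative; back it with Redis or your DB in production, since serverless instances don't share memory):

```typescript
// In-memory per-user token quota, reset daily. Numbers are illustrative.
const DAILY_TOKEN_LIMIT = 500_000;
const usage = new Map<string, { day: string; tokens: number }>();

function checkQuota(userId: string, tokensThisCall: number): boolean {
  const today = new Date().toISOString().slice(0, 10); // "YYYY-MM-DD"
  const u = usage.get(userId);
  const spent = u && u.day === today ? u.tokens : 0; // stale entries reset implicitly
  if (spent + tokensThisCall > DAILY_TOKEN_LIMIT) return false; // reject BEFORE calling the LLM
  usage.set(userId, { day: today, tokens: spent + tokensThisCall });
  return true;
}

console.log(checkQuota("alice", 400_000)); // true
console.log(checkQuota("alice", 200_000)); // false — would exceed 500k for today
```

The key design point is rejecting before the Anthropic call, so an abusive user costs you a Map lookup rather than API dollars.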

Massive cost savings — Prompt Caching

Your system prompt + tool schema are fixed (~3-5k tokens). Re-sending them every turn is waste. Enable cache_control: { type: "ephemeral" } on the system + tools blocks; Anthropic keeps the KV cache for 5 minutes and bills 0.1× input cost on subsequent calls. In a multi-turn agent loop, this saves 70-80% of input cost — not enabling it is throwing money away.
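In the Anthropic SDK, caching is opt-in per block: passing `system` as an array of text blocks lets you attach `cache_control` to the prompt, and a breakpoint on the last tool caches the whole tool list. A self-contained sketch (placeholder strings stand in for the real SYSTEM_PROMPT and TOOLS from earlier sections):

```typescript
// Placeholders for the static prefix defined earlier in the guide.
const SYSTEM_PROMPT = "You are an AI engineer building web apps...";
const TOOLS = [{ name: "write_file" }, { name: "edit_file" }]; // schemas omitted for brevity

// Breakpoint on the LAST tool: everything up to it (the full tool list)
// is written to the 5-minute cache on the first call, then read back cheaply.
const cachedTools = TOOLS.map((t, i) =>
  i === TOOLS.length - 1 ? { ...t, cache_control: { type: "ephemeral" } } : t
);

const request = {
  model: "claude-sonnet-4-6",
  max_tokens: 8192,
  // `system` as an array of blocks lets us attach cache_control to the prompt:
  system: [{ type: "text", text: SYSTEM_PROMPT, cache_control: { type: "ephemeral" } }],
  tools: cachedTools,
  messages: [],
};

console.log("cache_control" in cachedTools[1]); // true — only the last tool carries it
```

Only the static prefix (system + tools) should carry breakpoints; the growing `messages` array changes every turn and would just churn the cache.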

10. Seed template — why does Lovable always start so fast?

The instant a user types their first prompt, Lovable already has a working UI shell. The secret: seed templates — a Vite + React + Tailwind starter pre-loaded into the virtual FS from second zero, no waiting for the LLM to scaffold from scratch.

// lib/seed.ts
export const SEED_FILES: Record<string, string> = {
  "package.json": JSON.stringify({
    name: "ai-app", type: "module",
    scripts: { dev: "vite", build: "vite build" },
    dependencies: { react: "^19.0.0", "react-dom": "^19.0.0" },
    devDependencies: {
      vite: "^6.0.0", "@vitejs/plugin-react": "^4.3.0",
      typescript: "^5.7.0", tailwindcss: "^4.0.0",
      "@tailwindcss/vite": "^4.0.0",
    },
  }, null, 2),

  "vite.config.ts": `import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
import tailwind from "@tailwindcss/vite";
export default defineConfig({ plugins: [react(), tailwind()] });`,

  "index.html": `<!DOCTYPE html><html><head><title>App</title></head>
<body><div id="root"></div><script type="module" src="/src/main.tsx"></script></body></html>`,

  "src/main.tsx": `import React from "react";
import { createRoot } from "react-dom/client";
import "./index.css";
import App from "./App";
createRoot(document.getElementById("root")!).render(<App />);`,

  "src/index.css": `@import "tailwindcss";`,

  "src/App.tsx": `export default function App() {
  return <div className="min-h-screen flex items-center justify-center text-2xl">
    Hello from your AI-built app
  </div>;
}`,
};

The LLM only needs to edit App.tsx to fulfill the first prompt — TTFP (Time-To-First-Preview) drops to 2-3 seconds instead of 30.

11. Upgrade roadmap — from MVP to product

v0 — MVP (~700 LOC, 1 week)
5 tools, in-memory FS, WebContainer preview, system prompt, simple snapshot/rollback. Enough to demo to friends.
v1 — Persist & Share
Save files to Postgres (binary BLOB or JSONB), session-based auth, share URLs. Add prompt caching for cost savings.
v2 — Multi-file at scale
Add delete_file, rename_file; index large files into a RAG store (lets the LLM reference 200+-line files without spending the token budget).
v3 — Connect to data
connect_db, query_table tools; user pastes Supabase URL → AI generates code wired to real data. This is where real Lovable pulled away from v0.dev.
v4 — Deploy
Integrate Vercel/Netlify APIs: a "Publish" button builds & deploys to a public URL. Track versions, support production rollback.
v5 — Multi-agent
Split the "designer agent" (Tailwind + UX) from the "logic agent" (state, APIs). An orchestrator divides the task. v0.dev and Bolt are heading this direction.

12. Lessons learned from building this for real

Lesson 1 — Tool design dictates 80% of quality

Switching from a single "write" tool to two "write/edit" tools cut cost by 60% on the same task. Returning detailed error messages cut another 30% in retries. Investing in tool ergonomics matters more than swapping in a bigger model.

Lesson 2 — Streaming UX is what makes "AI feel smart"

For the same 30s response, the user perceives it as much faster when tool calls appear progressively with icons. Don't wait for end-of-turn to render — that kills perceived performance.

Lesson 3 — The seed template is a cheat code

10 minutes preparing a good seed = 10s saved every time a user starts a new project. Serving 1000 users/day means 2.7 hours of total wait time eliminated.

Lesson 4 — Rollback matters more than you think

Users aren't afraid of the AI being wrong — they're afraid of not being able to go back when it is. Snapshot + revert is the #1 retention feature, not a bigger model.

13. Conclusion

Lovable looks complex but breaks down into an agent loop with tool use, a virtual FS, a preview, and a streaming UI. ~700 lines of code is enough for a working minimal version — and once it works, every additional Lovable feature (deploy, multiplayer, Supabase connect, AI redesign...) is just one more tool or component, not an architectural rewrite.

The real value of this guide isn't cloning Lovable to compete — you won't beat a team that's burned $200M building the product. The value is understanding exactly how an AI app builder really works, so when you need to ship a similar feature (internal tool generator, BI dashboard builder, AI email-template designer), you know precisely what to write in the first week instead of guessing from papers.
