Architecture

How Frontman Works

Three parts: a framework plugin in your dev server, an MCP server in your browser, and an AI agent on a separate backend. Here's how they connect.

The Problem

AI coding tools read your source files. They see your component code and class names. What they don't see is the rendered page.

When you ask an AI agent to "fix the hero section overflow on mobile," it reads your component file and guesses which element you mean, which CSS property to change, and what the result will look like. It never opens a browser. It doesn't know that p-4 md:p-8 resolves to 32px at your current viewport width, or that a sibling div's margin is collapsing into the element you're trying to fix.

The information needed to make accurate visual edits exists in the browser's runtime: the DOM tree, computed CSS values, the component hierarchy, viewport dimensions. Source files alone are not enough.

Frontman solves this by putting the AI agent where the information is: inside the browser, connected to the framework's dev server.

Three-Part Architecture

Frontman has three components that work together. Each handles a different part of the workflow.

1. Framework Plugin (runs inside your dev server)

You install a framework-specific package: @frontman-ai/nextjs, @frontman-ai/astro, or @frontman-ai/vite. This installs as middleware in your dev server. It is not a proxy or a separate process.

The plugin does three things:

  • Serves the Frontman overlay UI at /frontman
  • Relays file operations (read, write, search, grep) to the dev server's file system
  • Captures framework-specific context: the component tree, server-side routes, compilation errors, build warnings, and server logs

Because it runs inside the framework, it has access to information that external tools can't reach. For Next.js, that means React Server Components and API routes. For Astro, it means Islands architecture boundaries and client directives. For Vite, it means the module graph and HMR pipeline.
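The plugin's three jobs can be pictured as a routing decision inside the dev server. This is an illustrative sketch only: the paths, route names, and handler split below are assumptions for clarity, not Frontman's documented API.

```typescript
// Hypothetical sketch of how the middleware might route dev-server
// requests. Everything outside /frontman passes through untouched.
type FrontmanRoute = "overlay" | "file-op" | "context" | "pass-through";

function routeFor(path: string): FrontmanRoute {
  if (path === "/frontman") return "overlay"; // serve the overlay UI
  if (path.startsWith("/frontman/api/files")) return "file-op"; // read/write/search/grep
  if (path.startsWith("/frontman/api/context")) return "context"; // component tree, errors, logs
  return "pass-through"; // let the framework handle everything else
}
```

Because the routing lives in middleware rather than a proxy, a pass-through request costs one string check and nothing more.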

2. Browser-Side MCP Server (runs in your browser)

When you open /frontman in your browser, a JavaScript module loads that acts as an MCP (Model Context Protocol) server. This server exposes the browser's runtime to the AI agent through a set of tools:

  • Read the full DOM tree, element attributes, and text content
  • Read computed CSS: actual pixel values, resolved colors, cascaded font sizes, box model dimensions
  • Click any element and resolve it to its source file and line number via source maps
  • Capture screenshots of the current viewport or specific elements
  • Switch viewport dimensions and test responsive behavior
  • Read console.log output, runtime errors, and build warnings

The AI agent calls these tools through the MCP protocol. When it needs to know the computed width of an element or take a screenshot after an edit, it makes a tool call and gets a structured response.
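MCP frames tool calls as JSON-RPC 2.0 messages with a `tools/call` method. The sketch below shows that message shape; the tool name `read_computed_styles` and its arguments are illustrative assumptions, not Frontman's documented tool names.

```typescript
// JSON-RPC 2.0 envelope used by MCP for tool invocation.
interface McpToolCall {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function makeToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>,
): McpToolCall {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

// Hypothetical call: ask the browser-side server for computed styles.
const call = makeToolCall(1, "read_computed_styles", {
  selector: "#hero",
  properties: ["width", "padding", "box-shadow"],
});
```

The response comes back as a structured JSON-RPC result, which is what lets the agent reason over exact pixel values instead of guessing from class names.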

3. AI Agent Server (separate Elixir/Phoenix backend)

The AI agent runs on a separate server built with Elixir and Phoenix. It orchestrates the full workflow:

  • Receives your natural-language instruction ("make this card's shadow more subtle")
  • Calls the browser-side MCP server to inspect the element, read its styles, and take a screenshot
  • Calls the LLM (Claude, GPT, or your configured provider) with the full context: source code, computed styles, component tree, screenshot
  • Receives the LLM's proposed edit and writes it to the source file via the framework plugin
  • The framework's HMR picks up the file change and hot-reloads the browser automatically
  • Takes a follow-up screenshot to verify the result

Your API keys are stored encrypted on this server and never exposed to the browser. The server communicates with the browser via WebSocket and with the framework plugin via HTTP.
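The orchestration above can be sketched as one async pipeline with injected clients. The real server is Elixir/Phoenix; this TypeScript version, with all interface and function names invented for illustration, only shows the order of operations.

```typescript
// Sketch of the agent's edit loop. All names are assumptions.
interface BrowserTools {
  inspect(selector: string): Promise<{ styles: Record<string, string>; screenshot: string }>;
  screenshot(): Promise<string>;
}
interface FilePlugin {
  read(path: string): Promise<string>;
  write(path: string, contents: string): Promise<void>;
}
interface Llm {
  proposeEdit(ctx: {
    instruction: string;
    source: string;
    styles: Record<string, string>;
    screenshot: string;
  }): Promise<string>;
}

async function runEdit(
  instruction: string,
  target: { selector: string; file: string },
  browser: BrowserTools,
  files: FilePlugin,
  llm: Llm,
): Promise<string> {
  const { styles, screenshot } = await browser.inspect(target.selector); // runtime context
  const source = await files.read(target.file); // source context via the plugin
  const edited = await llm.proposeEdit({ instruction, source, styles, screenshot });
  await files.write(target.file, edited); // HMR picks this up and hot-reloads
  return browser.screenshot(); // follow-up capture to verify the result
}
```

Note the loop never touches API keys in the browser: the LLM call happens server-side, which matches the key-handling described above.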

What Happens When You Click an Element

This is the core workflow. Here's the exact sequence.

1. You click an element in the browser

The browser-side MCP server intercepts the click and identifies the DOM element. It reads the element's tag, classes, attributes, text content, and computed CSS.

2. Source map resolution

The framework plugin uses source maps to trace the element back to its source location: the exact file path, component name, and line number that rendered it. This is why framework integration matters — a generic DOM inspector can't do this reliably.

3. You describe the change

Type something like "make the shadow more subtle" or "increase padding on mobile." The AI already has the element's context, so you don't need to specify which file or which component.

4. The agent reads and edits

The agent reads the source file via the framework plugin, consults the computed styles from the MCP server, sends everything to the LLM, and writes the proposed change back to the file.

5. Hot reload shows the result

The framework's built-in HMR detects the file change and reloads the component in the browser. You see the result immediately without refreshing the page or switching windows.

6. Iterate or accept

If the result is wrong, describe what's off and the agent tries again — with the new screenshot as additional context. If it's right, move on. The change is a normal file edit in your project, visible in git status.

What the AI Agent Sees

Here's a concrete example. When you click a card component in a Next.js app, the AI agent receives:

  • Source location: src/components/ProductCard.tsx:23
  • Component name: <ProductCard> (React)
  • Computed styles: width: 320px, padding: 16px, box-shadow: 0 1px 3px rgba(0,0,0,0.12), font-size: 14px, color: #374151
  • DOM context: parent <div class="grid grid-cols-3">, siblings: 2 other ProductCard elements
  • Source code: full file contents of ProductCard.tsx, including props interface and Tailwind classes
  • Screenshot: current viewport capture showing the element in context

This is the information gap. An IDE-based agent would have the source code. A browser extension would have the DOM. Frontman provides both, linked together through source maps and framework middleware.
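Put together, the payload for this example might look like the structure below. The field names are assumptions; the values are the ones listed above.

```typescript
// Illustrative shape of the combined context payload.
interface ElementContext {
  sourceLocation: { file: string; line: number };
  componentName: string;
  computedStyles: Record<string, string>;
  domContext: { parent: string; siblingCount: number };
  sourceCode: string; // full file contents
  screenshot: string; // e.g. a base64 viewport capture
}

const example: ElementContext = {
  sourceLocation: { file: "src/components/ProductCard.tsx", line: 23 },
  componentName: "ProductCard",
  computedStyles: {
    width: "320px",
    padding: "16px",
    "box-shadow": "0 1px 3px rgba(0,0,0,0.12)",
    "font-size": "14px",
    color: "#374151",
  },
  domContext: { parent: "div.grid.grid-cols-3", siblingCount: 2 },
  sourceCode: "/* full contents of ProductCard.tsx */",
  screenshot: "<base64 viewport capture>",
};
```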

Why Middleware, Not a Browser Extension

Browser extensions can inspect the DOM. They can read computed styles. They can take screenshots. So why build this as framework middleware?

Start with source maps. When you click a <div>, a browser extension knows it's a div with certain classes. It doesn't know that this div was rendered by ProductCard.tsx at line 23. Source map resolution requires access to the build pipeline, which only the dev server has. The framework plugin provides this.
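Mechanically, once the build pipeline has done the source-map decoding, resolving a click reduces to a lookup. This sketch assumes the plugin keeps a build-time index keyed by a stable element id it attaches during compilation; that mechanism and every name here are illustrative, not Frontman's actual implementation.

```typescript
// Illustrative only: click resolution as a lookup into a build-time
// index. Real source-map decoding (VLQ-encoded mappings) happens in
// the bundler's tooling; this shows only the shape of the result.
interface OriginalPosition {
  file: string;
  component: string;
  line: number;
}

const buildIndex = new Map<string, OriginalPosition>([
  ["el-42", { file: "src/components/ProductCard.tsx", component: "ProductCard", line: 23 }],
]);

function resolveClick(elementId: string): OriginalPosition | undefined {
  return buildIndex.get(elementId);
}
```

A browser extension has no way to populate such an index, because the mapping data exists only in the dev server's build artifacts.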

Then there's the component tree. The rendered DOM carries no component structure: it has no concept of React components, Vue reactivity, or Svelte reactive declarations. The framework plugin walks the framework's internal tree (React fiber nodes, Vue component instances, etc.) and maps DOM elements to their component boundaries, props, and state.

Middleware also gives you server-side context. An extension runs in the browser only. Frontman's middleware runs on both sides: it captures server routes, server logs, middleware execution, and database query timing. If a page is slow because of a server component, the AI knows that.

And finally, file operations. A browser extension can't write to your file system (security sandbox). Frontman's framework plugin handles file reads, writes, search, and grep on the dev server, which has file system access by design.

The trade-off is setup time. A browser extension is zero-install. Frontman requires running an installer command (npx @frontman-ai/nextjs install) that adds middleware to your project. This takes about 60 seconds and modifies your dev configuration. Whether that trade-off makes sense depends on how much runtime context your workflow needs.

Multi-Select: Batch Edits

Frontman supports multi-select. Hold Shift, click multiple elements, add separate instructions to each, and the agent generates edits for all of them in one pass.

If multiple selections map to the same source file, the file is read once and edited once with all changes combined. This avoids the overhead of sequential round-trips for what are often related visual fixes. Three spacing adjustments, two color changes, and a font size update can be a single operation.
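The per-file batching can be sketched as a simple grouping step. The `Selection` shape and function name are assumptions for illustration.

```typescript
// Sketch: group multi-select instructions by resolved source file so
// each file is read once and written once with all changes combined.
interface Selection {
  file: string; // resolved source file for the clicked element
  instruction: string; // per-element natural-language instruction
}

function groupByFile(selections: Selection[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const s of selections) {
    const list = groups.get(s.file) ?? [];
    list.push(s.instruction);
    groups.set(s.file, list);
  }
  return groups;
}
```

Five selections across two files become two file operations instead of five sequential round-trips.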

What Frontman Cannot Do

Frontman is specialized. It is good at visual frontend editing with runtime context. It is not a general-purpose AI coding tool.

  • Frontman doesn't provide inline code suggestions. If you want tab completion, use Cursor, Windsurf, or Copilot alongside Frontman.
  • Frontman cannot run shell commands, execute tests, or do git operations. For that, use Claude Code or any IDE agent.
  • Framework support is limited to Next.js, Astro, and Vite (which covers React, Vue, Svelte, and SolidJS via Vite plugins). Angular, Remix, and standalone SvelteKit are not supported yet.
  • Frontman edits existing running apps. It doesn't generate new projects from scratch. For that, use v0, Bolt, or Lovable.
  • You need a local dev server running. Frontman doesn't work in StackBlitz, CodeSandbox, or other cloud environments.

For general-purpose AI coding, use Frontman alongside your existing tools. Frontman in the browser for visual frontend work. Your IDE agent for everything else.

Frequently Asked Questions

Does Frontman modify my production build?

No. The framework plugin only activates when NODE_ENV=development. In a production build, the middleware is stripped out by tree-shaking. Your deployment bundle is identical whether Frontman is installed or not.
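The dev-only guard amounts to a conditional registration, sketched below. The function name and shape are illustrative, not Frontman's actual code.

```typescript
// Sketch: register middleware only in development, so a production
// bundler sees an unreachable branch and can tree-shake it away.
function frontmanMiddleware(env: string | undefined): ((req: unknown) => void) | null {
  if (env !== "development") return null; // no-op in production builds
  return (_req) => {
    /* serve overlay, relay file ops, capture context */
  };
}
```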

Which frameworks does Frontman support?

Next.js (App Router and Pages Router, Webpack and Turbopack), Astro (SSR, SSG, hybrid, Islands architecture), and Vite (React, Vue, Svelte, SolidJS). Each has a dedicated integration package: @frontman-ai/nextjs, @frontman-ai/astro, and @frontman-ai/vite.

What LLM providers does Frontman support?

You bring your own API keys. Supported providers include Anthropic (Claude), OpenAI (ChatGPT), OpenRouter, and any OpenAI-compatible API. You can also sign in with your Claude or ChatGPT subscription via OAuth. The AI agent server handles the routing — your keys are stored encrypted and never exposed to the browser.

Can Frontman edit backend code?

Yes. The AI agent can read, write, search, and grep any file in your project directory. It can also capture console logs, build errors, and run Lighthouse audits. It is optimized for visual frontend editing, but it is not limited to it.

How does Frontman handle security?

Frontman runs locally. It only modifies files in your project directory. It cannot run arbitrary shell commands, deploy code, or push to git. API keys are encrypted on the server and never exposed to the browser. Every change produces a git diff for your normal code review process. Read the full security model at /blog/security/.

Try It Yourself

One command. No account. No credit card.

$ npx @frontman-ai/nextjs install    # Next.js
$ npx @frontman-ai/vite install      # Vite (React, Vue, Svelte, SolidJS)
$ astro add @frontman-ai/astro       # Astro