How it works

A short tour of the stack, the data model, and how the hosted and desktop builds relate to each other.

Go backend

WriteKit is a single executable. The HTTP server, rendering engine, MCP endpoint, dashboards, database migrations, and static assets are all compiled into one file via //go:embed. To run it, you need the executable and a config.

[Architecture diagram: a browser or MCP client sends requests to the single writekit binary. Inside: a Chi router fronting web templates (auth, billing, docs) and embedded React SPAs; a render engine (goldmark, chroma, D2); an MCP server for pages, collections, and teams; an events.Bus feeding cache and embeddings; a platform Postgres for users, tenants, and sessions; and a tenant pool of per-site SQLite files in WAL mode.]

One binary · one process · everything embedded

Why Go

Go produces statically-linked executables and has an HTTP server in its standard library. That keeps the runtime dependency surface small — there's no separate language runtime to install alongside the server. Concurrency is handled with goroutines, so each request, background worker, and event subscriber runs independently without a thread pool to tune. The same source cross-compiles for Linux, macOS, and Windows, which is what makes the desktop build (below) feasible.

The dashboards are client-side JavaScript applications. They're built ahead of time into static bundles and embedded in the executable, so at runtime the server hands out static files.

SQLite per site

Each site gets its own SQLite file on disk. Pages, collections, versions, embeddings, and settings all live together in that one file. There's no shared content database and no row-level tenant column — tenancy is enforced by which file you open.

[Diagram: tenant.Pool (WAL + LRU, open/cache/close, one handle per request) maps each site to its own file: alice.db in ~/data/tenants/alice/, bob.db in ~/data/tenants/bob/, and so on. Each file holds the per-site tables — pages, collections, page_versions, page_embeddings, settings — plus migrations/*.sql.]

One file per site · strong isolation · trivial backup

Why one file per site

The tradeoff: SQLite allows only one writer at a time per file. We use WAL mode so readers never block each other, and a connection pool that serializes writes per site. For a publishing workflow this matches actual usage — most writes come from one person or one AI session at a time.

Desktop app

The desktop app uses the same codebase as the hosted server, wrapped with Wails. It imports the same internal/app package with LOCAL=true and serves the user SPA over loopback inside a native window.

[Diagram: two build targets sharing internal/app. Hosted (writekit.dev): OAuth + email sign-in, subdomain routing, buildRouter(), platform Postgres plus tenant SQLite. Desktop (Wails window): context-injected identity, buildLocalRouter(), a single SQLite file in a local folder. Same code on both paths.]

Two build targets · one internal/app · no platform shim

What's different in local mode

The desktop build runs without internet access. If you want semantic search, point it at a local Ollama instance for embeddings; that runs on your machine too.

Open source

The codebase is at github.com/Macawls/writekit. That includes the server, desktop app, frontends, migrations, and the MCP tool definitions.

The repo is self-hostable. The docker compose file used for local development (Postgres and Ollama) is the same one that can run a self-hosted instance.
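For a sense of the shape, here is an illustrative compose file, not the repo's actual one: service names, image tags, ports, and credentials are all assumptions.

```yaml
# Illustrative only — check the repo for the real docker-compose file.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: writekit   # placeholder credential
    volumes:
      - pgdata:/var/lib/postgresql/data
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama
volumes:
  pgdata:
  ollama:
```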

Data & privacy

This section describes what data WriteKit stores, where it stores it, and which data paths exist between your device and our servers.

[Diagram: on your device, the AI client and desktop app keep drafts in progress in local SQLite, along with OAuth tokens. writekit.dev (hosted) stores only what it needs to run: account, email, tenant, your pages, and a Stripe customer ID. What we don't: analytics pixels, third-party trackers, reading your drafts, training on your content.]

What's stored where · and what isn't

Hosted mode — what's stored on our servers

Your account and email address, your tenant, the pages that make up your site, and a Stripe customer ID for billing.

What we don't collect

No analytics pixels and no third-party trackers. We don't read your drafts, and we don't train on your content.

Desktop mode

The desktop app doesn't send data to writekit.dev. It writes a SQLite file to your user data directory, and the network is only touched if you configure an external service (such as a local Ollama instance for embeddings).

Content is stored as Markdown inside a standard SQLite schema, so exporting is a matter of reading rows from the file — there's no separate export pipeline or proprietary format.