Go + React: Our full-stack architecture
Why we chose Go for the backend, React for the frontend, and how they work together.
When we started building WriteKit, the technology choices weren’t arbitrary. Every decision optimized for one thing: staying fast at scale with minimal infrastructure.
Why Go for the backend
Go wasn’t the trendy choice. Next.js with Vercel would have been faster to prototype. But we had specific requirements that ruled out Node.js:
| Requirement | Go | Node.js |
|---|---|---|
| Memory per connection | ~2KB (goroutine) | ~1MB (thread) |
| Cold start time | 0ms (native binary) | 100-500ms (JIT) |
| Deployment artifact | Single 15MB binary | node_modules hell |
| CPU-bound performance | 10-40x faster[^1] | Limited by V8 |
Memory matters when you have thousands of tenants
WriteKit runs a separate SQLite database per tenant. That means the server manages potentially thousands of database connections. Go’s goroutines make this trivial—each costs about 2KB of stack space.
In Node.js, each connection would require significantly more memory, and you’d hit event loop blocking issues under load.
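To make that concrete, a per-tenant handle cache can be little more than a mutex-guarded map of lazily opened *sql.DB values. This is a minimal sketch, not WriteKit's actual code: the tenantDBs type, the data/ path layout, and the mattn/go-sqlite3 driver are all assumptions.

```go
import (
    "database/sql"
    "path/filepath"
    "sync"

    _ "github.com/mattn/go-sqlite3" // assumed SQLite driver; WriteKit's actual choice isn't stated
)

// tenantDBs lazily opens one SQLite handle per tenant and caches it,
// so thousands of tenants only cost memory for the databases actually in use.
type tenantDBs struct {
    mu  sync.Mutex
    dbs map[string]*sql.DB
}

func newTenantDBs() *tenantDBs {
    return &tenantDBs{dbs: make(map[string]*sql.DB)}
}

// get returns the cached handle for a tenant, opening its database file on first use.
func (t *tenantDBs) get(tenantID string) (*sql.DB, error) {
    t.mu.Lock()
    defer t.mu.Unlock()
    if db, ok := t.dbs[tenantID]; ok {
        return db, nil
    }
    db, err := sql.Open("sqlite3", filepath.Join("data", tenantID+".db"))
    if err != nil {
        return nil, err
    }
    t.dbs[tenantID] = db
    return db, nil
}
```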
The single binary advantage
Our entire backend compiles to one executable:
$ ls -lh writekit
-rwxr-xr-x 1 user user 15M Jan 15 12:00 writekit
$ file writekit
writekit: ELF 64-bit LSB executable, x86-64, statically linked
No node_modules. No Python virtualenv. No Ruby gems. No “works on my machine” deployment problems.
Real request handling
Here’s what a typical handler looks like:
func (s *Server) getPost(w http.ResponseWriter, r *http.Request) {
    slug := chi.URLParam(r, "slug")
    tenant := s.getTenant(r)
    post, err := tenant.Posts.GetBySlug(r.Context(), slug)
    if err != nil {
        s.notFound(w, r)
        return
    }
    s.render(w, "post.html", post)
}
No middleware chains hiding behavior. No decorator magic. No framework abstractions. The code does exactly what it says.
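Mounting such a handler is equally direct. Here is a sketch of the wiring rather than WriteKit's actual router setup; the route path is an assumption.

```go
import (
    "net/http"

    "github.com/go-chi/chi/v5"
)

// routes mounts the handlers shown above on a chi router.
// The path is illustrative, not WriteKit's real routing table.
func (s *Server) routes() http.Handler {
    r := chi.NewRouter()
    r.Get("/{slug}", s.getPost)
    return r
}
```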
SQLite: The database that scales down
Most multi-tenant SaaS uses PostgreSQL or MySQL with row-level tenancy:
-- Traditional approach: everyone in one database
SELECT * FROM posts
WHERE tenant_id = 'abc123'
AND slug = 'my-post';
WriteKit uses database-level tenancy:
-- WriteKit: each tenant has their own database
-- (connected to tenant's SQLite file)
SELECT * FROM posts WHERE slug = 'my-post';
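Behind the handler shown earlier, the repository method can issue exactly that query against the tenant's own handle. A sketch, assuming simple Post and Posts types; WriteKit's real schema and models aren't shown in this post.

```go
import (
    "context"
    "database/sql"
)

// Post and Posts are illustrative types, not WriteKit's actual models.
type Post struct {
    Slug  string
    Title string
    Body  string
}

type Posts struct {
    db *sql.DB // already scoped to one tenant's SQLite file
}

// GetBySlug needs no tenant_id filter: the handle itself is the tenant boundary.
func (p *Posts) GetBySlug(ctx context.Context, slug string) (*Post, error) {
    var post Post
    err := p.db.QueryRowContext(ctx,
        `SELECT slug, title, body FROM posts WHERE slug = ?`, slug,
    ).Scan(&post.Slug, &post.Title, &post.Body)
    if err != nil {
        return nil, err
    }
    return &post, nil
}
```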
The benefits are substantial:
| Aspect | Row-level tenancy | Database-level tenancy |
|---|---|---|
| Data isolation | Logical (enforced by code) | Physical (separate files) |
| Export user data | Complex queries + transforms | Copy a file |
| Noisy neighbor risk | High | Zero |
| Backup/restore | Entire database | Single tenant |
| Read performance | Index on tenant_id | No tenant filtering |
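"Copy a file" is not a figure of speech: a tenant export can be a straight file copy. The sketch below uses illustrative paths and an assumed exportTenant helper; a production version would take a consistent snapshot first (for example via SQLite's backup API or VACUUM INTO) rather than copying a live database.

```go
import (
    "io"
    "os"
    "path/filepath"
)

// exportTenant copies one tenant's SQLite file as its full data export.
// Illustrative only: paths are assumptions, and a real export should snapshot
// the database (SQLite backup API or VACUUM INTO) instead of copying it live.
func exportTenant(tenantID, destPath string) error {
    src, err := os.Open(filepath.Join("data", tenantID+".db"))
    if err != nil {
        return err
    }
    defer src.Close()

    dst, err := os.Create(destPath)
    if err != nil {
        return err
    }
    if _, err := io.Copy(dst, src); err != nil {
        dst.Close()
        return err
    }
    return dst.Close()
}
```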
SQLite performance numbers
For read-heavy workloads (which blogs are), SQLite is remarkably fast:
- Simple SELECT: 0.02ms
- JOIN across 3 tables: 0.15ms
- Full-text search: 0.8ms
These numbers are from our production environment. Network latency to PostgreSQL would add 1-5ms per query. SQLite runs in-process—zero network overhead.
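Latencies in this range are straightforward to sanity-check locally with a Go benchmark against a SQLite file. This is a sketch of how one might measure it, not the harness behind the numbers above; the posts schema, testdata path, and driver are assumptions.

```go
import (
    "database/sql"
    "testing"

    _ "github.com/mattn/go-sqlite3" // assumed driver
)

// BenchmarkSimpleSelect times an in-process point query against a local SQLite file.
func BenchmarkSimpleSelect(b *testing.B) {
    db, err := sql.Open("sqlite3", "testdata/tenant.db")
    if err != nil {
        b.Fatal(err)
    }
    defer db.Close()

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        var title string
        if err := db.QueryRow(
            `SELECT title FROM posts WHERE slug = ?`, "my-post",
        ).Scan(&title); err != nil {
            b.Fatal(err)
        }
    }
}
```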
React only where it matters
The blog reader experience is server-rendered HTML:
<!-- What readers see: plain HTML -->
<article>
  <h1>My Blog Post</h1>
  <p>Content rendered server-side...</p>
</article>
No JavaScript required to read a blog post. Accessible by default. Fastest possible time-to-first-byte.
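The render call in the handler earlier can be little more than the standard library's html/template. A sketch under the assumption that templates are parsed once at startup; only the relevant field of Server is shown, and none of this is WriteKit's actual rendering code.

```go
import (
    "bytes"
    "html/template"
    "net/http"
)

// Server is shown with only the field this sketch needs; the real struct has more.
type Server struct {
    tmpl *template.Template // e.g. template.Must(template.ParseGlob("templates/*.html"))
}

// render executes a named template into a buffer, then writes plain HTML.
// Buffering means a template error never leaks a half-written page.
func (s *Server) render(w http.ResponseWriter, name string, data any) {
    var buf bytes.Buffer
    if err := s.tmpl.ExecuteTemplate(&buf, name, data); err != nil {
        http.Error(w, "template error", http.StatusInternalServerError)
        return
    }
    w.Header().Set("Content-Type", "text/html; charset=utf-8")
    buf.WriteTo(w)
}
```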
The admin studio is a React SPA because editors need:
- Real-time markdown preview
- Drag-and-drop image uploads
- Autosave with optimistic updates
- Complex state management
// Studio uses Nanostores for lightweight state
import { atom, computed } from 'nanostores'
import { useStore } from '@nanostores/react'

// Assumed shape of the editor state
interface EditorState {
  content: string
  isDirty: boolean
  lastSaved: Date | null
}

const $editor = atom<EditorState>({
  content: '',
  isDirty: false,
  lastSaved: null,
})

const $canSave = computed($editor, (state) =>
  state.isDirty && state.content.length > 0
)

function SaveButton() {
  const canSave = useStore($canSave)
  return <button disabled={!canSave}>Save</button>
}
Nanostores adds 1KB to the bundle. Redux would add 40KB+. When every kilobyte affects load time, these decisions compound.
The development experience
We optimize heavily for developer experience:
# docker-compose.yml (simplified)
services:
  app:
    build: .
    volumes:
      - .:/app
    command: air       # Hot reload for Go
  vite:
    build: ./frontends
    volumes:
      - ./frontends:/app
    command: vite dev  # HMR for React
Change a Go file → rebuilds in ~800ms.
Change a React component → hot module replacement, no refresh.
One command (docker-compose up) starts the entire stack.
Performance results
After a year of optimization:
| Metric | Value |
|---|---|
| Time to First Byte (blog) | 12ms p50, 45ms p99 |
| Largest Contentful Paint | 0.8s |
| Total JS on blog pages | 0 bytes |
| Total JS in admin studio | 180KB gzipped |
| Memory per tenant (idle) | 2.1MB |
| Requests/second (single core) | 12,000+ |
These numbers come from actual production monitoring, not synthetic benchmarks.
Lessons learned
- Boring technology wins. Go and SQLite have decades of battle-testing. They don’t surprise you at 3am.
- Match architecture to workload. Blogs are read-heavy. SQLite excels at reads. Don’t fight your use case.
- Separate concerns completely. Server-rendered for readers. SPA for writers. Each optimized independently.
- Measure everything. We added OpenTelemetry tracing before our second user. Data-driven decisions from day one.
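On that last point, HTTP-level spans in Go can be wired with the otelhttp instrumentation package in a couple of lines. A sketch only: exporter and provider setup are omitted, and this isn't WriteKit's actual telemetry code.

```go
import (
    "net/http"

    "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

// withTracing wraps the router so every request gets a span.
// Exporter/provider configuration is omitted; this shows only the wiring.
func withTracing(next http.Handler) http.Handler {
    return otelhttp.NewHandler(next, "writekit.http")
}
```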
The architecture isn’t clever. It’s appropriate. And that’s exactly the point.
[^1]: Based on TechEmpower benchmarks and our internal testing. CPU-bound Go code (JSON parsing, template rendering) consistently outperforms Node.js by 10-40x.