
Deal ex Machina: the technical stack

A concise, engineer-oriented map of the technologies that power the Deal ex Machina website: framework, data, AI, content, quality gates, and how staging and production differ.

This article is the stack view: what we run, how the pieces connect, and where to look in the repo. For security headers, GDPR, EU AI Act, Lighthouse targets, and roadmap, see The Web Site is the Demo — that post is the long-form CTO narrative; this one is the diagram you draw on a whiteboard before diving into code.


1. One codebase, two deployment shapes

The app is Next.js 16 (App Router) on Node.js 20 LTS with TypeScript 5.9 in strict mode. We ship the same source in two modes:

| Mode | Typical use | Output |
| --- | --- | --- |
| Node server | Staging on Koyeb (Docker) | standalone server (server.js), dynamic APIs, streaming chat |
| Static export | Production on Cloudflare Pages | out/ when NEXT_OUTPUT=export / CLOUDFLARE_PAGES=1; static pages + client-side behaviour where applicable |

That split is intentional: full-stack validation on a container, edge-friendly assets for the public site. Environment and output flags are documented in the repo; the Dockerfile copies only .next/standalone, .next/static, and public.
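The flag handling can be sketched in next.config.ts (a minimal sketch under stated assumptions; the repo's exact logic may differ, but NEXT_OUTPUT and CLOUDFLARE_PAGES are the documented switches):

```typescript
// next.config.ts (sketch): choose the output shape from env flags.
const isStaticExport =
  process.env.NEXT_OUTPUT === "export" || process.env.CLOUDFLARE_PAGES === "1";

export default {
  // "export" writes the static out/ directory consumed by Cloudflare Pages;
  // "standalone" emits server.js plus a pruned node_modules for the Docker image.
  output: isStaticExport ? "export" : "standalone",
};
```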


2. Layered architecture (mental model)

```mermaid
flowchart TB
  subgraph client [Browser]
    R[React 19 + Tailwind + Radix]
    AUI[Assistant UI + AI SDK React]
  end
  subgraph edge [Hosting]
    CF[Cloudflare Pages]
    KY[Koyeb + Docker]
  end
  subgraph app [Next.js App Router]
    API[Route Handlers /api/*]
    PAGES[Server components + i18n]
  end
  subgraph data [Data plane]
    PG[(PostgreSQL)]
    DR[Drizzle ORM]
    SB[Supabase Auth]
  end
  subgraph ai [AI plane]
    LLM[OpenAI-compatible LLM API]
    RAG[BM25 local RAG]
    SFT[SFT dataset JSONL]
  end
  R --> PAGES
  AUI --> API
  PAGES --> edge
  API --> edge
  API --> DR
  DR --> PG
  API --> SB
  API --> LLM
  API --> RAG
  SFT -.->|"trains / grounds Wagmi"| LLM
```

Client: React 19, Tailwind CSS, Radix primitives, lucide-react, Zustand where local state matters. next-intl drives EN/FR from src/i18n/messages/ and [locale] routing.

App: Route handlers under src/app/api/ implement chat, health, LLM status, auth callback, and related endpoints. Validation is Zod end to end (env, bodies, content schema).
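As a rough illustration of what "Zod end to end" buys at the API boundary, here is the same idea in plain TypeScript (ChatBody and parseChatBody are hypothetical names for this sketch; the repo uses Zod schemas, not hand-rolled guards):

```typescript
// Minimal runtime validation in the spirit of the repo's Zod schemas.
type ChatBody = {
  messages: { role: "user" | "assistant"; content: string }[];
};

function parseChatBody(input: unknown): ChatBody {
  if (typeof input !== "object" || input === null) {
    throw new Error("body must be an object");
  }
  const { messages } = input as Record<string, unknown>;
  if (!Array.isArray(messages)) throw new Error("messages must be an array");
  for (const m of messages) {
    const msg = m as Record<string, unknown>;
    if (msg.role !== "user" && msg.role !== "assistant") {
      throw new Error("invalid role");
    }
    if (typeof msg.content !== "string") {
      throw new Error("content must be a string");
    }
  }
  return { messages } as ChatBody;
}
```

A schema library gives you the same guarantee with less code and better error messages; the point is that nothing past the route handler ever sees an unvalidated shape.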

Data: PostgreSQL via Drizzle (drizzle/, scripts db:*). Supabase supplies SSR-aware auth (e.g. email OTP) and session cookies.

Content: Blog and long-form copy live in Markdown under content/blog/, compiled at build time with Content Collections and remark-gfm — not parsed on the client as a CMS runtime.

AI: Vercel AI SDK (ai, @ai-sdk/*) plus Assistant UI for the Wagmi chat. The backend talks to an OpenAI-compatible Ollama endpoint (dual CPU/GPU services on Koyeb or local Ollama depending on env). A lightweight BM25-style RAG (local-rag.ts) grounds the small model on wagmi-skills.md and public/ai.txt. A supervised fine-tuning pipeline (scripts/generate-wagmi-sft-dataset.ts) emits datasets/wagmi-sft/*.jsonl; counts and tags are recorded in metadata.json when you regenerate.
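To make the BM25 grounding concrete, here is a toy scorer over an in-memory corpus (a sketch of the technique with standard k1/b constants, not the code in local-rag.ts):

```typescript
// Toy BM25 ranking: score each document against a query and sort.
const K1 = 1.2;
const B = 0.75;

function tokenize(text: string): string[] {
  return text.toLowerCase().split(/\W+/).filter(Boolean);
}

function bm25Rank(query: string, docs: string[]): { doc: string; score: number }[] {
  const tokenized = docs.map(tokenize);
  const avgLen = tokenized.reduce((sum, d) => sum + d.length, 0) / docs.length;
  const terms = tokenize(query);
  const scored = docs.map((doc, i) => {
    const tokens = tokenized[i];
    let score = 0;
    for (const t of terms) {
      const df = tokenized.filter((d) => d.includes(t)).length; // document frequency
      if (df === 0) continue;
      const idf = Math.log(1 + (docs.length - df + 0.5) / (df + 0.5));
      const tf = tokens.filter((w) => w === t).length; // term frequency in this doc
      score +=
        (idf * tf * (K1 + 1)) /
        (tf + K1 * (1 - B + (B * tokens.length) / avgLen));
    }
    return { doc, score };
  });
  return scored.sort((a, b) => b.score - a.score);
}
```

Against a corpus the size of wagmi-skills.md plus ai.txt, this kind of lexical ranking is cheap enough to run per request with no vector store.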


3. Notable API surface

These are the integration points most readers care about:

  • POST /api/chat — Streaming assistant; rate limits, moderation, tiered models (anonymous vs authenticated), optional persistence gated on consent.
  • GET /api/llm/status — Provider reachability for UI and ops.
  • GET /api/health — Liveness; production returns a minimal payload.
  • Supabase auth routes — Callback and session establishment as configured in the app.
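The rate limiting mentioned for POST /api/chat can be pictured as a token bucket (a hypothetical sketch; the repo's limiter may work differently):

```typescript
// Hypothetical per-client token bucket: capacity requests burst,
// refilled continuously at refillPerSec.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,
    private refillPerSec: number,
    now = Date.now(),
  ) {
    this.tokens = capacity;
    this.last = now;
  }

  // Returns true if the request is allowed, false if rate-limited.
  tryTake(now = Date.now()): boolean {
    const elapsedSec = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```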

Contact with the team is through the chat (Wagmi), not a separate legacy form — the stack reflects that product choice.


4. Quality, tests, and CI

| Concern | Tooling |
| --- | --- |
| Lint / format | Biome (no ESLint/Prettier in this repo) |
| Types | tsc --noEmit |
| Unit + integration | Vitest + Testing Library + happy-dom |
| E2E | Playwright |
| Performance budget | Lighthouse CI (lhci, assertions in lighthouserc.js) |
| Git hooks | simple-git-hooks + lint-staged (+ actionlint on workflows) |

GitHub Actions orchestrate staging deploys (Docker Hub → Koyeb on dev), Lighthouse on PRs, and manual production deploy to Cloudflare Pages. Secrets stay in CI and platform env — never in the bundle.


5. How this feeds Wagmi training data

The site content is not only for humans. The SFT generator ingests:

  • Blog posts (this one included once merged),
  • wagmi-skills.md and ai.txt,
  • Optional Obsidian notes when OBSIDIAN_VAULT_PATH is set and notes declare wagmi_sft: true in frontmatter (see docs/OBSIDIAN_WAGMI_SFT.md).

Rows are chat-formatted JSONL suitable for tools like Unsloth; regeneration updates train.jsonl, eval.jsonl, and metadata.json. If you want Obsidian-derived flashcards or Q&A based on this article, say so after review and we can add vault notes and re-run the pipeline.
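The chat-formatted JSONL shape can be sketched like this (field names follow common SFT tooling conventions and are an assumption, not the generator's exact schema):

```typescript
// One JSON object per line, each holding a full chat exchange.
type SftRow = {
  messages: { role: "system" | "user" | "assistant"; content: string }[];
};

function toJsonl(rows: SftRow[]): string {
  return rows.map((r) => JSON.stringify(r)).join("\n");
}

const rows: SftRow[] = [
  {
    messages: [
      { role: "user", content: "What grounds Wagmi's answers?" },
      { role: "assistant", content: "A BM25 index over wagmi-skills.md and public/ai.txt." },
    ],
  },
];
```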




Summary: Node 20, TypeScript strict, Next.js 16 App Router, React 19, Tailwind + Radix, Drizzle + PostgreSQL, Supabase auth, Content Collections for the blog, Vercel AI SDK + Assistant UI for Wagmi, BM25 RAG for the small model, JSONL SFT export from repo + optional Obsidian, Biome + Vitest + Playwright + Lighthouse CI, Docker/Koyeb for staging and Cloudflare Pages for static production.