TDD v2.0 — Technical Design Document
Source: tdd-v2.md
Technical Design Document
Group Availability Scheduling Tool
Version: 2.1.0 — Cloudflare-native replatform
Date: 2026-04-11
Classification: Internal — CODITECT Platform Artifact A5
Status: Draft — supersedes v1.0.0
Owner: Platform Engineering
Change summary: Full rewrite from Node.js/PostgreSQL/Redis/Docker to Cloudflare Workers · D1 · KV · Durable Objects · Pages. Addresses Kimi K2.5 architecture review findings: TDD/SDD stack mismatch, DO HTTP broadcast interface, WebSocket authentication, memory backpressure, CSV streaming, optimistic locking, rate limit hardening. | v2.1: Email capture, Google Calendar service, VCF/CSV/JSON contact export, JWT service account auth for Workers.
1. Purpose
This TDD translates the SDD v2.0 (Cloudflare-native replatform) into concrete implementation guidance for engineers. Where the SDD describes *what* to build, this document describes *how* — within the Cloudflare Workers runtime exclusively.
Supersedes TDD v1.0.0 which described a Node.js/PostgreSQL/Redis/Docker stack. That stack was replaced by ADRs 006 and 007. Do not reference TDD v1.0 for implementation guidance.
2. Technology Stack
2.1 Canonical Stack (Do Not Deviate)
| Layer | Choice | Version | Cloudflare Primitive |
| Frontend | React + Vite | 19.x / 6.x | Cloudflare Pages |
| API router | Hono.js | 4.x | Cloudflare Workers (ES modules) |
| Language | TypeScript | 5.x (strict) | — |
| Relational DB | Cloudflare D1 | — | SQLite at edge |
| Key-value | Cloudflare KV | — | Rate limits, lockout state |
| Real-time | Durable Objects | — | WebSocket hibernation API |
| Edge cache | Cloudflare Cache API | — | Poll metadata, results |
| Object storage | Cloudflare R2 | — | Large export staging |
| Email Workers + MailChannels | — | Organizer notifications | |
| ORM | Drizzle ORM (D1 adapter) | 0.30+ | Type-safe SQL, migration files |
| Styling | Tailwind CSS | 4.x | WCAG 2.1 AA tokens |
| Package manager | pnpm workspaces | 9.x | Monorepo |
| Local dev | Wrangler 3 | 3.x | Full CF stack locally |
2.2 Runtime Constraints (Workers ≠ Node.js)
| Forbidden | Replacement |
| `bcrypt` | PBKDF2 via `crypto.subtle.deriveBits` (100k iterations, SHA-256) |
| `fs`, `path`, `os`, `net` | Cloudflare bindings (D1, KV, R2) |
| `ws` npm | `WebSocketPair` + Durable Objects |
| `require()` | ES module `import/export` |
| `setTimeout` for deferred work | DO alarms or Cron Triggers |
| `crypto` (Node) | Web Crypto API (`crypto.subtle`, `crypto.getRandomValues`) |
| PostgreSQL functions | SQLite equivalents (see ADR-006) |
2.3 Runtime Limits
| Limit | Value | Mitigation |
| Worker CPU time | 30s | Chunk large operations; use DO alarms for deferred work |
| Worker memory | 128MB | Stream large responses; cursor-paginated D1 reads |
| DO memory | 128MB | Connection backpressure at 400 WS clients |
| D1 write concurrency | Serialized per DB | Acceptable for write-light scheduling workload |
| KV consistency | Eventual | Use DO for strong-consistency rate limits (PIN verification) |
3. Repository Structure
scheduling-tool/
├── apps/
│ ├── web/ # React + Vite → Cloudflare Pages
│ │ ├── src/
│ │ │ ├── pages/
│ │ │ │ ├── Home.tsx # Create poll
│ │ │ │ ├── Poll.tsx # Participant response
│ │ │ │ └── Manage.tsx # Organizer results
│ │ │ ├── components/
│ │ │ │ ├── SlotGrid.tsx # Availability toggle grid
│ │ │ │ ├── HeatmapCell.tsx # Single cell + ARIA
│ │ │ │ ├── ResultsPanel.tsx # Live ranked results
│ │ │ │ ├── PollForm.tsx # Create poll form
│ │ │ │ └── RealtimeProvider.tsx # DO WebSocket client
│ │ │ ├── hooks/
│ │ │ │ ├── usePoll.ts
│ │ │ │ ├── useRespond.ts
│ │ │ │ └── useRealtime.ts # WS + polling fallback
│ │ │ └── lib/
│ │ │ ├── api.ts # Typed fetch client
│ │ │ └── timezone.ts # Intl timezone helpers
│ │ ├── public/
│ │ └── vite.config.ts
│ └── worker/ # Cloudflare Worker (API + DO)
│ ├── src/
│ │ ├── index.ts # Worker entry — Hono mount
│ │ ├── router/
│ │ │ ├── polls.ts # /api/polls CRUD
│ │ │ ├── responses.ts # /api/polls/:slug/responses
│ │ │ ├── results.ts # /api/polls/:slug/results
│ │ │ ├── manage.ts # /api/polls/:slug/close + export
│ │ │ └── ws.ts # /api/polls/:slug/ws → DO upgrade
│ │ ├── services/
│ │ │ ├── poll.service.ts
│ │ │ ├── response.service.ts
│ │ │ ├── recommendation.engine.ts
│ │ │ ├── export.service.ts
│ │ │ └── notification.service.ts
│ │ ├── durable-objects/
│ │ │ └── PollHub.ts # DO: WS fan-out + HTTP broadcast
│ │ ├── db/
│ │ │ ├── schema.ts # Drizzle D1 schema
│ │ │ ├── client.ts # D1 Drizzle client factory
│ │ │ └── migrations/ # Plain SQL files
│ │ ├── lib/
│ │ │ ├── rate-limit.ts # KV fixed-window (general)
│ │ │ ├── pin-rate-limit.ts # DO sliding-window (PIN verification)
│ │ │ ├── slug.ts # crypto.getRandomValues slug
│ │ │ ├── pin.ts # PBKDF2 hash + verify
│ │ │ ├── cache.ts # CF Cache API helpers
│ │ │ └── security.ts # Headers, CORS, output encoding
│ │ └── types/
│ │ └── env.d.ts # Cloudflare binding types
│ └── wrangler.toml
└── packages/
└── shared/ # Shared types + zod schemas
└── src/
├── types.ts
└── schemas.ts
4. Environment Setup
4.1 Prerequisites
- Node.js ≥ 22.0.0 LTS (for build tooling only — not the runtime)
- pnpm ≥ 9.0
- Wrangler 3 (
npm install -g wrangler) - No Docker required — Wrangler emulates D1, KV, DO, and R2 locally
4.2 Local Development
git clone <repo>
cd scheduling-tool
pnpm install
# Create local D1 database + apply migrations
cd apps/worker
wrangler d1 execute scheduling-tool --local --file=src/db/migrations/0001_initial.sql
wrangler d1 execute scheduling-tool --local --file=src/db/migrations/0002_optimistic_locking.sql
# Start Worker (API + DO) — runs D1/KV/DO locally
pnpm wrangler dev --local --port 8787
# In another terminal — start frontend
cd apps/web
pnpm dev
# → http://localhost:5173 (proxies /api/* to localhost:8787)
4.3 Vite proxy config (`apps/web/vite.config.ts`)
export default defineConfig({
server: {
proxy: {
'/api': {
target: 'http://localhost:8787',
changeOrigin: true,
ws: true, // proxy WebSocket upgrades
},
},
},
});
4.4 Environment variables (`apps/worker/wrangler.toml`)
name = "scheduling-tool-worker"
main = "src/index.ts"
compatibility_date = "2026-01-01"
compatibility_flags = ["nodejs_compat"]
[[d1_databases]]
binding = "DB"
database_name = "scheduling-tool"
database_id = "<your-d1-database-id>"
[[kv_namespaces]]
binding = "KV"
id = "<your-kv-namespace-id>"
[[r2_buckets]]
binding = "R2"
bucket_name = "scheduling-exports"
[durable_objects]
bindings = [
{ name = "POLL_HUB", class_name = "PollHub" },
{ name = "PIN_LIMITER", class_name = "PinRateLimiter" },
]
[[migrations]]
tag = "v1"
new_classes = ["PollHub", "PinRateLimiter"]
[vars]
POLL_EXPIRY_DEFAULT_DAYS = "14"
POLL_PURGE_GRACE_DAYS = "30"
RATE_LIMIT_WINDOW_SECONDS = "60"
RATE_LIMIT_MAX = "10"
PIN_PBKDF2_ITERATIONS = "100000"
BASE_URL = "https://your-domain.pages.dev"
WS_MAX_CONNECTIONS_PER_POLL = "400"
[[triggers]]
crons = ["0 * * * *", "0 2 * * *"]
5. Core Implementation Details
5.1 Slug & PIN Generation (`lib/slug.ts`, `lib/pin.ts`)
// lib/slug.ts — 128-bit URL-safe slug
export function generateSlug(): string {
const bytes = new Uint8Array(16);
crypto.getRandomValues(bytes);
// base64url encoding without padding
return btoa(String.fromCharCode(...bytes))
.replace(/\+/g, '-').replace(/\//g, '_').replace(/=/g, '');
}
// lib/pin.ts — PBKDF2 (Web Crypto API, no bcrypt)
const ITERATIONS = 100_000;
const HASH = 'SHA-256';
export async function hashPin(pin: string): Promise<{ hash: string; salt: string }> {
const salt = crypto.getRandomValues(new Uint8Array(16));
const saltHex = Array.from(salt).map(b => b.toString(16).padStart(2, '0')).join('');
const keyMaterial = await crypto.subtle.importKey(
'raw', new TextEncoder().encode(pin), 'PBKDF2', false, ['deriveBits']
);
const bits = await crypto.subtle.deriveBits(
{ name: 'PBKDF2', salt, iterations: ITERATIONS, hash: HASH },
keyMaterial, 256
);
const hash = Array.from(new Uint8Array(bits)).map(b => b.toString(16).padStart(2, '0')).join('');
return { hash, salt: saltHex };
}
export async function verifyPin(pin: string, storedHash: string, storedSalt: string): Promise<boolean> {
const salt = Uint8Array.from(storedSalt.match(/.{2}/g)!.map(h => parseInt(h, 16)));
const keyMaterial = await crypto.subtle.importKey(
'raw', new TextEncoder().encode(pin), 'PBKDF2', false, ['deriveBits']
);
const bits = await crypto.subtle.deriveBits(
{ name: 'PBKDF2', salt, iterations: ITERATIONS, hash: HASH },
keyMaterial, 256
);
const hash = Array.from(new Uint8Array(bits)).map(b => b.toString(16).padStart(2, '0')).join('');
return hash === storedHash;
}
5.2 Rate Limiting — General (KV Fixed-Window)
For general API rate limiting (response submission, poll creation). KV eventual consistency is acceptable here — the threat model is abuse prevention, not security-critical.
// lib/rate-limit.ts — KV fixed-window
export async function checkRateLimit(
kv: KVNamespace,
key: string,
windowSeconds: number,
max: number
): Promise<{ allowed: boolean; remaining: number }> {
const now = Math.floor(Date.now() / 1000);
const kvKey = `rl:${key}:${Math.floor(now / windowSeconds)}`;
const raw = await kv.get(kvKey);
const count = raw ? parseInt(raw, 10) : 0;
if (count >= max) return { allowed: false, remaining: 0 };
await kv.put(kvKey, String(count + 1), { expirationTtl: windowSeconds * 2 });
return { allowed: true, remaining: max - count - 1 };
}
Known limitation: Fixed-window allows up to 2× burst at window boundaries. Acceptable for general rate limiting where the goal is throttling, not security.
5.3 Rate Limiting — PIN Verification (DO Sliding-Window)
PIN brute-force prevention requires strong consistency. KV's eventual consistency allows distributed bypass across POPs. Use a Durable Object for the critical path.
// durable-objects/PinRateLimiter.ts
import { DurableObject } from 'cloudflare:workers';
interface LimiterState {
timestamps: number[];
}
export class PinRateLimiter extends DurableObject {
private maxAttempts = 5;
private windowMs = 15 * 60 * 1000; // 15 minutes
async fetch(request: Request): Promise<Response> {
const url = new URL(request.url);
const action = url.pathname.split('/').pop();
if (action === 'check') {
return this.checkLimit();
}
if (action === 'record') {
return this.recordAttempt();
}
return new Response('Not found', { status: 404 });
}
private async checkLimit(): Promise<Response> {
const now = Date.now();
const stored: LimiterState = (await this.ctx.storage.get('state')) ?? { timestamps: [] };
const valid = stored.timestamps.filter(t => now - t < this.windowMs);
const allowed = valid.length < this.maxAttempts;
const retryAfterMs = allowed ? 0 : this.windowMs - (now - valid[0]);
return Response.json({ allowed, remaining: Math.max(0, this.maxAttempts - valid.length), retryAfterMs });
}
private async recordAttempt(): Promise<Response> {
const now = Date.now();
const stored: LimiterState = (await this.ctx.storage.get('state')) ?? { timestamps: [] };
const valid = stored.timestamps.filter(t => now - t < this.windowMs);
valid.push(now);
await this.ctx.storage.put('state', { timestamps: valid });
return Response.json({ recorded: true, count: valid.length });
}
}
Usage in PIN verification route:
// router/manage.ts — close poll endpoint
const limiterStub = env.PIN_LIMITER.get(env.PIN_LIMITER.idFromName(slug));
const checkRes = await limiterStub.fetch(new Request('https://do/check'));
const { allowed, retryAfterMs } = await checkRes.json();
if (!allowed) {
return c.json({ error: 'Too many attempts', retryAfterMs }, 429);
}
const valid = await verifyPin(pin, poll.organizer_pin_hash, poll.organizer_pin_salt);
if (!valid) {
await limiterStub.fetch(new Request('https://do/record', { method: 'POST' }));
return c.json({ error: 'Invalid PIN' }, 401);
}
5.4 Durable Object — PollHub (WS Fan-out + HTTP Broadcast + Auth + Backpressure)
Critical gap addressed: v1.0 PollHub only handled WS upgrade. This version adds:
1. HTTP fetch handler for Worker→DO broadcast communication
2. WebSocket authentication via query parameter
3. Connection backpressure (reject at 80% of memory limit)
4. Connection count monitoring
// durable-objects/PollHub.ts
import { DurableObject } from 'cloudflare:workers';
const MAX_CONNECTIONS = 400; // 80% of ~500 theoretical limit (128MB)
export class PollHub extends DurableObject {
/**
* HTTP fetch handler — dispatches WS upgrades and broadcast commands.
* Worker calls this via doStub.fetch() for both use cases.
*/
async fetch(request: Request): Promise<Response> {
const url = new URL(request.url);
// Route 1: WebSocket upgrade from client browser
if (request.headers.get('Upgrade') === 'websocket') {
return this.handleWebSocketUpgrade(url);
}
// Route 2: HTTP POST /broadcast from Worker after response submission
if (url.pathname === '/broadcast' && request.method === 'POST') {
const payload = await request.json();
this.broadcast(payload);
return new Response('OK', { status: 200 });
}
// Route 3: GET /status — connection count for monitoring
if (url.pathname === '/status' && request.method === 'GET') {
const count = this.ctx.getWebSockets().length;
return Response.json({ connections: count, maxConnections: MAX_CONNECTIONS });
}
return new Response('Expected WebSocket upgrade or /broadcast POST', { status: 426 });
}
/**
* WebSocket upgrade with authentication and backpressure.
* Query params: ?token={editToken} or ?role=viewer (read-only)
*/
private handleWebSocketUpgrade(url: URL): Response {
// Backpressure: reject new connections approaching memory limit
const currentConnections = this.ctx.getWebSockets().length;
if (currentConnections >= MAX_CONNECTIONS) {
return new Response(
JSON.stringify({ error: 'Poll at connection capacity', connections: currentConnections }),
{ status: 503, headers: { 'Retry-After': '30' } }
);
}
// Authentication: require role param (viewer is read-only, editor has token)
const role = url.searchParams.get('role') ?? 'viewer';
// TODO: For editor role, validate token against D1 before accepting
// For v2.0 MVP, accept viewer connections freely but enforce max connections
const pair = new WebSocketPair();
const [client, server] = Object.values(pair);
// Tag the WebSocket with metadata for filtering
this.ctx.acceptWebSocket(server, [role]);
return new Response(null, { status: 101, webSocket: client });
}
/**
* Handle incoming client messages. For now, broadcast to all peers.
* Future: filter by role tags.
*/
async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer): Promise<void> {
// Validate message is a known type before broadcasting
try {
const data = JSON.parse(message as string);
if (!data.type) return; // drop malformed messages
for (const client of this.ctx.getWebSockets()) {
if (client !== ws && client.readyState === WebSocket.READY_STATE_OPEN) {
client.send(typeof message === 'string' ? message : '');
}
}
} catch {
// Drop non-JSON messages silently
}
}
/**
* Broadcast payload to ALL connected WebSocket clients.
* Called via HTTP POST /broadcast from the Worker.
*/
private broadcast(payload: unknown): void {
const msg = JSON.stringify(payload);
for (const ws of this.ctx.getWebSockets()) {
if (ws.readyState === WebSocket.READY_STATE_OPEN) {
ws.send(msg);
}
}
}
async webSocketClose(ws: WebSocket, code: number, reason: string): Promise<void> {
// Handled by hibernation API — DO sleeps when no connections remain
}
async webSocketError(ws: WebSocket, error: unknown): Promise<void> {
ws.close(1011, 'Internal error');
}
}
Worker→DO broadcast call (in response submission route):
// router/responses.ts — after successful D1 write
const doId = env.POLL_HUB.idFromName(slug);
const doStub = env.POLL_HUB.get(doId);
// Fire-and-forget broadcast via ctx.waitUntil — not in critical path
c.executionCtx.waitUntil(
doStub.fetch(new Request('https://do/broadcast', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
type: 'results-updated',
data: aggregatedResults,
timestamp: Date.now(),
}),
}))
);
5.5 WebSocket Client — Reconnection with Polling Fallback (`useRealtime.ts`)
// hooks/useRealtime.ts
import { useEffect, useRef, useCallback, useState } from 'react';
const MAX_RECONNECT_DELAY = 30_000;
const BASE_DELAY = 1_000;
const POLL_INTERVAL = 5_000;
export function useRealtime(slug: string, onUpdate: (data: unknown) => void) {
const wsRef = useRef<WebSocket | null>(null);
const reconnectAttempt = useRef(0);
const [mode, setMode] = useState<'ws' | 'polling'>('ws');
const pollTimer = useRef<ReturnType<typeof setInterval>>();
const connect = useCallback(() => {
const protocol = location.protocol === 'https:' ? 'wss:' : 'ws:';
const ws = new WebSocket(`${protocol}//${location.host}/api/polls/${slug}/ws?role=viewer`);
ws.onopen = () => {
reconnectAttempt.current = 0;
setMode('ws');
// Stop polling fallback if active
if (pollTimer.current) clearInterval(pollTimer.current);
};
ws.onmessage = (event) => {
try {
const data = JSON.parse(event.data);
if (data.type === 'results-updated') onUpdate(data.data);
} catch { /* ignore malformed */ }
};
ws.onclose = (event) => {
wsRef.current = null;
if (event.code === 4003) return; // server rejected — don't retry
// Exponential backoff with jitter
const delay = Math.min(
BASE_DELAY * 2 ** reconnectAttempt.current + Math.random() * 1000,
MAX_RECONNECT_DELAY
);
reconnectAttempt.current++;
// After 5 failed attempts, fall back to polling
if (reconnectAttempt.current > 5) {
setMode('polling');
startPolling();
return;
}
setTimeout(connect, delay);
};
ws.onerror = () => ws.close();
wsRef.current = ws;
}, [slug, onUpdate]);
const startPolling = useCallback(() => {
pollTimer.current = setInterval(async () => {
try {
const res = await fetch(`/api/polls/${slug}/results`);
if (res.ok) onUpdate(await res.json());
} catch { /* retry next interval */ }
}, POLL_INTERVAL);
}, [slug, onUpdate]);
useEffect(() => {
connect();
return () => {
wsRef.current?.close();
if (pollTimer.current) clearInterval(pollTimer.current);
};
}, [connect]);
return { mode };
}
5.6 CSV Export — Cursor-Paginated Streaming
Critical gap addressed: D1 db.all() loads entire result set into memory. For 500 participants × 30 slots = 15,000 rows, this risks breaching the 128MB Worker memory limit. Solution: OFFSET/LIMIT pagination with TransformStream.
// services/export.service.ts — streaming CSV export
const BATCH_SIZE = 100;
export function streamCSVExport(db: D1Database, pollId: string, slots: Slot[]): ReadableStream {
const encoder = new TextEncoder();
let offset = 0;
let headerSent = false;
return new ReadableStream({
async pull(controller) {
if (!headerSent) {
// CSV header: Name, Slot1_Date_Label, Slot2_Date_Label, ...
const header = ['Name', ...slots.map(s => `${s.slot_date} ${s.session_label}`)].join(',');
controller.enqueue(encoder.encode(header + '\n'));
headerSent = true;
}
// Fetch a batch of responses with their slot_responses
const batch = await db.prepare(`
SELECT r.id, r.display_name
FROM responses r
WHERE r.poll_id = ?
ORDER BY r.created_at
LIMIT ? OFFSET ?
`).bind(pollId, BATCH_SIZE, offset).all();
if (!batch.results.length) {
controller.close();
return;
}
for (const response of batch.results) {
// Fetch slot_responses for this response
const slotResponses = await db.prepare(`
SELECT slot_id, status FROM slot_responses WHERE response_id = ?
`).bind(response.id).all();
const statusMap = new Map(slotResponses.results.map(sr => [sr.slot_id, sr.status]));
const row = [
escapeCSV(response.display_name as string),
...slots.map(s => statusMap.get(s.id) ?? 'NO_RESPONSE'),
].join(',');
controller.enqueue(encoder.encode(row + '\n'));
}
offset += BATCH_SIZE;
},
});
}
function escapeCSV(value: string): string {
if (value.includes(',') || value.includes('"') || value.includes('\n')) {
return `"${value.replace(/"/g, '""')}"`;
}
return value;
}
Usage in export route:
// router/manage.ts — GET /api/polls/:slug/export/csv
const stream = streamCSVExport(env.DB, poll.id, slots);
return new Response(stream, {
headers: {
'Content-Type': 'text/csv; charset=utf-8',
'Content-Disposition': `attachment; filename="${poll.slug}-results.csv"`,
},
});
5.7 Optimistic Locking for Response Edits
Critical gap addressed: Two devices with the same edit token can submit concurrent PATCH requests, causing lost updates. Solution: version column with If-Match header.
// router/responses.ts — PATCH /api/polls/:slug/responses/:id
app.patch('/api/polls/:slug/responses/:id', async (c) => {
const { slug, id } = c.req.param();
const ifMatch = c.req.header('If-Match');
if (!ifMatch) {
return c.json({ error: 'If-Match header required' }, 428);
}
const expectedVersion = parseInt(ifMatch, 10);
const db = c.env.DB;
// Attempt update with version check
const result = await db.prepare(`
UPDATE responses SET version = version + 1, updated_at = ?
WHERE id = ? AND poll_id = (SELECT id FROM polls WHERE slug = ?) AND version = ?
`).bind(Math.floor(Date.now() / 1000), id, slug, expectedVersion).run();
if (result.meta.changes === 0) {
// Either response doesn't exist or version mismatch
const exists = await db.prepare('SELECT version FROM responses WHERE id = ?').bind(id).first();
if (!exists) return c.json({ error: 'Response not found' }, 404);
return c.json({
error: 'Conflict — response was modified by another client',
currentVersion: exists.version,
}, 409);
}
// Update slot_responses...
// ...
return c.json({ version: expectedVersion + 1 }, 200);
});
5.8 Security — Output Encoding & Headers
Critical gap addressed: Display names rendered without encoding allow XSS.
// lib/security.ts
/** Escape HTML entities in user-provided strings before rendering */
export function escapeHTML(str: string): string {
return str
.replace(/&/g, '&')
.replace(/</g, '<')
.replace(/>/g, '>')
.replace(/"/g, '"')
.replace(/'/g, ''');
}
/** Standard security headers middleware for Hono */
export function securityHeaders(): MiddlewareHandler {
return async (c, next) => {
await next();
c.header('X-Content-Type-Options', 'nosniff');
c.header('X-Frame-Options', 'DENY');
c.header('Referrer-Policy', 'strict-origin');
c.header('Content-Security-Policy',
"default-src 'self'; connect-src 'self' wss:; script-src 'self'; style-src 'self' 'unsafe-inline'");
c.header('Permissions-Policy', 'camera=(), microphone=(), geolocation=()');
};
}
Note: Output encoding is applied server-side in API responses. React's JSX auto-escapes by default, but display_name must never be rendered via dangerouslySetInnerHTML.
5.9 Recommendation Engine (unchanged from v1.0)
Pure function — no I/O, no platform dependencies. Identical to TDD v1.0 §5.3. See services/recommendation.engine.ts.
5.10 ICS Export (adapted for Workers)
Same logic as TDD v1.0 §5.5, but using Web Crypto for UID generation and TextEncoder for response encoding. No Buffer — use new TextEncoder().encode().
6. Database Migrations
6.1 Drizzle Config (`apps/worker/drizzle.config.ts`)
import type { Config } from 'drizzle-kit';
export default {
schema: './src/db/schema.ts',
out: './src/db/migrations',
dialect: 'sqlite', // D1 is SQLite — NOT postgresql
driver: 'd1-http',
} satisfies Config;
6.2 Schema Changes from v1.0
| Change | Reason |
| `responses.edit_token_salt` column added | PBKDF2 requires explicit salt (Kimi finding) |
| `responses.version` column added | Optimistic locking for concurrent edits (Kimi finding) |
| All `UUID` → `TEXT` | SQLite has no native UUID type |
| All `TIMESTAMPTZ` → `INTEGER` | Unix seconds; timezone in application layer |
| `gen_random_uuid()` removed | Use `crypto.randomUUID()` in application code |
6.3 Migration Files
0001_initial.sql — Creates polls, slots, responses, slot_responses tables and indexes. (See SDD v2.0 §7.2 for full schema.)
0002_optimistic_locking.sql:
ALTER TABLE responses ADD COLUMN version INTEGER NOT NULL DEFAULT 1;
ALTER TABLE responses ADD COLUMN edit_token_salt TEXT NOT NULL DEFAULT '';
-- Backfill note: existing rows get version=1 and empty salt (re-hash on next edit)
0002_email_columns.sql:
ALTER TABLE responses ADD COLUMN email TEXT NOT NULL DEFAULT '';
ALTER TABLE polls ADD COLUMN organizer_email TEXT NOT NULL DEFAULT '';
ALTER TABLE polls ADD COLUMN calendar_event_id TEXT DEFAULT '';
ALTER TABLE polls ADD COLUMN google_meet_link TEXT DEFAULT '';
6.4 Migration Workflow
# Generate migration from Drizzle schema changes
cd apps/worker
pnpm drizzle-kit generate
# Apply locally
wrangler d1 execute scheduling-tool --local --file=src/db/migrations/XXXX_name.sql
# Apply to production (remote D1)
wrangler d1 execute scheduling-tool --remote --file=src/db/migrations/XXXX_name.sql
# Inspect remote schema
wrangler d1 execute scheduling-tool --remote --command ".schema"
6.5 Rollback Strategy
- Each migration file is immutable once deployed
- Rollback = new migration that reverses the change
- D1/SQLite does not support transactional DDL — plan migrations carefully
- Column drops: 3-step deploy (add new → migrate data → drop old) across separate releases
- No
wrangler d1 executein CI without explicit approval gate
7. Testing Strategy
7.1 Test Pyramid
| Layer | Tool | Coverage Target | What is tested |
| Unit | Vitest | ≥ 90% on services/lib | RecommendationEngine, slug/PIN, rate limit, ICS, CSV escaping, output encoding |
| Integration | Vitest + Miniflare | ≥ 80% on API routes | Full request → D1 → response; DO broadcast; KV rate limiting |
| E2E | Playwright | Critical paths | Create poll → respond → see results → export |
Note: Miniflare (Wrangler's local simulator) replaces Docker + testcontainers from v1.0. It emulates D1, KV, DO, and R2 in-process.
7.2 Key Unit Test Cases
// recommendation.engine.test.ts — unchanged from v1.0
describe('rankSlots', () => {
it('ranks slot with most available respondents first');
it('applies 0.5 weight to tentative responses');
it('flags slot as BEST when availableCount >= 50% of total');
it('breaks ties by earliest date');
it('returns empty array when no respondents');
it('handles all-unavailable scenario correctly');
});
// pin.test.ts — PBKDF2 (replaces bcrypt tests)
describe('hashPin / verifyPin', () => {
it('returns true for correct PIN against hash + salt');
it('returns false for wrong PIN');
it('produces different hashes for same PIN with different salts');
it('hash output is 64-char hex string');
it('salt output is 32-char hex string');
});
// security.test.ts — output encoding
describe('escapeHTML', () => {
it('escapes <script> tags');
it('escapes &, <, >, ", single quotes');
it('passes through safe strings unchanged');
});
// export.service.test.ts — CSV streaming
describe('streamCSVExport', () => {
it('produces valid CSV header matching slot count');
it('escapes display names containing commas');
it('handles 0 responses gracefully');
});
7.3 Integration Test Pattern (Miniflare)
// polls.api.test.ts
import { unstable_dev } from 'wrangler';
describe('POST /api/polls', () => {
let worker: UnstableDevWorker;
beforeAll(async () => { worker = await unstable_dev('src/index.ts'); });
afterAll(async () => { await worker.stop(); });
it('creates a poll and returns slug + organizer PIN', async () => {
const res = await worker.fetch('/api/polls', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(validPollPayload),
});
expect(res.status).toBe(201);
const body = await res.json();
expect(body.pollSlug).toMatch(/^[A-Za-z0-9_-]{22}$/);
expect(body.organizerPin).toMatch(/^\d{6}$/);
});
it('returns 429 after exceeding rate limit');
it('returns 400 for missing title');
});
7.4 DO Integration Tests
// PollHub.test.ts
describe('PollHub Durable Object', () => {
it('accepts WebSocket upgrade and returns 101');
it('rejects non-WebSocket requests with 426');
it('rejects connections when at MAX_CONNECTIONS with 503');
it('broadcasts payload to all connected clients via POST /broadcast');
it('returns connection count via GET /status');
it('drops malformed non-JSON WebSocket messages');
});
describe('PinRateLimiter Durable Object', () => {
it('allows first 5 attempts within window');
it('rejects 6th attempt with retryAfterMs');
it('resets after window expires');
});
7.5 E2E Test Suite (Playwright)
Same as TDD v1.0 §7.4. No changes to E2E test structure — the UI contract is identical.
7.6 CI Test Commands
pnpm typecheck # TypeScript — zero errors required
pnpm lint # ESLint — zero errors required
pnpm test # Vitest unit + integration (Miniflare)
pnpm test:e2e # Playwright E2E (against wrangler dev)
8. Performance Considerations
8.1 Caching Strategy (Cloudflare Cache API)
| Resource | Cache | TTL | Invalidation |
| GET `/api/polls/:slug` | Cache API | 60s | On poll update/close |
| GET `/api/polls/:slug/results` | Cache API | 5s | On response submission |
| Static assets (JS/CSS) | Cloudflare CDN | 1 year | Content-hash filenames |
Cache-aside pattern:
const cacheKey = new Request(`https://cache/polls/${slug}/results`);
const cache = caches.default;
const cached = await cache.match(cacheKey);
if (cached) return cached;
const data = await fetchResultsFromD1(db, slug);
const response = Response.json(data);
response.headers.set('Cache-Control', 'max-age=5');
c.executionCtx.waitUntil(cache.put(cacheKey, response.clone()));
return response;
8.2 Database Query Optimization
resultsendpoint: single aggregating query withGROUP BY slot_id, status— no N+1- Slug lookup:
idx_polls_slugindex — constant time - Expiry cron: partial index
idx_polls_expires_at WHERE status = 'OPEN' - CSV export: OFFSET/LIMIT pagination (§5.6) — bounded memory per batch
8.3 Real-Time Scalability
- DO hibernation: near-zero idle cost between messages
- Single DO per poll: natural horizontal isolation
- Connection limit: 400 per DO instance (§5.4)
- Fallback: client degrades to 5s HTTP polling after WS failure (§5.5)
9. Observability
9.1 Structured Logging (Workers `console.log` → Logpush)
function log(level: string, service: string, data: Record<string, unknown>) {
console.log(JSON.stringify({
level,
service,
timestamp: new Date().toISOString(),
...data,
}));
}
// Usage:
log('info', 'response.service', {
action: 'submit_response',
pollSlug: slug,
durationMs: Date.now() - start,
});
PII rules: displayName logged only as displayName_length. No IP addresses persisted.
Logs shipped via Cloudflare Logpush to R2 or an external SIEM.
9.2 Key Metrics
| Metric | Source | Alert |
| `poll.created` | Worker log | — |
| `response.submitted` | Worker log | — |
| `api.latency_ms` | Worker log | p99 > 800ms |
| `rate_limit.hit` | KV/DO log | > 100/min |
| `ws.connections` | DO `/status` | > 350 per poll |
| `do.memory_usage` | Cloudflare dashboard | > 100MB |
| `d1.query_duration_ms` | Worker log | p99 > 200ms |
| `poll.purge.count` | Cron log | — |
9.3 Health Endpoint
GET /api/health — tests D1 connectivity:
app.get('/api/health', async (c) => {
try {
await c.env.DB.prepare('SELECT 1').first();
return c.json({ status: 'ok', d1: 'ok', version: '2.0.0' });
} catch {
return c.json({ status: 'degraded', d1: 'error' }, 503);
}
});
No Redis health check needed — Redis is eliminated in v2.0.
10. CI/CD Pipeline
10.1 Pipeline Stages (GitHub Actions)
on: push (main, feat/*)
jobs:
quality:
- pnpm install
- pnpm typecheck
- pnpm lint
- pnpm test # Vitest + Miniflare
e2e:
needs: quality
- wrangler dev & # start local CF stack
- pnpm test:e2e # Playwright
deploy-worker:
needs: e2e
if: branch == main
- wrangler d1 execute scheduling-tool --remote --file=migrations/pending.sql
- wrangler deploy # Worker + DO
deploy-frontend:
needs: deploy-worker
if: branch == main
- cd apps/web && pnpm build
- wrangler pages deploy dist/ --project-name scheduling-tool
smoke-test:
needs: [deploy-worker, deploy-frontend]
- curl -f https://scheduling-tool.pages.dev/api/health
10.2 No Dockerfile
Docker is eliminated in v2.0. The entire stack is deployed via wrangler. No containers, no container registry, no orchestration layer.
10.3 Migration Safety Gate
D1 migrations in CI require an explicit approval step:
deploy-migration:
needs: e2e
environment: production # GitHub environment with required reviewers
steps:
- run: wrangler d1 execute scheduling-tool --remote --file=$MIGRATION_FILE
11. Runbook
11.1 Deploy Checklist
- [ ] Run pending D1 migrations via
wrangler d1 execute --remote - [ ] Deploy Worker via
wrangler deploy - [ ] Deploy frontend via
wrangler pages deploy - [ ] Verify
GET /api/healthreturns 200 withd1: ok - [ ] Check Cron Triggers registered:
wrangler triggers list - [ ] Verify DO classes deployed:
wrangler deployments list
11.2 Common Issues
Poll page returns 404 after deploy
- Check: slug exists in D1 (
wrangler d1 execute scheduling-tool --remote --command "SELECT * FROM polls WHERE slug = '...'") - Check: poll not expired (
status = 'OPEN',expires_at > unix_now)
WebSocket not updating
- Check: DO
/statusendpoint returns connections > 0 - Check: Worker→DO broadcast call succeeds (check Worker logs for errors)
- Check: client
useRealtimehasn't fallen back to polling (check browser console) - Check: CORS allows WebSocket upgrade from Pages domain
Rate limit false positives (general)
- Check:
RATE_LIMIT_WINDOW_SECONDSandRATE_LIMIT_MAXin wrangler.toml[vars] - KV eventual consistency may cause brief over-counting — acceptable
PIN lockout false positives
- Check: PinRateLimiter DO state via
wrangler d1(DOs have separate storage) - The 15-minute window auto-clears — wait or manually reset DO storage
CSV export timeout
- For polls with 400+ participants, export may approach 30s Worker CPU limit
- Mitigation: reduce BATCH_SIZE or move to DO alarm-based chunked export
- Future: stage CSV to R2 via DO alarm, return pre-signed R2 URL
Cron not running
- Check:
[[triggers]]section in wrangler.toml - Check: Cron execution logs in Cloudflare dashboard → Workers → Triggers
11.3 Data Purge Verification
wrangler d1 execute scheduling-tool --remote --command \
"SELECT COUNT(*) as stale FROM polls WHERE expires_at < unixepoch() - (30 * 86400);"
# Expected: 0
11.4 D1 Schema Inspection
wrangler d1 execute scheduling-tool --remote --command ".schema"
wrangler d1 execute scheduling-tool --remote --command ".tables"
Appendix A: Gap Resolution Matrix
Issues identified by Kimi K2.5 architecture review (2026-04-10) and their resolution in this TDD:
| # | Gap | Severity | Resolution | Section |
| 1 | TDD v1.0 describes Node.js/PostgreSQL/Redis — wrong stack | Critical | Full TDD rewrite for Cloudflare | All |
| 2 | DO PollHub missing HTTP fetch handler for broadcasts | Critical | Added `/broadcast` POST + `/status` GET routes | §5.4 |
| 3 | WebSocket authentication missing | Critical | Query param auth + backpressure at 400 connections | §5.4 |
| 4 | PBKDF2 salt column missing from responses table | High | `edit_token_salt` added; migration 0002 | §6.2, §6.3 |
| 5 | KV rate limit 2× burst vulnerability for PIN | High | DO-based sliding window for PIN verification | §5.3 |
| 6 | CSV export loads full result set into memory | High | OFFSET/LIMIT cursor pagination with TransformStream | §5.6 |
| 7 | No optimistic locking for concurrent response edits | High | `version` column + `If-Match` / 409 Conflict | §5.7 |
| 8 | XSS via display names | Medium | Output encoding + CSP headers | §5.8 |
| 9 | WebSocket reconnection logic unspecified | Medium | Exponential backoff + polling fallback | §5.5 |
| 10 | No CI/CD pipeline for D1 migrations | Medium | Wrangler-based pipeline with approval gate | §10.1, §10.3 |
| 11 | No local dev workflow for cross-service routing | Low | Vite proxy config for /api/* → Wrangler | §4.3 |
| 12 | No contact capture — participants are anonymous | High | Email field added to responses, VCF/CSV/JSON export endpoints | §5.6, §6.3 |
| 13 | No calendar integration — manual ICS download only | High | Google Calendar API via service account, auto-event on close | §5.10 |
*CODITECT Artifact A5 — TDD v2.0.0 · Group Availability Scheduling Tool · Cloudflare-native*
*Generated: 2026-04-10 · Status: Draft · Supersedes: v1.0.0*
*Reconciled with: SDD v2.0.0, Master System Prompt v2.0.0, ADRs 001–007*
*Reviewed by: Kimi K2.5 architecture analysis*