# Feature 00: Project Foundation
## Overview

Set up the Rust Cargo workspace, database infrastructure, Docker Compose development environment, and CI pipeline. This is the skeleton that every subsequent feature builds on — no game logic yet, just the scaffolding to build, run, test, and deploy.
## Technical Tasks
### 1. Initialize Cargo Workspace

- Create root `Cargo.toml` with workspace members
- Create crate stubs:
  - `crates/api` — binary, depends on `db`, `types`, `game`, `jobs`
  - `crates/workers` — binary, depends on `db`, `types`, `game`, `jobs`
  - `crates/game` — library
  - `crates/db` — library
  - `crates/types` — library
  - `crates/jobs` — library
- Add shared dependencies to the workspace `Cargo.toml`:
  - `tokio` (async runtime)
  - `serde`, `serde_json` (serialization)
  - `sqlx` (database, with `postgres`, `runtime-tokio`, `tls-rustls` features)
  - `redis` (with `tokio-comp` feature)
  - `uuid` (with `v4`, `serde` features)
  - `chrono` (timestamps, with `serde` feature)
  - `tracing`, `tracing-subscriber` (logging)
  - `thiserror`, `anyhow` (error handling)
- Verify `cargo build` succeeds with empty crates
- Verify `cargo test` runs (no tests yet, but the harness works)
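The root manifest described above might look like the following sketch; version numbers and feature selections are illustrative, not prescriptive:

```toml
# Root Cargo.toml — workspace layout sketch (versions are assumptions)
[workspace]
resolver = "2"
members = [
    "crates/api",
    "crates/workers",
    "crates/game",
    "crates/db",
    "crates/types",
    "crates/jobs",
]

[workspace.dependencies]
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
sqlx = { version = "0.8", features = ["postgres", "runtime-tokio", "tls-rustls"] }
redis = { version = "0.27", features = ["tokio-comp"] }
uuid = { version = "1", features = ["v4", "serde"] }
chrono = { version = "0.4", features = ["serde"] }
tracing = "0.1"
tracing-subscriber = "0.3"
thiserror = "2"
anyhow = "1"
```

Member crates then inherit these with `tokio = { workspace = true }`, keeping versions consistent across the workspace.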
### 2. API Server Skeleton (`crates/api`)

- Add Axum dependencies: `axum`, `tower`, `tower-http` (cors, tracing, compression)
- Create `main.rs` with:
  - Tokio runtime initialization
  - Tracing subscriber setup (JSON format for structured logging)
  - Axum router with a health check endpoint: `GET /api/health` → `{ "status": "ok" }`
  - Graceful shutdown on SIGTERM/SIGINT
- Create `AppState` struct holding:
  - `PgPool` (SQLx connection pool)
  - `redis::Client` (Redis connection)
- Wire `AppState` into Axum via `Extension` or `State`
- Create `src/error.rs` — unified error type that converts to Axum responses with appropriate HTTP status codes
- Create `src/middleware/` directory with placeholder modules
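The error-to-status mapping in `src/error.rs` can be sketched with plain `std`; the variant names are assumptions, and the real type would additionally implement Axum's `IntoResponse` so handlers can return it directly:

```rust
use std::fmt;

// Sketch of a unified API error type (variant names are illustrative).
#[derive(Debug)]
pub enum ApiError {
    NotFound(String),
    BadRequest(String),
    Internal(String),
}

impl ApiError {
    /// HTTP status code each variant maps to; in the real crate this
    /// feeds the axum `IntoResponse` implementation.
    pub fn status_code(&self) -> u16 {
        match self {
            ApiError::NotFound(_) => 404,
            ApiError::BadRequest(_) => 400,
            ApiError::Internal(_) => 500,
        }
    }
}

impl fmt::Display for ApiError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ApiError::NotFound(what) => write!(f, "not found: {what}"),
            ApiError::BadRequest(msg) => write!(f, "bad request: {msg}"),
            ApiError::Internal(msg) => write!(f, "internal error: {msg}"),
        }
    }
}

fn main() {
    let err = ApiError::NotFound("run 42".into());
    // Prints "404 not found: run 42"
    println!("{} {}", err.status_code(), err);
}
```

Keeping the status mapping in one `match` means a new error variant fails to compile until its status code is chosen.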
### 3. Worker Binary Skeleton (`crates/workers`)

- Create `main.rs` with:
  - Tokio runtime initialization
  - Tracing subscriber setup
  - CLI argument parsing (e.g., `--scheduler` flag) via `clap`
  - Placeholder job processing loop: poll Redis, log “no jobs”, sleep
  - Graceful shutdown
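A std-only sketch of the placeholder loop and its shutdown handshake; the actual binary would use tokio tasks and a SIGTERM signal handler rather than a manual flag:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

// Blocking stand-in for the worker loop: poll, log, sleep, and exit
// when the shutdown flag flips. Returns the number of poll iterations.
fn run_worker(shutdown: Arc<AtomicBool>) -> u32 {
    let mut polls = 0;
    while !shutdown.load(Ordering::Relaxed) {
        // Real code: poll the Redis queue here; log "no jobs" when empty.
        polls += 1;
        thread::sleep(Duration::from_millis(10));
    }
    polls
}

fn main() {
    let shutdown = Arc::new(AtomicBool::new(false));
    let flag = Arc::clone(&shutdown);
    let handle = thread::spawn(move || run_worker(flag));

    thread::sleep(Duration::from_millis(50));
    shutdown.store(true, Ordering::Relaxed); // stand-in for SIGTERM
    let polls = handle.join().unwrap();
    println!("worker exited cleanly after {polls} polls");
}
```

The integration test "worker exits cleanly on SIGTERM" below exercises exactly this handshake, with the signal handler flipping the flag.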
### 4. Database Setup (`crates/db`)

- Document `cargo install sqlx-cli` (the SQLx CLI is a standalone binary, not a crate dev-dependency)
- Create `migrations/` directory
- Create initial migration `0001_create_extensions.sql`: `CREATE EXTENSION IF NOT EXISTS "pgcrypto";` (for `gen_random_uuid()`)
- Create `src/lib.rs`:
  - `create_pool(database_url: &str) -> PgPool` function
  - `run_migrations(pool: &PgPool)` function
- Create `src/models/mod.rs` — empty, will hold row types
- Create `src/queries/mod.rs` — empty, will hold query functions
### 5. Job Queue Foundation (`crates/jobs`)

- Create a Redis-backed delayed job queue:
  - `enqueue(queue: &str, payload: &[u8], delay: Duration)` — adds job to a Redis sorted set with `score = now + delay`
  - `enqueue_immediate(queue: &str, payload: &[u8])` — adds with `score = now`
  - `poll(queue: &str) -> Option<Job>` — pop the lowest-scored job once its score is <= now (note: a bare `ZPOPMIN` ignores the score threshold, so either check the popped score and re-add jobs that aren't due yet, or fetch due jobs with `ZRANGEBYSCORE` and remove them with `ZREM`)
  - `ack(job_id: &str)` — remove from processing set
  - `fail(job_id: &str, error: &str)` — move to dead letter queue
- Job struct: `{ id: String, queue: String, payload: Vec<u8>, enqueued_at: DateTime, attempts: u32 }`
- Use Redis `ZADD`/`ZPOPMIN` with timestamps as scores for delayed execution
- Add retry logic: on failure, re-enqueue with exponential backoff (max 3 retries)
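The retry schedule can be isolated as a pure function, which makes the backoff unit-testable without Redis; the 5-second base delay is an assumption to tune per queue:

```rust
use std::time::Duration;

/// Maximum attempts before a job moves to the dead letter queue.
const MAX_RETRIES: u32 = 3;

/// Exponential backoff: 5s, 10s, 20s, then `None` (dead-letter).
/// The returned delay becomes the new sorted-set score offset on re-enqueue.
fn retry_delay(attempts: u32) -> Option<Duration> {
    if attempts >= MAX_RETRIES {
        None
    } else {
        Some(Duration::from_secs(5 * 2u64.pow(attempts)))
    }
}

fn main() {
    for attempts in 0..=MAX_RETRIES {
        println!("attempt {attempts}: {:?}", retry_delay(attempts));
    }
}
```

`fail` would call this with the job's `attempts` counter: `Some(delay)` means re-enqueue with `score = now + delay`; `None` means move to the dead letter queue.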
### 6. Docker Compose Development Environment

- Create `docker-compose.yml`:
  - PostgreSQL 16 Alpine (port 5432, volume for persistence)
  - Valkey 7 Alpine (port 6379)
  - Mailpit (ports 8025 web UI, 1025 SMTP)
- Create `.env.example` with:
  - `DATABASE_URL=postgres://delve:devpassword@localhost:5432/delve`
  - `REDIS_URL=redis://localhost:6379`
  - `SMTP_HOST=localhost`
  - `SMTP_PORT=1025`
- Add `.env` to `.gitignore`
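A compose file matching the services above could look like this sketch; the exact image tags are assumptions to verify against the registries, and the credentials mirror `.env.example`:

```yaml
# docker-compose.yml — development services sketch
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: delve
      POSTGRES_PASSWORD: devpassword
      POSTGRES_DB: delve
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

  valkey:
    image: valkey/valkey:7.2-alpine
    ports:
      - "6379:6379"

  mailpit:
    image: axllent/mailpit
    ports:
      - "8025:8025"   # web UI
      - "1025:1025"   # SMTP

volumes:
  pgdata:
```

With this in place, `docker compose up -d` brings up everything the API and workers need locally.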
### 7. Configuration Management

- Create `crates/api/src/config.rs`:
  - Load config from environment variables (with `dotenvy` for `.env` files in dev)
  - Struct: `Config { database_url, redis_url, host, port, smtp_host, smtp_port }`
  - Validate required fields on startup; panic with a clear error if any are missing
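A std-only sketch of `Config::from_env` showing the fail-fast validation; the SMTP fields are elided for brevity, and the `HOST`/`PORT` defaults are assumptions:

```rust
use std::env;

// Sketch of crates/api/src/config.rs (SMTP fields elided).
#[derive(Debug)]
pub struct Config {
    pub database_url: String,
    pub redis_url: String,
    pub host: String,
    pub port: u16,
}

impl Config {
    /// Panics with a clear message when a required variable is missing,
    /// so misconfiguration fails at startup rather than at first query.
    pub fn from_env() -> Config {
        fn required(key: &str) -> String {
            env::var(key).unwrap_or_else(|_| panic!("missing required env var: {key}"))
        }
        Config {
            database_url: required("DATABASE_URL"),
            redis_url: required("REDIS_URL"),
            host: env::var("HOST").unwrap_or_else(|_| "0.0.0.0".to_string()),
            port: env::var("PORT")
                .unwrap_or_else(|_| "3000".to_string())
                .parse()
                .expect("PORT must be a number"),
        }
    }
}

fn main() {
    env::set_var("DATABASE_URL", "postgres://delve:devpassword@localhost:5432/delve");
    env::set_var("REDIS_URL", "redis://localhost:6379");
    let cfg = Config::from_env();
    println!("listening on {}:{}", cfg.host, cfg.port);
}
```

In the real binary, `dotenvy::dotenv().ok()` would run before `Config::from_env()` so a local `.env` file can populate the variables in dev.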
### 8. CI Pipeline

- Create `.github/workflows/ci.yml`:
  - Trigger on PR and push to main
  - Steps: `cargo fmt --check`, `cargo clippy`, `cargo test`, `cargo build --release`
  - Use `dtolnay/rust-toolchain` (preferred; the `actions-rs/toolchain` action is no longer maintained)
  - Cache the cargo registry and target directory via `Swatinem/rust-cache`
  - Start PostgreSQL and Redis services for integration tests
### 9. Dockerfile

- Create a multi-stage `Dockerfile`:
  - Stage 1: `cargo-chef` to cache dependency compilation
  - Stage 2: build release binaries
  - Stage 3: `FROM debian:bookworm-slim` (or Alpine) — copy binaries, set entrypoint
- Produce two targets: `delve-api` and `delve-workers`
- Verify `docker build` succeeds and images are < 50 MB (note: the `debian:bookworm-slim` base alone is roughly 75 MB, so meeting this target likely requires the Alpine variant with musl builds)
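A sketch of the multi-stage build following the standard cargo-chef pattern; the binary names `api` and `workers` are assumptions derived from the crate names:

```dockerfile
# Multi-stage build sketch using cargo-chef for dependency caching
FROM rust:1-bookworm AS chef
RUN cargo install cargo-chef
WORKDIR /app

FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
# Dependencies compile in this layer and stay cached until Cargo.toml changes
RUN cargo chef cook --release --recipe-path recipe.json
COPY . .
RUN cargo build --release --bin api --bin workers

FROM debian:bookworm-slim AS delve-api
COPY --from=builder /app/target/release/api /usr/local/bin/api
ENTRYPOINT ["api"]

FROM debian:bookworm-slim AS delve-workers
COPY --from=builder /app/target/release/workers /usr/local/bin/workers
ENTRYPOINT ["workers"]
```

Each image is built by naming its stage, e.g. `docker build --target delve-api -t delve-api .`.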
### 10. SvelteKit Client Skeleton

- Initialize a SvelteKit project in the `client/` directory: `pnpm create svelte@latest client` (skeleton project, TypeScript)
- Install: `tailwindcss`, `@tanstack/svelte-query`, `@tanstack/query-sync-storage-persister`
- Configure Tailwind with design token stubs (colors for rarity tiers, etc.)
- Configure the SvelteKit static adapter (SPA mode)
- Configure the Vite proxy: `/api` → `http://localhost:3000` in dev
- Create the root layout (`+layout.svelte`):
  - QueryClientProvider wrapper
  - Placeholder navigation shell
- Create a `GET /api/health` query to verify client ↔ server connectivity
- Create `lib/api/client.ts` — base fetch wrapper with auth header injection
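The dev proxy is a few lines in the Vite config; this sketch assumes the API server listens on port 3000, per `.env.example`:

```typescript
// vite.config.ts — dev proxy sketch
import { sveltekit } from '@sveltejs/kit/vite';
import { defineConfig } from 'vite';

export default defineConfig({
  plugins: [sveltekit()],
  server: {
    // Forward /api/* to the Rust API server during development,
    // so the client can use same-origin relative URLs.
    proxy: {
      '/api': 'http://localhost:3000',
    },
  },
});
```

In production the static SPA is served alongside the API, so no proxy is needed there.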
## Tests

### Unit Tests

- `crates/jobs`: Test `enqueue` + `poll` returns the job after its delay has elapsed
- `crates/jobs`: Test `poll` returns `None` when the delay hasn't elapsed
- `crates/jobs`: Test `ack` removes the job from the processing set
- `crates/jobs`: Test `fail` with retry re-enqueues with backoff
- `crates/jobs`: Test `fail` after max retries moves the job to the dead letter queue
- `crates/db`: Test `create_pool` connects to a test database
- `crates/db`: Test `run_migrations` applies migrations without error
- `crates/api` (config): Test config loads from env vars
- `crates/api` (config): Test config panics on missing required fields
### Integration Tests

- API server starts and `GET /api/health` returns 200 with `{ "status": "ok" }`
- API server connects to PostgreSQL (pool creation succeeds)
- API server connects to Redis (ping succeeds)
- Worker binary starts and exits cleanly on SIGTERM
- Job queue round-trip: enqueue → poll → ack (against real Redis)
- Migrations run cleanly on a fresh database
### Client Tests

- SvelteKit builds without errors (`pnpm build`)
- Health check query hits the API and receives a response (dev proxy test)