Live on Cloudflare

From a two-person boutique
to enterprise-grade.

Every feature implemented and taken farther. Appointments, events, queuing, payments, Exchange, reporting, brand management, native mobile app generation, all unified on a single edge-native platform. 55,000 lines of modern TypeScript. A 264× cost reduction.

680,000 lines of platform to understand
680,000 lines of production code across 31 repositories. The Exchange integration alone is 63,000 lines of C# on Kubernetes. Three separate platforms (Global, Enterprise, Business), each with its own capabilities. Four coexisting scheduling engines. Three cart implementations. Six API versions. The scale of what has been built over the years is enormous, and every behavior needed to be understood before building forward.
680,000+
Lines of legacy code
31
Repositories analyzed
258
Database tables
287
ActiveRecord models
1,412
Legacy spec files
6
Coexisting API versions
18+
Payment processors
4
Scheduling engine generations
62
Background job workers
1,500+
Database migrations
Every line read. Every behavior captured.
We didn't just read the code—we ran it. We stood up the legacy monolith in Docker (Ruby 2.3.3, Rails 4.0.13, MySQL 8), executed 996 scheduler specs with zero failures, and built an automated fixture extraction harness that exercised the actual production engine to generate provably-correct test oracles. Every fixture traces back to real legacy behavior.
996
Legacy Specs Validated
Ruby scheduler suite, zero failures
527
Oracle Fixtures
JSON from actual legacy engine
1,696
TypeScript Fixtures
Across all domains + Exchange
4,177
Lines of Behavioral Specs
Acceptance criteria for every module
Clean-Room Test Harness
2,294 total golden fixtures (1,696 TypeScript + 527 Ruby JSON oracle + 71 C# JSON oracle) form a clean-room testing harness. The TypeScript fixtures define expected behavior extracted from specification analysis. The JSON fixtures are generated by exercising the actual legacy engines inside Docker, so every expected output comes straight from the system that defines correct behavior. Together, they let us validate our rebuild against the real system without copying a single line of legacy code.
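As a hedged illustration of what such a harness looks like, a fixture-replay check might be sketched like this. The fixture shape, names, and the trivial arithmetic are invented for the example; the real harness is far larger.

```typescript
// A minimal sketch of golden-fixture replay (all names and the fixture
// shape here are illustrative, not the actual harness).
interface OracleFixture<I, O> {
  name: string;
  input: I;     // input captured from the legacy engine
  expected: O;  // output the legacy engine actually produced
}

// Deep structural equality via canonical JSON (sufficient for plain data).
function deepEqual(a: unknown, b: unknown): boolean {
  return JSON.stringify(a) === JSON.stringify(b);
}

// Replay one fixture against the rebuilt implementation.
function replayFixture<I, O>(
  fixture: OracleFixture<I, O>,
  systemUnderTest: (input: I) => O,
): { name: string; pass: boolean } {
  const actual = systemUnderTest(fixture.input);
  return { name: fixture.name, pass: deepEqual(actual, fixture.expected) };
}

// Invented fixture: trivial slot arithmetic "captured" from a legacy run.
const fixture: OracleFixture<{ openMinutes: number; slotMinutes: number }, number> = {
  name: "slots-per-hour",
  input: { openMinutes: 60, slotMinutes: 15 },
  expected: 4,
};

// The rebuild must reproduce the legacy output exactly for the fixture to pass.
const result = replayFixture(fixture, ({ openMinutes, slotMinutes }) => openMinutes / slotMinutes);
console.log(result.pass); // → true
```

The point of the pattern: the expected value is never hand-written; it is whatever the legacy engine emitted, so a passing replay is evidence of behavioral equivalence rather than of the test author's assumptions.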
AI-generated. Test-first. Provably correct.
Every line of Voyage was generated by AI using a test-first methodology. The legacy codebase is the ground-truth oracle, not a source of copied code. We analyzed it, extracted behavioral fixtures, defined acceptance criteria, and then generated a clean-room implementation that passes every test. The legacy code was never read to write production code; it was read to write tests.
516
Unit Tests
All passing, across 15 packages
2,294
Golden Fixtures
1,696 TS + 527 Ruby JSON + 71 C# JSON
53
E2E Browser Tests
Playwright, 20 spec files, video recordings
12/12
Edge Case Categories Covered
101 targeted tests across 12 categories
The Clean-Room Pipeline
1
Analyze Legacy
Read 680K lines of Ruby + 63K lines of C#. Map every model, endpoint, and edge case. Never copy.
2
Extract Oracles
Run legacy tests in Docker (Ruby) and .NET (C#). Extract input/output pairs as golden fixtures proving correct behavior.
3
Write Tests First
Define behavioral specs and test cases from the fixtures. Every module has acceptance criteria before any implementation exists.
4
Generate & Verify
AI generates TypeScript implementations. Tests run automatically. Code ships only when every fixture passes.
Why clean-room matters
Clean-room implementation means zero IP contamination. The legacy code was used exclusively to define what the system should do, never how it should do it. The golden fixtures prove behavioral equivalence without copying a single algorithm, data structure, or line of business logic. This is how you rebuild a decade-old platform with legal confidence, architectural freedom, and provable correctness.
Cloudflare-native. Zero servers.
Every component runs on Cloudflare's edge. Workers for compute, D1 for storage, Durable Objects for coordination, R2 for assets, Queues for async jobs. Each customer gets isolated databases—not shared shards. The cost of an idle tenant approaches zero.
Frontend Apps — Cloudflare Pages
Admin Portal
React 19
Booking Flow
React 19
Staff Portal
React 19
Kiosk
React 19
Edge Gateway
Edge Gateway
Tenant resolution • Auth • Rate limits • CORS • Tracing
Domain Workers — Hono
Auth
29 tests
Catalog
32 tests
Availability
33 tests
Booking
41 tests
Events
28 tests
Payments
20 tests
Notifications
21 tests
Integrations
20 tests
Queue
35 tests
Reporting
42 tests
Exchange Sync
55 tests
Queue Simulator
Cron every 2 min
Shared Packages
Schedule Engine
121 tests • 2,294 fixtures
API Contracts
Zod / OpenAPI
Domain Types
TypeScript
Cloudflare Infrastructure — Per Tenant
Control Plane D1
Tenant registry
Config D1
Per-tenant config
Ops D1
Per-tenant data
Durable Objects
Booking locks
R2
Assets
Queues
Async jobs
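For context, the per-tenant resources above map onto Worker bindings declared in wrangler.toml. A hedged sketch follows; every binding and resource name is illustrative, not the actual configuration.

```toml
# Sketch of per-tenant bindings (all names illustrative).
name = "booking-worker"
main = "src/index.ts"
compatibility_date = "2025-01-01"

[[d1_databases]]
binding = "OPS_DB"            # per-tenant operational data
database_name = "tenant-ops"
database_id = "<id>"

[[r2_buckets]]
binding = "ASSETS"
bucket_name = "tenant-assets"

[[queues.producers]]
binding = "JOBS"
queue = "async-jobs"

[[durable_objects.bindings]]
name = "BOOKING_LOCK"
class_name = "BookingLock"
```

Because bindings are declared per Worker and databases are provisioned per tenant, isolation is a deployment property rather than application logic.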
From Black Box to Clean Room
The legacy Exchange integration is the system nobody wants to touch—63,000 lines of C# across 5 microservices, glued together with 3,000 lines of Ruby, RabbitMQ, Redis, SQL Server, and Kubernetes. Nobody on the team fully understands it. We reverse-engineered the entire thing, ran 653 C# tests to green, extracted golden fixtures proving behavioral equivalence, and rebuilt it as ~2,000 lines of TypeScript in a single Cloudflare Worker with 55 tests passing. Same functionality. Zero infrastructure.
Legacy — What Exists Today
  • 681 C# files across 20 .NET projects
  • 5 internal microservices
  • RabbitMQ message bus
  • Redis for distributed locks
  • SQL Server + Entity Framework
  • Kubernetes + Helm charts
  • 3,000 lines of Ruby glue code
  • 8-layer call chain for one calendar sync
Voyage — The Replacement
  • 1 TypeScript Worker (~2,000 lines)
  • Direct Microsoft Graph API calls
  • Cloudflare Queues (replaces RabbitMQ)
  • KV (replaces Redis)
  • D1 (replaces SQL Server)
  • wrangler deploy (replaces K8s)
  • 2-step call chain
  • Event-driven, no polling
63K → 2K
Lines of Code
97% reduction
681 → 15
Files
98% reduction
5 → 1
Services
Microservices → Worker
2,294
Golden Fixtures
Proving equivalence
$0.50/mo
Per-Tenant Cost
Infrastructure included
Clean-room methodology, applied to C#
We stood up the full .NET solution, ran 653 C# tests to green, then extracted 71 JSON oracle fixtures from the test harness. A second pass extracted 276 TypeScript fixtures from the C# test logic—covering calendar sync, subscription lifecycle, conflict resolution, timezone edge cases, and retry semantics. Combined with integration-level golden fixtures, 2,294 total test oracles prove the Voyage Worker behaves identically to the legacy stack for every documented scenario. The same methodology that worked for the Ruby scheduling engine, now applied to the C# Exchange integration.
Queuing that runs itself.
Walk-in customers join from a kiosk or their phone. Staff see a live board with estimated wait times. The system auto-manages positions, no-shows, and service completion—then automatically assigns the best available staff member using real performance data. A simulator generates realistic traffic so you can see the queue in action right now.
Legacy JRNI Queue
  • Sidekiq background jobs for position recalculation
  • Redis for real-time state (ephemeral, can lose data)
  • Complex “sanitize queuers” logic to fix inconsistent state
  • Sequential ticket numbering with counter reset hacks
  • Pusher enterprise ($36K/yr) for real-time updates
  • No built-in analytics or reporting
  • acts_as_list gem for position management
  • 8-table domain model with join tables
Voyage Queue
  • Single Worker, no background job infrastructure
  • D1 for persistent state (never loses data)
  • Position calculated from timestamps (always consistent)
  • ULID ticket numbers (globally unique, no reset needed)
  • 10-second polling auto-refresh (no Pusher cost)
  • Built-in analytics: avg wait, throughput, peak hours
  • Autonomous simulator for demo/testing
  • Cron Trigger-driven, real API traffic every 2 min
242+
Queue Entries
Generated today by simulator
2 min
Avg Wait
Join to call
28 min
Avg Service
Call to completion
0%
No-Show Rate
All entries served
35
Tests
Queue worker passing
How it works under the hood
A Cloudflare Worker with D1 persistence, Cron Trigger-driven simulator generating real API traffic every 2 minutes. Queue state is consistent—positions calculated from actual entry timestamps, not in-memory counters. The same Worker serves the kiosk join endpoint, the admin board, and the reporting metrics. No sanitization logic needed because there is no inconsistent state to sanitize. The legacy system requires Sidekiq jobs running sanitize_queuers to fix position drift, staff assignment bugs, and counter resets. Voyage eliminates the entire class of problems by storing state in D1 and computing positions on read.
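A minimal sketch of the compute-on-read idea described above; the entry shape and names are illustrative, not the actual schema.

```typescript
// Positions are derived from join timestamps on every read, so there is
// no stored counter that can drift out of sync (names are illustrative).
interface QueueEntry {
  id: string;                // ULID in the real system; short ids here
  joinedAt: string;          // ISO-8601, written once when the customer joins
  status: "waiting" | "called" | "served" | "no_show";
}

function board(entries: QueueEntry[]): Array<QueueEntry & { position: number }> {
  return entries
    .filter((e) => e.status === "waiting")
    .sort((a, b) => a.joinedAt.localeCompare(b.joinedAt)) // ISO strings sort chronologically
    .map((e, i) => ({ ...e, position: i + 1 }));
}

const entries: QueueEntry[] = [
  { id: "a", joinedAt: "2025-06-01T09:02:00Z", status: "waiting" },
  { id: "b", joinedAt: "2025-06-01T09:00:00Z", status: "served" },
  { id: "c", joinedAt: "2025-06-01T09:01:00Z", status: "waiting" },
];

// c joined before a, so c is position 1; served entries never appear.
console.log(board(entries).map((e) => `${e.id}:${e.position}`)); // → [ 'c:1', 'a:2' ]
```

Because position is a pure function of persisted timestamps, there is nothing to sanitize: a crash or a concurrent write can never leave a stale counter behind.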
One work unit for everything.
The legacy platform has separate models for appointments, events, queue entries, and group bookings — each with its own lifecycle, its own state machine, its own payment path. Voyage collapses all of these into a single abstraction: the service_session. Every interaction, from a branch appointment to a 500-person seminar, is a session with transitions.
Legacy — Separate Models Per Domain
  • Bookings with 12 statuses and ad-hoc transitions
  • Events with their own ticketing lifecycle
  • Queue entries with separate state management
  • Group bookings bolted onto appointments
  • 3 different cart implementations
  • Effects scattered across callbacks and background jobs
  • No unified audit trail across domains
  • Features can't cross domain boundaries
Voyage — Unified Session Architecture
  • service_session as the single work unit
  • Configurable state machine with typed transitions
  • 16 effect types, per-tenant, condition-evaluated
  • Mixed carts: appointments + events + add-ons
  • Full audit trail on every transition
  • Effects engine fires webhooks, emails, SMS, queue updates
  • Cross-domain flows emerge naturally
  • One payment path for all session types
Cross-domain flows that were impossible before
A walk-in upsold into an afternoon workshop. A no-show slot instantly offered to the queue. Event attendees converting to recurring appointments. A group booking that mixes services across staff and locations in a single cart. These flows were impossible in the legacy platform because each domain was a silo. With unified sessions, they're just transitions with effects — configured per tenant, evaluated at runtime, audited on every step.
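As a sketch of the idea, a transition table with declarative effects might look like the following. The states, effect types, and rules are invented for illustration; the real system configures these per tenant and evaluates conditions at runtime.

```typescript
// Sketch: one work unit, a typed transition table, declarative effects.
type SessionState = "held" | "confirmed" | "checked_in" | "completed" | "cancelled";

interface Effect {
  type: "email" | "sms" | "webhook" | "queue_update" | "audit";
  payload: Record<string, unknown>;
}

interface TransitionRule {
  from: SessionState;
  to: SessionState;
  effects: Effect[];
}

// Illustrative per-tenant rules; illegal transitions simply have no rule.
const rules: TransitionRule[] = [
  { from: "held", to: "confirmed", effects: [{ type: "email", payload: { template: "confirmation" } }] },
  { from: "confirmed", to: "checked_in", effects: [{ type: "queue_update", payload: {} }] },
  { from: "checked_in", to: "completed", effects: [{ type: "audit", payload: {} }] },
];

function transition(state: SessionState, to: SessionState): { state: SessionState; effects: Effect[] } {
  const rule = rules.find((r) => r.from === state && r.to === to);
  if (!rule) throw new Error(`illegal transition ${state} -> ${to}`);
  // In the real system each effect is condition-evaluated per tenant and
  // every transition is written to the audit trail; here we just return them.
  return { state: rule.to, effects: rule.effects };
}

const next = transition("held", "confirmed");
console.log(next.state, next.effects[0].type); // → confirmed email
```

Since every domain shares this one table, a cross-domain flow (say, queue entry to workshop ticket) is just another row with its own effects rather than new infrastructure.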
16
Effect Types
Webhook, email, SMS, queue, audit, etc.
1
Work Unit
service_session for everything
1
Cart & Payment Path
Mixed sessions, single checkout
100%
Audit Coverage
Every transition, every effect
Intelligent assignment. No other scheduling platform does this.
Walk-in customers join a queue via kiosk or staff. Then something remarkable happens: the system automatically assigns the best available staff member. Not randomly. Not round-robin. Based on real performance data and customer value—a two-sided matching algorithm that pairs your highest-value customers with your highest-performing staff, automatically, in real time.
FIFO
First come, first served. Next available staff member. Simple and fair.
Round Robin
Even distribution across all staff. No one sits idle while others are overloaded.
Performance
Weighted by conversion rate, ratings, completion speed. Best staff get more customers.
Value Match
High-value customers routed to top-performing staff. Maximize revenue per interaction.
Balanced
Blends performance, fairness, and value. The smart default for most businesses.
Staff Performance Scores
Conversion rate
Customer ratings
Completion rate
Service speed
Revenue generated
Smart
Match
Customer Value Scores
Lifetime value
Visit frequency
Recency
Service tier
No-show history
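A toy sketch of the two-sided scoring: the fields mirror the signals listed above, but the weights, normalization, and threshold are invented for illustration and are not the production model.

```typescript
// Sketch: score both sides, route high-value customers to top performers.
interface Staff { id: string; conversion: number; rating: number; speed: number; free: boolean }
interface Customer { id: string; lifetimeValue: number; visitsPerYear: number; noShows: number }

function staffScore(s: Staff): number {
  // Blend of normalized signals; these weights are made up for the example.
  return 0.5 * s.conversion + 0.3 * (s.rating / 5) + 0.2 * s.speed;
}

function customerScore(c: Customer): number {
  return c.lifetimeValue / 1000 + c.visitsPerYear / 12 - c.noShows * 0.5;
}

function assign(staff: Staff[], customer: Customer): Staff | undefined {
  const available = staff.filter((s) => s.free);
  if (available.length === 0) return undefined;
  const ranked = [...available].sort((a, b) => staffScore(b) - staffScore(a));
  // High-value customers get the best available performer; others keep the
  // top performers free (threshold of 1 is arbitrary, for illustration).
  return customerScore(customer) > 1 ? ranked[0] : ranked[ranked.length - 1];
}

const staff: Staff[] = [
  { id: "s1", conversion: 0.9, rating: 4.8, speed: 0.8, free: true },
  { id: "s2", conversion: 0.5, rating: 4.0, speed: 0.6, free: true },
];
const vip: Customer = { id: "c1", lifetimeValue: 5000, visitsPerYear: 24, noShows: 0 };
console.log(assign(staff, vip)?.id); // → s1
```

The design point: both scores are recomputed from live data at assignment time, so the routing adapts as staff performance and customer history change, with no manager in the loop.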
5,300+
Real Bookings
Powering the scoring model
378
Customer Scores
Value computed from history
16
Workers Deployed
Cloudflare edge network
4
React Apps
Admin, booking, staff, kiosk
Unified Staff Calendar
Appointments and queue walk-ins appear on one unified calendar. Walk-ins intelligently fill gaps between scheduled appointments—no idle time, no double bookings. Interactive blocks with one-click complete and no-show actions. Staff see their full day at a glance: scheduled appointments, queue assignments, and available gaps.
Why this matters
Every other scheduling platform treats walk-ins and appointments as separate systems. Customers wait in line while staff sit idle between bookings. Voyage merges both into a single intelligent workflow. The result: higher staff utilization, shorter wait times, and your best customers always getting your best people—without a manager making assignment decisions.
D1-native analytics. No data warehouse required.
The legacy platform pipes data through Elasticsearch, Looker, and custom ETL jobs to produce reports. Voyage runs analytics directly against D1 using SQL aggregations at the edge—no separate data warehouse, no ETL pipeline, no additional cost.
5
Report Types
Bookings, revenue, staff utilization, customer retention, and CSV export. Each with date range filtering, drill-down by service/staff, and trend analysis.
44
Tests
Full coverage of aggregation queries, GROUP BY, heatmaps, day-of-week analysis, lead time calculations, cancellation rates, and CSV generation.
0
External Dependencies
No Elasticsearch. No Looker. No ETL. Pure D1 SQL with strftime, julianday, GROUP BY, HAVING, and CASE expressions running at the edge.
Built-in Analytics
Booking reports: status breakdown, top services, day-of-week & hourly heatmaps, cancellation rate, average lead time, daily trend
Revenue reports: total/paid/unpaid/refunded, by service, by staff, daily trend
Utilization: per-staff booked hours vs available hours, busiest/least busy, peak hours
Customers: total, new vs returning, retention rate, top customers, avg bookings/customer
Export: CSV for bookings, orders, customers, audit events with proper quoting
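As an illustrative sketch, a report of this kind reduces to a single SQL statement run directly against D1. The table and column names below are invented, not the actual schema.

```sql
-- Day-of-week heatmap with cancellation rate and average lead time,
-- computed entirely in SQLite-dialect SQL at the edge.
SELECT
  strftime('%w', starts_at)                                  AS day_of_week,
  COUNT(*)                                                   AS bookings,
  AVG(CASE WHEN status = 'cancelled' THEN 1.0 ELSE 0.0 END)  AS cancel_rate,
  AVG(julianday(starts_at) - julianday(created_at))          AS avg_lead_days
FROM bookings
WHERE starts_at BETWEEN ? AND ?   -- bound to date_from / date_to
GROUP BY day_of_week
HAVING COUNT(*) > 0
ORDER BY day_of_week;
```

Everything the legacy ETL pipeline produced as batch output is here a query answered at request time, against the same database that serves the applications.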
Full audit trail included
Every mutation (booking, cancellation, reschedule, check-in, config change) writes to an append-only audit_events table with actor, action, entity, and timestamp. Admin and internal audit endpoints with date-range queries and CSV export. Compliance-ready from day one, not bolted on later.
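An append-only table of this kind might be sketched as follows; the schema is illustrative, not the actual DDL.

```sql
-- Sketch: audit rows are only ever inserted, never updated or deleted.
CREATE TABLE IF NOT EXISTS audit_events (
  id          TEXT PRIMARY KEY,   -- ULID, sortable by creation time
  actor       TEXT NOT NULL,      -- who performed the mutation
  action      TEXT NOT NULL,      -- e.g. 'booking.cancelled'
  entity_type TEXT NOT NULL,
  entity_id   TEXT NOT NULL,
  created_at  TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now'))
);
```

The append-only guarantee lives in the application layer (no UPDATE or DELETE path exists), which keeps the table trivially exportable for compliance review.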
569 tests. All passing. All green.
516 unit tests across 14 domain workers and a scheduling engine. 53 end-to-end browser tests across 20 Playwright spec files covering desktop and mobile viewports. 2,294 golden fixtures extracted from the running legacy systems. Nothing hand-waved. The tests and fixtures validate the exact same D1 database that's serving the applications you just used. Public evidence at /tests/.
516
Unit Tests
Across 15 packages (14 workers + schedule engine), including 101 edge case tests covering DST, concurrency, capacity, and more. All passing.
53
E2E Browser Tests
Full booking lifecycle tested end-to-end: hold → checkout → confirm → cancel. 20 Playwright spec files covering desktop and mobile viewports. View test evidence →
2,294
Golden Fixtures
1,696 TypeScript + 527 Ruby JSON + 71 C# JSON oracles. Every fixture traces to real legacy engine behavior. Clean-room methodology.
16
Workers Deployed
Auth, Catalog, Availability, Booking, Events, Payments, Notifications, Integrations, Queue, Queue Simulator, Smart Assignment, Staff Calendar, Reporting, Exchange Sync, Edge Gateway, Control Plane.
4
Frontend Apps
Admin portal, customer booking flow, staff portal, self-service kiosk. React 19 on Cloudflare Pages. Tested on desktop and mobile viewports.
Phase 0
Evals
Phase 1
Control Plane
Phase 2
Bootstrap
Phase 3
Scheduling
Phase 4
Booking
Phase 5
Frontend
Phase 6
Payments
Phase 7
Staff
Phase 8
Events
Phase 9
Queue
Phase 10
Integrations
Phase 11
Harden
Book a real appointment. See it in real-time.
These aren't mockups with canned data. All four apps are backed by real D1 data. Book an appointment in the customer app — it creates a real record in D1. Open the admin portal — you'll see it in the bookings list and revenue reports. Check the queue board — the simulator is generating live walk-in traffic right now. Same database, four different views. That's the whole point.
Start Guided Tour → Booking → Staff → Admin → Kiosk
680K lines → 55K. One platform for every customer.
A 12× codebase reduction. Every component is deployed and queryable right now. One platform, every customer size, no deployment tiers. Same code from a 10-booking branch to HSBC at 510K bookings/year.
16
Workers Deployed
Edge gateway, auth, catalog, availability, booking, events, payments, notifications, queue, queue-simulator, smart-assignment, staff-calendar, integrations, reporting, exchange-sync, control-plane
3
D1 Databases
Control plane, tenant config (services, staff, schedules), tenant ops (bookings, orders, customers, queue entries)
5,300+
Real Bookings
3 years of realistic banking data: growth curves, seasonal patterns, staff turnover, 378 customers
242+
Queue Entries
Live walk-in traffic from autonomous simulator running every 2 minutes
12×
Codebase Reduction
680K legacy lines → 55K TypeScript
1
Platform
Replaces Global, Enterprise, and Business tiers
# Verify it's real — try these URLs:
curl "https://mnb-edge-gateway.porivo.workers.dev/t/meridian-national/v1/public/services"
# → 7 real services from D1
curl "https://mnb-edge-gateway.porivo.workers.dev/t/meridian-national/v1/admin/bookings?per_page=3"
# → 5,300+ real bookings, paginated
curl "https://mnb-edge-gateway.porivo.workers.dev/t/meridian-national/v1/public/queues/queue-kensington/board"
# → Live queue board with real walk-in entries
curl "https://mnb-edge-gateway.porivo.workers.dev/t/meridian-national/v1/admin/reports/bookings?date_from=2025-01-01&date_to=2026-03-24"
# → Real aggregated analytics from D1
$1M+/year in AWS. $4K on Cloudflare.
The legacy platform runs on AWS inside per-customer VPCs with dedicated MySQL shards, Redis clusters, ECS tasks, Sidekiq workers, Elasticsearch, and a separate Kubernetes stack for Exchange integration. Every tenant carries provisioned infrastructure cost whether they book one appointment or ten thousand. The total bill exceeds $1M/year.
| Infrastructure Component | AWS (Annual) | Cloudflare (Annual) | Notes |
| --- | --- | --- | --- |
| Database | $216,000 | $900 | 3× db.r5.xlarge Multi-AZ RDS → D1 at $0.75/GB |
| Application Compute | $252,000 | $1,800 | 30 ECS tasks (Rails + Puma) → Workers at $0.30/M |
| Background Jobs | $108,000 | $120 | 12 Sidekiq ECS tasks + Redis → Queues at $0.40/M |
| Cache | $72,000 | $60 | ElastiCache 2× r5.large Multi-AZ → KV included |
| Search | $86,400 | $0 | 3-node ES cluster → D1 FTS5 (included) |
| Exchange Integration | $96,000 | $300 | K8s + RabbitMQ + Redis + SQL Server → 1 Worker |
| Networking | $60,000 | $0 | VPCs, NAT Gateways, ALBs → edge routing free |
| Real-time / WebSockets | $36,000 | $400 | Pusher enterprise → Durable Objects |
| CDN / Static Assets | $24,000 | $200 | CloudFront + S3 → Pages (free) + R2 |
| Monitoring | $48,000 | $0 | Datadog/CloudWatch → Workers Analytics (included) |
| Total | $998,400 | $3,780 | |
$998K
Legacy AWS (Annual)
Provisioned infra across all tenants
$3.8K
Voyage on Cloudflare
Same workload, pay-per-use
264×
Cost Reduction
Portfolio-wide, not per-tenant
$0
Idle Tenant Cost
No provisioned infra at rest
Why the gap is this large
AWS charges for provisioned capacity. Every RDS instance, ECS task, and ElastiCache node runs 24/7 whether customers are booking or sleeping. Cloudflare charges for usage. Workers execute on-demand. D1 stores data at $0.75/GB. Queues process messages at $0.40/M. An idle tenant costs literally zero. Even HSBC UK at full load—327 branches, 1,308 staff, 510K bookings/year—projects to $5.74/month on Cloudflare. Bespoke per-tenant workspaces become manageable at fleet scale when each one costs fractions of a penny. The infrastructure cost of the entire customer portfolio drops from seven figures to less than a team lunch—the way Imad eats.
From two-person boutique to enterprise-grade.
Every feature implemented and taken farther. 55,000 lines of modern TypeScript. 264× cheaper infrastructure. One platform that scales from a single location to a global operation. Appointments, events, and queuing — unified.