Building a Reproducible Multi-Service Postman Insights Demo Platform

Why This Project Existed

  • Single-service demos lacked credibility
  • No realistic service-to-service dependencies
  • No CI → Catalog integration
  • No failure realism
  • Fragile demo environments

Speaker Notes

The problem wasn’t that the features didn’t work — it was that our demo environment didn’t reflect how real systems behave. Enterprise customers care about dependency graphs, runtime debugging, and governance signals under real traffic. A single-service demo simply didn’t hold up in senior engineering conversations.

Business Context & Constraints

  • Major product release approaching
  • SEs needed full enablement within 2 weeks
  • Environment had to be built in under 1 week
  • Must run reliably on any laptop

Speaker Notes

The timeline constraint shaped the entire architecture. This wasn’t an open-ended infrastructure project. We needed something credible, reproducible, and easy to reset — fast. That forced disciplined trade-offs.

Goals

  1. Demonstrate full lifecycle: Spec → CI → Catalog → Insights
  2. Support one-workspace-per-API model
  3. Enable realistic failure scenarios
  4. Make the environment reproducible and resettable
  5. Allow SEs to run independently

Speaker Notes

Notice these goals balance product storytelling with operational reliability. It wasn’t just about technical correctness — it was about field usability.

Architecture Overview

Services

  • identity-api (standalone baseline)
  • accounts-api (dependent + orchestrator)
  • catalog-api (dependent)

Infrastructure

  • Kind (local Kubernetes)
  • ingress-nginx (single entrypoint)
  • Postman Insights Agent (DaemonSet)

Speaker Notes

Each service runs in its own namespace to preserve clean boundaries. Ingress routes traffic via path-based rules. The Insights Agent runs cluster-wide as a DaemonSet for operational parity with production observability patterns.
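As a sketch of that layout (the cluster name is an assumption, not taken from the deck), the bootstrap sequence might look like:

```shell
# Create the local cluster with kind, then one namespace per service
# to keep ownership boundaries clean. Names here are illustrative.
kind create cluster --name insights-demo

kubectl create namespace identity
kubectl create namespace accounts
kubectl create namespace catalog

# ingress-nginx as the single entrypoint; the kind-specific manifest
# wires the controller to the host machine.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

# The Postman Insights Agent then installs cluster-wide as a DaemonSet
# (exact install steps depend on the agent's own documentation).
```

Path-based Ingress rules (for example `/identity`, `/accounts`, `/catalog`) then fan traffic out from the single entrypoint to each service.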

Repository Model

Three repositories:

  • identity-api (service-only)
  • catalog-api (service-only)
  • accounts-api (service + orchestration owner)

accounts-api handles:

  • Cluster lifecycle
  • Agent installation
  • Image builds
  • Manifest application
  • Traffic simulation
  • Teardown

Speaker Notes

We considered a fourth “platform” repository but rejected it to reduce onboarding friction. Three repos with one orchestrator struck the right balance.
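One way to express that single-orchestrator ownership is an entrypoint script in accounts-api; the subcommand and script names below are hypothetical, a sketch rather than the actual implementation:

```shell
#!/usr/bin/env bash
# demo.sh — orchestration entrypoint living in the accounts-api repo.
set -euo pipefail

case "${1:-}" in
  up)      kind create cluster --name insights-demo
           ./scripts/install-agent.sh ;;        # Insights Agent DaemonSet
  build)   ./scripts/build-images.sh            # build all three service images
           kind load docker-image identity-api accounts-api catalog-api \
             --name insights-demo ;;            # load images into the cluster
  deploy)  kubectl apply -k manifests/ ;;       # manifests for all services
  traffic) ./scripts/traffic.sh ;;              # traffic simulation
  down)    ./scripts/teardown.sh "$@" ;;        # teardown (dry-run aware)
  *)       echo "usage: $0 {up|build|deploy|traffic|down}" >&2; exit 1 ;;
esac
```

Keeping every lifecycle verb behind one script is what lets the other two repos stay service-only.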

Key Technical Challenge #1

Cross-Namespace Routing

Problem:

  • An Ingress can only reference Service backends in its own namespace

Solution:

  • Introduced ExternalName bridge services

Trade-off:

  • Slightly more Kubernetes objects
  • Clearer ownership and debugging model

Speaker Notes

This was a pragmatic Kubernetes constraint. Rather than flatten namespaces, we added minimal indirection to keep boundaries intact.
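A bridge Service of type ExternalName aliases a Service in another namespace via its in-cluster DNS name, so the Ingress can reference it as if it were local. A minimal sketch, with illustrative names and namespaces:

```shell
# In the namespace that owns the Ingress, create an alias for
# identity-api, which actually lives in the identity namespace.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: identity-api
  namespace: accounts   # namespace owning the Ingress (illustrative)
spec:
  type: ExternalName
  externalName: identity-api.identity.svc.cluster.local
EOF
```

One such object per cross-namespace backend is the "slightly more Kubernetes objects" trade-off noted above, in exchange for keeping each service's namespace intact.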

Key Technical Challenge #2

Reproducibility & Reset

Implemented:

  • Idempotent bootstrap script
  • Idempotent teardown (with dry-run)
  • Optional full cluster deletion

Impact:

  • Reset time reduced from ~30 minutes to <2 minutes

Speaker Notes

In demo infrastructure, reset speed matters more than optimization. If something breaks mid-demo, the ability to recover quickly is critical.
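The idempotency pattern is mostly "check, then act," so every script can be re-run safely. A sketch, assuming a kind cluster named insights-demo and a DRY_RUN flag (both illustrative):

```shell
# Bootstrap: create only what is missing, so re-runs are harmless.
kind get clusters | grep -qx insights-demo \
  || kind create cluster --name insights-demo
kubectl get namespace identity >/dev/null 2>&1 \
  || kubectl create namespace identity

# Teardown with dry-run: show what would be deleted, or actually delete.
if [ "${DRY_RUN:-0}" = "1" ]; then
  kubectl delete -k manifests/ --dry-run=client
else
  kubectl delete -k manifests/ --ignore-not-found
  # Optional full reset:
  # kind delete cluster --name insights-demo
fi
```

Because nothing assumes a clean slate, a broken demo recovers by simply re-running bootstrap.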

Key Technical Challenge #3

Realistic Traffic

Traffic generator:

  • Valid requests
  • Intentional 400 errors
  • Optional latency
  • Cross-service calls

Output:

  • ✓ for 2xx
  • ✗ for non-2xx
  • Slow mode for live narration

Speaker Notes

A dashboard showing only green signals lacks credibility. Injecting controlled failure makes the observability story tangible.
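The generator's core loop can be sketched in a few lines of shell; the base URL and endpoint paths are hypothetical, and the PACING variable stands in for slow mode:

```shell
#!/usr/bin/env bash
# Minimal traffic-simulator sketch. BASE_URL and endpoints are
# illustrative; real paths depend on each service's spec.
BASE_URL="${BASE_URL:-http://localhost:8080}"

mark() {  # map an HTTP status code to the ✓/✗ output convention
  if [ "$1" -ge 200 ] && [ "$1" -lt 300 ]; then echo "✓"; else echo "✗"; fi
}

hit() {   # request a path, print mark + status; tolerate a dead endpoint
  local status
  status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 2 \
    "$BASE_URL$1") || status=000
  printf '%s %s %s\n' "$(mark "$status")" "$status" "$1"
}

hit "/identity/health"            # valid request, expect 2xx
hit "/accounts/orders?limit=-1"   # intentionally invalid, expect 400
sleep "${PACING:-0}"              # slow mode: e.g. PACING=2 for narration
```

The ✓/✗ stream doubles as live narration output: failures are visibly injected rather than hidden.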

Timeline Trade-Off: Local First

Chose Kind over Cloud because:

  • No IAM setup delays
  • No VPC or load balancer provisioning
  • Faster iteration
  • Offline-capable

Trade-off:

  • Less infrastructure realism

Speaker Notes

Under a one-week build window, velocity mattered more than infrastructure purity. We consciously sequenced realism later.

Results & Impact

  • Fully reproducible across machines
  • Resettable in minutes
  • Adopted by SEs for demos
  • More credible enterprise conversations

Enabled discussions around:

  • Dependency graphs
  • Runtime debugging
  • CI-driven quality signals

Speaker Notes

The real success metric wasn’t “does it run?” It was “does it elevate the technical depth of customer conversations?”

Next Phase: Productionization

Two-tier model:

Local Tier (Kind)

  • Fast setup
  • Enablement-focused

Cloud Tier (AWS via Terraform / Pulumi)

  • EKS or ECS
  • Managed secrets
  • IAM + network controls
  • CI-driven infra

Speaker Notes

The local tier solves enablement. The cloud tier unlocks enterprise realism. This phased strategy preserves speed while enabling scale.

Closing

This project demonstrates:

  • System design under constraint
  • Pragmatic trade-offs
  • Operational thinking
  • Focus on reproducibility
  • Alignment with business outcomes

The goal wasn’t to build a demo. It was to build a platform for credible conversations.

Speaker Notes

The core contribution wasn’t any single script or manifest. It was the architecture and sequencing decisions that made this sustainable, reproducible, and aligned with the business.

Building a Reproducible Multi-Service Postman Insights Demo Platform

By Ronak Raithatha
