The Integration Layer: APIs, CLIs, and MCP
Before the data warehouse. Before the BIOS. Before agents. There's the integration layer — the nervous system that connects every platform your brand touches.
The Nervous System
A brand in 2026 doesn't live in one place. It lives across Shopify, Klaviyo, Meta, Google, Stripe, Supabase, Resend, Cloudflare, and a dozen more platforms. Each holds a piece of the truth — customer data here, transaction data there, campaign performance somewhere else.
The Integration Layer is how we wire all of it together. It's Step 2 of the Context-First methodology, and it sits between the local environment (Step 1) and the Data Warehouse (Step 3). Without it, the warehouse has nothing to eat.
The Five Integration Components
APIs, CLIs, and Model Context Protocol servers are how agents touch the real world.
The Integration Layer isn't just "API calls." It's five distinct technologies working together:
1. APIs — The Endpoints
Every platform exposes data through APIs. But they don't all speak the same language. Shopify uses REST and GraphQL. Meta uses a Graph API with batch endpoints. Klaviyo uses a REST API with cursor-based pagination. Stripe uses a REST API with expandable objects.
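These pagination differences are why page-walking gets its own abstraction instead of being copy-pasted into every script. A minimal sketch of a cursor-following helper in the style of Klaviyo's JSON:API responses (the Page shape and names are illustrative, not any platform's exact schema):

```typescript
// Illustrative page shape: Klaviyo-style JSON:API responses expose the next
// page as a `links.next` URL, which is null on the final page.
interface Page<T> {
  data: T[];
  links: { next: string | null };
}

// Follow cursors until exhausted, accumulating every page's records.
async function paginateAll<T>(
  firstUrl: string,
  fetchPage: (url: string) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = [];
  let url: string | null = firstUrl;
  while (url) {
    const page: Page<T> = await fetchPage(url);
    all.push(...page.data);
    url = page.links.next;
  }
  return all;
}
```

Passing fetchPage in as a parameter keeps the helper testable; in production it would wrap the HTTP call and inject auth headers.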
The first step is endpoint scoping — mapping every API route we'll need before writing a single fetch call. For Celtic Knot's BIOS installation, the scope looked like this:
- Shopify: Products, Collections, Orders, Customers, Metafields
- Klaviyo: Profiles, Campaigns, Flows, Metrics, Segments
- Meta: Campaigns, Ad Sets, Ads, Insights, Audiences
- Google: Campaigns, Ad Groups, Keywords, Conversions
- Stripe: Products, Prices, Payment Intents, Subscriptions
That's 25+ endpoint groups. Each has its own authentication, rate limits, pagination model, and data format. Treating them all the same is how you build fragile systems.
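One way to keep a scope like this honest is to encode it as data that scripts can validate against. A hypothetical sketch, with illustrative names and rate-limit values rather than the actual BIOS scope:

```typescript
// Hypothetical shape for an endpoint-scope inventory.
type AuthModel = "api-key" | "oauth" | "service-account";

interface PlatformScope {
  platform: string;
  endpoints: string[];
  auth: AuthModel;
  pagination: "cursor" | "page" | "graphql-connection";
  rateLimit: string; // documented limit, copied from the platform's docs
}

const scope: PlatformScope[] = [
  {
    platform: "shopify",
    endpoints: ["products", "collections", "orders", "customers", "metafields"],
    auth: "oauth",
    pagination: "graphql-connection",
    rateLimit: "varies by query cost",
  },
  {
    platform: "klaviyo",
    endpoints: ["profiles", "campaigns", "flows", "metrics", "segments"],
    auth: "api-key",
    pagination: "cursor",
    rateLimit: "illustrative value, check current docs",
  },
];

// Quick sanity check: total endpoint groups covered so far.
const total = scope.reduce((n, p) => n + p.endpoints.length, 0);
console.log(`${scope.length} platforms, ${total} endpoint groups`);
```

A structure like this doubles as documentation and as input for scaffolding and health-check scripts.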
2. SDKs — The Libraries
Raw API calls work, but they're verbose and error-prone. SDKs wrap the complexity:
- @shopify/shopify-api — handles OAuth, session tokens, GraphQL client
- stripe — typed Node SDK with webhook signature verification
- resend — email sending with template support and scheduling
- @supabase/supabase-js — database client with RLS-aware queries
The key insight: don't install what you don't need. Every SDK adds to your bundle, your dependency surface, and your security perimeter. We audit dependencies before installing them, not after.
3. CLIs — The Custom Scripts
This is where it gets interesting. For each platform, we build custom command-line scripts that do the heavy lifting:
- scripts/seed-products.ts — reads Stripe products and creates local seed data
- scripts/create-stripe-products.ts — provisions the entire product ladder from a specification file
- scripts/export-warehouse.ts — dumps the data warehouse to portable JSON
- scripts/health-check.ts — verifies every integration is alive
These scripts aren't throwaway utilities. They're production tools that save hours of manual dashboard clicking. When we need to recreate the entire Stripe product catalog — 11 products with prices, metadata, and tax codes — one script does it in 30 seconds.
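A health check in this spirit can be a loop of cheap read-only calls with clear pass/fail output. A minimal sketch, not the proprietary script; the Stripe endpoint and env var name are assumptions:

```typescript
// Hypothetical health check: one cheap read-only request per platform.
interface Check {
  name: string;
  run: () => Promise<boolean>;
}

async function ping(url: string, headers: Record<string, string>): Promise<boolean> {
  try {
    const res = await fetch(url, { headers });
    return res.ok;
  } catch {
    return false; // network failure counts as a failed check
  }
}

const checks: Check[] = [
  {
    name: "stripe",
    run: () =>
      ping("https://api.stripe.com/v1/products?limit=1", {
        Authorization: `Bearer ${process.env.STRIPE_SECRET_KEY}`,
      }),
  },
  // ...one check per platform in the scope
];

// Run every check, print a status line, and report overall health.
async function healthCheck(toRun: Check[]): Promise<boolean> {
  let allOk = true;
  for (const c of toRun) {
    const ok = await c.run();
    console.log(`${ok ? "OK  " : "FAIL"} ${c.name}`);
    if (!ok) allOk = false;
  }
  return allOk;
}
```

Run as a script, the exit code can gate deploys: exit non-zero when healthCheck resolves false.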
The methodology is what we share. The scripts themselves are proprietary, built for specific clients and specific platforms. The /setup-cli workflow teaches you how to scope, scaffold, and wire your own.
4. MCP — The AI Bridge
The Model Context Protocol is what gives AI agents direct access to these integrations during development. Instead of copying API responses into a chat window, the agent can:
- Query the Supabase database directly
- Read project files and configuration
- Run terminal commands to test integrations
- Take screenshots of running applications
MCP turns an AI assistant into a development partner. When I tell an agent to "verify that Stripe webhooks are processing correctly," it doesn't guess — it reads the webhook handler code, checks the Supabase logs, and confirms the payment record exists.
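Wiring this up is configuration, not code. A hypothetical MCP configuration entry for a Supabase server, in the mcpServers format used by MCP-aware clients (the package name, flag, and env var are assumptions; check the server's own docs):

```json
{
  "mcpServers": {
    "supabase": {
      "command": "npx",
      "args": ["-y", "@supabase/mcp-server-supabase"],
      "env": { "SUPABASE_ACCESS_TOKEN": "<personal access token>" }
    }
  }
}
```

Once registered, the agent discovers the server's tools automatically; no glue code is needed on the project side.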
5. OAuth — The Trust Layer
Not all APIs use simple API keys. Shopify requires OAuth handshakes. Google requires service account credentials with scope grants. Meta requires long-lived system user tokens. Slack requires bot tokens AND user tokens AND app-level tokens.
Each OAuth flow is different, and each has expiration rules:
- Shopify: Session tokens (24h), offline access tokens (permanent until revoked)
- Meta: System user tokens (60 days), extended by re-authentication
- Google: Service account keys (no expiry, but rotation recommended)
- Slack: Refresh tokens (automatic rotation)
We document every token's lifecycle in the service inventory so nothing silently expires at 2 AM on launch day.
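That inventory can be executable rather than a wiki page. A hypothetical sketch of a token-lifecycle record plus an expiry check that could run in CI:

```typescript
// Hypothetical token inventory: lifecycle documented next to each credential.
interface TokenRecord {
  service: string;
  kind: string;
  issuedAt: Date;
  lifetimeDays: number | null; // null = no expiry, rotate manually
}

const inventory: TokenRecord[] = [
  { service: "meta", kind: "system-user", issuedAt: new Date("2026-01-10"), lifetimeDays: 60 },
  { service: "google", kind: "service-account-key", issuedAt: new Date("2025-11-01"), lifetimeDays: null },
];

// Flag anything expiring (or already expired) within the warning window.
function expiringSoon(inv: TokenRecord[], now: Date, warnDays = 14): TokenRecord[] {
  return inv.filter((t) => {
    if (t.lifetimeDays === null) return false;
    const expiresAt = t.issuedAt.getTime() + t.lifetimeDays * 86_400_000;
    return expiresAt - now.getTime() < warnDays * 86_400_000;
  });
}
```

A nightly job that calls expiringSoon and posts the result is what catches the 2 AM launch-day expiry before it happens.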
The Scaffolding Process
The /setup-cli workflow follows a strict sequence:
Phase 1: Scope
- List every platform the project touches
- Map required endpoints per platform
- Identify authentication model for each
- Document rate limits and pagination patterns
Phase 2: Scaffold
- Create a scripts/ directory with typed TypeScript files
- Install platform SDKs with exact versions (no ^ ranges)
- Create shared utilities: fetchWithRetry, paginateAll, rateLimiter
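Of those shared utilities, fetchWithRetry is the one every script leans on. A minimal sketch with exponential backoff on rate limits and server errors (illustrative defaults, not the production implementation):

```typescript
// Hypothetical fetchWithRetry: exponential backoff on 429 and 5xx responses.
async function fetchWithRetry(
  url: string,
  init: RequestInit = {},
  maxAttempts = 4,
  baseDelayMs = 500,
): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const res = await fetch(url, init);
      // Retry only rate limits and server errors; return everything else,
      // including 4xx client errors, which a retry cannot fix.
      if (res.status !== 429 && res.status < 500) return res;
      lastError = new Error(`HTTP ${res.status}`);
    } catch (err) {
      lastError = err; // network failure: also retryable
    }
    await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
  }
  throw lastError;
}
```

A production version might also honor a Retry-After header when the platform sends one.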
Phase 3: Wire
- Connect each script to its .env credentials
- Add authentication headers and token refresh logic
- Implement error handling with actionable error messages
Phase 4: Test
- Run each script against the live API with read-only calls
- Verify response shapes match expected schemas
- Time execution and note rate limit headroom
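Verifying response shapes doesn't require a schema library; a hand-written type guard is enough for a smoke test. A hypothetical sketch for a Stripe product summary (the field list is illustrative, not Stripe's full schema):

```typescript
// Hypothetical shape guard for a read-only smoke test: fail loudly if the
// response no longer carries the fields downstream code depends on.
interface StripeProductSummary {
  id: string;
  name: string;
  active: boolean;
}

function isProductSummary(value: unknown): value is StripeProductSummary {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === "string" &&
    typeof v.name === "string" &&
    typeof v.active === "boolean"
  );
}

// In the real test phase this would be a live API response.
const sample: unknown = { id: "prod_123", name: "Starter", active: true };
if (!isProductSummary(sample)) {
  throw new Error("Stripe products response shape changed");
}
console.log(`verified: ${sample.name}`);
```

Extra fields in the response are fine; the guard only asserts the fields the pipeline actually reads.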
The whole process takes 2-4 hours depending on the number of platforms. But it saves 10x that over the project lifecycle because every integration is verified, documented, and reproducible.
Why This Step Can't Be Skipped
I've worked with teams that built their entire application before confirming their Meta API token had the right permissions. They discovered, three weeks before launch, that their Conversions API integration couldn't send purchase events because the dataset ID was wrong.
The Integration Layer prevents that. Every connection is validated before the first feature is built. Every credential is tested before it's trusted. Every rate limit is understood before it's hit.
When the Data Warehouse (Step 3) starts pulling data, it doesn't encounter surprises. The pipes are already tested. The authentication is already verified. The pagination is already handling edge cases.
The Pattern
Environment → Integration Layer → Data Warehouse. Each step trusts the one before it. Each step validates before it proceeds.
This isn't over-engineering. It's the difference between a project that ships and a project that fights fires. The integration layer is invisible when it works — and catastrophic when it doesn't.
Want to apply this to your brand?