The System · February 5, 2026 · 10 min read

SYS Loader: Export Your BIOS to Any LLM Project

Your BIOS shouldn't be locked into one tool. SYS Loader packages 33 specs into portable documents that any AI platform can consume — Claude, ChatGPT, Gemini, or your own custom system.

The Lock-In Problem

You spent weeks building a BIOS. Thirty-three specifications, data-backed, reviewed across platforms, versioned to v1.3.0. It's the most complete representation of your brand intelligence that exists anywhere.

And then you want to use it in a different tool.

Maybe you're switching from ChatGPT to Claude for a specific use case. Maybe you're integrating brand context into a custom RAG pipeline. Maybe a team member uses Cursor and needs the BIOS in their project context. Maybe you're building an automated system that needs brand constraints available at inference time.

If your BIOS is locked into one platform's format — Claude Projects, ChatGPT Custom GPTs, or a proprietary knowledge base — you're trapped.

SYS Loader solves this. It's the export and portability layer of the Context-First methodology.

What SYS Loader Produces

SYS Loader takes the 33 BIOS specifications and packages them into platform-agnostic documents designed for machine consumption:


  • Context Master Document (~80K tokens): all 33 specs in one hierarchical markdown file, for full-context conversations
  • Tier-Level Modules (6 files): self-contained per-tier docs with cross-references, so you load only what you need
  • Agent Loader Files (~12–20K tokens each): role-specific context bundles; the email agent loads Tier 1 + 3 + 5 only
  • Platform Templates (4 formats): Claude Projects · ChatGPT Custom GPTs · Gemini Gems · JSON-LD for custom APIs

1. The Context Master Document

A single, comprehensive markdown file that contains the entire BIOS in a structured, hierarchical format. This is the "load everything at once" option — useful when working in a single conversation with a powerful model that has a large context window.

Structure:

# [Brand Name] — Brand Intelligence Operating System
## Version: 1.3.0 | Last Updated: 2026-02-28

### Tier 1: Brand Foundation
#### 1.1 Brand Ethos
[Full specification content]

#### 1.2 Brand Archetype
[Full specification content]

... (all 33 specs)

2. Tier-Level Modules

Six separate documents, one per tier. This is the "load what you need" option — useful when you only need customer intelligence for a segmentation task, or only product specs for a catalog description job.

Each module is self-contained with internal cross-references:

# Tier 3: Customer Intelligence
## Dependencies: Requires Tier 1 (Brand Foundation) for voice constraints

### 3.1 Primary Customer Archetype
### 3.2 Secondary Customer Archetypes
### 3.3 Psychographic Profiles
### 3.4 Journey Maps
### 3.5 Substrate Signatures
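Those Dependencies headers make mechanical resolution possible: a loader can walk the cross-references and pull in prerequisite tiers automatically. A minimal sketch, assuming a hypothetical TIER_DEPS map that mirrors the headers (the map itself is illustrative, not part of SYS Loader):

```typescript
// Hypothetical tier dependency map, mirroring "Requires Tier 1"-style headers.
const TIER_DEPS: Record<number, number[]> = {
  3: [1],    // Customer Intelligence needs Brand Foundation voice constraints
  5: [1, 3], // Content & Messaging needs foundation and customer context
};

// Resolve a tier plus its transitive dependencies, in load order,
// without visiting any tier twice.
function resolveTiers(tier: number, seen = new Set<number>()): number[] {
  if (seen.has(tier)) return [];
  seen.add(tier);
  const deps = TIER_DEPS[tier] ?? [];
  return [...deps.flatMap((d) => resolveTiers(d, seen)), tier];
}
```

Asking for Tier 3 would yield `[1, 3]`: the foundation loads first, then the module that depends on it.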

3. Agent Loader Files

Specialized documents designed for specific agent roles. Instead of loading the entire BIOS, an email marketing agent loads only the specs it needs:

  • Tier 1 (Voice & Governance) — to write on-brand
  • Tier 3 (Customer Archetypes) — to personalize
  • Tier 5 (Content & Messaging) — to follow the strategy
  • Tier 6 (KPIs) — to optimize for the right metrics

This selective loading is a context efficiency technique. Instead of burning 80,000 tokens loading the full BIOS, the email agent loads 15,000 relevant tokens and operates within constraints that matter for its task.
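In code, selective loading reduces to a role-to-tiers map plus a budget check before export. A sketch, where the per-tier token estimates and the role assignments are illustrative numbers, not actual SYS Loader values:

```typescript
// Hypothetical token estimates for the trimmed per-tier excerpts that go
// into an agent loader (illustrative, not real SYS Loader figures).
const TIER_TOKENS: Record<number, number> = {
  1: 5000, 2: 4000, 3: 4000, 4: 5000, 5: 6000, 6: 3000,
};

// Which tiers each agent role loads (illustrative assignments).
const AGENT_TIERS: Record<string, number[]> = {
  "email-marketing": [1, 3, 5],
  "ad-creative": [1, 2, 4, 5],
  "customer-service": [1, 3, 6],
  "product-copy": [1, 2, 4],
};

// Estimate a loader's context cost and fail fast if it blows the cap.
function loaderBudget(agent: string, capTokens = 20000): number {
  const tiers = AGENT_TIERS[agent];
  if (!tiers) throw new Error(`unknown agent role: ${agent}`);
  const total = tiers.reduce((sum, t) => sum + TIER_TOKENS[t], 0);
  if (total > capTokens) throw new Error(`${agent} exceeds ${capTokens} tokens`);
  return total;
}
```

The cap turns context efficiency from a guideline into an enforced constraint: an agent loader that creeps past its budget fails at export time, not at inference time.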

4. System Prompt Templates

Pre-formatted system prompts for each major AI platform:

  • Claude Projects: Structured as a knowledge document with XML-tagged sections
  • ChatGPT Custom GPT: Formatted as instructions with knowledge file references
  • Gemini Gems: Structured as context instructions with emphasis markers
  • Custom APIs: JSON-LD structured data for programmatic consumption

Each template accounts for the platform's specific context handling:

  • Claude has a 200K context window but performs best with structured markdown
  • ChatGPT uses retrieval-augmented knowledge files with a 20-document limit
  • Gemini prefers flat instruction format with numbered sections
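For the custom-API path, a minimal JSON-LD wrapper might look like the sketch below. The `@type` choices (schema.org `Dataset` with `CreativeWork` parts) and the field names are assumptions for illustration, not the actual SYS Loader schema:

```typescript
// One BIOS specification entry (shape is an assumption for this sketch).
interface SpecEntry {
  id: string;   // e.g. "3.1"
  tier: number;
  name: string;
  body: string;
}

// Wrap the specs in a JSON-LD document for programmatic consumption.
function toJsonLd(brand: string, version: string, specs: SpecEntry[]): string {
  return JSON.stringify(
    {
      "@context": "https://schema.org",
      "@type": "Dataset",
      name: `${brand} Brand Intelligence Operating System`,
      version,
      hasPart: specs.map((s) => ({
        "@type": "CreativeWork",
        identifier: s.id,
        name: s.name,
        position: s.tier,
        text: s.body,
      })),
    },
    null,
    2,
  );
}
```

Because the output is plain JSON-LD, any pipeline that can parse JSON can filter specs by tier or identifier without understanding the rest of the methodology.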

The Export Process

SYS Loader runs as a CLI script:

# Export full BIOS as all formats
node scripts/sys-loader.ts --brand celtic-knot --version 1.3.0 --format all

# Export specific tier for a specific platform
node scripts/sys-loader.ts --brand celtic-knot --tier 3 --platform claude

# Export agent-specific loader
node scripts/sys-loader.ts --brand celtic-knot --agent email-marketing

The output is a directory of portable files:

exports/celtic-knot-v1.3.0/
├── full-context.md              # All 33 specs in one document
├── tier-1-foundation.md          # Tier-level modules
├── tier-2-context.md
├── tier-3-customer.md
├── tier-4-product.md
├── tier-5-content.md
├── tier-6-operations.md
├── agents/
│   ├── email-marketing.md        # Role-specific loaders
│   ├── ad-creative.md
│   ├── content-writer.md
│   └── customer-service.md
├── platforms/
│   ├── claude-project.md         # Platform-specific formats
│   ├── chatgpt-gpt.json
│   └── gemini-gem.md
└── metadata.json                 # Version, export date, checksums
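The metadata file is what makes an export verifiable: a consumer can confirm it loaded an untampered, version-matched BIOS. A sketch of generating it, assuming SHA-256 checksums per file (the field names are illustrative, not the actual metadata.json layout):

```typescript
import { createHash } from "node:crypto";

// Build export metadata with a SHA-256 checksum per exported file,
// so downstream consumers can detect stale or modified copies.
function buildMetadata(
  brand: string,
  version: string,
  files: Record<string, string>, // relative path -> file contents
) {
  const checksums: Record<string, string> = {};
  for (const [path, contents] of Object.entries(files)) {
    checksums[path] = createHash("sha256").update(contents).digest("hex");
  }
  return {
    brand,
    version,
    exportedAt: new Date().toISOString(),
    checksums,
  };
}
```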

Context Efficiency

SYS Loader isn't just about portability. It's about context efficiency — the art of giving an AI agent exactly the right amount of context for its task.

The full BIOS is roughly 80,000 tokens. Loading all of it for every task wastes context window space and can actually degrade performance — models get noisy when overloaded with irrelevant context.

The selective loading approach:

Agent Role           Tiers Loaded   Token Budget
Email Marketing      1, 3, 5        ~15,000
Ad Creative          1, 2, 4, 5     ~20,000
Customer Service     1, 3, 6        ~12,000
Product Copy         1, 2, 4        ~14,000
Strategic Planning   All 6          ~80,000

Only strategic planning needs the full BIOS. Every other role operates more effectively with a focused subset.

This is a counterintuitive insight: less context often produces better output. An email agent that only knows brand voice, customer archetypes, and messaging strategy writes sharper emails than one drowning in competitive landscape analysis and inventory velocity data it doesn't need.

Why Portability Matters

The AI landscape is moving fast. Today's best model is tomorrow's second choice. If your brand intelligence is platform-locked, every model migration means rebuilding from scratch.

SYS Loader makes the BIOS a portable asset:

  • Switch from ChatGPT to Claude for a specific use case? Load the platform-specific template.
  • A new employee joins and uses a different AI tool? Export their role-specific loader.
  • Building a custom agent pipeline with LangChain or LlamaIndex? Use the JSON-LD export.
  • Want to test if Gemini produces better ad creative than Claude? Load the same BIOS into both and compare.

The BIOS is the intelligence. The platform is just the runtime. SYS Loader ensures you never confuse the two.

The Bigger Picture

SYS Loader is the reason the Context-First methodology isn't a consulting deliverable that gathers dust. It's a living system that flows across tools, across teams, and across platforms.

Step 1 builds the environment. Step 2 wires the integrations. Step 3 creates the memory. Step 4 generates the intelligence. And SYS Loader ensures that intelligence goes wherever it's needed — without vendor lock-in, without format conversion headaches, and without losing a single constraint along the way.

Want to apply this to your brand?