The workshop · March 8, 2026 · 7 min read

Giving AI a Memory: Publishing Engram and Redesigning the Product Page

We published Engram to GitHub, hardened the repo, then redesigned the /engram page from 3 generic sections to 7 content-rich sections — porting the README's strongest arguments directly onto the web.

The Problem We Kept Running Into

Twelve sessions into a complex project. Dozens of decisions made. Alternatives explored. Dead ends rejected. Then you open a new chat — and the AI says: "I don't have context from previous conversations. Could you summarize what you've discussed so far?"

Every person working with AI assistants hits this wall. The AI is brilliant in-session but has total amnesia between sessions. The existing solutions require a database, an API, a plugin, or a subscription. None of them felt right for how we actually work.

So we built Engram.

What Engram Actually Is

Engram is a lightweight documentation protocol that gives AI assistants persistent memory across sessions and platforms. No database. No API. No plugins. Just markdown files and a protocol.

The key insight: the AI doesn't need to read 50 pages of logs. It needs an index and a grep. Engram uses a layered retrieval model:

Engram · Layered Retrieval: Context Engineering, Not Brute Force

Four layers. ~1–3K tokens at cold-start. Scales to hundreds of sessions.

  • HOT — STATE.md + ENGRAM.md (read every session): current project state and system instructions. ~1,000–3,000 tokens.
  • WARM — HANDOFF.md (read at session start): where you left off, what to pick up next. ~500–1,500 tokens.
  • COLD — ENGRAM-INDEX.md (read on demand): searchable index of all sessions, decisions, and artifacts. ~200–800 tokens per lookup.
  • ARCHIVE — ENGRAM-LOG.md (reconcile only): full verbatim history, 15K–200K tokens. Ground truth, never read in full during sessions.

The economics are brutal in favor of structure. At 50 sessions, dumping raw logs into context would cost 80,000–200,000 tokens. With Engram's layered retrieval, cold-start stays at ~1–3K tokens regardless of project duration.

Publishing to GitHub

The first milestone: getting Engram from an internal tool to a public repository. The repo went live at github.com/ecomxco/engram.

Repo Hardening

Before going public, we ran a full audit:

  • Init script — init-engram.sh generates all 8 protocol files, the visualizer dashboard, and system instructions. Tested with arguments, without arguments, and in directories with existing files.
  • README — The full persuasive narrative: problem statement, before/after dialogue, architecture diagram, token economics, quick-start, FAQs. The README is the sales page for open-source.
  • Contributing guidelines — CONTRIBUTING.md, CODE_OF_CONDUCT.md, issue templates, PR templates. First-time visitors have guardrails.
  • Dashboard screenshots — Added to docs/ for the README. Overview, timeline, and full-log views of the session visualizer.
  • License — MIT. No ambiguity.
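A stripped-down sketch of what an init script in this shape does. It creates only the five protocol files named in this post (the real init-engram.sh generates all 8, plus the dashboard), and it skips files that already exist, matching the "directories with existing files" test case above:

```shell
#!/bin/sh
# Minimal init sketch: create protocol files, never clobber existing ones.
set -eu
dir="${1:-.}"                   # optional target-directory argument
mkdir -p "$dir"
for name in ENGRAM STATE HANDOFF ENGRAM-INDEX ENGRAM-LOG; do
  f="$dir/$name.md"
  if [ -e "$f" ]; then
    echo "skip  $f (already exists)"
  else
    printf '# %s\n' "$name" > "$f"
    echo "write $f"
  fi
done
```

Running it twice is safe: the second pass reports every file as skipped, which is exactly the idempotence you want before pointing a curl-pipe install at strangers' working directories.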

Redesigning the /engram Page

The published README is 330 lines of rich, persuasive content. The existing /engram page on ecom-x.com had ~200 words across 3 generic sections. The mismatch was obvious — the GitHub README was a better product page than the actual product page.

From 3 Sections to 7

We rebuilt the entire page, porting the README's strongest arguments into web components:

  1. Hero — "Give your AI a memory." Problem-framing subheadline lifted straight from the README's opening vignette: "You're 12 sessions in... then you open a new chat — and everything is gone."

  2. Before & After — Side-by-side dialogue comparison. Left: AI with amnesia. Right: AI reading STATE.md and picking up exactly where you left off. The single most persuasive element from the README.

  3. How It Works — The layered retrieval table: Hot/Warm/Cold/Archive rows with file names, trigger conditions, and token counts. Includes the callout: "Session cold-start: ~1–3K tokens. At 50 sessions, a raw log would be 80,000–200,000."

  4. Features — 4 cards rewritten from feature-led to outcome-led. "8-File Memory System" became "Layered Memory: Boots in ~1–3K tokens. Scales to 200+ sessions."

  5. Dashboard — Full-width screenshots of the session visualizer. Overview stats and timeline view. This feature barely existed on the old page.

  6. Quick Start — The 3-line curl install rendered as a styled code block. "Download. Run. Your AI has a memory."

  7. CTA — From defensive ("No email required. No account needed.") to action-forward ("One script. Three minutes. Your AI never forgets again."). Fixed the license from CC-BY 4.0 to MIT.

The Architecture Decision

Each new section is its own React component following the same patterns as our Axiom page:

  • EngramProblem.tsx — Before/after dialogue
  • EngramHowItWorks.tsx — Layered retrieval model
  • EngramDashboard.tsx — Visualizer screenshots
  • EngramQuickStart.tsx — Code block install
All copy lives in en.json via next-intl — no hardcoded strings. The page went from importing 3 components to 7, with all i18n keys rewritten.
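The en.json structure might look like this. The key names are hypothetical; only the strings (taken from the sections above) and the pattern of namespacing copy per section are from the build:

```json
{
  "Engram": {
    "hero": {
      "title": "Give your AI a memory.",
      "subtitle": "You're 12 sessions in... then you open a new chat — and everything is gone."
    },
    "quickStart": {
      "tagline": "Download. Run. Your AI has a memory."
    }
  }
}
```

Keeping every string behind a key like this is what makes "all i18n keys rewritten" a mechanical diff rather than a hunt through JSX.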

What Made the Difference

The README did the hard work of framing the narrative. The web page's job was to present that narrative with better visual hierarchy:

  • The before/after dialogue is immediately visceral. You don't need to understand the architecture to feel the pain point.
  • The retrieval table satisfies the technical audience. Token counts make the efficiency concrete.
  • The dashboard screenshots prove the visualizer exists. Screenshots > descriptions.
  • The code block removes friction. If you're convinced, you're 3 lines from installation.

By the Numbers

  • Word count (old page): ~200
  • Word count (new page): ~800
  • Page sections: 3 → 7
  • New components: 4
  • i18n keys rewritten: all
  • Dashboard screenshots: 2 (overview + timeline)
  • Time to install Engram: 3 minutes

What's Next

The Engram repo is live and the marketing page now does it justice. Next: monitoring GitHub stars and traffic, iterating on the page based on heatmap data, and continuing to dogfood Engram across all active projects. If you're 12 sessions into anything with an AI — you should probably clone Engram.
