Clone Factory Playbook

Paint-by-Numbers Expert Boarding — Agent + Human Workflow
v1.0 | March 27, 2026 | Athio Clone Factory
Legend: Agent (automated) | Human (manual) | Verification Checkpoint | Time estimate

What This Playbook Covers

This is the complete paint-by-numbers guide for boarding a new expert onto the Athio Mastery Platform using the 18-skill Clone Factory pipeline. It covers every step from initial research through deployment, with clear markers for what the agent handles vs. what requires human input.

Total pipeline time: 2-4 hours agent execution + 1-2 days human review

Reference implementation: Samuel Ngu / Align360 (Run #1, GREEN 91.9%)

| Metric | Samuel Run #1 | Expected Range |
| --- | --- | --- |
| Agent execution time | ~3 hours | 2-4 hours |
| Skills invoked | 17 | 17-18 |
| Extraction files generated | 7 | 7 |
| Compiled artifacts | 4 | 4 |
| Test scenarios | 27 | 25-50 |
| Clone score | 91.9% | 85%+ for GREEN |

Pre-Flight Checklist

Step 0: Gather Source Materials (Human, 30-60 min)

Collect everything available from the expert. More sources = higher quality clone. Minimum viable: 1 system prompt/methodology doc + 1 unedited content piece.

| Source Type | Priority | Examples |
| --- | --- | --- |
| Existing system prompt / methodology doc | CRITICAL | v6.1 prompt, knowledge files, training docs |
| Raw transcripts (coaching, interviews) | HIGH | Zoom recordings, podcast transcripts |
| Published content | HIGH | Books, blog posts, social media, newsletters |
| Team/partner transcripts | MEDIUM | Strategy calls, product discussions |
| Brand assets | MEDIUM | Website, logo, style guide |
| Pricing/offers docs | MEDIUM | Sales pages, rate cards, partnership agreements |

Pre-Flight Verification

  • At least 2 source files collected (ideally 5+)
  • Expert's name, brand name, and primary domain known
  • Claude Code installed with skills directory populated
  • Environment variables set: PPLX_API_KEY (for deep-research)
  • MASTERYBOOK_API_KEY + MASTERYBOOK_API_URL (optional, for masterybook-sync)
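The pre-flight items above can be scripted as a quick sanity check. A minimal sketch: only the env var names come from this checklist; the flat source directory and dict-based env lookup are assumptions.

```python
from pathlib import Path

# Env var names from the checklist; directory layout is an assumption.
REQUIRED_ENV = ["PPLX_API_KEY"]                                # deep-research
OPTIONAL_ENV = ["MASTERYBOOK_API_KEY", "MASTERYBOOK_API_URL"]  # masterybook-sync

def preflight(source_dir, env):
    """Return blocking problems; an empty list means ready to start."""
    problems = []
    src = Path(source_dir)
    sources = [p for p in src.glob("*") if p.is_file()] if src.is_dir() else []
    if len(sources) < 2:
        problems.append("fewer than 2 source files collected (ideally 5+)")
    for var in REQUIRED_ENV:
        if not env.get(var):
            problems.append("missing required env var: " + var)
    missing_opt = [v for v in OPTIONAL_ENV if not env.get(v)]
    if missing_opt:
        # Optional: masterybook-sync degrades gracefully without these.
        print("note: masterybook-sync will be skipped (unset: " + ", ".join(missing_opt) + ")")
    return problems
```

Run it with `preflight("sources", dict(os.environ))` before invoking the orchestrator; any non-empty result means stop and fix.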

Phase 0: Recon (Layer 0) 30-45 min

Step 1: Initialize Workspace (Agent, 1 min)

Run the boarding orchestrator to create the workspace and kanban board.

Command: /boarding-orchestrator init {expert-name} --recon

What happens: Creates _workspaces/{expert-slug}/ with subdirectories, generates kanban.json with 24 cards, starts deep-research immediately.

If sources already exist: Use /boarding-orchestrator init {expert-name} (no --recon, 19 cards, skip to Phase 2)

Step 2: Deep Research (Agent, 10-15 min)

Skill: deep-research (wraps Perplexity Sonar Deep Research API)

What it does: Runs 5 research queries against the expert's public presence. Produces a narrative intelligence report + structured data.

  • Output: sources/deep-research-report.md, deep-research.json, deep-research-sources.json
  • Expect: 100K-200K chars of raw research
Step 3: Expert Recon (Agent, 10-15 min)

Skill: expert-recon (orchestrates scraping + synthesis)

What it does: Scrapes the expert's website, social profiles, and key pages. Synthesizes with deep-research into a boarding readiness score.

  • Output: sources/raw-sources/* (scraped content), {expert}-intelligence-report.html
  • Boarding readiness score >= 7: sources sufficient, auto-proceed
  • Score 4-6: partial sources, human should supplement
  • Score < 4: need manual uploads
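The readiness thresholds above amount to a simple routing rule. A sketch (the function name and return labels are illustrative, not part of the skill):

```python
def recon_action(readiness_score):
    """Map the boarding readiness score (0-10) to the next pipeline action."""
    if readiness_score >= 7:
        return "auto-proceed"              # sources sufficient
    if readiness_score >= 4:
        return "human-supplement"          # partial sources, ask for more
    return "manual-uploads-required"       # not enough public signal
```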
Step 4: MasteryBook Sync (Agent, 2-5 min)

Skill: masterybook-sync

What it does: Uploads all sources to a MasteryBook notebook for team-accessible RAG Q&A. Optional — gracefully degrades if API unavailable.

  • Output: sources/masterybook-sync-status.json, shareable notebook URL
  • Runs in parallel with Step 5
Recon Checkpoint (Verify)

Before proceeding, verify:

  • Intelligence report exists and has substantive content (100K+ chars)
  • Raw sources scraped (at least 2 pages from expert's website)
  • Boarding readiness score >= 7 (or manual sources uploaded to supplement)
  • sources/ directory has at least 3 files

Phase 1: Framework Discovery (Layer 0.25) 15-25 min

Step 5: Expert Framework Creator (Agent, 15-25 min)

Skill: expert-framework-creator

What it does: Reads ALL raw sources and discovers the expert's unconscious quality framework — the criteria they naturally apply but never codified. Uses 3-path convergence protocol.

  • Path A: Framework-Forward (structure to content)
  • Path B: Content-Back (content to structure)
  • Path C: Anti-Pattern-First (failures to principles)
  • Output: expert-framework.json with acronym(s), scoring rubrics, anti-patterns
  • Expect: 1-3 framework acronyms (single, dual, or triple architecture)

Why this matters

This is the CRITICAL DOMINO. The expert's framework becomes the quality bar for everything downstream. Without it, rubrics are incomplete and the clone can drift. The 3-path convergence protocol guards against hallucinating the framework: all 3 paths must reach 90%+ agreement.
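A toy illustration of the convergence gate, assuming each path yields a set of framework element names. The real skill's scoring is richer than set overlap, so treat this purely as a sketch of the threshold logic:

```python
def convergence(path_a, path_b, path_c):
    """Share of discovered framework elements that all three paths agree on."""
    union = set(path_a) | set(path_b) | set(path_c)
    agreed = set(path_a) & set(path_b) & set(path_c)
    return len(agreed) / len(union) if union else 0.0

def framework_gate(path_a, path_b, path_c):
    # 90%+ convergence required before the framework is trusted downstream.
    return "proceed" if convergence(path_a, path_b, path_c) >= 0.90 else "flag-for-human"
```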

Framework Checkpoint (Verify)

Before proceeding, verify:

  • expert-framework.json exists with version, expert name, architecture type
  • Convergence score >= 90% across paths
  • At least 3/5 framework elements have HIGH confidence
  • Anti-patterns defined (at least 2)
  • Scoring rubrics included for each dimension (1-5 scale)

Phase 2: Extraction (Layer 2) 30-60 min

Parallelization

All 5-6 extractors can run in parallel once sources are ready. This saves ~60% wall-clock time vs. sequential. Launch them simultaneously using Task agents.
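The parallel launch pattern can be sketched with a thread pool standing in for Task agents. The extractor names come from this phase; the job function body is a placeholder:

```python
from concurrent.futures import ThreadPoolExecutor

# One job per extractor skill in Phase 2.
EXTRACTORS = ["soul", "voice", "framework", "resource", "offer", "design-system"]

def run_extractor(name):
    # Placeholder: in the real pipeline this launches a Task agent
    # for the named skill and returns its output file.
    return name + ".json"

def run_all_parallel():
    # map() preserves input order, so results line up with EXTRACTORS.
    with ThreadPoolExecutor(max_workers=len(EXTRACTORS)) as pool:
        return list(pool.map(run_extractor, EXTRACTORS))
```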

Step 6a: Soul Extractor (Agent, parallel, 10-15 min)

Skill: soul-extractor

Output: soul.json — thinking loops, values hierarchy, governance rules, anti-patterns, background behaviors, canonical statements

Step 6b: Voice Extractor (Agent, parallel, 10-15 min)

Skill: voice-extractor

Output: voice.json — tone pairs, forbidden phrases, energy spectrum, signature elements, reading level

Step 6c: Framework Extractor (Agent, parallel, 10-20 min)

Skill: framework-extractor

Output: frameworks.json — tools/stacks, phases, taxonomies, routing pathways, cross-phase bridges

Note

For experts with 15+ tools, this may hit the 32K token limit. If so, the extractor chunks output automatically. Watch for truncation warnings.

Step 6d: Resource Extractor (Agent, parallel, 5-10 min)

Skill: resource-extractor

Output: resources.json — books, podcasts, courses, tools, mentors, governance rules for recommendations

Step 6e: Offer Extractor (Agent, parallel, 5-10 min)

Skill: offer-extractor

Output: offers.json — offers, pricing tiers, conversion pathways, monetization governance (ZERO COERCION rules)

Step 6f: Design System Extractor, optional (Agent, parallel, 5-10 min)

Skill: design-system-extractor

Output: Design system HTML — colors, typography, spacing, components. Optional if brand assets already documented.

Extraction Checkpoint (Verify)

All 5 core files must exist:

  • soul.json — has values, anti-patterns, canonical statements
  • voice.json — has tone pairs, forbidden phrases, energy spectrum
  • frameworks.json — has phases, stacks with purpose/inputs/outputs
  • resources.json — has categorized resources with governance
  • offers.json — has pricing tiers and conversion pathways
  • Each file has a confidence_notes section with HIGH/MEDIUM/NEEDS_VERIFICATION
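A checkpoint like this can be automated. In the sketch below, the required file names come from the list above, but the per-file key names are assumptions and should be matched to the real extraction schemas:

```python
import json
from pathlib import Path

# File names from the checkpoint; key names are assumed, adjust to schema.
REQUIRED_KEYS = {
    "soul.json":       ["values", "anti_patterns", "canonical_statements"],
    "voice.json":      ["tone_pairs", "forbidden_phrases", "energy_spectrum"],
    "frameworks.json": ["phases", "stacks"],
    "resources.json":  ["resources", "governance"],
    "offers.json":     ["pricing_tiers", "conversion_pathways"],
}

def check_extractions(workspace):
    """Return a list of blocking issues; empty means the checkpoint passes."""
    issues = []
    for fname, keys in REQUIRED_KEYS.items():
        path = Path(workspace) / fname
        if not path.exists():
            issues.append(fname + " missing")
            continue
        data = json.loads(path.read_text())
        issues += [fname + " lacks " + k for k in keys if k not in data]
        if "confidence_notes" not in data:
            issues.append(fname + " lacks confidence_notes")
    return issues
```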

Phase 3: Audit (Layer 2.5) 10-15 min

Step 7: Gap Analyzer (Agent, 5-10 min)

Skill: gap-analyzer

What it does: Cross-references all extraction files, finds missing information, contradictions, and incomplete sections. Produces prioritized gaps.

  • Output: gaps.json — each gap has severity (critical/important/nice-to-have), source file, specific field, and a question to resolve it
  • 0 critical gaps = proceed to compilation
  • Any critical gap = STOP and resolve before continuing
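The gate logic reduces to a severity filter over gaps.json. A sketch, where the `severity` field and its values come from the output description above:

```python
def gate_on_gaps(gaps):
    """Compilation gate: any unresolved critical gap blocks the pipeline."""
    critical = [g for g in gaps if g.get("severity") == "critical"]
    return ("STOP", critical) if critical else ("proceed", [])
```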
Step 8: Rubric Builder (Agent, 5-10 min)

Skill: rubric-builder

What it does: Generates 6-dimension scoring rubrics from extractions + expert framework. These rubrics score every downstream artifact.

  • Output: rubrics/rubrics.json
  • 6 dimensions: voice fidelity, framework accuracy, governance compliance, completeness, expert quality (FORGE or equivalent), universal quality (GOLDEN+SHARP)
  • Requires expert-framework.json — if missing, warns and builds 5-dimension version
Step 9: Resolve Critical Gaps (Human, time varies)

When: Only if gap-analyzer found critical or important gaps that block compilation.

Action: Review gaps.json, collect missing information from expert/team, update source files or provide answers inline.

Skip if: 0 critical gaps and all important gaps have architectural resolutions (e.g., "clone-compiler will generate templates").

Audit Checkpoint (Verify)

Before compilation:

  • gaps.json exists with 0 critical gaps
  • rubrics/rubrics.json exists with 6 dimensions
  • expert_framework_loaded: true in rubrics.json
  • All important gaps either resolved or marked non-blocking

Phase 4: Compile (Layer 3) 15-30 min

Step 10: Clone Compiler (Agent, 15-25 min)

Skill: clone-compiler

What it does: Reads all 7 extraction files + expert-framework.json and compiles them into a complete AI coaching OS.

  • artifacts/system-prompt.md — 14-section system prompt (30-50KB)
  • artifacts/knowledge-file-part1.md — active phases with full stack specs
  • artifacts/knowledge-file-part2.md — coming-soon phases + resources
  • artifacts/tool-configs.json — machine-readable tool registry

Key behavior

The compiler generates prompt templates for any stacks that lack explicit ones (common: for Samuel, 30 of 36 stacks had no explicit template). These are flagged as "generated" vs. "extracted" in the output.

Step 11: Lead Magnet Builder (Agent, parallel with Step 12, 5-10 min)

Skill: lead-magnet-builder

Output: artifacts/lead-magnet.html — standalone assessment page with questions, scoring, result types, email capture

Step 12: Onboarding Builder (Agent, parallel with Step 11, 5-10 min)

Skill: onboarding-builder

Output: artifacts/onboarding-flow.html — Netflix-style card-based onboarding with discovery questions and tool routing

Phase 5: Validate (Layer 3.5) 15-25 min

Step 13: Clone Tester (Agent, 15-25 min)

Skill: clone-tester

What it does: Runs the compiled clone through 25-50 simulated scenarios across 5 categories. Scores against 8 dimensions. Produces deployment decision.

  • test-results/test-results.json — full scored results
  • test-results/audit-sheet.md — human-readable audit document
  • test-results/simulation-log.json — all test scenarios with responses

| Decision | Score | Action |
| --- | --- | --- |
| GREEN | >= 85% | Proceed to human audit |
| YELLOW | 70-84% | Apply fix instructions, re-test |
| RED | < 70% or governance violation | Block. Trace failure to extraction. Re-extract. |
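The decision thresholds reduce to a small function. A sketch of the rules above, including the governance auto-RED override:

```python
def deployment_decision(score, governance_violations=0):
    """Map a clone-tester score (percent) to the deployment decision.

    Any governance violation forces RED regardless of the overall score.
    """
    if governance_violations > 0 or score < 70:
        return "RED"
    return "GREEN" if score >= 85 else "YELLOW"
```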
Validation Checkpoint (Verify)

Before sending to human review:

  • Deployment decision is GREEN (>= 85%)
  • Governance compliance is 10/10 (PERFECT — any violation is auto-RED)
  • 0 anti-pattern matches
  • audit-sheet.md contains 3 strongest passages, 3 weakest passages
  • Expert review questions generated (5+ questions for the expert)
  • If YELLOW: fix instructions applied and re-tested before proceeding

Phase 6: Human Audit (Layer 4) 1-2 days

Step 14: Generate Audit Form (Agent, 5-10 min)

What: Generate an interactive audit form from the test results and expert review questions. Publish to NowPage for the expert and team to fill out.

Generic Audit Form Template (10 Screens)

| Screen | Purpose | Questions |
| --- | --- | --- |
| Welcome | Introduction + resume capability | Session detection, start/resume |
| 1. Expert Voice | Expert validates clone's voice | 5 open-ended Qs from audit-sheet.md |
| 2. Team Review | Team validates product decisions | 5 open-ended Qs from audit-sheet.md |
| 3. Crisis Handling | Approve crisis voice patterns | Radio select + notes on crisis responses |
| 4. Terminology | Map sensitive terms to preferred language | Table of term mappings (e.g., spiritual → secular alternatives) |
| 5. Feature Decisions | Prioritize capabilities for launch | Card-grid selection (community features, persona choices) |
| 6. Persona Definition | Define AI persona if applicable | Defer or define personality traits |
| 7. Pricing/Business | Lock operational decisions | Alpha pricing model, tier pricing confirmation |
| Review | See all answers, edit before submit | Editable summary with section-level editing |
| Submit | Process and publish results | Auto-generates report, emails team, publishes to NowPage |

Implementation Pattern

Frontend: Single-page HTML with screen-by-screen wizard, localStorage auto-save, card-select components, editable review tables.

Backend: POST to /api/a360/audit-submit route — saves to database (partnership_applications, status: 'audit'), publishes styled report to NowPage, emails team via Resend.

Reference: align360.asapai.net/audit-form (Samuel's live form)

Step 15: Expert Fills Audit Form (Human, 30-60 min)

Send the audit form link to the expert and team. They review:

  • 3 strongest clone passages — "Does this sound like me?"
  • 3 weakest clone passages — "What would you actually say?"
  • Governance violations (should be zero)
  • Side-by-side clone vs. expert comparisons
  • Specific voice/terminology/pricing decisions

The form auto-saves progress. Expert can stop and resume anytime.

Step 16: Process Audit Response (Agent, 10-15 min)

When audit form is submitted:

  • Read the audit report from NowPage
  • Apply any corrections to extraction files (voice adjustments, terminology changes, pricing confirmations)
  • If significant changes: re-run clone-compiler and clone-tester
  • If minor changes: patch system-prompt.md directly
  • Publish updated audit page with response incorporated
Human Audit Checkpoint (Verify)

Before deployment:

  • Expert has reviewed and approved the audit form
  • All voice corrections applied
  • Pricing confirmed (or explicitly deferred)
  • Persona decisions made (or explicitly deferred)
  • If re-compilation was needed, clone-tester re-run and still GREEN
  • Team sign-off received (explicit "go" from product lead)

Phase 7: Deploy 30-60 min

Step 17: Deploy to Platform (Human, 30-60 min)

Action: Load compiled artifacts into the Mastery OS platform.

  • Upload artifacts/system-prompt.md as the system prompt
  • Upload artifacts/knowledge-file-part1.md and part2.md as knowledge files
  • Configure tool activations from artifacts/tool-configs.json
  • Set up the lead magnet page (from artifacts/lead-magnet.html)
  • Configure onboarding flow (from artifacts/onboarding-flow.html)
Step 18: Publish Audit Pages (Agent, 5 min)

Publish to NowPage:

  • /{brand}-pipeline-run-{N} — Pipeline run detail (scores, heatmap)
  • /{brand}-build-registry — Chapter table for this expert's builds
  • Update /factory-build-registry with new expert entry
  • Update /boarding-pack with deployment status
Step 19: 48-Hour Monitoring (Human, 48 hours)

After deployment, monitor for 48 hours before public launch:

  • Watch for governance violations in live usage
  • Check that tool activation works correctly
  • Verify onboarding flow routes users to correct tools
  • Collect alpha user feedback
  • Hold the go/no-go decision for public launch until monitoring is complete

Folder Conventions

Workspace Structure (per expert)

_workspaces/{expert-slug}/
  kanban.json                        ← orchestrator state
  soul.json                          ← L2 extraction
  voice.json                         ← L2 extraction
  frameworks.json                    ← L2 extraction
  resources.json                     ← L2 extraction
  offers.json                        ← L2 extraction
  expert-framework.json              ← L0.25 framework
  gaps.json                          ← L2.5 audit
  {expert}-intelligence-report.html  ← L0 recon
  sources/
    deep-research-report.md          ← L0 deep-research
    deep-research.json
    raw-sources/
      *.txt                          ← scraped web pages
  artifacts/
    system-prompt.md                 ← L3 compiled (primary)
    knowledge-file-part1.md          ← L3 compiled
    knowledge-file-part2.md          ← L3 compiled
    tool-configs.json                ← L3 compiled
    lead-magnet.html                 ← L3 build
    onboarding-flow.html             ← L3 build
  rubrics/
    rubrics.json                     ← L2.5 scoring rubrics
    score-*.json                     ← per-artifact scores
  test-results/
    test-results.json                ← L3.5 validation
    audit-sheet.md                   ← human audit doc
    simulation-log.json              ← all test scenarios

Naming Conventions

| Element | Convention | Example |
| --- | --- | --- |
| Expert slug | lowercase, hyphens | samuel-ngu, tony-gaskins |
| Workspace path | _workspaces/{expert-slug}/ | _workspaces/samuel-ngu/ |
| Pipeline run page | /{brand}-pipeline-run-{N} | /a360-pipeline-run-1 |
| Build registry | /{brand}-build-registry | /a360-build-registry |
| Boarding pack | /boarding-pack (versioned) | /boarding-pack (v2.0) |
| Test result versions | test-results-v{N}.json | test-results-v2.json |
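The slug convention can be enforced mechanically. A minimal sketch (the helper names are illustrative, not part of the orchestrator):

```python
import re

def expert_slug(name):
    """Lowercase, hyphen-separated slug per the convention above."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

def workspace_path(name):
    # _workspaces/{expert-slug}/ per the table above.
    return "_workspaces/" + expert_slug(name) + "/"
```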

Quick Commands Reference

| Action | Command | When |
| --- | --- | --- |
| Cold start (full recon) | /boarding-orchestrator init {name} --recon | New expert, no source files |
| Warm start (files exist) | /boarding-orchestrator init {name} | Source files already uploaded |
| JV demo (fast-lane) | /boarding-orchestrator init {name} --demo | Need a demo for a sales meeting |
| Check status | /boarding-orchestrator status | Any time during pipeline |
| Run next step | /boarding-orchestrator next | After completing current step |
| Score an artifact | /boarding-orchestrator score {artifact} | After compilation |
| Full report | /boarding-orchestrator report | Before human review |

Troubleshooting

Clone score YELLOW (70-84%)

Read fix_instructions in test-results.json. Apply each fix to the system prompt or knowledge files. Re-run clone-tester. Must reach GREEN before human audit.

Clone score RED (<70%)

Trace failures to source extractions. Usually means insufficient source material. Collect more sources (especially raw transcripts), re-run relevant extractors, then re-compile and re-test.

Governance violation detected

Automatic RED regardless of overall score. Check which guardrail was violated in test-results.json. Fix the system prompt section that handles that scenario. Re-test the full suite.

Framework convergence <90%

The 3 extraction paths disagree. Review each path's output. Often means the expert has inconsistent messaging across contexts. Flag for human clarification.

Token limit on framework-extractor

Expert has 15+ tools/stacks. The extractor should auto-chunk. If truncated, re-run with explicit phase boundaries: extract Phase 0-1 first, then Phase 2-4 separately, then merge.
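The manual merge step can be sketched as follows, assuming each chunk shares the frameworks.json top-level shape (the `phases`/`stacks` keys are assumptions; adjust to the real schema):

```python
def merge_framework_chunks(chunks):
    """Merge phase-bounded framework extraction chunks into one dict."""
    merged = {"phases": [], "stacks": []}
    for chunk in chunks:
        merged["phases"].extend(chunk.get("phases", []))
        merged["stacks"].extend(chunk.get("stacks", []))
    return merged
```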

Expert has no existing system prompt

Common for new experts. The pipeline still works — deep-research + raw content provide enough signal. Expert-framework-creator may need more sources to reach HIGH confidence. Expect a slightly lower clone score (80-88%).

Reference: Samuel Ngu Run #1

| Metric | Value | Notes |
| --- | --- | --- |
| Expert | Samuel Ngu | Align360 / FLC / CBI |
| Date | March 26-27, 2026 | 2 sessions |
| Mode | --recon (cold boarding) | 24 cards |
| Sources | 8 files, ~48K words | 2 raw interviews, 3 transcripts, 2 system prompts, intel report |
| Deep research | 197K chars, 47 sources | 5 Perplexity queries |
| Expert framework | FORGE+SHIFT (dual) | 98% convergence, Path B won |
| Extractions | 7 files | All HIGH confidence except voice (MED-HIGH) |
| Gaps | 14 (0 critical) | 7 important, 7 nice-to-have |
| Compiled | v7.0, 4 files | system-prompt 49KB, KF1 54KB, KF2 20KB, configs 28KB |
| Test scenarios | 27 | 5 categories |
| Score | 91.9% GREEN | 0 violations, 0 anti-pattern matches |
| Strongest | Crisis handling (9.78) | Governance PERFECT (10/10) |
| Weakest | FORGE input quality (7.74) | Structural — tactical responses can't be narrative |

Full details: Pipeline Run Report | Build Registry | Factory Registry | Boarding Pack v2

Clone Factory Playbook v1.0 | Athio Agentic Boarding System

18 skills | 8 layers | 24 cards | Paint-by-numbers for any expert

Built from Samuel Ngu reference run (GREEN 91.9%) | March 27, 2026