This is the complete paint-by-numbers guide for boarding a new expert onto the Athio Mastery Platform using the 18-skill Clone Factory pipeline. It covers every step from initial research through deployment, with clear markers for what the agent handles vs. what requires human input.
Total pipeline time: 2-4 hours agent execution + 1-2 days human review
Reference implementation: Samuel Ngu / Align360 (Run #1, GREEN 91.9%)
| Metric | Samuel Run #1 | Expected Range |
|---|---|---|
| Agent execution time | ~3 hours | 2-4 hours |
| Skills invoked | 17 | 17-18 |
| Extraction files generated | 7 | 7 |
| Compiled artifacts | 4 | 4 |
| Test scenarios | 27 | 25-50 |
| Clone score | 91.9% | 85%+ for GREEN |
Collect everything available from the expert. More sources = higher quality clone. Minimum viable: 1 system prompt/methodology doc + 1 unedited content piece.
| Source Type | Priority | Examples |
|---|---|---|
| Existing system prompt / methodology doc | CRITICAL | v6.1 prompt, knowledge files, training docs |
| Raw transcripts (coaching, interviews) | HIGH | Zoom recordings, podcast transcripts |
| Published content | HIGH | Books, blog posts, social media, newsletters |
| Team/partner transcripts | MEDIUM | Strategy calls, product discussions |
| Brand assets | MEDIUM | Website, logo, style guide |
| Pricing/offers docs | MEDIUM | Sales pages, rate cards, partnership agreements |
Run the boarding orchestrator to create the workspace and kanban board.
Command: /boarding-orchestrator init {expert-name} --recon
What happens: Creates _workspaces/{expert-slug}/ with subdirectories, generates kanban.json with 24 cards, starts deep-research immediately.
If sources already exist: Use /boarding-orchestrator init {expert-name} (no --recon, 19 cards, skip to Phase 2)
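The kanban board is plain JSON; here is a minimal sketch of one card, with field names that are illustrative assumptions rather than the real kanban.json schema:

```python
import json

# Hypothetical card shape; field names are illustrative, not the real schema.
card = {
    "id": "card-04",
    "phase": "extraction",
    "skill": "soul-extractor",
    "status": "todo",  # todo | in-progress | done | blocked
    "outputs": ["soul.json"],
}

# A cold start (--recon) board holds 24 such cards; a warm start holds 19.
board = {"expert": "samuel-ngu", "cards": [card]}
print(json.dumps(board, indent=2))
```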
Skill: deep-research (wraps Perplexity Sonar Deep Research API)
What it does: Runs 5 research queries against the expert's public presence. Produces a narrative intelligence report + structured data.
Outputs: sources/deep-research-report.md, deep-research.json, deep-research-sources.json

Skill: expert-recon (orchestrates scraping + synthesis)
What it does: Scrapes the expert's website, social profiles, and key pages. Synthesizes with deep-research into a boarding readiness score.
Outputs: sources/raw-sources/* (scraped content), {expert}-intelligence-report.html

Skill: masterybook-sync
What it does: Uploads all sources to a MasteryBook notebook for team-accessible RAG Q&A. Optional — gracefully degrades if API unavailable.
Outputs: sources/masterybook-sync-status.json, shareable notebook URL

Skill: expert-framework-creator
What it does: Reads ALL raw sources and discovers the expert's unconscious quality framework — the criteria they naturally apply but never codified. Uses 3-path convergence protocol.
Output: expert-framework.json with acronym(s), scoring rubrics, anti-patterns

This is the CRITICAL DOMINO. The expert's framework becomes the quality bar for everything downstream. Without it, rubrics are incomplete and the clone can drift. The 3-path convergence ensures we didn't hallucinate the framework — all 3 paths must agree to 90%+ convergence.
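The convergence gate can be pictured as a set-overlap check. A minimal sketch, assuming convergence is measured as pairwise overlap of the criteria each path discovered (the skill's real scoring is internal):

```python
from itertools import combinations

def convergence(paths: dict[str, set[str]]) -> float:
    """Weakest pairwise Jaccard overlap across the discovery paths."""
    scores = [len(a & b) / len(a | b)
              for (_, a), (_, b) in combinations(paths.items(), 2)]
    return min(scores)

# Illustrative criteria sets; real paths produce richer framework structures.
paths = {
    "A": {"clarity", "alignment", "momentum", "depth"},
    "B": {"clarity", "alignment", "momentum", "depth"},
    "C": {"clarity", "alignment", "momentum", "depth"},
}
assert convergence(paths) >= 0.90  # the gate: all 3 paths must agree to 90%+
```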
All 5-6 extractors can run in parallel once sources are ready. This saves ~60% wall-clock time vs. sequential. Launch them simultaneously using Task agents.
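The fan-out amounts to launching independent jobs over the same sources. A sketch with a thread pool, where run_extractor is a hypothetical stand-in for dispatching a Task agent:

```python
from concurrent.futures import ThreadPoolExecutor

EXTRACTORS = [
    "soul-extractor", "voice-extractor", "framework-extractor",
    "resource-extractor", "offer-extractor", "design-system-extractor",
]

def run_extractor(name: str) -> str:
    # Stand-in for dispatching a Task agent; each reads the shared sources/ dir.
    return f"{name}: done"

# No extractor depends on another, so all six can fan out at once.
with ThreadPoolExecutor(max_workers=len(EXTRACTORS)) as pool:
    results = list(pool.map(run_extractor, EXTRACTORS))
print(results)
```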
Skill: soul-extractor
Output: soul.json — thinking loops, values hierarchy, governance rules, anti-patterns, background behaviors, canonical statements
Skill: voice-extractor
Output: voice.json — tone pairs, forbidden phrases, energy spectrum, signature elements, reading level
Skill: framework-extractor
Output: frameworks.json — tools/stacks, phases, taxonomies, routing pathways, cross-phase bridges
For experts with 15+ tools, this may hit the 32K token limit. If so, the extractor chunks output automatically. Watch for truncation warnings.
Skill: resource-extractor
Output: resources.json — books, podcasts, courses, tools, mentors, governance rules for recommendations
Skill: offer-extractor
Output: offers.json — offers, pricing tiers, conversion pathways, monetization governance (ZERO COERCION rules)
Skill: design-system-extractor
Output: Design system HTML — colors, typography, spacing, components. Optional if brand assets already documented.
Skill: gap-analyzer
What it does: Cross-references all extraction files, finds missing information, contradictions, and incomplete sections. Produces prioritized gaps.
Output: gaps.json — each gap has severity (critical/important/nice-to-have), source file, specific field, and a question to resolve it

Skill: rubric-builder
What it does: Generates 6-dimension scoring rubrics from extractions + expert framework. These rubrics score every downstream artifact.
Output: rubrics/rubrics.json

When: Only if gap-analyzer found critical or important gaps that block compilation.
Action: Review gaps.json, collect missing information from expert/team, update source files or provide answers inline.
Skip if: 0 critical gaps and all important gaps have architectural resolutions (e.g., "clone-compiler will generate templates").
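The skip rule reduces to a filter over gaps.json. A sketch, where the resolution field is an assumed marker for architecturally resolved gaps:

```python
# Sketch of the Phase-2.5 go/no-go rule. Severity values mirror the gaps.json
# description above; "resolution" is a hypothetical field naming the gap's
# architectural resolution (e.g. the compiler will generate templates).
gaps = [
    {"severity": "important", "field": "stack_templates",
     "resolution": "clone-compiler will generate templates"},
    {"severity": "nice-to-have", "field": "podcast_list", "resolution": None},
]

critical = [g for g in gaps if g["severity"] == "critical"]
unresolved_important = [g for g in gaps
                        if g["severity"] == "important" and not g["resolution"]]

# Human input is needed only when something actually blocks compilation.
needs_human_input = bool(critical or unresolved_important)
print(needs_human_input)
```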
Skill: clone-compiler
What it does: Reads all 7 extraction files + expert-framework.json and compiles them into a complete AI coaching OS.
Outputs:
- artifacts/system-prompt.md — 14-section system prompt (30-50KB)
- artifacts/knowledge-file-part1.md — active phases with full stack specs
- artifacts/knowledge-file-part2.md — coming-soon phases + resources
- artifacts/tool-configs.json — machine-readable tool registry

The compiler generates prompt templates for any stacks that lack explicit templates (common — Samuel had 30/36 without explicit templates). These are flagged as "generated" vs "extracted" in the output.
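The generated-vs-extracted flag might be represented like this; field names are assumptions, not the real tool-configs.json schema:

```python
# Sketch of how a compiled stack entry could carry its provenance flag.
# "template_source" is an assumed field name for the generated/extracted mark.
stacks = [
    {"stack": "clarity-stack", "template_source": "extracted"},
    {"stack": "momentum-stack", "template_source": "generated"},  # compiler-filled
]

generated = [s["stack"] for s in stacks if s["template_source"] == "generated"]
print(f"{len(generated)}/{len(stacks)} templates generated")
```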
Skill: lead-magnet-builder
Output: artifacts/lead-magnet.html — standalone assessment page with questions, scoring, result types, email capture
Skill: onboarding-builder
Output: artifacts/onboarding-flow.html — Netflix-style card-based onboarding with discovery questions and tool routing
Skill: clone-tester
What it does: Runs the compiled clone through 25-50 simulated scenarios across 5 categories. Scores against 8 dimensions. Produces deployment decision.
Outputs:
- test-results/test-results.json — full scored results
- test-results/audit-sheet.md — human-readable audit document
- test-results/simulation-log.json — all test scenarios with responses

| Decision | Score | Action |
|---|---|---|
| GREEN | >= 85% | Proceed to human audit |
| YELLOW | 70-84% | Apply fix instructions, re-test |
| RED | < 70% or governance violation | Block. Trace failure to extraction. Re-extract. |
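The decision table reduces to a small function (a sketch; the real tester also checks per-dimension scores and anti-pattern matches):

```python
def deployment_decision(score: float, governance_violation: bool) -> str:
    """Map a clone-tester run to GREEN/YELLOW/RED per the table above."""
    if governance_violation or score < 0.70:
        return "RED"     # block, trace failure to extraction, re-extract
    if score < 0.85:
        return "YELLOW"  # apply fix instructions, re-test
    return "GREEN"       # proceed to human audit

assert deployment_decision(0.919, False) == "GREEN"  # Samuel's 91.9% run
assert deployment_decision(0.90, True) == "RED"      # governance overrides score
```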
What: Generate an interactive audit form from the test results and expert review questions. Publish to NowPage for the expert and team to fill out.
| Screen | Purpose | Questions |
|---|---|---|
| Welcome | Introduction + resume capability | Session detection, start/resume |
| 1. Expert Voice | Expert validates clone's voice | 5 open-ended Qs from audit-sheet.md |
| 2. Team Review | Team validates product decisions | 5 open-ended Qs from audit-sheet.md |
| 3. Crisis Handling | Approve crisis voice patterns | Radio select + notes on crisis responses |
| 4. Terminology | Map sensitive terms to preferred language | Table of term mappings (e.g., spiritual → secular alternatives) |
| 5. Feature Decisions | Prioritize capabilities for launch | Card-grid selection (community features, persona choices) |
| 6. Persona Definition | Define AI persona if applicable | Defer or define personality traits |
| 7. Pricing/Business | Lock operational decisions | Alpha pricing model, tier pricing confirmation |
| Review | See all answers, edit before submit | Editable summary with section-level editing |
| Submit | Process and publish results | Auto-generates report, emails team, publishes to NowPage |
Frontend: Single-page HTML with screen-by-screen wizard, localStorage auto-save, card-select components, editable review tables.
Backend: POST to /api/a360/audit-submit route — saves to database (partnership_applications, status: 'audit'), publishes styled report to NowPage, emails team via Resend.
Reference: align360.asapai.net/audit-form (Samuel's live form)
Send the audit form link to the expert and team. They review the clone's voice, crisis handling, terminology mappings, feature priorities, and pricing decisions (the screens above).
The form auto-saves progress. Expert can stop and resume anytime.
When the audit form is submitted, proceed to deployment.
Action: Load compiled artifacts into the Mastery OS platform.
Load:
- artifacts/system-prompt.md as the system prompt
- artifacts/knowledge-file-part1.md and part2.md as knowledge files
- artifacts/tool-configs.json as the tool registry
- The lead magnet (artifacts/lead-magnet.html)
- The onboarding flow (artifacts/onboarding-flow.html)

Publish to NowPage:
- /{brand}-pipeline-run-{N} — Pipeline run detail (scores, heatmap)
- /{brand}-build-registry — Chapter table for this expert's builds
- /factory-build-registry with new expert entry
- /boarding-pack with deployment status

After deployment, monitor for 48 hours before public launch:
| Element | Convention | Example |
|---|---|---|
| Expert slug | lowercase, hyphens | samuel-ngu, tony-gaskins |
| Workspace path | _workspaces/{expert-slug}/ | _workspaces/samuel-ngu/ |
| Pipeline run page | /{brand}-pipeline-run-{N} | /a360-pipeline-run-1 |
| Build registry | /{brand}-build-registry | /a360-build-registry |
| Boarding pack | /boarding-pack (versioned) | /boarding-pack (v2.0) |
| Test result versions | test-results-v{N}.json | test-results-v2.json |
| Action | Command | When |
|---|---|---|
| Cold start (full recon) | /boarding-orchestrator init {name} --recon | New expert, no source files |
| Warm start (files exist) | /boarding-orchestrator init {name} | Source files already uploaded |
| JV demo (fast-lane) | /boarding-orchestrator init {name} --demo | Need a demo for a sales meeting |
| Check status | /boarding-orchestrator status | Any time during pipeline |
| Run next step | /boarding-orchestrator next | After completing current step |
| Score an artifact | /boarding-orchestrator score {artifact} | After compilation |
| Full report | /boarding-orchestrator report | Before human review |
If the score is YELLOW (70-84%): Read fix_instructions in test-results.json. Apply each fix to the system prompt or knowledge files. Re-run clone-tester. The clone must reach GREEN before human audit.
If the score is RED (< 70%): Trace failures to source extractions. A RED usually means insufficient source material. Collect more sources (especially raw transcripts), re-run the relevant extractors, then re-compile and re-test.
If a governance violation occurs: This is an automatic RED regardless of overall score. Check which guardrail was violated in test-results.json. Fix the system prompt section that handles that scenario. Re-test the full suite.
If framework convergence falls below 90%: The 3 extraction paths disagree. Review each path's output. Low convergence often means the expert has inconsistent messaging across contexts. Flag for human clarification.
If framework-extractor output is truncated: This happens when the expert has 15+ tools/stacks. The extractor should auto-chunk; if output is still truncated, re-run with explicit phase boundaries: extract Phase 0-1 first, then Phase 2-4 separately, then merge.
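The merge step after a chunked re-run is mechanical. A sketch, with frameworks.json reduced to two illustrative keys (the real file also holds taxonomies, routing pathways, and bridges):

```python
def merge_chunks(*chunks: dict) -> dict:
    """Merge per-phase extraction chunks into one frameworks.json-shaped dict."""
    merged: dict = {"phases": {}, "tools": []}
    for chunk in chunks:
        merged["phases"].update(chunk.get("phases", {}))
        merged["tools"].extend(chunk.get("tools", []))
    return merged

# Illustrative chunk contents for a Phase 0-1 run and a Phase 2-4 run.
early = {"phases": {"0": "foundation", "1": "discovery"}, "tools": ["t1"]}
late = {"phases": {"2": "build", "3": "scale", "4": "legacy"}, "tools": ["t2"]}
combined = merge_chunks(early, late)
assert len(combined["phases"]) == 5  # Phase 0-1 plus Phase 2-4, no truncation
```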
If the expert has a limited public presence: This is common for new experts. The pipeline still works, since deep-research + raw content provide enough signal, but expert-framework-creator may need more sources to reach HIGH confidence. Expect a slightly lower clone score (80-88%).
| Metric | Value | Notes |
|---|---|---|
| Expert | Samuel Ngu | Align360 / FLC / CBI |
| Date | March 26-27, 2026 | 2 sessions |
| Mode | --recon (cold boarding) | 24 cards |
| Sources | 8 files, ~48K words | 2 raw interviews, 3 transcripts, 2 system prompts, intel report |
| Deep research | 197K chars, 47 sources | 5 Perplexity queries |
| Expert framework | FORGE+SHIFT (dual) | 98% convergence, Path B won |
| Extractions | 7 files | All HIGH confidence except voice (MED-HIGH) |
| Gaps | 14 (0 critical) | 7 important, 7 nice-to-have |
| Compiled | v7.0, 4 files | system-prompt 49KB, KF1 54KB, KF2 20KB, configs 28KB |
| Test scenarios | 27 | 5 categories |
| Score | 91.9% GREEN | 0 violations, 0 anti-pattern matches |
| Strongest | Crisis handling (9.78) | Governance PERFECT (10/10) |
| Weakest | FORGE input quality (7.74) | Structural — tactical responses can't be narrative |
Full details: Pipeline Run Report | Build Registry | Factory Registry | Boarding Pack v2
Clone Factory Playbook v1.0 | Athio Agentic Boarding System
18 skills | 8 layers | 24 cards | Paint-by-numbers for any expert
Built from Samuel Ngu reference run (GREEN 91.9%) | March 27, 2026