Team Briefing — March 30, 2026

The Expert Cloning Factory

What we built in 5 days, what it means for Align360, and what it unlocks for every expert after Samuel. This is the meta-overview for the team.

For: Will (product), Derek (sales), Sumit (dev), Aaron (implementation), Drew (strategy)

Contents
1. The Vision & Second-Order Effects
2. What Happened (Mar 26-30 Timeline)
3. Where We Are Right Now
4. Known Gaps & What's Blocking
5. The Feedback Loop (How the Clone Gets Better)
6. The Meta: Expert #2 and Beyond
7. What the Team Needs to Do
8. All Live Links

1. The Vision & Second-Order Effects

We're not just building a chatbot that sounds like Samuel. We're building a factory that can clone any expert's coaching IP into an AI system — and the process of building Samuel's clone IS the factory being built.

1st: Clone Samuel's Coaching IP
An AI that thinks, speaks, and coaches like Samuel Ngu. Knows his 36 stacks, 5 phases, and FORGE+SHIFT quality framework. Passes voice fidelity, governance, and framework accuracy tests at 93.8%.

2nd: The Process Becomes a Repeatable Pipeline
Every skill, build script, API endpoint, and quality rubric we created for Samuel is parameterized. Run boarding-orchestrator --expert=muka and the entire pipeline fires for a new expert. 18 skills, zero rebuilding.

3rd: Expert Onboarding Collapses from Months to Days
The traditional approach: months of interviews, manual prompt engineering, guess-and-check testing. Our approach: automated extraction pipeline + structured human validation + corrections loop. Expert provides source materials, pipeline runs, expert reviews, corrections applied, clone ships.

4th: Athio's Core Product: Expert-as-a-Service Infrastructure
This isn't just Align360. This is the engine behind every expert partnership Athio signs. Derek's sales pipeline, Will's platform, Sumit's engineering — all feed into and benefit from this factory. Every expert Jason onboards makes the factory better. Every factory improvement makes the next expert cheaper and faster.

The asymmetric bet: We invested 5 days of intensive building. That investment produces: (1) Samuel's clone (revenue), (2) a repeatable factory (leverage), (3) infrastructure that compounds with every expert (moat). The marginal cost of Expert #2 is a fraction of Expert #1.

2. What Happened (Mar 26-30)

Mar 26-27
Pipeline Run #1 — GREEN (91.9%)
All 18 extraction skills ran on Samuel's IP. Soul, voice, frameworks, resources, and offers extracted into structured JSON. Expert quality framework (FORGE+SHIFT) identified at 98% convergence. Clone compiled: system prompt v7.0 + knowledge files + tool configs. 27 test scenarios, 0 governance violations.
Mar 28
Pipeline Run #2 — GREEN (93.8%) + Smoke Test + Factory Engine
System prompt upgraded to v7.1 (added Hat Debate self-check, Echo-Check). 50 test scenarios (up from 27). All dimensions improved. Smoke test against gold standards: foundation SOLID, 3 critical gaps identified (Hat Debate, CTA Psychology, Failure Recovery). Process Factory Engine built and deployed at factory.asapai.net — config-driven DAG execution for any expert pipeline.
Mar 29
Audit v3 + Factory Workbench v3 + Extraction Review
Interactive expert audit page shipped (19 cards, live clone chat, voice+text feedback). Knowledge files loaded into clone context (54KB) — clone can now actually execute tools. Process Factory Workbench upgraded to 3-panel layout with 23 blocks across 10 phases. Extraction review page built (295 items for Samuel to verify).
Mar 30
Feedback Loop Wired End-to-End
Complete corrections cycle: audit submit → auto-classify findings → improvement items → triage dashboard → push to factory → block re-run with corrections. UX fixes: smoke test mode, mic on all note fields, admin mode for API keys, expandable chat. Team briefing (this page).
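The Process Factory's config-driven DAG execution (deployed Mar 28) can be pictured as a list of block configs with declared dependencies. The shape below is an illustrative sketch, not the factory's real schema — the block names and fields are assumptions:

```typescript
// Hypothetical shape of a Process Factory block config. The real schema
// lives in the process-factory repo; names and fields here are illustrative.
type BlockConfig = {
  id: string;          // unique block name
  phase: string;       // pipeline phase the block belongs to
  dependsOn: string[]; // upstream blocks that must finish first
};

// A tiny slice of what an expert pipeline DAG might look like.
const pipeline: BlockConfig[] = [
  { id: "soul-extractor", phase: "extract", dependsOn: [] },
  { id: "voice-extractor", phase: "extract", dependsOn: [] },
  { id: "clone-compiler", phase: "build", dependsOn: ["soul-extractor", "voice-extractor"] },
];

// Topological order: repeatedly pick a block whose dependencies are all done.
function runOrder(blocks: BlockConfig[]): string[] {
  const done = new Set<string>();
  const order: string[] = [];
  while (order.length < blocks.length) {
    const ready = blocks.find(
      (b) => !done.has(b.id) && b.dependsOn.every((d) => done.has(d))
    );
    if (!ready) throw new Error("cycle in DAG");
    done.add(ready.id);
    order.push(ready.id);
  }
  return order;
}
```

The point of the config-driven design: swapping `--expert=samuel` for `--expert=new-name` changes the inputs, not the DAG.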

3. Where We Are Right Now

Clone Score: 93.8%
Skills Built: 18/18
Test Scenarios: 50
Governance: 10.0
Human Reviews: 0
Factory Blocks: 23

The Pipeline (where we are in the flow)

Research -> Framework -> Extract -> Audit (auto) -> Build -> Test (auto) -> Validate (human) -> Correct -> Ship

Everything left of "Validate" is done and automated. We're at the human validation gate — Samuel needs to test the clone and review the extractions. After that, corrections flow back through the factory automatically.

Dimension Scores (Run #2)

Dimension                  | Score | Notes
Governance                 | 10.0  | Perfect. Zero violations, zero boundary crossings.
Framework Accuracy         | 9.69  | Stacks, phases, and tool recommendations accurate.
Voice Fidelity             | 9.61  | Pastoral tone, 8th-grade reading level, no forbidden phrases.
Failure Recovery           | 9.58  | Handles edge cases, confused users, and off-topic turns gracefully.
Self-Check (Hat Debate)    | 9.43  | Catches its own errors before responding.
GOLDEN (universal quality) | 9.36  | Generative, original, layered, deep, nuanced.
Completeness               | 9.29  | Full responses, not truncated or shallow.
SHARP (resonance)          | 9.22  | Specific, human, actionable, rooted, personal.
SHIFT (expert output)      | 9.07  | Sovereignty-restored, hardship-reframed, inertia-broken.
FORGE (expert input)       | 8.51  | Weakest dimension but improved most (+0.77). Lived experience is hardest to simulate.

Important caveat: These are AI-scoring-AI numbers. Treat them as a ceiling estimate. The real test is Samuel saying "this sounds like me." That's what the audit page does — and it hasn't happened yet.

4. Known Gaps & What's Blocking

Critical Gaps (must fix before beta)

Gap                      | Coverage | Status             | Blocked By
CTA Psychology           | ~10%     | OPEN               | Raw coaching transcripts from Samuel
Failure Recovery testing | ~60%     | Improved in Run #2 | More adversarial scenarios needed
Human validation         | 0%       | OPEN               | Samuel's time to run the audit

Important Gaps

Gap                             | Status | Blocked By
Pattern-breaking humor in voice | OPEN   | Raw transcripts
Mr. JC / Mr. Bunny persona      | OPEN   | Samuel's persona doc
CBI coaching pricing confirmed  | OPEN   | Samuel/team decision
Alpha pricing (first 5 users)   | OPEN   | Team decision
2 unidentified book titles      | OPEN   | Samuel input

What's NOT Blocked

5. The Feedback Loop (How the Clone Gets Better)

This is the system we built for iterative improvement. It runs the same for Samuel and for every future expert.

Expert runs audit (talks to clone, rates scenarios, gives corrections)
  |
  v
Submit --> auto-classify findings into improvement_items
  |         (voice_gap, framework_gap, tone_drift, etc.)
  v
Improvement Items Dashboard (team triages: set priority, add corrections)
  |
  v
Push to Factory --> corrections injected into block state
  |
  v
Factory re-runs targeted blocks (e.g., voice-extractor gets voice corrections)
  |
  v
Clone re-compiled with updated extractions
  |
  v
Clone re-tested --> scores compared to previous run
  |
  v
If improved --> items marked "resolved"
If not --> next iteration (max 3 per cycle)

Two human review touchpoints:

  1. Audit v3 (link) — Expert tests clone behavior. "Does this sound like me?" Live chat + scenario ratings + voice/text feedback.
  2. Extraction Review (link) — Expert reviews extracted data. "Are these my actual values, frameworks, voice patterns?" 295 items across 6 modules.

Both feed into the same improvement_items system. Both trigger the same correction cycle. Both work for any expert.
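The auto-classify step can be sketched as a mapping from raw audit findings into typed improvement_items. The production classifier may well use an LLM; this keyword-rule version is a simplified illustration of the data flow only, and the rules themselves are assumptions:

```typescript
// Illustrative sketch of auto-classifying audit findings into improvement
// items. The category names come from the briefing; the matching rules
// below are placeholder assumptions, not the real classifier.
type Finding = { scenarioId: string; note: string };
type ImprovementItem = { scenarioId: string; category: string; note: string };

const RULES: Array<[RegExp, string]> = [
  [/sound|tone|phrase|voice/i, "voice_gap"],
  [/stack|phase|framework|tool/i, "framework_gap"],
  [/formal|stiff|robotic/i, "tone_drift"],
];

function classify(findings: Finding[]): ImprovementItem[] {
  return findings.map((f) => {
    // First rule whose pattern matches the expert's note wins.
    const hit = RULES.find(([re]) => re.test(f.note));
    return {
      scenarioId: f.scenarioId,
      category: hit ? hit[1] : "uncategorized",
      note: f.note,
    };
  });
}
```

Each resulting item then lands on the triage dashboard with its category pre-filled, so the team only sets priority and adds corrections.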

6. The Meta: Expert #2 and Beyond

What Expert #2 Needs (Time Estimate: 2-3 days)

When Derek signs the next JV partner, here's what happens:

Step | What                                                            | Time                | Status
1    | Collect source materials (transcripts, books, courses, website) | 1-2 hrs             | Process exists
2    | Run boarding-orchestrator --expert=new-name                     | 2-3 hrs (automated) | 18 skills ready
3    | Run build-extraction-review.js --expert=new-name                | 2 min               | Script exists
4    | Run build-knowledge-file-ts.js --expert=new-name                | 2 min               | Script exists
5    | Build audit page from test results                              | ~1 hr               | Script needs building
6    | Expert reviews extraction + runs audit                          | 30-60 min           | Pages ready
7    | Corrections cycle (1-3 iterations)                              | 1-2 hrs each        | Loop wired
8    | Deploy to Mastery OS                                            | TBD                 | Will's platform

The one script still needed: build-audit-page.js --expert=slug — generates the interactive audit page from any expert's test results + extraction data. Same pattern as the extraction review build script. This is the last piece of the factory pattern for the review system.
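Since the script doesn't exist yet, here is a minimal sketch of what its page-generation core might look like, mirroring the extraction-review builder. Everything here is an assumption — the data shape, function name, and markup are illustrative, and the real script would also wire in the live chat, ratings, and feedback widgets:

```typescript
// Hypothetical core of a build-audit-page script (sketched in TypeScript).
// Assumes test results are already loaded; shapes below are invented.
type Scenario = { id: string; prompt: string; score: number };

function renderAuditPage(expert: string, scenarios: Scenario[]): string {
  // One card per test scenario, same pattern as the extraction review page.
  const cards = scenarios
    .map(
      (s) =>
        `<section class="card" data-scenario="${s.id}">` +
        `<p>${s.prompt}</p><span class="score">${s.score.toFixed(2)}</span>` +
        `</section>`
    )
    .join("\n");
  return `<html><body><h1>Audit: ${expert}</h1>\n${cards}\n</body></html>`;
}
```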

What This Unlocks

What Needs More Work Before Scaling

  1. Samuel validates — First human through the loop. Proves the process works.
  2. Build build-audit-page.js — Factory-pattern build script for audit pages.
  3. Mastery OS integration — How clones deploy to Will's platform (system prompt + knowledge files + tool configs).
  4. Pricing finalized — $178/yr (Optimized) and $298/yr (Full Stack) proposed, needs team alignment.
  5. Marketing pages — L5 blocks exist in factory (marketing-page-builder, email-sequence-builder) but haven't been run yet.

7. What the Team Needs to Do

Samuel (Expert)

  1. Open audit v3 — type 1672 in the API key box, chat with the clone, rate scenarios, submit. (~20 min)
  2. Open extraction review — verify 295 extracted items, flag what's wrong. (~30 min)
  3. Provide 2-3 raw coaching transcripts (unblocks CTA Psychology, voice calibration)
  4. Confirm CBI pricing tiers
  5. Provide Mr. JC / Mr. Bunny persona document

Will (Product)

  1. Review the feedback loop architecture — this is how the clone improves
  2. Test the clone yourself via audit v3 (select "Product Lead" role, type 1672)
  3. Define the Mastery OS integration path: how does a compiled clone (prompt + knowledge files + tool configs) deploy?
  4. Align on pricing: $178/yr Optimized, $298/yr Full Stack

Derek (Sales)

  1. Review this briefing — you now have the story for JV partners
  2. Test the clone yourself via audit v3 (select "Team Member" role, type 1672)
  3. Identify Expert #2 candidate — once Samuel validates, we can onboard in days
  4. Sales narrative: "We extract your IP, build your clone, you review, it ships. We've done it for Samuel; the infrastructure is proven."

Sumit (Dev)

  1. Review the API endpoints: /api/a360/clone-chat, /api/a360/audit-submit, /api/a360/improvement-items
  2. All code in folio-saas repo (Vercel auto-deploys on push to main)
  3. Database: Supabase (clone_feedback, improvement_items, system_prompts tables)
  4. Process Factory: separate repo at E:\process-factory, deploys to factory.asapai.net
  5. Clone chat uses 3-tier API cascade: user key → OpenRouter → Anthropic (no single point of failure)
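The 3-tier cascade in item 5 reduces to a simple fallback loop: try each tier in order, return the first success, and only fail if all three tiers fail. A minimal sketch (endpoint details and error handling simplified; the `Tier` shape is an assumption):

```typescript
// Sketch of the 3-tier key cascade: user key -> OpenRouter -> Anthropic.
// Only the fallback order is modeled; real calls and auth are omitted.
type Tier = { name: string; call: () => Promise<string> };

async function chatWithFallback(tiers: Tier[]): Promise<string> {
  let lastErr: unknown;
  for (const tier of tiers) {
    try {
      return await tier.call(); // first tier that succeeds wins
    } catch (err) {
      lastErr = err; // fall through to the next tier
    }
  }
  throw lastErr; // all tiers failed: no single point of failure, but no miracle either
}
```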

8. All Live Links

For Samuel / Testers

Page                     | Purpose
samuel-audit-v3          | Interactive audit: chat with clone, rate scenarios, give feedback
samuel-extraction-review | Review 295 extracted items from 6 modules

For Team / Operations

Page              | Purpose
improvement-items | Triage dashboard: review findings, push corrections to factory
factory.asapai.net| Process Factory: DAG execution, block management, run history
mission-control   | Overall project dashboard

For Reference / Architecture

Page                       | Purpose
feedback-loop-architecture | Full PRD: how the corrections loop works
feedback-loop-handoff      | Technical handoff: copy-paste for factory/FORGE sessions
session-wrapup-mar30       | Jason's session recap: every file, commit, and change
smoke-test-report          | Pipeline vs gold standards comparison
a360-pipeline-run-1        | Run #1 detailed results