Paint by Numbers — Mar 30, 2026

Feedback Loop Handoff

What Jason does now, what to paste into the Factory session, and what FORGE needs to know.

Status at a glance:
- Audit v3: SHIPPED
- Improvement Items API: DEPLOYED
- Push to Factory Button: BUILT
- Migration 005: PENDING
- Corrections-Ingest: NOT BUILT YET
- PRD: PUBLISHED

1. Jason's Actions (Do These Now)

1. Run Migration 005 in Supabase
Supabase Dashboard → SQL Editor → paste the SQL below → Run. Adds tester_role, target_block, consensus_score, flagged_by, prompt_version, and extraction_version columns, plus performance indexes.
Migration 005 SQL
-- Feedback Loop Infrastructure (Migration 005)
-- Run in Supabase SQL Editor

-- 1. Add tester_role to clone_feedback
ALTER TABLE clone_feedback
  ADD COLUMN IF NOT EXISTS tester_role TEXT DEFAULT 'team';

-- 2. Add target_block + consensus columns to improvement_items
ALTER TABLE improvement_items ADD COLUMN IF NOT EXISTS target_block TEXT;
ALTER TABLE improvement_items ADD COLUMN IF NOT EXISTS consensus_score NUMERIC DEFAULT 0;
ALTER TABLE improvement_items ADD COLUMN IF NOT EXISTS flagged_by JSONB DEFAULT '[]';
ALTER TABLE improvement_items ADD COLUMN IF NOT EXISTS prompt_version TEXT;
ALTER TABLE improvement_items ADD COLUMN IF NOT EXISTS extraction_version TEXT;

-- 3. Backfill target_block from existing skill_target
UPDATE improvement_items
SET target_block = skill_target
WHERE target_block IS NULL AND skill_target IS NOT NULL;

-- 4. Performance indexes
CREATE INDEX IF NOT EXISTS idx_improvement_items_run_status
  ON improvement_items(factory_run_id, status)
  WHERE factory_run_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_improvement_items_expert_scenario
  ON improvement_items(expert_slug, source_scenario)
  WHERE source_scenario IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_improvement_items_target_block
  ON improvement_items(target_block, status)
  WHERE target_block IS NOT NULL;
2. Verify the Dashboard Loads
Open align360.asapai.net/improvement-items. It should show the triage table, with an orange "Push to Factory" button in the header (disabled until items with target_block exist).
3. Test the Full Flow (Optional)
Open samuel-audit-v3 → enter 1672 for admin mode → chat with the clone → rate a few scenarios → Submit. Wait ~10s, then check the improvement-items dashboard for auto-classified items. Each should have target_block set.
4. Share with Factory Session
Paste the Factory Session Handoff block (Section 2 below) into the Process Factory Claude session. That session needs to build POST /api/corrections-ingest.
5. Share with FORGE Session
Paste the FORGE Session Handoff block (Section 3 below) into FORGE. It needs to know how to monitor the loop and trigger re-runs.
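Step 3's manual dashboard check can also be scripted. A minimal Python sketch of the verification, assuming the response shape of GET /api/a360/improvement-items (the sample payload here is illustrative, not real data):

```python
# Sketch: verify that auto-classified improvement items all carry a
# target_block (the dashboard's "Push to Factory" button needs it).
# In practice the payload would be fetched from the live endpoint.

def unmapped_items(response: dict) -> list[str]:
    """Return ids of improvement items that lack a target_block."""
    return [
        item["id"]
        for item in response.get("items", [])
        if not item.get("target_block")
    ]

# Illustrative response (not real data).
sample = {
    "items": [
        {"id": "a1", "finding_type": "voice_gap", "target_block": "voice-extractor"},
        {"id": "b2", "finding_type": "framework_gap", "target_block": None},
    ],
    "total": 2,
}

print(unmapped_items(sample))  # ids that still need triage
```

If this prints a non-empty list after an audit run, audit-analyze classified something without a block mapping and those items need manual triage before pushing.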

2. Factory Session Handoff

Copy this entire block and paste it into the Process Factory Claude session.

Paste into Factory Session
## Feedback Loop Integration — From Audit Session (Mar 30)

### What's Built (folio-saas, deployed on Vercel)

The audit flow now creates structured `improvement_items` that need to flow into factory block re-runs. Here's what exists:

**Pages:**
- `align360.asapai.net/samuel-audit-v3` — Interactive expert audit (19 cards, live chat, voice+text feedback)
- `align360.asapai.net/samuel-extraction-review` — Expert reviews 6 extraction JSONs (295 items across 9 cards)
- `align360.asapai.net/improvement-items` — Triage dashboard with "Push to Factory" button
- `align360.asapai.net/feedback-loop-architecture` — Full PRD for the corrections loop

**API Endpoints (all live at folio-saas on Vercel):**
- `POST /api/a360/audit-submit` — Saves audit, publishes report, emails team, auto-triggers analyze
- `POST /api/a360/audit-analyze` — Classifies findings into improvement_items (voice_gap, framework_gap, tone_drift, etc.)
- `POST /api/a360/extraction-review-submit` — Saves extraction review, creates improvement_items per flagged item
- `GET /api/a360/improvement-items?expert_slug=samuel-ngu&status=pending` — Query items with filters
- `PATCH /api/a360/improvement-items/{id}` — Update status, priority, expert_correction, target_block, factory_run_id

**Database (Supabase):**
- `improvement_items` table — columns: id, feedback_id, expert_slug, finding_type, skill_target, target_block, priority, source_scenario, source_text, expert_correction, status, consensus_score, flagged_by, factory_run_id, factory_block_id, prompt_version, extraction_version
- `clone_feedback` table — audit submissions with chat_transcript, chat_feedback, tester_role
- Every improvement_item has `target_block` set (maps to factory block IDs)

### What YOU Need to Build

**`POST /api/corrections-ingest`** — receives grouped corrections from the improvement-items dashboard.

The "Push to Factory" button on the dashboard calls:

```
POST factory.asapai.net/api/corrections-ingest
Content-Type: application/json

{
  "run_id": "uuid-of-factory-run",
  "corrections": [
    {
      "block_id": "voice-extractor",
      "source": "improvement-items-dashboard",
      "instruction": "Apply 3 corrections from expert feedback",
      "items": [
        {
          "finding": "Clone uses 'leverage' which Samuel never says",
          "correction": "Replace with 'use' or 'lean into'",
          "priority": "critical",
          "item_id": "uuid-of-improvement-item"
        }
      ]
    },
    {
      "block_id": "soul-extractor",
      "source": "improvement-items-dashboard",
      "items": [
        {
          "finding": "Soul extraction missed the 'second chance' narrative",
          "correction": "Core narrative should center on redemption through service",
          "priority": "critical",
          "item_id": "uuid"
        }
      ]
    }
  ]
}
```

**Expected Response:**

```json
{
  "success": true,
  "run_id": "uuid",
  "results": [
    { "block_id": "voice-extractor", "items_count": 3, "status": "stored" },
    { "block_id": "soul-extractor", "items_count": 1, "status": "stored" }
  ],
  "total_items": 4
}
```

**How it works:**
1. Store corrections in the block's state (`block.state.corrections[]`)
2. When the block is re-run via `/api/execute`, inject corrections into the LLM context as high-priority instructions
3. After the block completes, clear corrections from state so they don't re-apply

**Block ID mapping (these are the `target_block` values the dashboard sends):**

| target_block | Factory Block |
|---|---|
| voice-extractor | Voice Extractor |
| soul-extractor | Soul Extractor |
| framework-extractor | Framework Extractor |
| resource-extractor | Resource Extractor |
| offer-extractor | Offer Extractor |
| clone-compiler | Clone Compiler |
| gap-analyzer | Gap Analyzer |
| clone-tester | Clone Tester |

**After building corrections-ingest:**
- The dashboard button will work end-to-end (it already has the JS to call your endpoint)
- Operator resets targeted blocks in the factory GUI; blocks re-run with corrections injected
- After a re-run completes, the dashboard can update items to "resolved"

**Reference:**
- Full PRD: align360.asapai.net/feedback-loop-architecture (11 sections)
- Your own v5 handoff doc: memory/factory-v5-corrections-api.md (you wrote this — it aligns)
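For reference only (not part of the paste block): the three-step correction lifecycle in the handoff reduces to store → inject → clear. A hedged Python sketch; `Block`, its `state` dict, and the prompt format are assumptions, not the factory's actual internals:

```python
# Sketch of the corrections lifecycle: store corrections in block state,
# inject them into the LLM context on re-run, clear them after completion.
# Shapes are assumed; the real factory engine will differ.

class Block:
    def __init__(self, block_id: str):
        self.block_id = block_id
        self.state: dict = {"corrections": []}

def ingest_corrections(blocks: dict, payload: dict) -> dict:
    """Step 1: store each correction group in its block's state."""
    results = []
    for group in payload["corrections"]:
        block = blocks[group["block_id"]]
        block.state["corrections"].extend(group["items"])
        results.append({
            "block_id": group["block_id"],
            "items_count": len(group["items"]),
            "status": "stored",
        })
    return {
        "success": True,
        "run_id": payload["run_id"],
        "results": results,
        "total_items": sum(r["items_count"] for r in results),
    }

def build_llm_context(block: Block, base_prompt: str) -> str:
    """Step 2: inject stored corrections as high-priority instructions."""
    lines = [
        f"- {c['finding']} -> {c['correction']} [{c['priority']}]"
        for c in block.state["corrections"]
    ]
    if not lines:
        return base_prompt
    return base_prompt + "\n\nHIGH-PRIORITY EXPERT CORRECTIONS:\n" + "\n".join(lines)

def complete_block(block: Block) -> None:
    """Step 3: clear corrections so they don't re-apply on the next run."""
    block.state["corrections"] = []
```

The clear step matters: without it, a second re-run would re-apply stale corrections on top of output that already incorporated them.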

3. FORGE Session Handoff

Copy this entire block and paste it into the FORGE session.

Paste into FORGE Session
## Feedback Loop — FORGE Integration Brief (Mar 30)

### What Exists

A complete feedback loop is being wired between three systems:
1. **Audit Pages** (folio-saas/Vercel) — Expert tests the clone, rates scenarios, provides corrections
2. **Improvement Items** (folio-saas API + Supabase) — Structured findings with target_block mapping
3. **Process Factory** (factory.asapai.net) — DAG engine that re-runs extraction/build blocks with corrections

### FORGE's Role: Monitor + Trigger

FORGE should know about these endpoints for monitoring and automation:

**Read improvement items:**
```
GET https://align360.asapai.net/api/a360/improvement-items?expert_slug=samuel-ngu&status=pending
→ Returns: { items: [...], total: N, summary: { by_status, by_block, by_priority } }
```

**Read improvement items by status:**
```
GET https://align360.asapai.net/api/a360/improvement-items?status=in_progress
GET https://align360.asapai.net/api/a360/improvement-items?status=resolved
GET https://align360.asapai.net/api/a360/improvement-items?target_block=voice-extractor
```

**Update an improvement item:**
```
PATCH https://align360.asapai.net/api/a360/improvement-items/{id}
Body: { "status": "resolved", "resolved_in_version": "v7.2", "resolution_notes": "Fixed in re-run" }
```

**Push corrections to factory (once factory builds the endpoint):**
```
POST https://factory.asapai.net/api/corrections-ingest
Body: { "run_id": "uuid", "corrections": [{ "block_id": "voice-extractor", "items": [...] }] }
```

### What FORGE Can Automate (Future)
1. **Nightly check**: Poll improvement_items for new pending items, alert Jason via dashboard
2. **Auto-push**: When items accumulate past a threshold (e.g., 5+ pending for the same block), auto-push to factory
3. **Convergence tracking**: After a factory re-run completes, trigger clone-tester on flagged scenarios, compare scores to the previous run
4. **Status sync**: When a factory block completes with corrections, mark the corresponding improvement_items as resolved

### Key URLs
- Dashboard: align360.asapai.net/improvement-items
- Audit v3: align360.asapai.net/samuel-audit-v3
- Extraction Review: align360.asapai.net/samuel-extraction-review
- Factory: factory.asapai.net
- PRD: align360.asapai.net/feedback-loop-architecture

### Database Tables (Supabase)
- `improvement_items` — findings from audits/reviews, with target_block, status, priority, expert_correction
- `clone_feedback` — raw audit submissions (ratings, chat transcripts, feedback notes)
- `system_prompts` — versioned system prompts (for tracking which version fixed what)

4. Architecture Recap

The complete flow once Factory builds corrections-ingest:

Expert runs audit (audit-v3)
  │
  ▼
audit-submit → Supabase (clone_feedback) + NowPage report + email
  │
  ▼ (auto-trigger, fire-and-forget)
audit-analyze → classifies findings → improvement_items rows
  │                                   (each has target_block set)
  ▼
Improvement Items Dashboard (triage — set priority, add corrections)
  │
  ▼ (Jason clicks "Push to Factory")
POST factory.asapai.net/api/corrections-ingest
  │                   ← THIS IS WHAT FACTORY SESSION BUILDS
  ▼
Factory stores corrections in block state
  │
  ▼ (Operator resets block in factory GUI)
Block re-runs with corrections as LLM context
  │
  ▼
New extraction output replaces old → clone-compiler re-builds
  │
  ▼
clone-tester re-validates → scores compared to previous run
  │
  ▼
If improved → improvement_items marked "resolved"
If not → next iteration (max 3 per PRD)
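The tail of the loop (re-test → compare → resolve or iterate) reduces to a small decision rule. A hedged Python sketch; the score shape ({scenario_id: numeric score}) and action names are assumptions, only the max-3-iterations cap comes from the PRD:

```python
# Sketch of the convergence decision after a clone-tester re-run.
MAX_ITERATIONS = 3  # per the PRD

def next_action(prev_scores: dict, new_scores: dict, iteration: int) -> str:
    """Scores are assumed to be {scenario_id: numeric score} from clone-tester.
    Improved = no scenario got worse and at least one got better."""
    improved = all(
        new_scores.get(s, 0) >= prev_scores[s] for s in prev_scores
    ) and any(new_scores.get(s, 0) > prev_scores[s] for s in prev_scores)
    if improved:
        return "mark_resolved"   # PATCH items to status=resolved
    if iteration >= MAX_ITERATIONS:
        return "stop"            # cap reached; hand back to Jason
    return "iterate"             # push refined corrections, re-run again
```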

What's Built vs What's Not

| Component | Status | Owner |
|---|---|---|
| Audit v3 (collect) | SHIPPED | This session |
| Extraction Review (collect) | SHIPPED | This session |
| audit-analyze (classify) | SHIPPED | This session |
| improvement_items API (query + update) | DEPLOYED | This session |
| Improvement Items Dashboard (triage) | PUBLISHED | This session |
| Push to Factory button (JS) | BUILT | This session |
| Migration 005 | PENDING — Jason runs in Supabase | Jason |
| corrections-ingest endpoint | NOT BUILT | Factory session |
| Block re-run with corrections | NOT BUILT | Factory session |
| Auto re-test trigger | NOT BUILT | Factory session |
| Convergence tracking | NOT BUILT | Factory session |
| FORGE monitoring hooks | FUTURE | FORGE session |

Key Links

| What | URL |
|---|---|
| Improvement Items Dashboard | align360.asapai.net/improvement-items |
| Audit v3 (interactive) | align360.asapai.net/samuel-audit-v3 |
| Extraction Review | align360.asapai.net/samuel-extraction-review |
| Feedback Loop PRD | align360.asapai.net/feedback-loop-architecture |
| Factory Engine | factory.asapai.net |
| This Handoff Page | align360.asapai.net/feedback-loop-handoff |