Last modified: 2026-03-29T12:51:19.578Z
# Workflow Spec

Status: Phase 2 business workflow definition

This document defines the desired carousel business workflow in tool-agnostic terms. It describes what the workflow must do regardless of whether the execution surface is Codex, Slack, Airtable, or another interface.

## Purpose

Produce one evidence-based social carousel from a human brief through final preview approval while keeping:

- research scope explicit
- revision handling explicit
- approvals explicit
- preview generation deterministic
- file or artifact state recoverable

## Core Business Objects

| Object | Meaning |
|---|---|
| Intake | The normalized brief: topic, benchmark/reference, business goal, constraints, and requester notes |
| Benchmark diagnosis | The explicit extraction of what the benchmark is doing, why it engages, where it is weak, and how the new post should improve on it |
| Angle set | Exactly three candidate approaches derived from the intake |
| Selected angle | The approved concept that research and drafting must follow |
| Research package | Evidence summary, supporting sources, allowed claims, forbidden inferences, and intended slide architecture |
| Research lock | The approved evidence boundary that later copy must stay inside unless research is explicitly reopened |
| Draft version | One canonical authoring JSON version of the carousel |
| Preview payload | The render-ready JSON sent to the preview endpoint |
| Preview artifact | The stored payload plus verified preview URL and artifact path |
| Decision record | The explicit human outcome at each approval gate |

## End-To-End Workflow
### 1. Intake

Input:

- human brief
- optional benchmark post, screenshot, or example
- optional constraints such as tone, audience, slide count, or CTA direction

Output:

- normalized intake record
- unique idea/run identifier

Rules:

- intake must be stored before angle generation begins
- benchmark screenshot plus short operator context should be the default social-carousel start mode when available
- ambiguity should be resolved only when it would materially affect angle quality or evidence scope
- the benchmark should be stored as reference material, not treated as copy to paraphrase

### 1A. Benchmark Diagnosis

Input:

- benchmark screenshot
- optional benchmark URL
- short operator context
- target audience
- constraints

Output:

- explicit diagnosis of what the benchmark hook is doing
- why it may be engaging
- what feels weak, vague, repetitive, under-evidenced, or under-explained
- how the new post should improve on it

Rules:

- benchmark-first runs should diagnose the benchmark before angle generation
- the diagnosis must be stored durably
- the diagnosis should frame the benchmark as inspiration for a stronger replacement, not as text to paraphrase
- in this workflow, `10x better` means a stronger hook, more insight, better clarity, better evidence quality, and better practical usefulness

### 2. Angle Generation

Input:

- normalized intake

Output:

- exactly three angle options
- enough structure to let a human choose between them

Rules:

- for benchmark-first runs, the angle set should be generated from the benchmark diagnosis plus the operator context
- each option should be distinct enough that selection matters
- the options should aim to outperform the benchmark rather than restate it
- the workflow should not advance to research until one angle is explicitly selected
### 3. Angle Approval

Human decision:

- select one angle
- select one angle plus remix notes
- reject all angles and request a second set
- abandon the idea

Rules:

- the selected angle must be durably recorded before research begins
- the workflow must not infer angle approval from vague acknowledgements

### 4. Research

Input:

- approved angle
- intake context

Output:

- evidence-backed research package
- source list
- slide architecture
- allowed claims and required qualifications
- explicit unsupported or forbidden claims

Rules:

- research should be resumable by phase when practical
- research output must be structured enough to support later integrity checks

### 5. Research Approval

Human decision:

- approve research
- request research revision
- abandon

Rules:

- approval freezes the current evidence boundary into a research lock
- drafting may begin only after the research lock exists

### 6. Content Structuring

Input:

- approved research package
- research lock

Output:

- slide plan that matches the currently supported template contract

Rules:

- the content structure must remain inside the approved angle and evidence scope
- any requested structural change that introduces a new claim is not just formatting; it changes research scope
- default to at least 4 content slides unless a shorter format is explicitly justified by the brief, template, or delivery context
- keep the cover concise enough to carry one clear promise
- the cover may be more curiosity-driven than the full research framing if later slides add the missing nuance inside the approved lock
- each content slide should make sense on its own, carry one distinct editorial role, and help build a logical story from slide to slide
- avoid repetitive restatements of the same point across adjacent content slides
- translate evidence caveats into natural consumer language instead of academic-sounding body-slide disclaimers
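The research lock above could be stored as a frozen record whose identity is a digest of the evidence boundary. This is a sketch under assumptions: the spec does not mandate hashing, only that later drafts can prove which lock they belong to; a deterministic digest is one simple way to get that, and every field name here is illustrative.

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass(frozen=True)
class ResearchLock:
    """Approved evidence boundary frozen at research approval (illustrative fields)."""
    run_id: str
    allowed_claims: tuple[str, ...]
    forbidden_claims: tuple[str, ...]
    sources: tuple[str, ...]

    @property
    def lock_id(self) -> str:
        # Deterministic digest of the boundary; a draft citing this id can
        # prove which lock it was authored against.
        payload = json.dumps(
            {
                "run": self.run_id,
                "allowed": self.allowed_claims,
                "forbidden": self.forbidden_claims,
                "sources": self.sources,
            },
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()[:12]


lock = ResearchLock(
    run_id="run-001",
    allowed_claims=("caffeine half-life is roughly 5 hours",),
    forbidden_claims=("caffeine causes insomnia in everyone",),
    sources=("doi:10.0000/example",),
)
```

Because the digest is computed from the content, any change to claims or sources yields a new `lock_id`, which is exactly the "invalidates the current research lock" behavior the revision rules below require.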
### 7. JSON Generation

Input:

- structured content plan
- research lock

Output:

- one canonical authoring JSON draft version
- version lineage metadata

Rules:

- every draft version must be identifiable and reproducible
- the draft must carry enough lineage to prove which research lock it belongs to
- deterministic contract validation must fail closed on component character-limit violations and surface field-specific feedback for revise-and-retry

### 8. Endpoint Validation

Input:

- canonical draft JSON

Output:

- one render-ready preview payload
- validation result against the active template contract

Required checks:

- structure and order
- field-level limits and fixed literals
- lineage match against the active research lock
- evidence integrity / allowed claim envelope
- preview-endpoint payload validity

Rules:

- design/preview must not start from an unverified draft
- schema-only repair may happen locally if it does not change meaning, claims, or citations
- character-limit or contract-shape failures must produce explicit revise-and-retry feedback, not silent continuation

### 8A. Final Copy Review Expectation

Before preview generation, the workflow should perform a bounded copy review that checks:

- whether a curiosity-led cover still stays honest once the later slides carry the nuance
- slide-to-slide non-repetition
- one distinct editorial role per content slide
- logical story progression across the carousel
- standalone readability for each content slide
- natural consumer-language translation of evidence caveats

This is a workflow expectation and review checklist first. It does not need to be a brittle hard validator unless a safe deterministic implementation is later approved.
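The "fail closed with field-specific feedback" rule can be sketched as a tiny deterministic validator. The contract itself (field names and limits) is a made-up example; the point is the shape of the behavior: unknown fields are rejected rather than ignored, and every failure names the offending field so revise-and-retry has something concrete to act on.

```python
# Hypothetical template contract: field name -> maximum characters.
CONTRACT_LIMITS = {"cover.headline": 60, "slide.body": 220, "cta.text": 40}


def validate_limits(draft: dict[str, str]) -> list[str]:
    """Return field-specific errors; an empty list means the draft passes.

    Fails closed: fields outside the contract are errors, never silently allowed.
    """
    errors: list[str] = []
    for field_name, text in draft.items():
        limit = CONTRACT_LIMITS.get(field_name)
        if limit is None:
            errors.append(f"{field_name}: not part of the active template contract")
        elif len(text) > limit:
            errors.append(f"{field_name}: {len(text)} chars exceeds limit of {limit}")
    return errors


errs = validate_limits({"cover.headline": "x" * 80, "mystery.field": "hi"})
```

A non-empty return value here maps to the spec's "explicit revise-and-retry feedback, not silent continuation".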
### 9. Preview Generation

Input:

- validated preview payload

Output:

- preview URL
- stored preview artifact
- proof that the preview is retrievable

Rules:

- preview success requires both endpoint success and preview/artifact verification
- preview generation is not complete until the workflow has durable proof of the generated artifact

### 10. Feedback Handling

All human feedback after preview generation must be classified before the workflow moves.

Allowed classifications:

- design-only revision
- copy revision within the locked research scope
- research-required revision
- concept restart
- final approval

The workflow must not default vague feedback to a full restart. Classification must be explicit.

Default `copy_within_lock` cases include:

- shorten the cover while keeping the same claim
- make the cover more curiosity-driven while keeping the same claim envelope
- rewrite slides into clearer consumer-facing language
- translate evidence caveats into more natural consumer language without changing the lock
- make content slides stand on their own
- expand to more content slides without introducing a new claim
- reduce repetition or improve story flow inside the current lock

### 11. Design-Only Revision

Typical changes:

- wording refinement
- emphasis or sequencing tweaks
- CTA phrasing changes
- pill-label cleanup
- design-linked readability fixes
- theme or visual variant changes

Rules:

- does not reopen research by default
- does not require research rerun
- may skip scientific QC if meaning and evidence scope are unchanged
- still requires preview regeneration and verification
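The five allowed feedback classifications form a closed set, so an enum plus an explicit routing table is a natural sketch. The stage names on the right are illustrative assumptions about how a runtime might label its stages; what the sketch preserves from the spec is that vague feedback has no route — someone must first commit to one of the five values.

```python
from enum import Enum


class Feedback(Enum):
    """The five allowed post-preview feedback classifications."""
    DESIGN_ONLY = "design_only"
    COPY_WITHIN_LOCK = "copy_within_lock"
    RESEARCH_REQUIRED = "research_required"
    CONCEPT_RESTART = "concept_restart"
    FINAL_APPROVAL = "final_approval"


def next_stage(feedback: Feedback) -> str:
    """Route explicitly classified feedback to the stage the run resumes from."""
    routes = {
        Feedback.DESIGN_ONLY: "preview_generation",    # regenerate + verify preview
        Feedback.COPY_WITHIN_LOCK: "qc_then_preview",  # QC rerun, then preview
        Feedback.RESEARCH_REQUIRED: "research",        # current lock is invalidated
        Feedback.CONCEPT_RESTART: "angle_generation",
        Feedback.FINAL_APPROVAL: "publish_export",
    }
    return routes[feedback]
```

Because the dictionary is total over the enum, every classified outcome has exactly one destination, and there is deliberately no fallback branch that could turn vague feedback into a full restart.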
### 12. Copy Revision Within The Existing Research Lock

Typical changes:

- shorter cover headline
- stronger curiosity-led cover without a broader claim
- clearer consumer-facing wording
- translating evidence caveats into more natural consumer language
- stronger standalone readability
- better sequencing or story flow
- expanding from 2 content slides to 4 or more without adding a new claim
- CTA refinement without a new promise

Rules:

- keeps the current research lock
- defaults to at least 4 content slides unless a shorter format is explicitly justified
- requires QC rerun whenever meaning-bearing text changes
- requires preview regeneration and verification after the revised draft passes QC

### 13. Research-Required Revision

Typical triggers:

- add a new claim not supported by the approved research lock
- materially change the scientific conclusion
- add a new intervention, mechanism, or comparison
- replace or expand the source set
- dispute the validity of the current evidence package
- ask for a different angle or concept

Rules:

- invalidates the current research lock
- requires research to rerun from the appropriate upstream point
- requires a new or revised research approval before new drafting continues

### 14. Final Approval

Human decision:

- approve the verified preview version for publication/export

Rules:

- final approval must bind to one concrete preview version
- later revisions must open a new revision lane from that approved base version

## Decision Rules For Research Reruns

### Research Must Not Rerun For
| Change type | Research rerun? | Notes |
|---|---|---|
| Theme/color changes | No | Pure presentation |
| Typography / layout adjustments | No | Pure presentation |
| Wording cleanup that preserves claim meaning | No | Still regenerate preview |
| Reordering existing approved points | No | Re-run integrity/QC only if meaning changes |
| CTA wording changes without new promise or claim | No | Keep evidence scope locked |
| Removing a supported point while keeping the remaining claims intact | Usually no | Requires integrity check and possibly QC |
| Small post-design wording revisions | No by default | May still require QC if meaning changed |

### Research Must Rerun For

| Change type | Research rerun? | Notes |
|---|---|---|
| New scientific claim | Yes | New claim exceeds approved lock |
| New supporting source not already approved | Yes | Evidence boundary changed |
| New slide whose core point is unsupported by the approved package | Yes | Scope expansion |
| Meaningful change to the conclusion or recommendation | Yes | Scientific meaning changed |
| Jonas asks to revisit evidence or sources | Yes | Explicit research reopen |
| Angle/concept change | Yes | Restart from angle or research stage |

### Middle Case: Copy Revision Inside The Existing Research Lock

Some large revisions do not require new research, but they do require renewed integrity and QC.

Examples:

- merging two approved points into one slide
- splitting one approved point into two slides
- tightening or expanding explanation while staying inside existing claims
- dropping one approved claim while preserving the rest of the lock

Required path:

- revise copy
- re-run integrity checks
- re-run QC
- regenerate preview

## Required Human Gates

The workflow should have exactly these mandatory human approvals:

- angle approval
- research approval
- final preview approval

Everything else should be automated unless the run degrades, blocks, or is explicitly paused.
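The rerun decision tables collapse into a three-way classification: research rerun, the middle case (renewed integrity plus QC), or presentation-only. Below is a minimal sketch of that routing; the flag names are illustrative, not a fixed schema, and the safe default for an unrecognized change is the lightest path that still regenerates the preview.

```python
def classify_rerun(change: dict[str, bool]) -> str:
    """Map a requested change to a rerun decision, mirroring the tables above.

    Keys are illustrative boolean flags describing the change. Any flag that
    exceeds the approved lock forces a research rerun; meaning-bearing edits
    inside the lock take the integrity + QC path; everything else is
    presentation-only (which still requires preview regeneration).
    """
    research_triggers = (
        "new_claim",          # claim not supported by the approved lock
        "new_source",         # evidence boundary changed
        "unsupported_slide",  # scope expansion
        "conclusion_change",  # scientific meaning changed
        "evidence_reopen",    # explicit request to revisit evidence
        "angle_change",       # concept restart
    )
    if any(change.get(flag) for flag in research_triggers):
        return "rerun_research"
    if change.get("meaning_changed") or change.get("claim_dropped"):
        return "integrity_and_qc"   # the middle case
    return "preview_only"
```

The ordering matters: research triggers are checked first, so a change that both drops a claim and adds a new one correctly falls into the research rerun bucket.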
## Required Failure Outcomes

Every non-happy-path branch must resolve to one of:

- automatic retry
- automatic repair
- degraded continuation with named risk
- parked run awaiting human input
- blocked run requiring a human decision

The workflow must never rely on implicit chat memory as the recovery mechanism.