Quality Bar & Review Checklist
Use this checklist to ensure your Boa workflows produce high-quality, actionable insights. Whether you’re running Chips or using Conversational Mode, these quality standards help you get the most value.
Pro teams use this checklist before sharing Reports or Creative Briefs with stakeholders. It takes 2 minutes and dramatically improves output quality.
Before you start: Input quality
✅ Clear question or goal stated
Check:
- I know what I’m trying to learn or achieve
- My question is specific, not vague
- I’ve chosen the right workflow (Discover, Diagnose, Generate, Forecast, Monitor)
Examples:
- ✅ “What fantasy RPG ads are performing well in the last 60 days?”
- ✅ “Why did our campaign underperform vs genre benchmarks?”
- ✅ “Create a brief for a new cozy puzzle campaign.”
- ❌ “What’s working?” (too vague)
- ❌ “Tell me about creatives.” (no clear goal)
If you can’t articulate your question in one sentence, start with a Chip — it will structure the question for you.
✅ Sufficient context provided
Check:
- Genre or category specified
- Time window defined (if relevant)
- Competitors or references included (if relevant)
- Theme or pattern focus stated (if relevant)
| Workflow | Required Context | Optional Context |
|---|---|---|
| Discover | Genre, Time window | Theme, Competitors, Market |
| Diagnose | Creative (link or upload) | Benchmark for comparison |
| Generate | Goal, Deliverable type | Source insights, References |
| Forecast | Genre, Time window (90+ days) | Theme, Competitors |
| Monitor | What to track (competitors or theme) | Alert frequency, Thresholds |
Skipping context is the #1 reason for generic or irrelevant results. Always provide at least the minimum.
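If your team scripts its requests or logs them in a shared checklist, the context table above can double as a pre-flight check. The sketch below is a minimal illustration: the workflow names match the table, but the field names and the missing_context helper are hypothetical, not part of Boa.
```python
# A hypothetical pre-flight check. Field names are illustrative, not Boa's API.
REQUIRED_CONTEXT = {
    "Discover": ["genre", "time_window"],
    "Diagnose": ["creative"],                # link or upload
    "Generate": ["goal", "deliverable_type"],
    "Forecast": ["genre", "time_window"],    # aim for a 90+ day window
    "Monitor":  ["tracking_target"],         # competitors or theme
}

def missing_context(workflow: str, context: dict) -> list[str]:
    """Return required fields that are absent or empty for the chosen workflow."""
    return [f for f in REQUIRED_CONTEXT[workflow] if not context.get(f)]

# A Discover request with no time window gets flagged before you run it.
gaps = missing_context("Discover", {"genre": "fantasy RPG"})
if gaps:
    print("Add context before running:", ", ".join(gaps))  # -> time_window
```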
✅ Correct goal/workflow chosen
Check:
- I’ve matched my question to the right workflow
- “What’s working?” → Discover
- “Why did this perform?” → Diagnose
- “Create a brief or report.” → Generate
- “What’s coming next?” → Forecast
- “Alert me when…” → Monitor
If you’re unsure, start with Discover — it’s the most versatile workflow and often reveals what question you should really be asking.
After you get results: Output quality
✅ Evidence cited in the result
Check:
- Results include specific creative examples
- Performance data is provided (e.g., “Top 10% in genre”)
- Patterns are backed by multiple examples, not single creatives
- Visual references are included (thumbnails, links, or screenshots)
Examples:
- ✅ “Top performers use hero’s journey narratives 3x more than bottom performers. Examples: [Creative A], [Creative B], [Creative C].”
- ❌ “Hero’s journey works well.” (no evidence, no examples)
If results lack evidence, ask follow-up questions: “Show me examples” or “What’s the performance data?”
✅ Patterns identified, not just individual creatives
Check:
- Results highlight what’s consistent across multiple creatives
- Patterns are described clearly (tone, theme, pacing, etc.)
- Outliers are noted separately, not confused with patterns
Examples:
- ✅ “80% of top performers use fast hooks (0-3 seconds) and epic orchestral music.”
- ❌ “This creative is good.” (no pattern, just a single example)
Single examples are interesting. Patterns are actionable. Always look for what’s consistent across multiple top performers.
✅ Actionable next step included
Check:
- Results suggest a specific action (test, create, avoid, monitor)
- Next step is concrete, not vague
- Recommended action is grounded in the evidence
Examples:
- ✅ “Test hero’s journey narratives in your next campaign (proven pattern).”
- ✅ “Avoid slow intros — they correlate with underperformance in this genre.”
- ✅ “Create a brief incorporating epic tone and fast hooks from top performers.”
- ❌ “Consider these insights.” (what should I do with them?)
- ❌ “Interesting patterns.” (okay, but what’s the action?)
Every Boa session should end with a clear answer to: “What should I do next?”
✅ Relevance to your brand and goals
Check:
- Insights are aligned with your brand positioning
- Patterns are executable given your resources
- Recommendations match your campaign goals
- You’re not chasing trends that don’t fit your identity
| If the insight is… | And your brand is… | Then… |
|---|---|---|
| Epic, cinematic tone | Casual, playful brand | ❌ Skip or adapt |
| Fast-paced, urgent CTA | Premium, patient brand | ❌ Skip or adapt |
| Proven pattern, strong fit | Well-aligned | ✅ Test confidently |
| Emerging trend, good fit | Aligned + agile | ✅ Test cautiously |
Not every winning pattern is right for your brand. Filter insights through your positioning and values.
Before you share: Deliverable quality
If you’re creating a Report or Creative Brief, check:
✅ Clear executive summary or objective
Check:
- Reports: 2-3 sentence summary of key finding and recommendation
- Creative Briefs: Clear objective and hypothesis stated upfront
✅ Visual references included
Check:
- At least 3-5 visual examples provided
- Each reference is annotated (what to notice, why it’s relevant)
- Performance context included (e.g., “Top 5% in genre”)
✅ Specific creative direction (Briefs only)
Check:
- Tone described specifically (not “exciting” but “epic, cinematic, aspirational”)
- Theme stated clearly
- Pacing guidance included (hook timing, rhythm)
- Character and framing direction provided
- Must-haves and guardrails listed
✅ Rationale and evidence (Briefs and Reports)
Check:
- Recommendations are backed by data
- Performance metrics cited
- Patterns explained, not just stated
✅ Clear next steps
Check:
- Actionable recommendations provided
- Prioritization included (quick wins vs long-term bets)
- Success metrics defined (how you’ll know it worked)
Before sharing with your team, ask: “Could someone unfamiliar with Boa understand this and take action?” If not, add clarity.
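Teams that keep Briefs in a structured template rather than free-form docs can encode this checklist directly, so a draft is only marked ready once the gaps are closed. Below is a minimal sketch; the BriefDraft structure and pre_share_issues helper are hypothetical, not a Boa deliverable format.
```python
# A hypothetical pre-share gate for a Creative Brief. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class BriefDraft:
    objective: str = ""
    hypothesis: str = ""
    tone: str = ""                        # e.g. "epic, cinematic, aspirational"
    visual_references: list[dict] = field(default_factory=list)  # each: link, annotation, performance
    success_metrics: list[str] = field(default_factory=list)

def pre_share_issues(brief: BriefDraft) -> list[str]:
    """Flag checklist failures before the brief goes to stakeholders."""
    issues = []
    if not (brief.objective and brief.hypothesis):
        issues.append("State the objective and hypothesis upfront.")
    if len(brief.visual_references) < 3:
        issues.append("Include at least 3-5 visual references.")
    if any(not ref.get("annotation") for ref in brief.visual_references):
        issues.append("Annotate every reference (what to notice, why it matters).")
    if brief.tone.strip().lower() in {"", "exciting", "fun"}:
        issues.append("Describe tone specifically, not with a single vague adjective.")
    if not brief.success_metrics:
        issues.append("Define success metrics so you know whether it worked.")
    return issues
```
The same idea extends to Reports: require an executive summary, cited performance data, and prioritized next steps before the document leaves the team.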
Quality standards by workflow
Discover quality standards
Minimum bar:
- Top / Mid / Bottom performers clearly identified
- At least 3-5 patterns described (tone, theme, pacing, etc.)
- Visual examples provided for each tier
- Performance context included
- Patterns compared across tiers (what’s different?)
- Outliers noted and explained
- Time-based or competitive context provided
- Actionable recommendation (what to test or avoid)
Diagnose quality standards
Minimum bar:
- Creative analyzed across multiple dimensions (tone, pacing, visuals, audio)
- Performance drivers identified (what correlates with success)
- Gaps vs benchmarks highlighted (what’s missing)
- Specific improvement suggestions provided
- Element-by-element comparison to top performers
- Rationale for each improvement (why it matters)
- Prioritized recommendations (high-impact vs low-impact)
- Next test hypothesis stated
Generate quality standards
Minimum bar (Briefs):
- Clear objective and hypothesis
- Specific creative direction (tone, theme, pacing, character)
- 3-5 visual references with annotations
- Must-haves and guardrails listed
- “What Good Looks Like” benchmark included
- Rationale for each direction choice (data-backed)
- Production specs provided (format, duration, aspect ratio)
- Success metrics defined
Minimum bar (Reports):
- Executive summary (2-3 sentences)
- Key patterns and evidence
- Recommendations and next steps
- Context and comparison (competitive, temporal, market)
- Prioritized recommendations (quick wins first)
- Success metrics and timeline
- Visual examples and charts
Forecast quality standards
Minimum bar:
- Rising and declining trends identified
- Velocity metrics provided (how fast trends are moving)
- Saturation level indicated (low, medium, high)
- Early-stage examples provided
- Trends compared over time (acceleration or plateau)
- Brand fit assessment for each trend
- Test prioritization (what to test now vs later)
- Risk assessment (confidence level for each trend)
Monitor quality standards
Minimum bar:
- Alert triggers clearly defined
- Relevant competitors or themes tracked
- Alert frequency set appropriately
- Performance thresholds configured
- Noise reduction (thresholds tuned to reduce irrelevant alerts)
- Follow-up workflows configured (auto-Diagnose on high-priority alerts)
- Team notification setup (Slack, email integration)
- Weekly review routine established
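To make this checklist concrete, here is one way to sketch a Monitor setup before creating it: what to track, how often to be alerted, which thresholds cut noise, what follow-up runs on a high-priority alert, and where notifications go. The keys and values below are hypothetical and only illustrate the decisions the checklist asks for; they are not Boa’s configuration schema.
```python
# A hypothetical Monitor setup. Keys and values are illustrative, not Boa's schema.
monitor_config = {
    "track": {
        "competitors": ["Studio A", "Studio B"],
        "themes": ["cozy puzzle"],
    },
    "alert_frequency": "weekly",              # tune so alerts stay useful, not noisy
    "thresholds": {
        "min_performance_percentile": 90,     # only surface top-decile creatives
        "min_new_creatives": 3,               # ignore one-off launches
    },
    "on_high_priority_alert": "run_diagnose", # follow-up workflow
    "notify": ["slack:#creative-intel", "email:team@example.com"],
    "review_cadence": "weekly_team_review",
}
```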
Common quality issues and fixes
| Issue | Fix |
|---|---|
| Results are too generic | Add more context (genre, theme, time window) |
| No actionable next steps | Ask follow-up: “What should I do with this?” |
| Missing visual references | Request examples: “Show me creatives that demonstrate this pattern.” |
| Unclear patterns | Ask for clarification: “What’s consistent across top performers?” |
| Insights don’t fit brand | Filter through positioning: “Does this align with our values and capabilities?” |
| Deliverable lacks evidence | Add performance data and annotated references |
| Brief is too vague | Be more specific with tone, theme, pacing guidance |
| Report has no “so what” | Add executive summary and clear recommendations |
Quality culture: team standards
For teams using Boa regularly:
Weekly quality review
- Review 1-2 Reports or Briefs as a team
- Discuss what made them strong or weak
- Update this checklist with team learnings
Shared quality bar
- Agree on minimum standards for deliverables
- Create templates that enforce quality
- Share strong examples internally
Continuous improvement
- Track which insights led to successful campaigns
- Refine prompt patterns based on what works
- Update team workflows quarterly
The best teams treat quality as a muscle — the more you use this checklist, the faster quality becomes automatic.