Anti-Patterns & Troubleshooting

Learn from the most common mistakes teams make when using Boa — and how to fix them fast. This guide helps you avoid pitfalls and troubleshoot issues when results aren’t quite right.
Even experienced teams fall into these traps occasionally. Bookmark this page and reference it when something feels off.

Anti-Pattern 1: Overly broad prompts

The mistake:

Asking questions that are too vague or general. Examples:
  • “What’s working?”
  • “Show me creatives.”
  • “Tell me about ads.”

Why it’s a problem:

Boa needs context to give you relevant results. Broad prompts lead to generic, unfocused insights that are hard to act on.

How to fix it:

Add specific context: genre, time window, theme, or competitor focus. The same questions, made specific:
  • “What fantasy RPG ads are performing well in the last 60 days?”
  • “Show me top-performing cozy puzzle creatives from Q3 2024.”
  • “Tell me about [Competitor A]’s recent action shooter ads.”
If you’re not sure how to add context, use a Chip instead — Chips ask the right questions for you.

Anti-Pattern 2: Skipping context inputs

The mistake:

Leaving context fields blank or providing minimal information. Examples:
  • Running Discover without specifying time window
  • Running Diagnose without a benchmark for comparison
  • Using Conversational Mode without mentioning genre or theme

Why it’s a problem:

Boa’s insights are relative to context. Without it, you’ll get results that may not apply to your specific situation.

How to fix it:

Always provide the minimum context for your workflow (a quick pre-run check is sketched in code below):
  • Discover: genre + time window
  • Diagnose: creative + optional benchmark
  • Generate: goal + deliverable type
  • Forecast: genre + time window (90+ days)
  • Monitor: what to track (competitors or a theme)
Skipping context is the #1 reason for irrelevant results. Take 10 seconds to fill in the details — it makes all the difference.
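If your team scripts or templates its Boa prompts, the minimum-context rule can be captured as a simple pre-flight check. This is a hypothetical, team-side sketch, not a Boa API; the workflow names and field labels are taken from the list above.

```python
# Hypothetical pre-flight check: warn before running a Boa workflow
# without the minimum context listed above. Not a Boa API; just a
# team-side helper sketch.

MINIMUM_CONTEXT = {
    "discover": {"genre", "time_window"},
    "diagnose": {"creative"},            # benchmark is optional
    "generate": {"goal", "deliverable_type"},
    "forecast": {"genre", "time_window"},
    "monitor":  {"tracking_target"},     # competitors or a theme
}

def missing_context(workflow: str, provided: dict) -> set:
    """Return the minimum-context fields that are still empty."""
    required = MINIMUM_CONTEXT[workflow.lower()]
    return {field for field in required if not provided.get(field)}

# Example: a Discover run with no time window gets flagged.
gaps = missing_context("Discover", {"genre": "fantasy RPG"})
if gaps:
    print(f"Add context before running: {', '.join(sorted(gaps))}")
```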

Anti-Pattern 3: Confusing deliverable types

The mistake:

Not understanding the difference between Reports and Creative Briefs, or using the wrong one. Common confusion:
  • Using a Report when you need a production-ready brief
  • Creating a Creative Brief when you need stakeholder alignment
  • Mixing strategic insights with execution details in one document

Why it’s a problem:

Reports and Briefs serve different purposes. Confusing them leads to documents that don’t meet your audience’s needs.

How to fix it:

Use this decision guide:
  • Strategic insights: use a Report (audience: leadership and stakeholders; purpose: alignment and decision-making)
  • Production direction: use a Creative Brief (audience: creative team and agencies; purpose: execution and hand-off)
  • Both: create both, with a Report for stakeholders and a Creative Brief for the team

Learn the Difference

See detailed guidance on Reports vs Briefs

Anti-Pattern 4: Chasing every trend

The mistake:

Testing every emerging pattern without filtering for brand fit or saturation. Examples:
  • Seeing a rising trend and immediately testing it without considering alignment
  • Chasing saturated patterns because competitors are using them
  • Testing trends that don’t match your brand positioning

Why it’s a problem:

Not every trend is right for your brand. Chasing misaligned or saturated trends wastes resources and dilutes your positioning.

How to fix it:

Filter trends through this framework (a rough code sketch follows the list):
Test now (high priority):
  • ✅ Rising trend + Low saturation + Strong brand fit
Test cautiously (medium priority):
  • ⚠️ Rising trend + Medium saturation + Good brand fit
  • ⚠️ Stable trend + Low saturation + Strong brand fit
Skip (low priority):
  • ❌ Declining trend (regardless of fit)
  • ❌ High saturation (already played out)
  • ❌ Poor brand fit (forced or inauthentic)
Use Forecast workflows to assess saturation and velocity before testing trends.
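As an illustration only, the priority rules above can be written as a small decision function. The labels for direction, saturation, and brand fit are judgement calls you assign yourself (for example after a Forecast run); they are not Boa output fields.

```python
# Hypothetical sketch of the trend-filtering framework above.
# "direction", "saturation", and "brand_fit" are labels you assign
# yourself, not Boa API fields.

def trend_priority(direction: str, saturation: str, brand_fit: str) -> str:
    """Map a trend's attributes to a testing priority."""
    if direction == "declining":
        return "skip"                      # declining, regardless of fit
    if saturation == "high" or brand_fit == "poor":
        return "skip"                      # already played out, or forced
    if direction == "rising" and saturation == "low" and brand_fit == "strong":
        return "test now"                  # high priority
    if (direction == "rising" and saturation == "medium" and brand_fit in {"good", "strong"}) \
       or (direction == "stable" and saturation == "low" and brand_fit == "strong"):
        return "test cautiously"           # medium priority
    return "monitor"                       # anything else: watch, don't test yet

print(trend_priority("rising", "low", "strong"))   # -> "test now"
print(trend_priority("rising", "high", "strong"))  # -> "skip"
```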

Anti-Pattern 5: Analyzing in isolation

The mistake:

Looking at creatives or insights without comparison or benchmarks. Examples:
  • Running Diagnose on your creative without comparing to top performers
  • Reviewing Discover results without checking competitor performance
  • Judging performance without genre or time context

Why it’s a problem:

Creative performance is relative. A “good” creative in one genre might be weak in another. Comparison provides the context you need to interpret results correctly.

How to fix it:

Always include a comparison point.
For Diagnose:
  • Compare your creative to genre top performers
  • Compare your creative to specific competitors
  • Compare version A vs version B in A/B tests
For Discover:
  • Compare your genre to adjacent genres
  • Compare time period 1 to time period 2
  • Compare your work to competitor benchmarks
Insights without comparison are just data. Comparison turns data into actionable intelligence.

Anti-Pattern 6: Over-interpreting single examples

The mistake:

Treating one high-performing creative as a pattern and building strategy around it. Examples:
  • Seeing one breakout creative and assuming the theme/tone is a trend
  • Copying a single competitor’s creative without understanding broader patterns
  • Testing an outlier approach without validation

Why it’s a problem:

Single examples can be outliers — they succeed despite their approach, not because of it. Building strategy around outliers is risky.

How to fix it:

Look for patterns across multiple examples (thresholds sketched in code after this list):
Strong signal (act on it):
  • ✅ 80%+ of top performers share this pattern
Medium signal (test cautiously):
  • ⚠️ 50-80% of top performers share this pattern
Weak signal (monitor only):
  • ❌ Less than 50% of top performers share this pattern
If you find an interesting outlier, run Discover to see if it’s part of a broader pattern or just a lucky exception.
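For teams that like explicit thresholds, the same rule of thumb can be written down directly. The percentages are the ones listed above; treat them as heuristics, not hard cutoffs.

```python
# Sketch of the signal-strength heuristic above: what share of top
# performers exhibit the pattern you are considering? Thresholds are
# the rules of thumb from this guide, not Boa-defined values.

def signal_strength(sharing: int, top_performers: int) -> str:
    """Classify a pattern by how many top performers share it."""
    share = sharing / top_performers
    if share >= 0.8:
        return "strong signal: act on it"
    if share >= 0.5:
        return "medium signal: test cautiously"
    return "weak signal: monitor only"

# Example: 9 of 10 top performers open with a fast hook.
print(signal_strength(9, 10))   # -> "strong signal: act on it"
```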

Anti-Pattern 7: Ignoring genre expectations

The mistake:

Applying patterns from one genre to another without adaptation. Examples:
  • Using cozy puzzle pacing in action shooter ads
  • Applying RPG epic tone to casual puzzle games
  • Testing strategy game complexity in hyper-casual genres

Why it’s a problem:

Genre expectations matter. What works in fantasy RPG (slow-burn, epic) often fails in casual puzzle (fast hook, clear benefit). Violating genre norms requires exceptional execution.

How to fix it:

Before testing a pattern:
  1. Check if it’s common in your genre (safe bet)
  2. Check if it’s rare but successful in your genre (differentiation opportunity)
  3. If it’s from another genre, assess risk and adapt carefully
Cross-genre testing framework (summarized here and sketched in code below):
  • Pattern from the same genre: ✅ Low risk. Test confidently.
  • Pattern from an adjacent genre: ⚠️ Medium risk. Test cautiously and adapt.
  • Pattern from a distant genre: ❌ High risk. Skip, or validate heavily first.
Breaking genre norms can work — but it’s high-risk. Make sure you have strong evidence before betting big on cross-genre patterns.
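If you want the risk call to be mechanical, the framework above reduces to a lookup. Again, a team-side sketch with labels from this guide, not from Boa.

```python
# Sketch of the cross-genre testing framework above as a plain lookup.
# The origin labels are judgement calls you make yourself.

CROSS_GENRE_RISK = {
    "same genre":     ("low",    "test confidently"),
    "adjacent genre": ("medium", "test cautiously and adapt"),
    "distant genre":  ("high",   "skip or validate heavily first"),
}

origin = "adjacent genre"
risk, action = CROSS_GENRE_RISK[origin]
print(f"{origin}: {risk} risk -> {action}")
```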

Anti-Pattern 8: Forgetting about recency

The mistake:

Using outdated insights or not specifying time windows. Examples:
  • Running Discover without a time window (gets all-time results, not current)
  • Applying Q1 insights in Q4 without checking for changes
  • Ignoring that creative trends move fast (90 days can be outdated)

Why it’s a problem:

Creative trends shift quickly. What worked 6 months ago may be saturated or ineffective now.

How to fix it:

Always specify time windows:
  • Discover: Last 30-90 days for current patterns
  • Diagnose: Compare to recent benchmarks (last 60 days)
  • Forecast: 90-180 days to identify trends
  • Monitor: Real-time or daily for rapid response
Re-run key workflows regularly:
  • Monthly: Competitive Discover (stay current)
  • Quarterly: Trend Forecast (spot shifts)
  • Weekly: Monitor alerts (never miss breakouts)
Creative intelligence has a shelf life. Re-validate insights every 60-90 days.

Anti-Pattern 9: Skipping the “why”

The mistake:

Using Discover to find what’s working but not running Diagnose to understand why. Examples:
  • Seeing that hero’s journey works and copying it without understanding the execution details
  • Noticing fast hooks but not analyzing what makes a hook effective
  • Testing a competitor’s theme without diagnosing their specific approach

Why it’s a problem:

Knowing what works is useful. Understanding why it works lets you replicate and adapt it effectively.

How to fix it:

Pair Discover with Diagnose:
  1. Run Discover to find top performers
  2. Run Diagnose on 2-3 top performers to understand why they work
  3. Use Generate to create a brief incorporating those “why” insights
Example workflow:
  1. Discover: “What fantasy RPG ads are performing well?”
  2. Diagnose: “Why does this specific top performer work?”
  3. Generate: “Create a brief incorporating hero’s journey + fast hook + cinematic tone.”

Learn Diagnose Workflow

Understand the “why” behind performance

Anti-Pattern 10: Not saving or documenting insights

The mistake:

Running workflows, getting great insights, but not saving them for future reference. Examples:
  • Finding valuable patterns but not creating a Report or Brief
  • Running the same Discover workflow multiple times because you forgot the results
  • Losing insights when team members leave

Why it’s a problem:

Insights are only valuable if they’re accessible and actionable over time. Not documenting them means you lose institutional knowledge.

How to fix it:

Always save valuable insights:
  • Click Save → Report for strategic findings
  • Click Save → Creative Brief for execution-ready insights
  • Build a library of saved insights for your team
Create a regular rhythm:
  • Weekly: Save competitive Discover results
  • Monthly: Create trend Reports
  • Quarterly: Generate strategic summaries
Share with your team:
  • Use a shared folder (Notion, Confluence, Google Drive)
  • Tag insights by genre, theme, and date
  • Make insights searchable and referenceable
Great teams treat insights like assets. Build a library, and you’ll never lose valuable patterns.
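One lightweight way to keep that library searchable is to give every saved Report or Brief the same few tags. A minimal sketch, assuming you store entries in whatever tool your team already uses (Notion, Confluence, a spreadsheet); the field names are hypothetical.

```python
# Hypothetical schema for a shared insight-library entry, so saved
# Reports and Briefs stay searchable by genre, theme, and date.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InsightEntry:
    title: str
    deliverable: str           # "report" or "creative_brief"
    genre: str
    themes: list[str] = field(default_factory=list)
    saved_on: date = field(default_factory=date.today)
    source_workflow: str = ""  # e.g. "discover", "diagnose"
    link: str = ""             # where the saved document lives

entry = InsightEntry(
    title="Fast hooks dominate fantasy RPG, Q3",
    deliverable="report",
    genre="fantasy RPG",
    themes=["fast hook", "hero's journey"],
    source_workflow="discover",
)
print(entry.title, entry.themes)
```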

Troubleshooting: when results aren’t right

Issue: Results feel generic or irrelevant

Likely cause: insufficient context.
Fix:
  • Add genre, time window, theme, or competitor focus
  • Use a Chip instead of Conversational Mode
  • Be more specific in your prompt

Issue: No actionable next steps

Likely cause: you stopped too early in the workflow.
Fix:
  • Ask follow-up questions: “What should I do with this?”
  • Run Generate to create a brief or report
  • Pair Discover with Diagnose for deeper insights

Issue: Insights don’t match expectations

Likely cause: your expectations may be based on outdated patterns or assumptions.
Fix:
  • Check time window (are results from the right period?)
  • Compare to benchmarks (are your expectations realistic?)
  • Run Diagnose on your own work to understand gaps

Issue: Too many results, hard to prioritize

Likely cause: the query was too broad or the performance threshold was too low.
Fix:
  • Narrow your query (add theme or competitor focus)
  • Filter by performance tier (focus on top 10-20%)
  • Run Forecast to prioritize emerging vs saturated patterns

Issue: Missing visual references

Likely cause: you didn’t explicitly request examples.
Fix:
  • Ask follow-up: “Show me examples of this pattern.”
  • Use Discover to surface visual examples
  • Save results with visual references for future use

Quick reference: anti-pattern checklist

Before finalizing your work, check:
  • My prompts are specific, not vague
  • I’ve provided sufficient context (genre, time, theme)
  • I’m using the right deliverable type (Report vs Brief)
  • I’m filtering trends through brand fit and saturation
  • I’m comparing to benchmarks, not analyzing in isolation
  • I’m looking for patterns across multiple examples
  • I’m respecting genre expectations (or knowingly breaking them)
  • My insights are recent (last 60-90 days)
  • I understand the “why,” not just the “what”
  • I’ve saved and documented valuable insights
If you can check all these boxes, you’re avoiding the most common Boa pitfalls.