How We Think

Models, frameworks, and feedback loops powering every Cresva recommendation. No black boxes: if an agent recommends something, you can trace why.

Principles

What We Believe

Four principles that shape every model, recommendation, and feature on the platform.

Truth Over Convenience

If a platform says 4.2x ROAS but the real number is 2.8x, we show 2.8x. Accurate data can make for uncomfortable reading, but it leads to better decisions.

Test Before You Spend

Every budget recommendation is backed by scenario modeling. We simulate outcomes before committing real dollars. No gut-feel allocations.

Feedback Loops, Not Snapshots

Static dashboards show what happened. Cresva's models learn from what happened to improve what happens next. Every outcome makes the system smarter.

Confidence Over Precision

A forecast of $180K-$220K with 90% confidence is more useful than a single number of $200K with no error bounds. We always show uncertainty.

Pillar 1: Powered by Parker

Attribution De-Biasing

Finding the truth behind platform-reported numbers

The Problem

Every ad platform overclaims conversions. Meta, Google, and TikTok each take credit for the same purchase. Platform-reported ROAS is inflated by 15-30% on average, leading to misallocated budgets and inflated expectations.

Our Approach

Cresva applies a layered de-biasing approach. Rather than trusting any single platform's numbers, we cross-reference platform reports against incrementality test results, apply channel-specific correction factors, and continuously recalibrate as new data comes in.

Avg Platform Overclaim Detected

23%

Across Meta, Google, and TikTok combined

How It Works

1

Cross-Platform Deduplication

When Meta and Google both claim the same conversion, someone is wrong. Cresva identifies overlap using purchase timestamps, user paths, and last-meaningful-touch logic to assign credit once.
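The last-meaningful-touch rule described above can be sketched in a few lines. This is a minimal illustration, not Cresva's actual schema: it assumes each platform's claim has already been reduced to a (platform, touch_time) pair matched to one purchase.

```python
from datetime import datetime

def assign_credit(purchase_time, claims):
    """claims: list of (platform, touch_time) pairs for one purchase.
    Credit goes to the last touch that occurred before the purchase;
    touches after the purchase are ineligible."""
    eligible = [(p, t) for p, t in claims if t <= purchase_time]
    if not eligible:
        return None
    return max(eligible, key=lambda pt: pt[1])[0]

purchase = datetime(2024, 11, 5, 14, 0)
claims = [
    ("meta",   datetime(2024, 11, 5, 13, 30)),
    ("google", datetime(2024, 11, 5, 9, 15)),
    ("tiktok", datetime(2024, 11, 5, 14, 30)),  # after purchase: ineligible
]
winner = assign_credit(purchase, claims)  # Meta's 13:30 touch wins
```

In practice the hard part is the matching itself: joining purchase timestamps and user paths across platforms so all three claims resolve to the same order before credit is assigned once.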

2

Correction Factor Modeling

Each channel gets a de-bias multiplier derived from holdout tests and geo-lift studies. Meta's correction factor differs from Google's, which differs from TikTok's. These factors update continuously.
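Once a correction factor exists, applying it is a multiplication. The factors below are made-up illustrations; real ones come from a brand's own holdout tests and geo-lift studies, and they drift as conditions change.

```python
def true_roas(reported_roas, correction_factor):
    """De-biased ROAS = platform-reported ROAS scaled by a channel-specific
    correction factor (a factor below 1.0 means the platform overclaims)."""
    return reported_roas * correction_factor

# Hypothetical factors and reported numbers, for illustration only
factors = {"meta": 0.67, "google": 0.80, "tiktok": 0.62}
reported = {"meta": 4.2, "google": 3.5, "tiktok": 2.9}
debiased = {ch: round(true_roas(reported[ch], factors[ch]), 2) for ch in reported}
# With these example factors, Meta's reported 4.2x becomes roughly 2.8x
```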

3

Incrementality Calibration

Correction factors are regularly validated against real incrementality tests. When a brand runs a holdout test, the results feed back into the model, tightening accuracy over time.

4

True ROAS Output

The result is a de-biased, incremental ROAS per channel that reflects actual causal impact. This is the number that should drive budget decisions, not what the platform dashboard shows.

Read the full guide
Pillar 2: Powered by Felix

Predictive Forecasting

From historical patterns to forward-looking confidence

The Problem

Most ecommerce brands forecast with spreadsheets, gut feeling, or simple trendlines. This produces 60-70% accuracy at best. Wrong forecasts lead to overspending in bad months and underspending in good ones.

Our Approach

Cresva's forecasting engine combines time-series decomposition, cross-brand pattern matching, and real-time signal processing. It learns from every brand on the platform, recognizing patterns no single-brand model could detect.

Forecast Accuracy (90 days)

91%

vs. 60-70% industry average with manual methods

How It Works

1

Time-Series Decomposition

Revenue and performance metrics are broken into trend, seasonality, and residual components. This isolates underlying growth from cyclical patterns and random noise.
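The splitting step can be sketched with a centered moving average, one common way to implement an additive trend/seasonal/residual decomposition. The real engine is more involved; this assumes an odd period and an additive model, and the data is synthetic.

```python
def decompose(series, period=7):
    """Additive decomposition: series = trend + seasonal + residual.
    Trend is a centered moving average (period must be odd here);
    seasonal is the mean detrended value at each phase of the cycle."""
    n, half = len(series), period // 2
    trend = [sum(series[i - half:i + half + 1]) / period
             if half <= i < n - half else None for i in range(n)]
    idx = [i for i in range(n) if trend[i] is not None]
    detrended = {i: series[i] - trend[i] for i in idx}
    seasonal = {}
    for ph in range(period):
        vals = [detrended[i] for i in idx if i % period == ph]
        seasonal[ph] = sum(vals) / len(vals) if vals else 0.0
    residual = [detrended[i] - seasonal[i % period] for i in idx]
    return trend, seasonal, residual

# Synthetic daily revenue: steady growth plus a recurring day-0 spike
revenue = [100 + 2 * i + (30 if i % 7 == 0 else 0) for i in range(28)]
trend, seasonal, residual = decompose(revenue, period=7)
# trend captures the growth; seasonal[0] captures the weekly spike
```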

2

Cross-Brand Pattern Matching

Patterns discovered across brands in similar verticals (e.g., fashion brands all see the same Q4 curve) are transferred to improve predictions for newer brands with less historical data.

3

Signal Integration

Real-time signals like creative fatigue velocity, CPM trends, competitive intensity, and macroeconomic indicators are layered in to adjust forecasts as conditions change.

4

Confidence Banding

Every forecast comes with confidence intervals. Narrow bands mean high certainty. Wide bands mean more variables are in play. Budget decisions should account for the range, not just the midpoint.
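One common way to produce such a band is to simulate many outcomes and take the central interval. The sketch below uses Monte Carlo draws from an illustrative distribution; the actual forecast posterior and its parameters are not shown here.

```python
import random

def confidence_band(draws, level=0.90):
    """Central interval covering `level` of the simulated outcomes."""
    s = sorted(draws)
    tail = int(len(s) * (1 - level) / 2)
    return s[tail], s[-tail - 1]

random.seed(0)
# 10,000 simulated next-quarter revenue outcomes around a $200K midpoint
draws = [random.gauss(200_000, 12_000) for _ in range(10_000)]
lo, hi = confidence_band(draws)
# lo/hi land near the $180K-$220K band quoted in the principles above
```

A narrow band falls out naturally when the draws cluster; a wide band signals that more variables are in play.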

Read the full guide
Pillar 3: Powered by Olivia

Creative Intelligence

Detecting what works, what's dying, and what to test next

The Problem

Most brands detect creative fatigue 4-7 days too late, wasting budget on ads that have already peaked. A/B testing is too slow for modern creative volume. Without a system, creative decisions are subjective.

Our Approach

Cresva models creative performance as a lifecycle curve. Every creative has an introduction phase, a peak phase, and a decay phase. By modeling the curve shape early, Olivia predicts fatigue before performance drops.

Avg Early Fatigue Detection

4.2 days

Before visible performance decline

How It Works

1

Performance Curve Modeling

Each creative's metrics (CTR, CPA, hook rate, hold rate) are plotted over time. The curve shape is compared against historical patterns to identify which lifecycle phase the creative is in.

2

Fatigue Prediction

Early indicators of fatigue (rising frequency with declining engagement) trigger alerts 3-5 days before performance visibly drops. This window is the difference between scaling and wasting.
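The "rising frequency, falling engagement" signature can be reduced to a simple early-warning rule. This is a deliberately minimal sketch with illustrative numbers; the production model compares full curve shapes, not two endpoints.

```python
def fatigue_alert(frequency, ctr, window=3):
    """Early-warning rule: flag when average ad frequency has risen while
    CTR has fallen over the trailing `window` days."""
    if len(frequency) <= window or len(ctr) <= window:
        return False
    return frequency[-1] > frequency[-1 - window] and ctr[-1] < ctr[-1 - window]

# Frequency climbing while CTR slides: the classic pre-fatigue signature
tiring = fatigue_alert([1.2, 1.4, 1.7, 2.1], [2.1, 2.0, 1.8, 1.5])
# Stable frequency and CTR: no alert
healthy = fatigue_alert([1.2, 1.2, 1.3, 1.3], [2.1, 2.2, 2.1, 2.2])
```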

3

Winner Identification

Multi-armed bandit logic shifts traffic toward better-performing creatives while still exploring new options. This finds winners 3x faster than traditional A/B testing by optimizing during the test.
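Multi-armed bandit traffic shifting can be sketched with Thompson sampling, a standard approach: sample each creative's Beta posterior over CTR and serve the highest draw. The creative IDs and click counts below are hypothetical.

```python
import random

def thompson_pick(stats):
    """stats: {creative_id: (clicks, impressions)}. Sample each creative's
    Beta posterior over CTR and serve the highest draw, which shifts
    traffic toward winners while still exploring uncertain options."""
    draws = {cid: random.betavariate(c + 1, n - c + 1)
             for cid, (c, n) in stats.items()}
    return max(draws, key=draws.get)

random.seed(42)
stats = {"hook_a": (50, 1000), "hook_b": (90, 1000)}
picks = [thompson_pick(stats) for _ in range(1000)]
# hook_b's stronger posterior wins most of the traffic, but hook_a
# still gets occasional exploration while its posterior is uncertain
```

Because allocation happens during the test rather than after it, winners absorb traffic long before a fixed-horizon A/B test would conclude.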

4

Element-Level Insights

Beyond whole-creative analysis, Olivia identifies which elements drive performance: hooks, messaging angles, visual styles, CTAs. These insights feed the next round of creative production.

Read the full guide
Pillar 4: Powered by Sam

Budget Allocation

Distributing spend where incremental return is highest

The Problem

Most brands allocate budgets quarterly using last quarter's performance. This ignores that optimal allocation changes weekly based on creative health, audience saturation, competitive pressure, and seasonality.

Our Approach

Cresva models each channel's response curve (the relationship between spend and incremental return) and runs thousands of allocation scenarios to find the optimal distribution for each budget level.

Scenarios Modeled per Decision

1,000+

Before any budget recommendation is made

How It Works

1

Response Curve Estimation

Each channel's spend-to-return relationship is modeled as a curve with diminishing returns. The shape reveals where each channel is efficient vs. where additional spend produces declining marginal return.
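A diminishing-returns curve can be sketched with a simple saturating form. The exponential shape and the (cap, k) parameters here are illustrative stand-ins, not the fitted curves Sam actually uses.

```python
import math

def channel_return(spend, cap, k):
    """Saturating response curve: incremental return approaches `cap`
    as spend grows, so each extra dollar earns less than the last."""
    return cap * (1 - math.exp(-spend / k))

# Marginal return of the second $10K is smaller than the first $10K
first = channel_return(10_000, cap=120_000, k=30_000)
second = channel_return(20_000, cap=120_000, k=30_000) - first
```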

2

Scenario Simulation

Sam tests 1,000+ budget allocation scenarios: what happens if you shift 20% from Meta to Google? What if you increase TikTok by 30%? Each scenario produces a projected outcome with confidence bands.
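Scenario simulation at its simplest is a search over candidate splits. The two-channel brute-force sketch below uses hypothetical fitted curves and omits confidence bands and constraints; it only shows why enumerating scenarios beats guessing a split.

```python
import math

def channel_return(spend, cap, k):
    """Illustrative saturating spend-to-return curve."""
    return cap * (1 - math.exp(-spend / k))

def best_split(total, curves, step=1_000):
    """Brute-force scenario search over a two-channel budget: try every
    split in `step` increments and keep the highest projected return."""
    best, best_return = None, float("-inf")
    for a in range(0, total + 1, step):
        b = total - a
        r = (channel_return(a, *curves["meta"]) +
             channel_return(b, *curves["google"]))
        if r > best_return:
            best, best_return = (a, b), r
    return best

# Hypothetical fitted (cap, k) parameters per channel
curves = {"meta": (120_000, 30_000), "google": (80_000, 15_000)}
meta_spend, google_spend = best_split(60_000, curves)
```

With more channels the grid explodes, which is why real allocators sample scenarios or optimize the curves directly rather than enumerating every combination.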

3

Constraint Integration

Real-world constraints matter. Minimum spend thresholds, platform learning phases, creative inventory, and business goals (growth vs. efficiency) are all factored into recommendations.

4

Dynamic Rebalancing

Recommendations update weekly as new performance data arrives. Static quarterly plans are replaced with living allocations that adapt to what's actually happening.

Read the full guide
Pillar 5

Compound Learning

The model that gets smarter with every decision

The Problem

Traditional analytics tools give you the same quality of insight on day 365 as on day 1. They're static. The data changes, but the intelligence doesn't improve.

Our Approach

Every decision on Cresva creates a feedback loop. Attribution corrections feed into forecasting. Forecasting accuracy feeds into budget allocation. Creative insights feed into everything. The entire system learns from outcomes, not just inputs.

Accuracy Improvement

Month over month

Models continuously improve with every decision and outcome

How It Works

1

Decision Logging

Every recommendation, user action, and outcome is logged as a decision record. Did the budget shift improve ROAS? Did the creative flagged as fatigued actually decline? These outcomes are ground truth.

2

Outcome Matching

Predictions are compared against actual results. Where the model was wrong, the error pattern is analyzed. Systematic errors (always overestimating Meta ROAS) are corrected. Random errors are absorbed.
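Correcting a systematic error can be sketched as measuring the mean signed error over matched prediction/outcome pairs and subtracting it from the next forecast. The numbers are illustrative; the real pipeline also has to separate systematic drift from random noise.

```python
def systematic_bias(predicted, actual):
    """Mean signed error over matched prediction/outcome pairs.
    A consistently positive value means the model overestimates."""
    errors = [p - a for p, a in zip(predicted, actual)]
    return sum(errors) / len(errors)

# Forecast ROAS vs. what actually happened: every prediction ran high
predicted = [3.0, 2.8, 3.4, 3.1]
actual = [2.6, 2.4, 3.0, 2.7]
bias = systematic_bias(predicted, actual)
# The next forecast gets nudged down by the learned bias
corrected = 3.2 - bias
```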

3

Cross-Agent Learning

Insights from one agent improve others. Parker's attribution corrections make Felix's forecasts more accurate. Olivia's creative fatigue signals make Sam's budget allocations more timely. The agents don't work in silos.

4

Compounding Accuracy

The result is a system where month 6 is dramatically more accurate than month 1. Every week of data, every decision outcome, and every correction makes the entire platform smarter.

Read the full guide

The Pipeline

From Raw Data to Actionable Intelligence

How data flows through the system, with each agent adding a layer of intelligence.

Stage 1

Data Ingestion

Meta, Google, TikTok, Shopify, GA4. All platforms unified into a single data layer by Dana.

Stage 2

Attribution De-Biasing

Platform overclaims removed. Cross-channel deduplication. True incremental ROAS calculated by Parker.

Stage 3

Forecasting

De-biased data feeds into Felix's forecasting engine. Predictions with confidence intervals.

Stage 4

Creative Analysis

Olivia models creative lifecycle curves, detects fatigue, and identifies element-level patterns.

Stage 5

Budget Optimization

Sam runs 1,000+ scenarios using de-biased attribution and forecasts to find optimal allocation.

Stage 6

Compound Learning

Outcomes feed back into every model. Corrections compound. The system improves with each cycle.

Transparency

What We Can and Can't Do

Honest about our capabilities. No marketing hype, no inflated claims.

What Our Models Do Well

Detect platform overclaiming with high accuracy
Forecast revenue within tight confidence bands after 90 days
Identify creative fatigue 3-5 days before visible decline
Model diminishing returns curves per channel
Improve continuously through compound learning
Surface cross-brand patterns invisible to single-brand analysis

Honest Limitations

Forecasting accuracy is lower in the first 30 days with limited data
Attribution de-biasing improves with incrementality test data, which takes time
Models work best with consistent ad spend; sporadic campaigns reduce accuracy
Creative analysis requires sufficient impression volume per creative
Cross-brand patterns are most useful within similar verticals
AI recommendations are inputs to human decisions, not replacements for judgment

See the Methodology in Action

Read about how it works, or watch the agents apply it to your data. Your call.