How We Think
Models, frameworks, and feedback loops powering every Cresva recommendation. No black boxes: if an agent recommends something, you can trace why.
Principles
What We Believe
Four principles that shape every model, recommendation, and feature on the platform.
Truth Over Convenience
If a platform says 4.2x ROAS but the real number is 2.8x, we show 2.8x. Accurate data can make for uncomfortable reading, but it leads to better decisions.
Test Before You Spend
Every budget recommendation is backed by scenario modeling. We simulate outcomes before committing real dollars. No gut-feel allocations.
Feedback Loops, Not Snapshots
Static dashboards show what happened. Cresva's models learn from what happened to improve what happens next. Every outcome makes the system smarter.
Confidence Over Precision
A forecast of $180K-$220K with 90% confidence is more useful than a single number of $200K with no error bounds. We always show uncertainty.
Attribution De-Biasing
Finding the truth behind platform-reported numbers
The Problem
Every ad platform overclaims conversions. Meta, Google, and TikTok each take credit for the same purchase. Platform-reported ROAS is inflated by 15-30% on average, leading to misallocated budgets and inflated expectations.
Our Approach
Cresva applies a layered de-biasing approach. Rather than trusting any single platform's numbers, we cross-reference platform reports against incrementality test results, apply channel-specific correction factors, and continuously recalibrate as new data comes in.
Avg Platform Overclaim Detected
23%
Across Meta, Google, and TikTok combined
How It Works
Cross-Platform Deduplication
When Meta and Google both claim the same conversion, someone is wrong. Cresva identifies overlap using purchase timestamps, user paths, and last-meaningful-touch logic to assign credit once.
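As a rough illustration of the idea (not Cresva's production logic), the sketch below assigns a conversion to the last meaningful touch before purchase, so only one platform earns credit. The Touch structure and assign_credit function are hypothetical.

```python
# Illustrative sketch of single-credit assignment for a conversion claimed by
# multiple platforms. Field names and logic are hypothetical, not Cresva's schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Touch:
    platform: str          # e.g. "meta", "google", "tiktok"
    timestamp: datetime    # when the ad interaction happened
    meaningful: bool       # e.g. a click or deep video view, not a fleeting impression

def assign_credit(touches: list[Touch], purchase_time: datetime) -> str | None:
    """Give the conversion to the last meaningful touch before the purchase."""
    eligible = [t for t in touches if t.meaningful and t.timestamp <= purchase_time]
    if not eligible:
        return None  # no channel earns credit; treat as organic
    return max(eligible, key=lambda t: t.timestamp).platform

touches = [
    Touch("meta", datetime(2024, 5, 1, 9, 0), True),
    Touch("google", datetime(2024, 5, 1, 14, 30), True),
    Touch("tiktok", datetime(2024, 5, 1, 16, 0), False),
]
print(assign_credit(touches, datetime(2024, 5, 1, 18, 0)))  # -> "google"
```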
Correction Factor Modeling
Each channel gets a de-bias multiplier derived from holdout tests and geo-lift studies. Meta's correction factor differs from Google's, which differs from TikTok's. These factors update continuously.
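A minimal sketch of the idea, with invented numbers: the multiplier is the ratio of conversions a holdout test attributes to a channel versus what that channel's dashboard claims, and it scales reported ROAS down to an incremental figure.

```python
# Hypothetical illustration: deriving a per-channel de-bias multiplier from a
# holdout test. Numbers are invented; real factors come from live tests.
def correction_factor(incremental_conversions: float, platform_claimed: float) -> float:
    """Conversions the holdout test credits to the channel, divided by
    the conversions the platform dashboard claims."""
    return incremental_conversions / platform_claimed

def debiased_roas(platform_roas: float, factor: float) -> float:
    return platform_roas * factor

factor = correction_factor(incremental_conversions=550, platform_claimed=825)
print(round(factor, 2))                      # ~0.67: platform overclaims by ~33%
print(round(debiased_roas(4.2, factor), 2))  # reported 4.2x -> ~2.8x incremental
```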
Incrementality Calibration
Correction factors are regularly validated against real incrementality tests. When a brand runs a holdout test, the results feed back into the model, tightening accuracy over time.
True ROAS Output
The result is a de-biased, incremental ROAS per channel that reflects actual causal impact. This is the number that should drive budget decisions, not what the platform dashboard shows.
Predictive Forecasting
From historical patterns to forward-looking confidence
The Problem
Most ecommerce brands forecast with spreadsheets, gut feeling, or simple trendlines. This produces 60-70% accuracy at best. Wrong forecasts lead to overspending in bad months and underspending in good ones.
Our Approach
Cresva's forecasting engine combines time-series decomposition, cross-brand pattern matching, and real-time signal processing. It learns from every brand on the platform, recognizing patterns no single-brand model could detect.
Forecast Accuracy (90 days)
91%
vs. 60-70% industry average with manual methods
How It Works
Time-Series Decomposition
Revenue and performance metrics are broken into trend, seasonality, and residual components. This isolates underlying growth from cyclical patterns and random noise.
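To make the concept concrete, here is a minimal additive decomposition of a synthetic daily revenue series using statsmodels, assuming weekly seasonality. It illustrates the trend / seasonality / residual split, not Cresva's production model.

```python
# Minimal decomposition sketch on synthetic data; assumes a daily revenue
# series with weekly seasonality. Illustrative only.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(0)
days = pd.date_range("2024-01-01", periods=120, freq="D")
trend = np.linspace(10_000, 14_000, 120)                      # underlying growth
seasonality = 1_500 * np.sin(2 * np.pi * np.arange(120) / 7)  # weekly cycle
noise = rng.normal(0, 400, 120)                               # random variation
revenue = pd.Series(trend + seasonality + noise, index=days)

parts = seasonal_decompose(revenue, model="additive", period=7)
print(parts.trend.dropna().tail(3))     # isolated growth component
print(parts.seasonal.head(7).round(0))  # repeating weekly pattern
```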
Cross-Brand Pattern Matching
Patterns discovered across brands in similar verticals (e.g., fashion brands tend to follow similar Q4 curves) are transferred to improve predictions for newer brands with less historical data.
Signal Integration
Real-time signals like creative fatigue velocity, CPM trends, competitive intensity, and macro-economic indicators are layered in to adjust forecasts as conditions change.
Confidence Banding
Every forecast comes with confidence intervals. Narrow bands mean high certainty. Wide bands mean more variables are in play. Budget decisions should account for the range, not just the midpoint.
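One simple way to produce such a band, shown purely as an illustration with invented figures, is to widen a point forecast by the spread of past forecast errors, assuming those errors are roughly normal.

```python
# Illustrative only: turning a point forecast into a confidence band using
# the spread of past forecast errors (assumes roughly normal errors).
import statistics

def confidence_band(point_forecast: float, past_errors: list[float],
                    z: float = 1.645) -> tuple[float, float]:
    """~90% band: point forecast +/- z * standard deviation of past errors."""
    spread = z * statistics.stdev(past_errors)
    return point_forecast - spread, point_forecast + spread

errors = [-14_000, 9_500, -6_200, 11_800, -3_400, 7_900, -12_100, 5_600]
low, high = confidence_band(200_000, errors)
print(f"${low:,.0f} - ${high:,.0f}")  # a range like $183K-$217K, not a single number
```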
Creative Intelligence
Detecting what works, what's dying, and what to test next
The Problem
Most brands detect creative fatigue 4-7 days too late, wasting budget on ads that have already peaked. A/B testing is too slow for modern creative volume. Without a system, creative decisions are subjective.
Our Approach
Cresva models creative performance as a lifecycle curve. Every creative has an introduction phase, a peak phase, and a decay phase. By modeling the curve shape early, Olivia predicts fatigue before performance drops.
Avg Early Fatigue Detection
4.2 days
Before visible performance decline
How It Works
Performance Curve Modeling
Each creative's metrics (CTR, CPA, hook rate, hold rate) are plotted over time. The curve shape is compared against historical patterns to identify which lifecycle phase the creative is in.
Fatigue Prediction
Early indicators of fatigue (rising frequency paired with declining engagement) trigger alerts 3-5 days before performance visibly drops. That window is the difference between scaling a winner and wasting budget on a fading ad.
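A hypothetical version of that early-warning rule, with illustrative thresholds rather than Cresva's tuned values, might look like this:

```python
# Hypothetical early-warning rule: flag a creative when frequency keeps rising
# while engagement keeps falling. Window and data are illustrative.
def fatigue_warning(frequency: list[float], ctr: list[float], window: int = 3) -> bool:
    """True if frequency rose and CTR fell on each of the last `window` days."""
    if len(frequency) < window + 1 or len(ctr) < window + 1:
        return False
    freq_rising = all(frequency[-i] > frequency[-i - 1] for i in range(1, window + 1))
    ctr_falling = all(ctr[-i] < ctr[-i - 1] for i in range(1, window + 1))
    return freq_rising and ctr_falling

frequency = [1.8, 1.9, 2.1, 2.4, 2.8]
ctr = [0.031, 0.030, 0.028, 0.026, 0.023]
print(fatigue_warning(frequency, ctr))  # True: flagged before CPA visibly worsens
```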
Winner Identification
Multi-armed bandit logic shifts traffic toward better-performing creatives while still exploring new options. This finds winners 3x faster than traditional A/B testing by optimizing during the test.
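Thompson sampling is one common way to implement multi-armed bandit logic. The sketch below uses it with invented conversion counts to show how traffic drifts toward the stronger creative while weaker ones still get occasional exploration; it is not Cresva's exact allocator.

```python
# Thompson sampling sketch: sample a plausible conversion rate per creative
# and serve the highest draw. Counts are invented; illustrative only.
import random

creatives = {                     # (conversions, impressions) observed so far
    "hook_A": (48, 2_100),
    "hook_B": (61, 2_050),
    "hook_C": (12, 1_900),
}

def pick_creative(stats: dict[str, tuple[int, int]]) -> str:
    draws = {
        name: random.betavariate(conv + 1, imp - conv + 1)
        for name, (conv, imp) in stats.items()
    }
    return max(draws, key=draws.get)

served = [pick_creative(creatives) for _ in range(1_000)]
print({name: served.count(name) for name in creatives})  # traffic tilts toward hook_B
```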
Element-Level Insights
Beyond whole-creative analysis, Olivia identifies which elements drive performance: hooks, messaging angles, visual styles, CTAs. These insights feed the next round of creative production.
Budget Allocation
Distributing spend where incremental return is highest
The Problem
Most brands allocate budgets quarterly using last quarter's performance. This ignores that optimal allocation changes weekly based on creative health, audience saturation, competitive pressure, and seasonality.
Our Approach
Cresva models each channel's response curve (the relationship between spend and incremental return) and runs thousands of allocation scenarios to find the optimal distribution for each budget level.
Scenarios Modeled per Decision
1,000+
Before any budget recommendation is made
How It Works
Response Curve Estimation
Each channel's spend-to-return relationship is modeled as a curve with diminishing returns. The shape reveals where each channel is efficient vs. where additional spend produces declining marginal return.
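As a concrete illustration, fitting a simple saturating curve (one common choice, not necessarily the form Cresva uses) to invented spend-and-revenue observations makes the marginal return at any spend level easy to read off.

```python
# Sketch of fitting a diminishing-returns curve to observed (spend, revenue)
# points. Curve form and data points are invented.
import numpy as np
from scipy.optimize import curve_fit

def response(spend, a, b):
    """Concave spend-to-revenue curve: returns flatten as spend grows."""
    return a * np.log1p(spend / b)

spend = np.array([5_000, 10_000, 20_000, 40_000, 80_000], dtype=float)
revenue = np.array([21_000, 35_000, 52_000, 68_000, 82_000], dtype=float)

(a, b), _ = curve_fit(response, spend, revenue, p0=[50_000, 10_000])
marginal = (response(41_000, a, b) - response(40_000, a, b)) / 1_000
print(round(marginal, 2))  # projected revenue per extra dollar at a $40K spend level
```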
Scenario Simulation
Sam tests 1,000+ budget allocation scenarios: what happens if you shift 20% from Meta to Google? What if you increase TikTok by 30%? Each scenario produces a projected outcome with confidence bands.
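A toy version of that search, assuming each channel's response curve has already been fitted, simply enumerates budget splits and keeps the best projection. The curves and the $100K budget below are invented.

```python
# Toy scenario search over budget splits using hypothetical fitted
# diminishing-returns curves per channel. Illustrative only.
import numpy as np

curves = {
    "meta":   lambda s: 60_000 * np.log1p(s / 15_000),
    "google": lambda s: 45_000 * np.log1p(s / 10_000),
    "tiktok": lambda s: 25_000 * np.log1p(s / 8_000),
}

budget = 100_000
best = None
for meta_share in np.arange(0.1, 0.9, 0.05):
    for google_share in np.arange(0.1, 0.9 - meta_share, 0.05):
        tiktok_share = 1 - meta_share - google_share
        split = {"meta": meta_share, "google": google_share, "tiktok": tiktok_share}
        projected = sum(curves[ch](budget * share) for ch, share in split.items())
        if best is None or projected > best[0]:
            best = (projected, split)

print(round(best[0]), {ch: round(sh, 2) for ch, sh in best[1].items()})
```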
Constraint Integration
Real-world constraints matter. Minimum spend thresholds, platform learning phases, creative inventory, and business goals (growth vs. efficiency) are all factored into recommendations.
Dynamic Rebalancing
Recommendations update weekly as new performance data arrives. Static quarterly plans are replaced with living allocations that adapt to what's actually happening.
Compound Learning
The model that gets smarter with every decision
The Problem
Traditional analytics tools give you the same quality of insight on day 365 as day 1. They're static. The data changes, but the intelligence doesn't improve.
Our Approach
Every decision on Cresva creates a feedback loop. Attribution corrections feed into forecasting. Forecasting accuracy feeds into budget allocation. Creative insights feed into everything. The entire system learns from outcomes, not just inputs.
Accuracy Improvement
Month over month
Models continuously improve with every decision and outcome
How It Works
Decision Logging
Every recommendation, user action, and outcome is logged as a decision record. Did the budget shift improve ROAS? Did the creative flagged as fatigued actually decline? These outcomes are ground truth.
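A hypothetical shape for such a record, with illustrative field names rather than Cresva's schema:

```python
# Hypothetical decision-record structure; fields and values are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionRecord:
    made_on: date
    agent: str                # e.g. "Sam"
    recommendation: str       # e.g. "shift 15% of Meta budget to Google"
    predicted_outcome: float  # e.g. blended ROAS expected after the change
    actual_outcome: float | None = None  # filled in once results arrive

record = DecisionRecord(date(2024, 6, 3), "Sam",
                        "shift 15% of Meta budget to Google", 3.1)
record.actual_outcome = 3.0   # ground truth logged later
print(record.actual_outcome - record.predicted_outcome)  # the error the model learns from
```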
Outcome Matching
Predictions are compared against actual results. Where the model was wrong, the error pattern is analyzed. Systematic errors (e.g., consistently overestimating Meta ROAS) are corrected. Random errors are absorbed.
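One simple way to separate the two kinds of error, shown here with invented numbers, is to look at the mean signed error: a clearly non-zero mean indicates a directional bias to correct, while a near-zero mean is treated as noise.

```python
# Illustrative bias check on invented predictions vs. actuals.
import statistics

def systematic_bias(predicted: list[float], actual: list[float],
                    threshold: float = 0.1) -> float | None:
    errors = [p - a for p, a in zip(predicted, actual)]
    bias = statistics.mean(errors)
    return bias if abs(bias) > threshold else None  # None -> treated as random error

predicted_meta_roas = [3.4, 3.6, 3.1, 3.8, 3.5]
actual_meta_roas = [3.0, 3.2, 2.9, 3.3, 3.1]
print(systematic_bias(predicted_meta_roas, actual_meta_roas))  # ~+0.38: consistent overestimate
```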
Cross-Agent Learning
Insights from one agent improve others. Parker's attribution corrections make Felix's forecasts more accurate. Olivia's creative fatigue signals make Sam's budget allocations more timely. The agents don't work in silos.
Compounding Accuracy
The result is a system where month 6 is dramatically more accurate than month 1. Every week of data, every decision outcome, and every correction makes the entire platform smarter.
The Pipeline
From Raw Data to Actionable Intelligence
How data flows through the system, with each agent adding a layer of intelligence.
Data Ingestion
Meta, Google, TikTok, Shopify, GA4. All platforms unified into a single data layer by Dana.
Attribution De-Biasing
Platform overclaims removed. Cross-channel deduplication. True incremental ROAS calculated by Parker.
Forecasting
De-biased data feeds into Felix's forecasting engine. Predictions with confidence intervals.
Creative Analysis
Olivia models creative lifecycle curves, detects fatigue, and identifies element-level patterns.
Budget Optimization
Sam runs 1,000+ scenarios using de-biased attribution and forecasts to find optimal allocation.
Compound Learning
Outcomes feed back into every model. Corrections compound. The system improves with each cycle.
Transparency
What We Can and Can't Do
Honest about our capabilities. No marketing hype, no inflated claims.
What Our Models Do Well
Honest Limitations
See the Methodology in Action
Read about how it works, or watch the agents apply it to your data. Your call.