72 terms across 7 categories
Every metric, model, and method in ecommerce advertising. Plain language, real benchmarks.
A controlled experiment comparing two ad variations (A vs B) with equal traffic split to determine which performs better on a specific metric. Requires statistical significance before declaring a winner, which typically means enough conversions to be confident the difference isn't random. Simple and reliable, but slow. At typical ecommerce conversion rates, an A/B test needs roughly 1,000-5,000 conversions per variant to reach significance. For brands testing 20+ creatives monthly, sequential A/B testing is too slow.
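A minimal sketch of the significance check behind an A/B readout, assuming Python and made-up conversion counts; this is a generic two-proportion z-test, not any platform's built-in calculator:

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# Hypothetical numbers: variant B lifts conversion rate from 2.0% to 2.4%
lift, p = two_proportion_z_test(conv_a=200, n_a=10_000, conv_b=240, n_b=10_000)
print(f"lift: {lift:.2%}, p-value: {p:.3f}")  # declare a winner only if p < 0.05
```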
Ad spend divided by ad-attributed revenue, expressed as a percentage. The inverse of ROAS. A 25% ACOS means you spend $0.25 in ads for every $1 in revenue (equivalent to 4x ROAS). Commonly used in Amazon advertising but applicable across channels. ACOS below your contribution margin means advertising is profitable; above it means you're losing money on ad-driven sales. Target ACOS should be set relative to margin, not arbitrary benchmarks.
The average number of times each person in your target audience has seen your ad. Calculated as impressions divided by reach. Frequency above 3-4x per week on Meta typically triggers creative fatigue for prospecting audiences. Retargeting can tolerate higher frequency (5-8x) because the audience already has purchase intent. Monitoring frequency at the ad set level is critical - account-level frequency averages hide individual audience oversaturation.
Automatically identifying statistically unexpected changes in marketing metrics: sudden CPA spikes, unusual CTR drops, spend anomalies, or conversion volume changes that deviate from expected patterns. Catches problems hours or days before manual review would. Uses statistical models to establish 'normal' ranges for each metric and flags deviations beyond a confidence threshold. Critical for brands managing high daily spend where a day of undetected problems can waste thousands.
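A minimal sketch of the rolling-z-score approach, assuming Python/pandas and synthetic daily CPA data; production systems use more robust, seasonality-aware baselines, but the idea is the same:

```python
import numpy as np
import pandas as pd

def flag_anomalies(series: pd.Series, window: int = 28, threshold: float = 3.0) -> pd.Series:
    """Flag days that sit more than `threshold` standard deviations away from
    the trailing `window`-day mean (both computed excluding today)."""
    rolling = series.rolling(window, min_periods=window)
    z = (series - rolling.mean().shift(1)) / rolling.std().shift(1)
    return z.abs() > threshold

# Hypothetical daily CPA with a spike injected on the last day
rng = np.random.default_rng(0)
cpa = pd.Series(rng.normal(40, 3, 90), index=pd.date_range("2024-01-01", periods=90))
cpa.iloc[-1] = 65
print(cpa[flag_anomalies(cpa)])  # the $65 day is flagged
```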
The average dollar amount spent per transaction. Calculated as total revenue divided by total orders. Higher AOV means you can afford higher CPA while maintaining profitability. AOV varies by channel (Google Shopping often has higher AOV than Meta because of purchase intent), by creative (product bundles drive higher AOV than single-product ads), and by audience (returning customers typically have 20-30% higher AOV than first-time buyers).
The time period after an ad interaction (click or view) during which a conversion is credited to that ad. Meta defaults to 7-day click, 1-day view. Google Ads defaults vary by campaign type. Longer windows capture more conversions but also capture more that would have happened anyway. Shortening attribution windows is one of the simplest ways to reduce overclaiming and get closer to true incremental performance. Testing different windows and comparing results reveals how much of your reported performance depends on generous window settings.
Total revenue divided by total ad spend across all channels. A top-level health metric that shows overall advertising efficiency but hides which channels are driving incremental value and which are free-riding. A brand with a 4x blended ROAS might have Meta at 6x and Google Display at 1.2x, but blended ROAS won't tell you that. Useful as a directional metric for month-over-month trending, but should never be used for channel-level budget allocation decisions.
The process of distributing ad spend across channels, campaigns, and audiences based on expected incremental return. Should be dynamic and updated weekly based on current performance data, not fixed in quarterly planning cycles. Optimal allocation requires understanding the diminishing returns curve for each channel and finding the spend level where marginal CPA across all channels is equalized. A well-optimized allocation typically improves blended ROAS by 15-25% without increasing total spend.
The rate at which your daily or monthly budget is being spent relative to plan. Underpacing means you're leaving potential revenue on the table. Overpacing means you'll run out of budget before the period ends, missing late-period opportunities. Platform algorithms handle daily pacing, but monthly and quarterly pacing requires manual oversight. Budget pacing should account for day-of-week and time-of-month performance patterns.
Total cost to acquire one new customer, including ad spend, creative production, agency fees, and marketing tools. Calculated as total marketing spend divided by new customers acquired. Lower CAC means more efficient growth. Must be compared against LTV to ensure profitability: if CAC exceeds LTV, you're losing money on every customer. Benchmarks vary wildly by vertical: fashion DTC brands often see $30-60 CAC while premium skincare can run $80-150.
The ratio of customer acquisition cost to lifetime value. A 3:1 ratio (LTV is 3x CAC) is the common benchmark for healthy unit economics. Below 3:1 suggests you're either overspending on acquisition or your product doesn't retain well. Above 5:1 suggests you're underinvesting in growth and could scale faster. The ratio should be calculated at the channel level to identify which acquisition channels produce the most valuable customers, not just the cheapest ones.
The percentage distribution of ad spend across advertising platforms. A typical DTC ecommerce channel mix might be 50% Meta, 25% Google, 15% TikTok, 10% other. The optimal mix varies by brand, product category, AOV, and customer demographics. The right channel mix changes over time as channels mature, costs shift, and audience behavior evolves. Brands that lock into a static channel mix leave money on the table.
Grouping customers by acquisition date (or other shared characteristic) and tracking their behavior over time. Reveals whether newer cohorts are more or less valuable than older ones. A declining cohort LTV curve means your targeting is getting less efficient or product-market fit is weakening. An improving curve means your acquisition strategy is finding better customers. Essential for accurate LTV calculation and for detecting problems before they show up in top-line metrics.
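A minimal cohort-LTV sketch, assuming Python/pandas and a tiny hypothetical order table; a real analysis runs the same grouping over full order history:

```python
import pandas as pd

# Hypothetical order-level data: one row per order
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "order_date": pd.to_datetime(
        ["2024-01-05", "2024-03-02", "2024-02-10", "2024-01-20", "2024-02-15", "2024-04-01"]),
    "revenue": [60, 45, 80, 50, 55, 70],
})

# Assign each customer to the month of their first purchase
first_order = orders.groupby("customer_id")["order_date"].transform("min")
orders["cohort"] = first_order.dt.to_period("M")
orders["months_since_acquisition"] = (
    orders["order_date"].dt.to_period("M") - orders["cohort"]).apply(lambda d: d.n)

# Cumulative revenue per customer by cohort age: the LTV curve for each cohort
cum_rev = (orders.groupby(["cohort", "months_since_acquisition"])["revenue"].sum()
           .groupby(level=0).cumsum())
cohort_size = orders.groupby("cohort")["customer_id"].nunique()
ltv_curve = cum_rev.div(cohort_size, level="cohort")
print(ltv_curve.unstack())  # rows: cohorts, columns: months since acquisition
```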
The architecture by which AI models improve continuously as they learn from every marketing decision and its outcome. Named after compound interest because the improvement rate accelerates: each cycle of prediction, observation, and model update makes the next prediction more accurate. A compound learning system at month 6 is dramatically better than at month 1 because it has observed thousands of decision-outcome pairs specific to your brand. This is Cresva's core differentiator - static AI degrades over time while compound learning systems improve.
A statistical range around a forecast that quantifies uncertainty. A 90% confidence interval means the actual result will fall within that range 90% of the time. Wider intervals signal higher uncertainty and should make you more cautious about committing budget. A forecast of '$500K revenue, 90% CI: $420K-$580K' is far more useful than '$500K revenue' alone because it tells you how much to trust the prediction. Narrower confidence intervals over time indicate the model is learning and improving.
Revenue minus variable costs (COGS, shipping, payment processing, returns) expressed as a percentage. The true margin available to cover fixed costs and marketing spend. A product with 70% contribution margin can afford much higher CAC than one at 30%. Performance marketers should optimize toward contribution-margin-adjusted ROAS rather than raw ROAS to ensure every dollar of ad spend is generating actual profit, not just revenue.
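A quick illustration, with hypothetical numbers, of why margin-adjusted ROAS rather than raw ROAS is the profitability signal:

```python
# Hypothetical numbers: margin-adjusted ROAS tells you if ad spend creates profit
ad_spend = 10_000
ad_revenue = 35_000          # 3.5x raw ROAS
contribution_margin = 0.40   # after COGS, shipping, payment fees, returns

raw_roas = ad_revenue / ad_spend
margin_adjusted_roas = (ad_revenue * contribution_margin) / ad_spend
print(raw_roas, margin_adjusted_roas)  # 3.5 vs 1.4 -> each $1 of ads returns $1.40 of contribution
```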
A platform-run experiment (available on Meta, Google, and TikTok) that measures incremental conversions by randomly splitting audiences into test and control groups at the platform level. More statistically rigorous than basic A/B tests because it uses platform-level randomization. However, still controlled by the platform, which creates potential conflicts of interest. Best used as a validation tool alongside independent incrementality measurement rather than as the sole source of truth.
The percentage of visitors or ad clickers who complete a desired action (purchase, signup, lead form). Calculated as conversions divided by clicks or sessions times 100. Ecommerce average is 2-3% but varies wildly by traffic source (3-5% for branded search, 0.5-1.5% for cold social traffic), device (desktop converts 2x higher than mobile for most categories), and price point (sub-$50 products convert 2-3x higher than $200+ products).
Meta's server-side tracking solution that sends conversion events directly from your server to Meta's ad system. Improves data accuracy, signal quality, and match rates compared to pixel-only tracking. Should run alongside the Meta Pixel in a redundant setup with deduplication to maximize event coverage. Properly implemented CAPI typically improves Meta's reported ROAS by 10-20% through better event matching and reduces CPA by improving the algorithm's optimization signal quality.
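A rough sketch of a server-side Purchase event, assuming Python's `requests` library and placeholder credentials; the payload shape follows Meta's published Conversions API format, but verify field names against the current documentation before relying on them:

```python
import hashlib
import time
import requests

PIXEL_ID = "YOUR_PIXEL_ID"          # placeholder
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

def hash_email(email: str) -> str:
    # Meta expects customer identifiers normalized, then SHA-256 hashed
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

event = {
    "event_name": "Purchase",
    "event_time": int(time.time()),
    "action_source": "website",
    "event_id": "order-12345",  # same ID sent by the browser pixel enables deduplication
    "user_data": {"em": [hash_email("customer@example.com")]},
    "custom_data": {"currency": "USD", "value": 129.00},
}

resp = requests.post(
    f"https://graph.facebook.com/v18.0/{PIXEL_ID}/events",
    json={"data": [event]},
    params={"access_token": ACCESS_TOKEN},
)
print(resp.status_code, resp.json())
```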
The average cost to generate one conversion, whether that's a purchase, signup, lead, or other defined action. Calculated as total ad spend divided by total conversions. The core efficiency metric for performance marketing. Important to distinguish between platform-reported CPA (based on attributed conversions, often inflated) and true incremental CPA (based on conversions your ads actually caused). A 'low' CPA that's based on overclaimed conversions is actually a mirage.
The cost per 1,000 ad impressions. A media cost metric that reflects how expensive it is to reach your target audience. Varies significantly by platform (TikTok generally cheapest, LinkedIn most expensive for B2B), audience targeting (broad is cheaper, narrow is pricier), seasonality (Q4 CPMs are 30-60% higher than Q1), and competitive intensity. Rising CPMs without rising conversion rates is a clear signal to either improve creative, adjust targeting, or reallocate budget.
The decline in ad performance that occurs when a target audience sees the same creative asset too many times. Detected through rising frequency paired with declining CTR, increasing CPA, and dropping ROAS. The fatigue curve varies by format: static images fatigue in 7-14 days, video ads in 14-21 days, and UGC-style content lasts 20-30% longer than polished studio content. Early detection is critical because performance degrades exponentially once fatigue sets in. Most brands detect fatigue 4-7 days too late.
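A minimal fatigue heuristic, assuming Python/pandas and daily ad-set stats; the week-over-week comparison and the 10% CTR-drop threshold are illustrative choices, not fixed rules:

```python
import pandas as pd

def fatigue_flag(daily: pd.DataFrame, window: int = 7) -> bool:
    """Heuristic: flag fatigue when trailing-week frequency is rising while
    trailing-week CTR has fallen more than 10% versus the prior week.
    `daily` needs columns: impressions, clicks, reach (one row per day).
    Summing daily reach approximates frequency; true reach dedupes across days."""
    recent, prior = daily.tail(window), daily.tail(2 * window).head(window)

    def ctr(d):
        return d["clicks"].sum() / d["impressions"].sum()

    def freq(d):
        return d["impressions"].sum() / d["reach"].sum()

    return freq(recent) > freq(prior) and ctr(recent) < 0.9 * ctr(prior)

# Usage: fatigue_flag(ad_set_daily_stats) -> True means rotate in fresh creative
```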
How frequently new creative assets are introduced to replace fatigued ones. Most DTC brands need to refresh primary creatives every 2-4 weeks and have a pipeline of 10-20 new assets per month to maintain performance. The required refresh rate increases with spend level: a brand spending $500K/month needs significantly more creative variety than one spending $50K/month because higher spend drives higher frequency against the same audiences.
Insights and model priors derived from analyzing anonymized, aggregated performance data across many brands. What works for one fashion brand often applies to others in the category. AI models trained on cross-brand data can make informed predictions for new brands from day one rather than starting from scratch. This creates a network effect where every brand on a platform contributes to the collective intelligence, and every brand benefits from it.
The percentage of people who click an ad after seeing it, calculated as clicks divided by impressions times 100. A proxy for creative relevance and audience targeting accuracy. Meta feed ads average 0.9-1.5% CTR, Google Search ads average 3-6%, and display averages 0.3-0.5%. Declining CTR at stable frequency suggests creative fatigue. Declining CTR at rising frequency confirms it. A high CTR with low conversion rate points to a landing page or offer problem, not an ad problem.
A secure environment where two or more parties (typically a brand and an ad platform or publisher) can match and analyze their combined datasets without either party seeing the other's raw data. Used for audience matching, measurement, and attribution in a privacy-safe way. Meta's Advanced Analytics, Google's Ads Data Hub, and Amazon Marketing Cloud are major clean room environments. Increasingly important as user-level tracking becomes restricted.
Combining data from multiple sources (Meta Ads, Google Ads, TikTok Ads, Shopify, GA4, email platforms) into a single, consistent dataset with standardized naming conventions, unified timestamps, and reconciled metrics. Eliminates the discrepancies that occur when each platform reports slightly different numbers. Without unification, comparing Meta ROAS to Google ROAS is comparing apples to oranges because each platform defines conversions, attribution, and revenue differently.
An algorithmic attribution model offered by platforms like Google that uses machine learning to assign conversion credit based on observed path patterns. More accurate than rules-based models but still limited to the platform's own data and biased toward the platform's channels. Google's DDA will naturally favor Google touchpoints. Best used as one input among many rather than as a single source of truth for cross-channel budget decisions.
The phenomenon where additional ad spend produces progressively less incremental revenue. Every channel has a saturation curve, and spending past the optimal point wastes budget. A channel producing $5 in revenue per $1 at $50K/month spend might only produce $2.50 per $1 at $150K/month. The optimal spend level sits at the inflection point of the S-curve where marginal returns start declining. Most brands overspend on their 'best' channel because they don't model diminishing returns.
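A minimal sketch of fitting a diminishing-returns curve, assuming Python/SciPy and hypothetical monthly spend/revenue pairs; the concave power form is one common choice among several (Hill and adstocked curves are alternatives):

```python
import numpy as np
from scipy.optimize import curve_fit

def response(spend, a, b):
    """Concave power curve: revenue = a * spend**b with b < 1 (diminishing returns)."""
    return a * spend ** b

# Hypothetical monthly observations of spend vs ad-attributed revenue
spend = np.array([20, 40, 60, 90, 120, 150], dtype=float) * 1_000
revenue = np.array([100, 170, 220, 280, 320, 350], dtype=float) * 1_000

(a, b), _ = curve_fit(response, spend, revenue, p0=[10, 0.8])
marginal_roas = a * b * spend ** (b - 1)   # d(revenue)/d(spend) at each observed level
print(b, marginal_roas)  # b < 1 confirms diminishing returns; watch where marginal ROAS nears 1.0
```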
Automated assembly of ad creatives from component parts (headlines, images, CTAs, body copy) by the ad platform's algorithm. Meta's Advantage+ Creative and Google's responsive ads are DCO systems. Useful for scaling variations without manual design work, but reduces creative control and makes it harder to learn what's actually working because the platform mixes components opaquely. Best for testing broad messaging directions before investing in full production of winning concepts.
Google's privacy-safe tracking solution that supplements existing conversion tags by sending hashed first-party customer data (email, phone, address) from your website to Google. Improves conversion measurement accuracy by matching conversions that would otherwise be lost due to cookie restrictions. Available for both Google Ads and GA4. Implementation requires passing hashed customer data at the point of conversion, either through gtag.js, Google Tag Manager, or the Google Ads API.
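A minimal sketch of the normalize-and-hash step, assuming Python; the `sha256_email_address` and `sha256_phone_number` key names follow the gtag convention for pre-hashed values, so check Google's current documentation when implementing:

```python
import hashlib

def normalize_and_hash(value: str) -> str:
    """Google expects identifiers trimmed, lowercased, and SHA-256 hashed (hex)."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical customer record captured at the point of conversion
hashed_user_data = {
    "sha256_email_address": normalize_and_hash("Customer@Example.com "),
    "sha256_phone_number": normalize_and_hash("+15551234567"),  # E.164 format before hashing
}
print(hashed_user_data)
```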
The process of selecting, transforming, and creating input variables (features) that help machine learning models make better predictions. In marketing AI, features include spend by channel, day of week, time since last creative refresh, audience saturation level, competitive CPM index, and hundreds more. The quality of features matters more than the complexity of the model. Good feature engineering is why specialized marketing AI outperforms general-purpose tools.
An attribution model that gives 100% of conversion credit to the first touchpoint in the customer journey. Overvalues awareness channels and ignores everything that happens between discovery and purchase. Rarely used as a primary model but useful as a comparison point against last-click to understand the full spectrum of channel contribution.
Data collected directly from your customers through owned touchpoints: purchase history, email engagement, site behavior, loyalty program activity, and customer service interactions. Increasingly the most valuable data asset as third-party cookies disappear, iOS tracking restrictions expand, and platform-provided data degrades. Brands with strong first-party data strategies (email collection, account creation, loyalty programs) have a structural advantage in targeting, personalization, and attribution accuracy.
Google's current analytics platform, replacing Universal Analytics. Built around an event-based data model (every interaction is an event) rather than session-based. Includes machine learning-powered predictive metrics, cross-platform tracking, and BigQuery integration for raw data access. The default attribution model is data-driven. Key limitations: aggressive data sampling at high volumes, limited historical lookback, and a learning curve for teams familiar with Universal Analytics.
An incrementality testing method that uses matched geographic regions as test and control groups. One region receives ads while a statistically similar region does not. Particularly useful when user-level holdout tests aren't feasible (e.g., due to iOS restrictions or cross-device complexity). Requires careful market matching and enough regional volume to be statistically meaningful. Works well for measuring the incremental impact of channel-level spend changes like pausing Meta in one region while keeping it active in another.
The percentage of viewers who watch 50% or more of a video ad. Measures whether your creative sustains attention after the initial hook. A high hook rate with a low hold rate indicates a strong opening but weak middle content. For direct response ads, hold rate correlates with conversion intent - people who watch most of your video are significantly more likely to click and purchase.
A controlled experiment where a portion of the target audience is deliberately excluded from seeing ads, creating a control group. By comparing conversion rates between the exposed group and the holdout group, you can measure the true incremental impact of your advertising. The most straightforward incrementality test to run. Requires enough volume to achieve statistical significance and a long enough test window (typically 2-4 weeks) to capture full purchase cycles.
The percentage of viewers who watch past the first 3 seconds of a video ad. The single most important leading indicator of video creative quality. If people aren't stopping to watch, nothing else matters - your message, offer, and CTA are irrelevant. Benchmark hook rates vary by platform: 25-35% on Meta feed, 40-55% on TikTok, and 20-30% on YouTube pre-roll. Improving hook rate is usually the highest-leverage creative optimization you can make.
The true return on ad spend after removing conversions that would have happened without any advertising. Measured through holdout testing, geo-lift studies, or conversion lift experiments. Always lower than platform-reported ROAS because platforms count organic conversions as ad-driven. For example, if Meta reports a 5x ROAS but 30% of those conversions were organic, your iROAS is actually 3.5x. This metric is the single most important number for budget allocation decisions because it tells you the real incremental value of each dollar spent.
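A small worked example with hypothetical lift-test numbers, showing how far iROAS can sit below a naive revenue-over-spend readout:

```python
# Hypothetical lift-test readout: exposed vs holdout groups of equal size
exposed_users, exposed_revenue = 100_000, 180_000
holdout_users, holdout_revenue = 100_000, 135_000
ad_spend = 30_000

# Revenue the exposed group would have generated anyway, scaled from the holdout group
baseline_revenue = holdout_revenue * (exposed_users / holdout_users)
incremental_revenue = exposed_revenue - baseline_revenue

naive_roas = exposed_revenue / ad_spend      # all exposed-group revenue credited to ads
iroas = incremental_revenue / ad_spend
print(naive_roas, iroas)                     # 6.0x naive vs 1.5x truly incremental
```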
The true causal lift in conversions directly attributable to advertising - the conversions that would NOT have happened without ad exposure. The gold standard of attribution accuracy because it answers the fundamental question: did this ad actually cause this sale, or would the customer have bought anyway? Measured through controlled experiments like holdout tests, geo-lift studies, and conversion lift studies. Without incrementality measurement, you're optimizing toward a number that includes conversions your ads didn't cause.
Apple's App Tracking Transparency framework, launched April 2021, requiring apps to get explicit user permission before tracking activity across other apps and websites. Roughly 75-80% of users opt out. Devastated Meta's tracking accuracy, reduced retargeting pool sizes by 50-70% on iOS, and degraded conversion reporting reliability. Forced the entire industry toward server-side tracking, probabilistic modeling, and first-party data strategies. The single largest disruption to digital advertising measurement in the past decade.
An attribution model that gives 100% of conversion credit to the last touchpoint before purchase. Still the default in many analytics setups. Systematically overvalues bottom-funnel channels like branded search and retargeting while undervaluing awareness and consideration channels like paid social prospecting, YouTube, and display. A brand running last-click attribution will consistently underspend on top-of-funnel and overspend on branded search, creating the illusion that brand search is highly efficient when it's actually capturing demand generated elsewhere.
The total revenue (or profit) a customer generates over their entire relationship with the brand. Calculated by multiplying average order value by purchase frequency by average customer lifespan. High-LTV brands (subscription, consumables, fashion with high repeat rates) can afford higher acquisition costs. A brand with $200 LTV and $60 CAC has an LTV:CAC ratio of roughly 3.3:1, which is considered healthy. LTV should be calculated on a cohort basis to detect trends over time.
The cost of acquiring one additional customer at the current spend level. Different from average CPA because it reflects the cost of the next conversion, not the average of all conversions. As you increase spend, marginal CPA rises due to diminishing returns. The optimal spend level is where marginal CPA equals your target CPA or where marginal CPA across channels is equalized. This is the single most useful metric for budget allocation.
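A minimal sketch of estimating marginal CPA from an observed (or fitted) spend-to-conversions curve, assuming Python/NumPy and hypothetical numbers:

```python
import numpy as np

# Hypothetical response curve: conversions as a function of monthly spend
spend = np.array([20, 40, 60, 80, 100, 120]) * 1_000.0
conversions = np.array([500, 900, 1_200, 1_420, 1_580, 1_700])

average_cpa = spend / conversions
marginal_cpa = np.diff(spend) / np.diff(conversions)   # cost of the *next* conversions

for s, m in zip(spend[1:], marginal_cpa):
    print(f"at ${s:,.0f}/mo -> marginal CPA ${m:.0f}")
# At $120K/mo the average CPA is ~$71 but the marginal CPA is ~$167:
# scale spend until marginal CPA (not average CPA) hits your target.
```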
A statistical approach that uses regression analysis on historical data to estimate the contribution of each marketing channel to business outcomes. Works at an aggregate level (not user-level) making it privacy-safe and resilient to tracking changes. Takes into account external factors like seasonality, promotions, and economic conditions. Typically requires 2-3 years of historical data and works best for brands spending $1M+ annually across multiple channels. Slower to implement than MTA but provides a more holistic and unbiased view of channel effectiveness.
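A deliberately stripped-down regression sketch, assuming Python/NumPy and simulated weekly data; real MMMs add adstock, saturation transforms, and priors, but the core idea of regressing outcomes on channel spend plus external factors is the same:

```python
import numpy as np

# Simulated weekly data: revenue driven by two channels plus a crude holiday flag
weeks = 104
rng = np.random.default_rng(1)
meta = rng.uniform(20, 60, weeks) * 1_000
google = rng.uniform(10, 40, weeks) * 1_000
holiday = (np.arange(weeks) % 52 >= 46).astype(float)   # rough Q4 indicator
revenue = 50_000 + 2.1 * meta + 1.4 * google + 60_000 * holiday + rng.normal(0, 15_000, weeks)

# Ordinary least squares: estimate each channel's revenue contribution per dollar
X = np.column_stack([np.ones(weeks), meta, google, holiday])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print(dict(zip(["base", "meta_per_$", "google_per_$", "holiday_lift"], coef.round(2))))
```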
Total revenue divided by total marketing spend, including non-advertising costs like email platform fees, SEO tools, creative production, and agency retainers. Provides a holistic view of marketing ROI that ROAS misses. Sometimes called the 'Bezos metric' because it reflects the true cost of customer acquisition across all channels. A declining MER with stable ROAS suggests rising non-ad costs are eating into overall marketing efficiency.
The gradual degradation of a machine learning model's accuracy over time as real-world conditions change. In marketing, model drift happens when consumer behavior shifts, competition changes, platform algorithms update, or seasonal patterns evolve. A model trained on Q1 data will perform increasingly poorly through Q2-Q4 without retraining. Compound learning systems counteract drift through continuous feedback loops. Static models suffer from drift silently until performance degrades noticeably.
A system design where multiple specialized AI agents collaborate on complex tasks, each bringing domain expertise. Unlike a single monolithic model that tries to do everything, multi-agent systems assign specialized agents to specific domains (attribution, forecasting, creative analysis, budget optimization) while sharing context through a unified memory layer. This architecture mirrors how high-performing marketing teams work: specialists who communicate and build on each other's insights.
An adaptive testing method that dynamically shifts traffic toward better-performing creative variants while still exploring new options. Unlike A/B testing which splits traffic 50/50 until the test ends, bandit testing automatically reduces exposure to underperformers in real-time. Finds winners up to 3x faster than traditional A/B tests by balancing exploitation (showing what works) with exploration (trying new things). Named after the slot machine problem in statistics.
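A minimal Thompson-sampling sketch in plain Python with made-up CTRs; a Beta posterior per creative is the standard bandit formulation for click-style outcomes:

```python
import random

# Hypothetical creatives with unknown true CTRs; the bandit shifts traffic as it learns
true_ctr = {"creative_a": 0.010, "creative_b": 0.014, "creative_c": 0.008}
stats = {name: {"clicks": 0, "misses": 0} for name in true_ctr}

for impression in range(20_000):
    # Sample a plausible CTR for each creative from its Beta posterior, serve the best draw
    draws = {n: random.betavariate(s["clicks"] + 1, s["misses"] + 1) for n, s in stats.items()}
    chosen = max(draws, key=draws.get)
    if random.random() < true_ctr[chosen]:
        stats[chosen]["clicks"] += 1
    else:
        stats[chosen]["misses"] += 1

print({n: s["clicks"] + s["misses"] for n, s in stats.items()})  # most traffic flows to creative_b
```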
An attribution model that distributes conversion credit across multiple touchpoints in the customer journey rather than giving all credit to a single interaction. Common models include linear (equal credit to all touches), time-decay (more credit to recent touches), position-based (40% to first and last touch, 20% split across middle), and data-driven (algorithmically weighted). MTA is better than last-click but still relies on trackable digital touchpoints, meaning it misses offline influence, word-of-mouth, and impressions that don't result in clicks.
The number of days until a customer's cumulative purchases exceed their acquisition cost. A 30-day payback period means you recover CAC within one month. Critical for cash flow planning: a brand with a 90-day payback period and $100K/month in new customer spend needs $300K in working capital just to fund acquisition. Shorter payback periods enable faster scaling because you can reinvest recovered CAC into acquiring more customers sooner.
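A tiny illustration with hypothetical cohort numbers:

```python
# Hypothetical cohort: cumulative revenue per customer by days since acquisition
cac = 60
cumulative_revenue = {0: 45, 15: 58, 30: 66, 60: 81, 90: 95}

payback_day = next((day for day, rev in cumulative_revenue.items() if rev >= cac), None)
print(payback_day)  # 30 -> CAC is recovered within one month
```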
The gap between what an ad platform reports as conversions and the true incremental conversions your ads actually caused. Typically 15-30% inflated across Meta, Google, and TikTok. Happens because platforms use broad attribution windows, count view-through conversions generously, and take credit for conversions that were already going to happen. A brand spending $100K/month with a 25% overclaim rate is effectively misallocating $25K/month based on phantom conversions. The only way to quantify overclaim is through controlled incrementality testing.
Asking customers 'how did you hear about us?' after purchase to capture self-reported channel influence. Captures channels that digital attribution misses entirely: podcast ads, word-of-mouth, influencer content, TikTok organic, and offline touchpoints. Biased by recency and salience (customers remember what's top of mind, not what actually influenced them), but directionally valuable as a complement to click-based and incrementality-based attribution. Best implemented as a required field at checkout with a well-designed dropdown.
The total number of unique people who saw your ad at least once. Different from impressions, which counts total views including repeats. Impressions divided by reach gives average frequency. Monitoring reach alongside spend reveals whether increased budget is finding new people or just hitting the same audience harder. Flattening reach at rising spend is an early indicator of audience saturation.
Predicting future revenue based on historical patterns, current trends, seasonality, channel performance, and external factors. AI-powered forecasting can reach 91% accuracy compared to 60-70% with manual spreadsheet methods. Accurate forecasting is foundational for budget planning, inventory management, and cash flow decisions. The best forecasting models account for channel-level saturation curves, creative fatigue rates, competitive dynamics, and macroeconomic indicators rather than simple trend extrapolation.
Revenue generated per dollar spent on advertising. A 4x ROAS means $4 in revenue for every $1 in ad spend. Platform-reported ROAS is typically inflated 15-30% because ad platforms take credit for conversions they didn't cause. True ROAS can only be measured through incrementality testing, which compares results against a holdout group that received no ads. Most ecommerce brands discover their real ROAS is 20-40% lower than what Meta or Google reports.
The typical shape of ad spend efficiency when plotted on a graph. At low spend, returns are minimal (the learning phase where the algorithm has insufficient data). At mid-range spend, efficiency peaks (the sweet spot). At high spend, diminishing returns set in as the audience saturates. Every channel, campaign, and audience has its own S-curve with a different optimal spend level. Finding and staying in the sweet spot of each curve is the core challenge of budget allocation.
The spend level at which a channel or audience can no longer produce meaningful incremental returns. Beyond this point, additional spend primarily drives frequency against the same users rather than reaching new potential customers. Saturation varies by channel, audience size, creative variety, and seasonality. A niche audience of 500K people saturates much faster than a broad audience of 20M. Detecting saturation early prevents wasted spend.
Simulating different budget allocation scenarios to predict outcomes before committing real spend. For example: 'What happens if I shift $20K from Google Search to Meta prospecting?' or 'What if I increase total spend 30% for Black Friday?' Reduces risk by testing decisions mathematically against historical patterns and forecasting models before any money moves. The difference between reactive optimization and proactive strategy.
Recurring patterns in ad performance tied to time periods. Includes annual patterns (Black Friday, Q4 surge, January slump, summer slowdown), monthly patterns (payday effects, end-of-month budget flushes), weekly patterns (higher conversion on weekdays for B2B, weekends for impulse purchases), and even intra-day patterns. Must be accounted for in both forecasting and budget allocation. Ignoring seasonality leads to panic during predictable dips and overconfidence during predictable peaks.
Sending conversion data directly from your server to ad platforms, bypassing browser-level limitations like ad blockers, cookie restrictions, and iOS privacy features. More reliable than pixel-based tracking because it's not affected by client-side interference. Required for accurate data in the post-iOS 14.5 era. Implementations include Meta's Conversions API, Google's Enhanced Conversions, and TikTok's Events API. Should run alongside client-side pixels for maximum data coverage.
Apple's privacy-preserving attribution framework for iOS app install campaigns. Provides aggregated, delayed conversion data without user-level identifiers. Limited to 64 possible conversion values and imposes random time delays on postbacks. Makes granular optimization difficult but is the only sanctioned attribution method for iOS app campaigns. Requires careful conversion value schema design to extract maximum signal from limited data slots.
The threshold at which experimental results are unlikely to have occurred by random chance. Conventionally set at 95% confidence (p-value < 0.05). Running A/B tests or budget changes before reaching statistical significance leads to false conclusions and wasted spend. The required sample size depends on the expected effect size and baseline conversion rate. Small differences in performance require much larger samples to detect reliably. Rushing to conclusions is one of the most expensive mistakes in performance marketing.
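A minimal sample-size sketch, assuming Python and the standard two-proportion power formula; it shows why small lifts at typical ecommerce conversion rates need large samples:

```python
from statistics import NormalDist

def sample_size_per_variant(baseline_cr: float, lift: float, alpha=0.05, power=0.80) -> int:
    """Approximate visitors needed per variant to detect a relative `lift`
    in conversion rate at the given significance level and power (two-sided)."""
    p1, p2 = baseline_cr, baseline_cr * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    n = ((z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2) / (p2 - p1) ** 2
    return int(n) + 1

print(sample_size_per_variant(0.02, 0.10))  # a 10% lift on a 2% CR needs ~80K visitors per variant
```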
A statistical method used in geo-lift testing that creates a mathematically constructed 'control' region by weighting a combination of non-test regions to match the test region's pre-test behavior. More accurate than simply comparing one city to another because it accounts for unique regional characteristics. The synthetic control effectively answers 'what would have happened in the test region if we hadn't changed anything?' by creating a virtual counterfactual from real data.
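A simplified synthetic-control sketch, assuming Python/SciPy and hypothetical weekly revenue; real implementations typically constrain the weights to sum to one and match on more pre-period covariates:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical pre-test weekly revenue: rows = weeks, columns = candidate control regions
control_pre = np.array([
    [102,  95, 110],
    [108,  99, 118],
    [ 97,  91, 104],
    [115, 105, 124],
], dtype=float)
test_pre = np.array([104, 111, 99, 118], dtype=float)   # the region where ads will change

weights, _ = nnls(control_pre, test_pre)   # non-negative weights on control regions
synthetic_pre = control_pre @ weights      # should track the test region closely pre-test

# During the test, the same weighted combination is the counterfactual
control_during = np.array([[110, 101, 120], [118, 108, 129]], dtype=float)
test_during = np.array([98, 103], dtype=float)
lift = test_during - control_during @ weights   # here ads were paused, so a drop implies they were incremental
print(weights.round(2), lift.round(1))
```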
Small data files placed on users' browsers by domains other than the website being visited, historically used to track users across the web for ad targeting and attribution. Being phased out by browser restrictions (Safari and Firefox already block them, Chrome is implementing restrictions). Their deprecation has degraded retargeting accuracy, reduced attribution reliability, and increased the importance of first-party data and server-side tracking.
The percentage of people who stop scrolling when your ad appears in their feed. Measured as 3-second video views divided by impressions on Meta, or similar metrics on other platforms. Distinct from hook rate in that it measures the initial attention grab before the viewer has processed any content. High thumb-stop, low hook rate means your thumbnail or first frame is compelling but the content immediately disappoints.
A statistical method that analyzes sequential data points (ad spend, revenue, conversions) over time to identify trends, seasonal patterns, and cyclical behavior. The foundation of most forecasting models. Common techniques include ARIMA, Prophet, and LSTM neural networks. For ecommerce advertising, time series analysis can reveal hidden patterns like the 3-day lag between Meta spend increases and Shopify revenue impact, or the 10-day creative fatigue cycle for video ads.
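A minimal lag-detection sketch, assuming Python/pandas and simulated daily data with a built-in 3-day delay between spend and revenue:

```python
import numpy as np
import pandas as pd

# Simulated daily series: revenue responds to spend with a 3-day delay
rng = np.random.default_rng(7)
spend = pd.Series(rng.uniform(3_000, 8_000, 120))
revenue = 20_000 + 3.0 * spend.shift(3).fillna(spend.mean()) + rng.normal(0, 2_000, 120)

# Correlate today's revenue with spend from 0-10 days ago and pick the strongest lag
lags = range(0, 11)
correlations = [revenue.corr(spend.shift(lag)) for lag in lags]
best_lag = int(np.argmax(correlations))
print(best_lag)  # -> 3: revenue today correlates most with spend from 3 days ago
```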
Ad creative that looks and feels like organic content created by real users rather than polished brand advertisements. Includes customer testimonials, unboxing videos, product reviews, and 'day in my life' style content. Consistently outperforms studio-shot creative on social platforms because it matches the native content format. Typically 30-50% lower CPA than traditional brand creative on Meta and TikTok. UGC outperforms in prospecting but often underperforms polished creative in retargeting.
Tags added to destination URLs to track traffic sources in analytics tools. The five standard parameters: utm_source (platform), utm_medium (channel type), utm_campaign (campaign name), utm_content (ad variation), and utm_term (keyword). Inconsistent UTM naming is the number one cause of messy attribution data. Establish a naming convention upfront and enforce it with templates. Missing or incorrect UTMs create 'direct/none' traffic in analytics that makes attribution impossible.
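A minimal UTM-builder sketch, assuming Python; the lowercase/hyphen normalization is one example convention, not a standard:

```python
from urllib.parse import urlencode

def tag_url(base_url: str, source: str, medium: str, campaign: str,
            content: str = "", term: str = "") -> str:
    """Append the five standard UTM parameters using one enforced naming convention."""
    params = {
        "utm_source": source, "utm_medium": medium, "utm_campaign": campaign,
        "utm_content": content, "utm_term": term,
    }
    query = urlencode({k: v.lower().replace(" ", "-") for k, v in params.items() if v})
    return f"{base_url}?{query}"

print(tag_url("https://example.com/product", "meta", "paid-social",
              "Q4 Prospecting", content="ugc-video-01"))
# https://example.com/product?utm_source=meta&utm_medium=paid-social&utm_campaign=q4-prospecting&utm_content=ugc-video-01
```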
An attribution approach that credits a conversion to an ad that was viewed but not clicked. Common on display, video, and social platforms. The attribution window varies by platform: Meta defaults to 1-day view-through, Google Display can go up to 30 days. Often overcounts because simply viewing an ad in a feed doesn't mean it caused the purchase. A user who was already going to buy might see your ad, not click it, and buy directly - that gets counted as an ad-driven conversion. Narrowing view-through windows or excluding them entirely gives a more conservative (and more accurate) picture.
The practice of adjusting budget allocation across channels every week based on current performance data rather than waiting for monthly or quarterly reviews. Performance shifts constantly due to competitive dynamics, creative fatigue, audience saturation, and seasonal patterns. Brands that rebalance weekly capture opportunities 3-4 weeks faster than those on monthly cycles. Even small weekly shifts of 5-10% between channels compound into significant efficiency gains over a quarter.
Want to see these concepts in action? Read the guides or explore the methodology.