# FAQs

## Platform Integrations

### Data Onboarding

1. **What types of data does Bonsai need to build Business Reporting?**\
   Bonsai requires digital marketing data (ad platform performance and spend) and business / point-of-sale (POS) data (orders, revenue, customers). These datasets form the foundation of the Business Reporting product and enable accurate measurement of business outcomes alongside marketing activity.
2. **What types of data does Bonsai need to build Multi-Touch Attribution (MTA)?**\
   Bonsai requires digital marketing data, analytics data (e.g., GA4 or Adobe Analytics), Google Merchant Center data, and business / point-of-sale (POS) data. Together, these sources enable Bonsai to construct customer journeys and assign fractional credit to marketing touchpoints that contributed to business outcomes.
3. **What types of data does Bonsai need to build Incrementality Modeling?**\
   Bonsai requires digital marketing data, offline marketing data (e.g., TV, radio, OOH), business / POS data, and Google Search Console data. Incrementality modeling relies on a combination of marketing exposure signals and business outcomes to estimate causal impact beyond what can be observed through tracking-based attribution alone.
4. **What types of data does Bonsai need to build Algorithms?**\
   Bonsai’s bidding algorithms require the same data inputs as Multi-Touch Attribution because algorithm training depends on attributed buyer behavior and outcomes. Bonsai requires digital marketing data, analytics data, and business / POS data in order to build the training audience and model the features associated with valuable customers.
5. **Do I have to pay per connector?**\
   Pricing depends on your Bonsai plan and the integrations required to support your use case. Some plans include a standard set of connectors, while others may vary based on the number of platforms, data volume, refresh cadence, and optional integrations. Your Bonsai account team can confirm the pricing model for your deployment.

***

## Measurement

### Business Reporting

1. **Can I see both offline and digital sales on this page?**\
   Yes. Bonsai can support visibility into both online (digital) and offline (in-store / POS) sales, based on what is included in your business source data. Sales are reflected according to the transaction records provided in your point-of-sale or ecommerce systems.
2. **Would you consider this a C-suite level report?**\
   Yes. Business Reporting is designed to provide an executive-ready view of core business performance, including revenue, customer growth, and high-level marketing efficiency. It is intended to support leadership reporting and decision-making with trusted, business-outcome-based metrics.
3. **Are my analytics metrics shown on this page?**\
   No. Business Reporting focuses on point-of-sale/order outcomes and marketing platform performance. Analytics data is primarily used for customer journey development, attribution, and other measurement products rather than being a core component of the Business Reporting view.
4. **Why are “new customers” at a business level more important than “new users” in analytics or ad platforms?**\
   Business-defined new customers are based on first-party purchase behavior and represent the most reliable source of truth. Analytics and ad platforms rely heavily on cookie/device identifiers that can reset or expire (often within 30–90 days), which makes long-term identity and new customer measurement less accurate—especially as privacy constraints continue to limit tracking.
5. **Can I export this data?**\
   Yes. Bonsai supports export functionality and allows you to download reporting data as a CSV file for offline analysis or internal reporting workflows.
6. **How long does it take to see business data populated once I onboard my data?**\
   Business data is typically the first dataset configured because it is the foundation for Bonsai measurement products and often requires the most customization. In many cases, Business Reporting can be live within approximately five business days after required access and data delivery are in place.
7. **Can I see lifetime value (LTV) and purchase frequency?**\
   Yes. Bonsai supports configurable business metrics, including lifetime value, repeat purchase rate, and purchase frequency. These metrics can be added and configured through the Business Metrics Configuration settings based on the fields available in your business data.
8. **Is there any data Bonsai cannot or will not configure?**\
   Yes. Bonsai does not populate critical business KPIs using third-party conversion tracking when it conflicts with validated first-party outcomes. Business KPIs should be derived from authoritative business systems (POS, ecommerce, ERP, CRM) rather than ad platform conversion estimates.
9. **Can my team own the data and query it for our own reporting?**\
   Yes. The Bonsai platform is powered by a data warehouse in Google Cloud Platform (GCP). Clients can either allow Bonsai to own and manage the warehouse while granting query access, or ownership can be transferred to the client at any time.

### Multi-Touch Attribution (MTA)

1. **What do I use MTA for?**\
   Multi-Touch Attribution is best used to evaluate the effectiveness of digital marketing channels. Bonsai builds customer journeys and assigns fractional attribution across marketing touchpoints that contributed to outcomes, allowing teams to understand the relative contribution of channels beyond last-click reporting.
2. **How do I configure MTA KPIs?**\
   MTA KPIs are configured using the business metrics defined in Business Reporting. Any business metric configured in the platform (e.g., orders, revenue, new customers) can be selected as an attribution KPI.
3. **Can I still use Bonsai if I don’t get 100% traffic consent to analytics?**

   Yes. Like all modern analytics and attribution platforms, Bonsai can only directly measure and attribute activity from users who have provided the required consent. In practice, most businesses see less than 100% consent rates, which means a portion of traffic will not be available for user-level attribution.

   Bonsai’s measurement products are built to maximize data quality and matching within the consented and observable traffic across channels and brands. While the non-consented portion of traffic cannot be directly attributed due to privacy requirements—and there is no technical workaround for this across the industry—Bonsai enables full-funnel measurement through complementary statistical methods such as incrementality testing.

   This approach allows teams to understand true marketing impact across their entire business, even when direct attribution is limited by consent.
4. **How does Bonsai join advertising, analytics, and business/point-of-sale data to build attribution?**\
   Bonsai uses analytics data as a linking layer between advertising platforms and business or POS systems. Ad identifiers such as click IDs are first matched to analytics events, and a separate unique identifier is then used to connect analytics data to downstream revenue or POS records. Bonsai does not join advertising data directly to point-of-sale systems — analytics serves as the intermediary that enables accurate attribution and measurement.
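
   As a conceptual sketch of this two-step join (structures and field names here are illustrative, not Bonsai's actual schema), the linking works as follows:

   ```python
   # Ad platform clicks, keyed by click ID (all names hypothetical).
   ad_clicks = [
       {"click_id": "c1", "campaign": "Search"},
       {"click_id": "c2", "campaign": "Shopping"},
       {"click_id": "c3", "campaign": "Search"},  # no matching analytics event
   ]
   analytics = {"c1": "u10", "c2": "u20"}  # step 1: click ID -> analytics user ID
   orders = {"u10": 120.0, "u20": 80.0}    # step 2: analytics user ID -> revenue

   # Analytics is the intermediary: advertising data is never joined
   # directly to the POS records.
   joined = []
   for click in ad_clicks:
       user = analytics.get(click["click_id"])
       if user is not None and user in orders:
           joined.append({"campaign": click["campaign"], "revenue": orders[user]})

   print(joined)
   ```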
5. **Why is my attributed sales number lower than my actual sales reported by finance?**\
   Not every sale can be attributed back to a measurable digital marketing touchpoint. This is expected and typically results from customers purchasing without clicking an ad, limited tracking coverage, incomplete analytics history, or privacy-related loss of identity signals.
6. **How can I view more granular results?**\
   You can drill down into attribution using customizable categories based on your campaign mapping and grouping configuration. Bonsai supports grouping campaigns into meaningful buckets so that performance can be interpreted at a level aligned with your internal reporting structure.
7. **How do I read attributed sales?**\
   Attributed sales represent the outcomes credited back to ad clicks that occurred on a specific date. For example, if the platform shows $338K attributed sales on August 25, 2025, this means users who clicked ads on that date eventually generated purchases totaling $338K in fractional credit.
8. **How long is the attribution window?**\
   Bonsai MTA is windowless and looks back across all available history. The primary constraint is how far back analytics data is available, because Bonsai relies on first-party analytics data to build customer journeys.
9. **Is this the same as Google’s data-driven attribution (DDA)?**\
   No. Bonsai MTA differs from Google DDA because Bonsai supports true business / POS outcomes, is not biased toward any one advertising platform, and is designed to unify attribution across multiple channels. Google DDA is constrained to the Google ecosystem and does not provide the same cross-channel, business-outcome-driven view.
10. **Why does it look like attribution is going down while business cost is increasing?**\
    This can occur when incremental efficiency declines due to audience saturation, competition, spend shifting into lower-performing tactics, or tracking coverage changes. It can also indicate an increasing share of spend going to placements that do not produce measurable click journeys. In these scenarios, Incrementality Modeling is often the best method to validate true causal impact.
11. **How do I use Detailed Order Attribution?**\
    Detailed Order Attribution is used to view attribution based on when an actual sale occurred rather than when a click occurred. This is useful for answering questions such as which marketing channels contributed to the sales that occurred in a specific reporting month.
12. **How granular can I view MTA data?**\
    Granularity depends on the attribution view. In the Attribution page, you can typically view results at the campaign level. In Detailed Order Attribution, you can view campaign-level performance and further drill down by landing page, creative, or any other analytics tag available in the dataset.
13. **How is the attribution weighting scheme determined? Does it change over time or by customer?**\
    Bonsai employs a linear attribution model that allows for custom weighting based on touchpoint type, touchpoint number, and other factors. However, Bonsai always starts with a purely linear (unweighted) model in order to maintain MTA as a truly deterministic tool — meaning we don't inject assumptions about what's impactful and what isn't. Any weighting customization is applied on top of that foundation and is configured per client based on their specific needs.
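
    The purely linear starting point described above can be sketched as follows (a minimal illustration, not Bonsai's implementation):

    ```python
    def linear_attribution(touchpoints, order_value):
        """Split an order's value equally across all journey touchpoints."""
        credit = order_value / len(touchpoints)
        return {tp: credit for tp in touchpoints}

    # A $100 order preceded by four distinct clicks: each earns $25 of credit.
    journey = ["search_click", "social_click", "email_click", "display_click"]
    credits = linear_attribution(journey, 100.0)
    print(credits)
    ```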
14. **Does journey length affect how attribution credit is allocated?**\
    Bonsai's attribution model is windowless. An ad click that first brought a new prospect to your site will continue to receive fractional credit for all subsequent purchases in that customer's journey, regardless of how much time has passed. The only practical limitation is data availability — how far back analytics history exists.
15. **Are high-consideration products treated differently in MTA?**\
    The windowless nature of Bonsai's MTA makes it particularly well-suited for measuring marketing's impact on products with long consideration times and many touchpoints. Because there is no attribution window cutoff, early-funnel touchpoints are not penalized for purchases that occur months later.
16. **Are there any adjustments made to avoid retargeting bias?**\
    Bonsai's MTA inherently reduces retargeting bias by only crediting touchpoints for purchases that occurred *after* the click. We find that the very first click in a customer's journey is more accurately valued in Bonsai's MTA results compared to models like Google's DDA, which tend to devalue the initial "acquisition" touchpoint and overvalue the final "conversion" touchpoint due to attribution window constraints.
17. **How does Bonsai handle channel correlations in MTA?**\
    The customer journey dataset that powers MTA can be queried to understand whether certain channels tend to work together to drive outcomes. More importantly, Bonsai uses the results of its ridge-regression marketing mix models — which measure and account for channel collinearity — to populate the iROI values displayed on the MTA page.
18. **How does Bonsai's MTA approach compare to logistic regression or Markov-chain attribution models?**\
    Bonsai takes a two-pronged approach rather than choosing a single model type: MTA serves as the deterministic, campaign-level (and more granular) optimization tool, while Marketing Mix Modeling serves as the probabilistic measure of marketing's full incremental impact. These two methods complement each other. MTA also has an additional powerful application as a training dataset for Bonsai's predictive buying algorithms.
19. **Has Bonsai empirically validated its MTA results with incrementality testing?**\
    Yes. Each time Bonsai deploys its MTA-powered predictive buying algorithm for a new customer, an incrementality test (Matched Market Test) can be run to measure lift on business outcomes using a Difference-in-Differences methodology. This provides an empirical proof point for the algorithm's impact and, by extension, the quality of the MTA signals used to train it.

### Marketing Mix Modeling

1. **What is a Marketing Mix Model?**\
   Bonsai’s marketing mix models are built using ridge regression, a statistical technique that estimates the relationship between many independent variables (all marketing tactics, plus external factors such as seasonality, category demand, and promotions) and one dependent variable (for example, sales or new customers). The model’s output quantifies how much each independent variable causally drives the dependent variable, and how much business would have been lost if that particular element were turned off.
2. **What do I use Incrementality Modeling for, and how is it different from MTA?**\
   Incrementality Modeling estimates causal impact at an aggregate level, answering what would have happened if a marketing channel were turned off. It does not use individual customer journey data or rely on attribution of any kind. Incrementality is especially important for offline media and upper-funnel channels where click-based tracking is incomplete.
3. **What questions can I answer with Bonsai’s Incrementality Model?**\
   Bonsai’s Incrementality Model helps identify which channels truly drove revenue, how much lift each channel generated, and what the true ROI is by tactic. It also supports questions around seasonality, diminishing returns, spend optimization, budget shifting scenarios, undervalued or overinvested channels, and forecasting outcomes under different spend levels.
4. **What is a feature?**\
   A feature is a configurable subset of marketing activity. Features can represent a channel subset, tactic type, campaign grouping, or a highly granular campaign-level segment depending on how your taxonomy is configured.
5. **What does “incremental” mean?**\
   Incremental outcomes are results that would not have occurred without the marketing activity. This represents the true lift driven by marketing beyond baseline demand.
6. **What does “base” mean?**\
   Base represents the expected outcomes with no marketing investment, driven by underlying demand such as brand equity, seasonality, category demand, and external macro factors. Over time, base can decay if marketing remains off for an extended period.
7. **What does “base+” mean?**\
   Base+ represents lift from drivers that influence demand but are not scalable with spend, such as Brand Search or Email. These drivers can elevate results beyond baseline, but they cannot be increased linearly through budget increases.
8. **Why do my results on MTA look different from Incrementality?**\
   Differences are expected because the methodologies measure different concepts. MTA attributes results based on observed click journeys, while Incrementality predicts causal lift and estimates what would have happened without marketing. It is common for results to differ at campaign or feature level.

### Incrementality Testing

1. **What is a Matched Market Test?**\
   A matched market test deploys a specific marketing or loyalty tactic in one set of markets while continuing business-as-usual in a control set of markets with historically similar performance.
2. **How do you measure success?**\
   Bonsai measures test success using lift percentage derived from a Difference-in-Differences (DiD) method. This compares changes in test markets pre/post against changes in control markets pre/post, helping isolate the impact of a strategy even when markets are not perfectly matched and external factors influence performance.
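
   The DiD lift calculation can be sketched with synthetic numbers (function name and figures are illustrative):

   ```python
   def did_lift(test_pre, test_post, control_pre, control_post):
       """Lift = test market's actual result vs. the counterfactual implied
       by the control markets' trend over the same period."""
       expected_test_post = test_pre * (control_post / control_pre)
       return (test_post - expected_test_post) / expected_test_post

   # Control markets fell 20% while test markets fell only 10%: the strategy
   # still produced positive lift relative to the counterfactual.
   lift = did_lift(test_pre=100.0, test_post=90.0,
                   control_pre=100.0, control_post=80.0)
   print(f"{lift:.1%}")  # -> 12.5%
   ```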
3. **How does Bonsai measure the return on an incrementality test?**\
   Bonsai calculates incremental ROI (iROI) by comparing the revenue driven by the tested strategy against the spend required to generate it — both measured using the same Difference-in-Differences methodology. This gives a clear, apples-to-apples view of whether the strategy delivered a positive return, independent of baseline performance differences between markets.
4. **Does seasonal demand (like holidays) affect incrementality test results?**\
   No. External demand factors like holiday seasonality affect all markets equally. If a test shows +10% lift during the holiday season, it means sales increased *even more* in the test markets relative to control markets — the lift figure isolates the strategy's impact above and beyond any broad demand changes.
5. **What if my test and control markets have different baseline demand levels?**\
   The Difference-in-Differences method accounts for this. If one market naturally outperforms another year-round, the DiD calculation still isolates lift by measuring *change* in each market relative to its own pre-test baseline — not by comparing absolute sales levels between markets.
6. **How should I interpret the confidence range shown on the Incrementality Testing page?**\
   The confidence range reflects the range within which the true lift likely falls, not a pass/fail threshold. For example, a result showing 70% confidence that lift is between 26% and 54% means you can be reasonably confident that a meaningful positive lift exists — even if the precise number is uncertain. A wide range typically indicates more data or a longer test period would sharpen the estimate.
7. **How do I calculate incremental ROI (iROI) from a test?**\
   iROI is calculated by dividing incremental revenue by incremental spend. Incremental revenue is the lift percentage multiplied by pre-test baseline outcomes and their value (e.g., LTV). Incremental spend is calculated the same way as lift — applying the DiD method to spend rather than sales, then multiplying the spend lift percentage by the pre-test baseline spend.
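
   Following the definition above, a worked iROI calculation looks like this (all figures synthetic):

   ```python
   # Lift on outcomes and lift on spend, each measured via the DiD method.
   sales_lift = 0.10          # +10% lift on outcomes
   spend_lift = 0.25          # +25% lift on spend in test markets

   baseline_orders = 1_000    # pre-test baseline outcomes
   value_per_order = 150.0    # value assigned per outcome (e.g., LTV)
   baseline_spend = 40_000.0  # pre-test baseline spend

   incremental_revenue = sales_lift * baseline_orders * value_per_order
   incremental_spend = spend_lift * baseline_spend
   iroi = incremental_revenue / incremental_spend
   print(round(iroi, 2))  # -> 1.5
   ```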
8. **How is it possible to see positive lift when test market sales actually declined during the test?**\
   Lift is relative, not absolute. If control market sales declined 20% while test market sales only declined 10%, the DiD calculation yields a positive lift — because without the tested strategy, the test market would have also declined 20%. The strategy effectively "saved" sales that would have otherwise been lost.
9. **What if test markets received more spend than control markets — does that invalidate the results?**\
   Not at all. In a matched market test, the test markets are intentionally run with a new strategy or investment level — that's the point of the test. The Difference-in-Differences method isolates the lift attributable to the strategy by comparing each market's *change* relative to its own baseline, so differences in spend level are accounted for. What matters is whether the incremental spend generated a positive return, and that's exactly what the test is designed to measure.
10. **What goes into designing a matched market test?**\
    A well-designed test defines six things upfront: the test and comparison time periods, a clear hypothesis and decision criteria for what you'll do based on results, the success metric you're measuring, which markets will serve as test and control groups, the investment level being tested, and the specific campaigns involved. Getting these right before the test begins is what makes results actionable.
11. **How long should an incrementality test run, and what should I measure?**\
    It depends on your conversion timeline. For businesses where marketing drives an immediate or near-term action (like an online purchase), a 30-day test measuring that outcome directly is often sufficient. For businesses with longer sales cycles — such as leads that convert to sales over 30–45 days — you can either measure the leading indicator (leads) over 30 days, or measure actual sales over a longer window. Bonsai works with you to select the approach that fits your business.

***

## Activation

### Predictive Buying Algorithms

1. **What is a Predictive Buying Algorithm?**

   A Predictive Buying Algorithm is a machine learning model that scores the predicted value of ad clicks in real time and feeds those scores back into ad platforms such as Google Ads, Microsoft, Meta, and TikTok to guide bidding toward higher-value users. Rather than treating all conversions equally, the algorithm helps ad platforms prioritize clicks that are most likely to result in high-value customers — based on your actual first-party business data. The result is a smarter bidding system that reflects what a valuable customer looks like for your specific business, not just who is likely to click or convert.
2. **What data does Bonsai need to build the predictive bidding algorithm?**

   The algorithm draws on four primary data sources: website analytics data, ad platform data (such as Google Ads), Google Merchant Center data, and order or transaction data. These are unified to reconstruct the full customer journey from purchase back through marketing interactions, which becomes the training dataset for the model.
3. **How does the algorithm determine which clicks are most valuable?**

   Bonsai begins by unifying your analytics, ad platform, and transaction data to reconstruct the full customer journey from purchase back through every marketing interaction. From there, a multi-touch attribution model evaluates more than 40 signals related to customer behavior and value — including purchase frequency, order value, repeat purchases, and long-term customer lifetime behavior. Each click is then scored based on how closely it resembles the patterns associated with your most valuable customers historically. Those scores are sent back to ad platforms daily as predicted dollar values, training each platform's smart bidding system to find more users who match that high-value profile.
4. **How does Bonsai send its signals back to ad platforms?**\
   On a daily basis, Bonsai sends ad platforms a feed containing each Click ID paired with a predicted dollar value based on the algorithm's scoring model. Each ad platform's smart bidding system then uses those signals to find and prioritize similar high-value clicks going forward.
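
   Conceptually, the daily feed pairs each click ID with a predicted dollar value. A minimal sketch (format and field names are assumptions for illustration, not Bonsai's actual feed specification):

   ```python
   import csv
   import io

   # Hypothetical algorithm output: click ID -> predicted dollar value.
   scores = {"gclid_abc123": 42.50, "gclid_def456": 17.25}

   buffer = io.StringIO()
   writer = csv.writer(buffer, lineterminator="\n")
   writer.writerow(["click_id", "predicted_value"])
   for click_id, value in scores.items():
       writer.writerow([click_id, f"{value:.2f}"])

   feed = buffer.getvalue()
   print(feed)
   ```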
5. **How does this differ from Google’s native smart bidding?**\
   Bonsai's algorithm runs alongside Google's bidding system, not instead of it. Rather than replacing Google's smart bidding, Bonsai enhances it by supplying better training signals. Where Google's native smart bidding optimizes around conversion events, Bonsai assigns predictive value to clicks using your first-party customer data and feeds those values back through click IDs — allowing Google to optimize toward higher-quality users, not just conversions.

   The typical process: existing campaigns are cloned for the test, a short learning period runs on CPC bidding, campaigns then transition to tROAS bidding, and Bonsai's algorithm continuously feeds value signals back into Google to guide optimization.
6. **Can the model increase or decrease the value of conversions?**\
   Yes. The algorithm can amplify the value signal of conversions that indicate higher customer value. For example, a conversion may be assigned a higher value if the system predicts that user is particularly valuable.

   Earlier approaches attempted to send zero-value signals for low-value clicks, but testing showed stronger performance when focusing only on positive value signals.
7. **Can the algorithm factor in business constraints like inventory or pricing?**\
   Yes. Additional business inputs can be incorporated into the model, including product inventory levels, pricing, out-of-stock status, seasonality signals, and product feed data. These signals help the algorithm adjust bidding aggressiveness based on real-time business conditions and product availability.
8. **Will implementing the algorithm require restructuring our campaigns?**\
   No. The algorithm is designed to work with your existing account structure — we clone your current campaigns and configure them for the test markets. If structural issues are identified during the process (such as overly restrictive negative keyword lists or constraints limiting audience discovery), any recommended changes would be discussed and agreed upon collaboratively with your team.
9. **How often does the algorithm change bidding targets?**\
   Adjustments are most frequent during the first few days while the system stabilizes. After the initial learning period, changes become much less frequent.

   If performance indicators suggest missed opportunity or inefficiency, the system provides dashboard guidance to adjust targets in collaboration with the marketing team.
10. **How long does it take to get the algorithm up and running?**\
    The typical timeline is approximately 30 days to ingest data and train the model, followed by approximately 30 days running the live test to measure lift. Most tests reach statistical significance within that 30-day window, though some may run longer depending on data volume.
11. **What kind of performance improvement can we expect?**\
    Clients implementing Bonsai's predictive bidding algorithm typically see around a 30% performance lift, though results vary depending on the account, data quality, and inputs. Every new algorithm deployment is validated with an incrementality test so results are measured against a true control, not just a before-and-after comparison.
12. **What is a feature?**\
    A feature is a measurable attribute of a click or user interaction. Examples include device type, search query characteristics, hour of day, geography, landing page behavior, operating system version, and other signals available through your marketing and analytics datasets.
13. **What does fit score mean?**\
    Fit score represents the relative predicted value of a feature segment. In general, a higher fit score indicates that traffic with that feature profile is expected to be more valuable based on historical buyer patterns.
14. **How do I read the scatter plots for each feature?**\
    Scatter plots show how expected value changes across feature values. They are used to interpret which segments of a feature correlate with stronger outcomes and where performance varies across distributions.
15. **Is this a replacement for conversion data?**\
    Bonsai algorithm conversion data is intended for ad buying optimization rather than serving as official purchase conversion reporting. In many cases, teams can reduce reliance on platform conversion tracking for bidding while still using Bonsai measurement products (MTA and Incrementality) to validate business lift and channel impact.
16. **Can I test the algorithm before scaling across the whole channel?**\
    Yes. Bonsai validates new algorithms using Incrementality Testing (Matched Market Testing). A subset of markets is selected as test markets where the algorithm runs, while matched control markets remain unchanged, and lift is measured using a Difference-in-Differences method.
17. **What is `bds_pcv_conversion` and why is it higher than official purchase conversions in Google Ads?**\
    `bds_pcv_conversion` is a conversion goal used for ad buying purposes to support Bonsai’s algorithm training and optimization. It identifies an attributed buyer audience and trains a model to score new clicks (GCLIDs) based on similarity to historical valuable buyers using signals such as search queries, landing page views, time-of-day patterns, and geography. The score represents predicted downstream value (often modeled as predicted LTV), which is why it may not align with official purchase conversion counts and values.
18. **Why are we seeing reduced spending on listing group placements in PMax?**\
    This can occur when Performance Max reallocates spend toward placements and inventory it predicts will meet the optimization goal more efficiently. Common drivers include feed eligibility changes, asset performance shifts, competitive auction dynamics, or algorithm learning that prioritizes other placement types based on predicted conversion likelihood.
19. **Why do we see such a high proportion of brand traffic in Google campaigns like PMax?**\
    Google’s systems naturally optimize toward high-intent users, which frequently includes people searching for brand terms. Since brand traffic is more likely to convert, PMax often prioritizes these auctions, increasing the share of brand-driven traffic. Bonsai’s modeling and campaign structure are designed to rebalance this behavior and drive more incremental non-brand growth over time.

### Budget Planner

1. **What is the Budget Planner?**

   The Budget Planner is a forecasting tool that helps you model performance and business outcomes across different spend scenarios — before committing budget. You can use it to identify the allocation that maximizes profit, estimate expected revenue at a given spend level, and understand how shifting budget across channels may impact next-month performance. It transforms your historical marketing data into forward-looking guidance, so budget decisions are driven by predicted outcomes rather than intuition.
2. **How much data does a channel need before it can be used in the Budget Planner?**

   For reliable Budget Planner forecasting, twelve months of data is ideal — enough to capture seasonality and long-term performance patterns. A minimum of three months is recommended before including a new channel in Incrementality Modeling, which feeds into the planner's spend-response curves. If you've recently launched a new channel and don't yet have sufficient history, Bonsai's Incrementality Testing can provide early signal on that channel's lift and impact while the longer data history builds.
