Title: Metrics and Analysis
Locale: en
URL: https://sensorswave.com/en/docs/experiments/metrics-and-analysis/
Description: Learn to analyze experiment results and determine experiment outcomes

Correctly analyzing experiment results is key to data-driven decision-making. This article introduces how to use Sensors Wave to analyze experiment data, evaluate results, avoid misinterpreting data, and write experiment reports.

## Key Metrics

### Cumulative Exposures

**Definition**: The number of users who participated in the experiment, i.e., users with a recorded `$ABImpress` Event.

**Importance**:

- The exposure count is the denominator for calculating conversion rates
- An insufficient sample size leads to unreliable results

**How to view**: Query the `$ABImpress` Event in Segmentation:

```
Event: $ABImpress
Filter: experiment_key = 'cart_button_color_test'
Group by: variant (Variant name)
Metric: Unique users
```

**Expected result**:

| Variant | Exposed Users | Percentage |
|------|----------|------|
| control | 5,000 | 50% |
| treatment | 5,000 | 50% |

### Conversion Rate

**Definition**: The proportion of exposed users who completed the target behavior.

**Formula**:

```
Conversion rate = Converted users / Exposed users
```

**Example**:

| Variant | Exposed Users | Converted Users | Conversion Rate |
|------|----------|----------|--------|
| control | 5,000 | 1,200 | 24.0% |
| treatment | 5,000 | 1,400 | 28.0% |

### Lift

**Definition**: The percentage improvement of the Test Group relative to Control.

**Formula**:

```
Lift = (Test Group conversion rate - Control conversion rate) / Control conversion rate × 100%
```

**Example**:

```
Lift = (28.0% - 24.0%) / 24.0% × 100% = 16.7%
```

**Interpretation**: The Test Group's conversion rate is 16.7% higher than Control's.

### Statistical Significance (P-Value)

**Definition**: The probability of observing a difference at least this large if the Variants actually performed the same. A small P-Value means the observed difference is unlikely to be due to chance.

**Criteria**:

- **p < 0.05**: The result is statistically Significant
- **p ≥ 0.05**: The result is not Significant; the difference may be due to chance

---

## Analysis Methods

### Method 1: Segmentation

**Applicable scenarios**: Comparing a single conversion Metric between Variants.

**Steps**:

1. Navigate to **Insights** > **Segmentation**
2.
Select the conversion Event (e.g., `AddToCartClicked`)
3. Add filter: `experiment = 'cart_button_color_test'`
4. Group by **variant**
5. View the Metrics for each Variant

**Example query**:

```
Event: AddToCartClicked
Filter: experiment = 'cart_button_color_test'
Group by: variant
Metrics: Unique users, Total event count
Time range: Last 14 days
```

**Result**:

| Variant | Users | Event Count | Events per User |
|------|--------|---------|---------|
| control | 1,200 | 1,350 | 1.13 |
| treatment | 1,400 | 1,680 | 1.20 |

### Method 2: Funnel

**Applicable scenarios**: Analyzing multi-step conversion flows.

**Steps**:

1. Navigate to **Insights** > **Funnel**
2. Define the funnel steps:
   - Step 1: Product detail page view (`PageView`, `page_title = 'Product Detail'`)
   - Step 2: Click add to cart (`AddToCartClicked`)
   - Step 3: Successfully added to cart (`AddToCart`)
   - Step 4: Enter checkout page (`PageView`, `page_title = 'Checkout'`)
   - Step 5: Complete payment (`Purchase`)
3. Add filter: `experiment = 'cart_button_color_test'`
4.
Group by **variant** to compare

**Result**:

| Step | Control Conversion | Test Group Conversion | Lift |
|------|------------|------------|---------|
| Step 1 → Step 2 | 24.0% | 28.0% | +16.7% |
| Step 2 → Step 3 | 85.0% | 87.0% | +2.4% |
| Step 3 → Step 4 | 60.0% | 62.0% | +3.3% |
| Step 4 → Step 5 | 75.0% | 76.0% | +1.3% |
| Overall conversion | 9.18% | 11.27% | +22.8% |

### Method 3: SQL Query (Advanced)

For users familiar with SQL, you can query experiment data directly:

```sql
-- Calculate the conversion rate for each Variant
SELECT
  u_variant AS variant,
  COUNT(DISTINCT CASE WHEN event = '$ABImpress' THEN ssid END) AS exposure_users,
  COUNT(DISTINCT CASE WHEN event = 'AddToCartClicked' THEN ssid END) AS conversion_users,
  COUNT(DISTINCT CASE WHEN event = 'AddToCartClicked' THEN ssid END) * 1.0
    / COUNT(DISTINCT CASE WHEN event = '$ABImpress' THEN ssid END) AS conversion_rate
FROM events
WHERE e_experiment = 'cart_button_color_test'
  AND time >= '2026-01-15'
  AND time <= '2026-01-29'
GROUP BY u_variant
```

---

## Determining the Outcome

### Success: Test Group Wins

**Conditions**:

- ✅ Test Group primary Metric > Control
- ✅ Lift meets expectations (e.g., > 10%)
- ✅ Statistically Significant: p < 0.05
- ✅ Sufficient sample size (e.g., > 1,000 users per group)

**Decision**: Roll out the Test Group solution to all users.

**Actions**:

1. Stop the experiment
2. Update the code to apply the winning solution
3. Monitor Metrics after the full release
4. Archive the experiment and record the conclusions

### Failure: No Significant Difference or Test Group is Worse

**Conditions**:

- ❌ Test Group primary Metric ≤ Control
- ❌ Lift is below expectations (e.g., < 10%)
- ❌ Not statistically Significant: p ≥ 0.05

**Decision**: Keep the Control solution; do not roll out the change.

---

## Common Mistakes

### Mistake 1: Ending the Experiment Too Early

**Problem**: Drawing conclusions before enough users have been exposed.

**Correct approach**: Run the experiment for its planned duration and make sure each group reaches a sufficient sample size (e.g., > 1,000 users per group).

### Mistake 2: Ignoring Statistical Significance

**Problem**: Judging the result by lift alone.

**Example**: The Test Group shows a positive lift, but p > 0.05, indicating this improvement may be due to chance.

**Correct approach**:

- Consider both lift and P-Value
- Only consider results Significant when p < 0.05

### Mistake 3: Looking at Only One Metric

**Problem**: Only focusing on the primary Metric, ignoring secondary and guardrail Metrics.
**Example**:

```
Primary Metric (click-through rate): +15% (Success)
Secondary Metric (add-to-cart rate): -5% (decline)
Guardrail Metric (total revenue): -10% (severe decline)
```

**Analysis**: Although the click-through rate improved, the add-to-cart rate and total revenue both declined, indicating the new solution attracted more clicks but converted worse.

**Correct approach**:

- Observe all Metrics holistically
- Pay attention to secondary and guardrail Metrics
- Avoid "vanity Metrics" (Metrics that look good but aren't useful)

### Mistake 4: Ignoring Negative Impacts

**Problem**: Only seeing positive effects while ignoring negative impacts.

**Example**:

```
Primary Metric (registration rate): +20% (Success)
Guardrail Metric (user complaint rate): +50% (severe increase)
Guardrail Metric (7-day retention): -15% (decline)
```

**Analysis**: Although the registration rate improved, the user experience worsened, leading to more complaints and lower retention.

**Correct approach**:

- Set guardrail Metrics
- Monitor negative impacts
- Evaluate comprehensively

---

## Experiment Report Template

### Basic Information

- **Experiment name**: Cart Button Color Experiment
- **Experiment Key**: `cart_button_color_test`
- **Experiment period**: 2026-01-15 to 2026-01-29 (14 days)
- **Experiment owner**: Zhang San

### Experiment Hypothesis

Changing the add-to-cart button from blue to red can increase the click-through rate by 10%.
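The Lift and P-Value figures that go into a report like this are computed by the platform, but they can also be spot-checked by hand with a standard two-proportion z-test. Below is a minimal, stdlib-only Python sketch; the helper name is ours, and the counts come from the click-through example earlier in this article (control 1,200/5,000 vs. treatment 1,400/5,000):

```python
from math import erf, sqrt


def two_proportion_test(conv_a, n_a, conv_b, n_b):
    """Relative lift and two-sided z-test p-value for Test (b) vs. Control (a)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    lift = (p_b - p_a) / p_a                       # relative improvement over Control
    pooled = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return lift, p_value


# Counts from the click-through example: control 1,200/5,000, treatment 1,400/5,000
lift, p = two_proportion_test(1200, 5000, 1400, 5000)
print(f"lift = {lift:.1%}, p = {p:.2g}")  # lift = 16.7%, p far below 0.05
```

Note that a quoted P-Value always depends on the exact test used; the platform's numbers may differ slightly from this sketch, but the significance conclusion should agree.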
### Experiment Design

- **Control**: Blue button (current design)
- **Test Group**: Red button (new design)
- **Allocation**: 50% vs 50%
- **Split ID**: Login ID (preferred) / Anonymous ID

### Key Data

| Metric | Control | Test Group | Lift | P-Value |
|------|--------|--------|---------|---------|
| Exposed users | 5,000 | 5,000 | - | - |
| Clicking users | 1,200 | 1,400 | +16.7% | 0.002 |
| Click-through rate | 24.0% | 28.0% | +16.7% | 0.002 |
| Add-to-cart conversion | 18.0% | 20.5% | +13.9% | 0.015 |
| Payment conversion | 9.2% | 10.1% | +9.8% | 0.08 |

### Conclusion

**Success**: The Test Group's click-through rate improved by 16.7%, exceeding the expected 10%, and the result is statistically Significant (p = 0.002 < 0.05). The add-to-cart conversion rate also improved Significantly (+13.9%). Although the payment conversion rate improvement is not Significant (p = 0.08), there is no negative impact.

### Recommendation

Roll out the red button to all users.

---

## Related Documentation

- [Experiment Design](experiment-design.mdx): Learn how to select appropriate Metrics
- [Quick Start](quick-start.mdx): See a complete data analysis example
- [Lifecycle Management](lifecycle-management.mdx): Understand the post-experiment workflow

---

**Last updated**: March 3, 2026