Title: Create and Configure
Locale: en
URL: https://sensorswave.com/en/docs/experiments/create-and-configure/
Description: Master the complete workflow for creating and configuring experiments in the console

This article provides a detailed guide on how to create and configure A/B experiments in the Sensors Wave console, including filling in basic information, configuring Variant groups, setting up dynamic variables, Targeting Rules, and experiment Metrics.

## Create Experiment Basic Information

### Navigate to the Experiment Management Page

1. Log in to the Sensors Wave console
2. Click **A/B Experiments** in the left menu
3. Click the **New Experiment** button in the top-right corner

### Fill in Basic Information

#### Experiment Key

**Definition**: Unique identifier used in code.

**Naming convention**:

- Use lowercase English letters and underscores
- Format: `module_feature_purpose`
- Example: `cart_button_color_test`, `checkout_flow_optimization`

**Considerations**:

- The Experiment Key cannot be modified once created
- Must be unique within the Project
- Avoid special characters and spaces

**Examples**:

```
cart_button_color_test
recommendation_algorithm_test
pricing_strategy_test
checkout_flow_optimization
```

#### Display Name

**Definition**: Friendly name displayed in the console for team members to understand.

**Examples**:

```
Cart Button Color Experiment
Recommendation Algorithm Comparison
Pricing Strategy Test
Checkout Flow Optimization
```

**Characteristics**:

- Can use any language
- Can be modified at any time
- Does not affect code

#### Experiment Description

Provide a detailed explanation of the experiment's background, purpose, and expected outcome:

**Example**:

```
Background: The current add-to-cart button uses blue with a 24% click-through rate, below the industry average.
Purpose: Test whether a red button can increase click-through rate.
Expected outcome: Red button increases click-through rate to 26.4% (10% relative lift).
```

#### Hypothesis

A clear experiment Hypothesis guides design and result evaluation:

**Good Hypothesis examples**:

- "Red button has a 10% higher click-through rate than blue button"
- "Simplifying to a 3-step checkout flow can raise conversion rate from 20% to 25%"
- "Deep learning recommendation algorithm has a 15% higher click-through rate than collaborative filtering"

**Poor Hypothesis examples**:

- "Red button is better" (lacks quantified Metrics)
- "New flow can improve conversion" (no specific improvement target)

---

## Configure Variant Groups

### Add Variants

An experiment requires at least 2 Variants: a Control and a Test Group.

#### Control

**Definition**: The current solution, serving as the baseline.

**Configuration**:

- **Variant name**: `control` (recommended)
- **Display name**: Control
- **Description**: Current design (blue button)

#### Test Group (Treatment)

**Definition**: The new solution to be validated.

**Configuration**:

- **Variant name**: `treatment` (recommended)
- **Display name**: Test Group
- **Description**: New design (red button)

### Allocation

Set the user traffic percentage each Variant receives.
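The variant names above are exactly what application code branches on. A minimal sketch of that pattern, where `get_variant` is a hypothetical stand-in for the real Sensors Wave SDK call, not its actual API:

```python
# Sketch of consuming an experiment in code. get_variant is a hypothetical
# stand-in for an SDK call that returns the assigned variant name.
def get_variant(experiment_key: str, user_id: str) -> str:
    # A real SDK would fetch the assignment from the experiment service;
    # here we always return the baseline so the sketch is self-contained.
    return "control"

def add_to_cart_button_color(user_id: str) -> str:
    variant = get_variant("cart_button_color_test", user_id)
    if variant == "treatment":
        return "red"   # Test Group: new design under validation
    return "blue"      # Control (and safe fallback): current design
```

Treating `control` as the fallback branch means users still see the current design if the SDK cannot return an assignment.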
#### 50/50 Allocation (Recommended)

```
Control: 50%
Test Group: 50%
```

**Advantages**:

- Balanced sample sizes
- Highest statistical power
- Most reliable results

**Applicable scenarios**: Most A/B experiments

#### 70/30 Allocation (Conservative Strategy)

```
Control: 70%
Test Group: 30%
```

**Advantages**:

- Reduced risk: most users stay on the stable solution
- Suitable for high-risk experiments

**Disadvantages**:

- Smaller Test Group sample size, so a longer Duration is needed

**Applicable scenarios**:

- New feature validation (higher risk)
- Core flow optimization (large impact scope)

#### Multi-Variant Allocation

For multiple Test Groups:

```
Control: 34%
Test Group A: 33%
Test Group B: 33%
```

**Note**:

- Ensure the total equals 100%
- Each Variant should have at least 10% traffic

### Variant Naming Convention

**Recommended naming**:

- Control: `control`
- Single Test Group: `treatment`
- Multiple Test Groups: `treatment_a`, `treatment_b`, `treatment_c`

**Avoid**:

- `v1`, `v2` (not semantic enough)
- `test`, `new` (easily confused)
- Non-English naming (inconvenient to use in code)

---

## Configure Targeting Rules

Targeting Rules determine which users participate in the experiment.

### Full Traffic (Default)

All users have a chance to participate in the experiment and are randomly assigned to different Variants.

**Applicable scenarios**:

- Most A/B experiments
- Optimizations targeting all users

**No additional configuration needed** — full traffic is the default.

### Targeted Split

Only experiment on specific user groups, filtering the audience based on User Properties.
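A targeted split amounts to evaluating a list of property conditions against each user's profile. A rough sketch under the assumption that `equals` conditions are combined with AND logic; the evaluator itself is hypothetical, only the `$`-prefixed preset property names come from this page:

```python
# Hypothetical evaluator for targeted-split rules: every condition must
# match (AND logic); only the "equals" operator is sketched here.
def matches_targeting(user_props: dict, conditions: list) -> bool:
    for cond in conditions:
        actual = user_props.get(cond["property"])
        if cond["condition"] == "equals" and actual != cond["value"]:
            return False
    return True

# Mirrors the "multiple conditions combined" example: iOS users in 北京.
rules = [
    {"property": "$os", "condition": "equals", "value": "iOS"},
    {"property": "$city", "condition": "equals", "value": "北京"},
]

ios_beijing_user = {"$os": "iOS", "$city": "北京"}
android_user = {"$os": "Android", "$city": "北京"}

matches_targeting(ios_beijing_user, rules)  # True: both conditions met
matches_targeting(android_user, rules)      # False: $os does not match
```

A user missing a targeted property simply fails the condition, so such users fall outside the experiment audience.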
#### Available Properties

Target based on SDK preset properties:

**Device properties**:

- `$os`: Operating System (iOS, Android, Windows)
- `$browser`: Browser Name (Chrome, Safari)
- `$model`: Device Model

**Geographic location**:

- `$country`: Country
- `$province`: Province
- `$city`: City

**Marketing channels**:

- `$utm_source`: Ad source
- `$utm_medium`: Ad medium
- `$utm_campaign`: Ad campaign

**App properties**:

- `$app_version`: App Version
- `$network_type`: Network type

For the complete list, see [Preset Events and Properties](../data-integration/preset-events-and-properties.mdx).

#### Configuration Examples

**Example 1: Experiment only for iOS users**

```
Property: $os
Condition: equals
Value: iOS
```

**Example 2: Experiment only for Beijing users**

```
Property: $city
Condition: equals
Value: 北京
```

**Example 3: Experiment only for users from Google ads**

```
Property: $utm_source
Condition: equals
Value: google
```

**Example 4: Multiple conditions combined**

```
Condition 1: $os equals iOS
Condition 2: $city equals 北京
Logic: AND (both must be met)
```

#### Currently Unsupported Targeting

The following targeting methods are not currently supported:

❌ **Server-side user Profile**:

- User level (VIP, regular user)
- Cumulative spending amount
- Account age (days since registration)

❌ **Cohorts**:

- "Users who purchased in the last 7 days"
- "Dormant users"

---

## Configure Experiment Metrics

Experiment Metrics are used to determine experiment Success or Failure.

### Primary Metric

The core Metric of focus, used to determine Success or Failure:

**Configuration steps**:

1. Click **Add Primary Metric**
2. Select Event: `AddToCartClicked` (Click add to cart)
3. Select Metric type: **Conversion rate**
4. Set target: 10% lift

**Examples**:

| Metric Name | Event | Type | Target |
|---|---|---|---|
| Button click-through rate | `AddToCartClicked` | Conversion rate | 10% lift |
| Payment conversion rate | `Purchase` | Conversion rate | 5% lift |
| Revenue per user | `Purchase` | Average (`order_amount`) | 8% lift |

### Secondary Metrics

Supporting Metrics that help fully understand experiment impact:

**Examples**:

| Metric Name | Event | Type |
|---|---|---|
| Add-to-cart conversion rate | `AddToCart` | Conversion rate |
| Page dwell time | `PageView` | Average (`duration`) |
| Product views | `ProductView` | Total count |

### Guardrail Metrics

Ensure the experiment does not negatively affect critical Metrics:

**Examples**:

| Metric Name | Event | Type | Threshold |
|---|---|---|---|
| Total revenue | `Purchase` | Sum (`order_amount`) | No decline |
| Page load time | `PageView` | Average (`load_time`) | < 2 seconds |
| Error rate | `Error` | Conversion rate | < 1% |

---

## Save and Launch

### Save as Draft

Click the **Save** button to save the experiment as a Draft:

**Draft characteristics**:

- Configuration can be modified repeatedly
- Experiment configuration cannot be retrieved in code
- No exposure logs are recorded

**Applicable scenarios**:

- Experiment configuration is incomplete
- Needs team review
- Waiting for code integration to complete

### Launch Experiment

After confirming the configuration, click the **Launch** button:

**Pre-launch checklist**:

- [ ] Experiment Key is correct and unique
- [ ] Hypothesis is clear and testable
- [ ] Variant configuration is correct (at least 2 Variants)
- [ ] Allocation is reasonable (total equals 100%)
- [ ] Metrics are properly selected (primary, secondary, guardrail)
- [ ] Code has been integrated and tested (recommend validating via DEBUG mode first)

**After launch**:

- Experiment status changes to **Running**
- Experiment configuration can be retrieved in code
- The SDK automatically logs exposure events
- Data collection begins

---

## Experiment Configuration Best Practices

### 1. Clear Naming

```
✅ Recommended:
- Experiment Key: checkout_flow_optimization
- Display name: Checkout Flow Optimization
- Variant names: control, treatment

❌ Not recommended:
- Experiment Key: test1
- Display name: Experiment
- Variant names: a, b
```

### 2. Detailed Descriptions

```
✅ Recommended:
Background: The current checkout flow has 5 steps with a 60% user drop-off rate.
Purpose: Simplify to 3 steps to reduce drop-off and improve payment conversion rate.
Expected outcome: Payment conversion rate increases from 20% to 25%.

❌ Not recommended:
Optimize checkout flow.
```

### 3. Reasonable Allocation

```
✅ Recommended:
- Standard experiments: 50/50
- High-risk experiments: 70/30
- Multi-variant experiments: even distribution, or Control with the majority

❌ Not recommended:
- 90/10 (Test Group sample size too small)
- Traffic total not equal to 100%
```

### 4. Consistent Variable Types

```
✅ Recommended:
Control:    { price: 299 }   // Number
Test Group: { price: 249 }   // Number

❌ Not recommended:
Control:    { price: 299 }   // Number
Test Group: { price: "249" } // String
```

### 5. Complete Metric Framework

```
✅ Recommended:
- Primary Metric: Payment conversion rate (target: 10% lift)
- Secondary Metrics: Average order value, checkout duration
- Guardrail Metrics: Total revenue, user satisfaction

❌ Not recommended:
- Only set the primary Metric, ignoring secondary and guardrail Metrics
```

---

## FAQ

### Q: Can the Experiment Key be modified?

**A**: No. The Experiment Key cannot be modified once created. If you need to change it, you must delete the experiment and create a new one.

### Q: Can the Allocation be modified while the experiment is running?

**A**: Not recommended. Modifying Allocation disrupts sticky assignment and affects experiment results. If changes are necessary, stop the current experiment and create a new one.
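The sticky-assignment point is easiest to see with a hash-based bucketing sketch. This assumes the common hash-into-100-buckets scheme; the actual SDK algorithm is not documented here:

```python
import hashlib

# Sketch of deterministic (sticky) bucketing: the same user always hashes
# to the same bucket, so assignments only change if boundaries move.
def assign_variant(experiment_key, user_id, allocation):
    """allocation: list of (variant_name, percent) pairs summing to 100."""
    digest = hashlib.md5(f"{experiment_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    cumulative = 0
    for name, percent in allocation:
        cumulative += percent
        if bucket < cumulative:
            return name
    return allocation[-1][0]  # guard against rounding; normally unreached

split = [("control", 50), ("treatment", 50)]
# Re-running the assignment for the same user always yields the same variant.
assign_variant("cart_button_color_test", "user_42", split)
```

Under this scheme, changing a 50/50 split to 70/30 moves the control/treatment boundary from bucket 50 to bucket 70, silently reassigning every user in buckets 50-69, which is why stopping and creating a new experiment is the safer path.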
### Q: Can dynamic variables be modified while the experiment is running?

**A**: Not recommended. Modifying variable values affects the reliability of experiment results. If changes are necessary, stop the current experiment and create a new one.

### Q: How can I ensure the experiment configuration is correct?

**A**: Use the pre-launch checklist to verify each item. Before launch, validate the code integration in a test environment.

---

## Related Documentation

- [Experiment Design](experiment-design.mdx): Learn how to design scientific experiments
- [Targeting and Allocation](targeting-and-allocation.mdx): Configure Targeting Rules
- [SDK Integration](sdk-integration.mdx): Integrate experiments in your code

---

**Last updated**: January 29, 2026