Title: Quick Start
Locale: en
URL: https://sensorswave.com/en/docs/experiments/quick-start/
Description: Complete your first A/B experiment in 20 minutes

With this tutorial, you will complete your first A/B experiment in 20 minutes, experiencing the full workflow from creating an experiment, integrating code, to analyzing data.

## Prerequisites

Before you begin, make sure you have:

- Completed SDK integration (this guide uses the [JavaScript SDK](../data-integration/client-sdks/javascript.mdx) as an example)
- Enabled the A/B testing feature (e.g., set `enableAB: true` in the JavaScript SDK)
- Understood the basic concepts of [User Identification](../data-integration/user-identification.mdx)

## Example Scenario

**Business context**: You operate an e-commerce website and want to improve the click-through rate of the "Add to Cart" button. The design team has proposed two options: keep the current blue button, or switch to a more prominent red button.

**Experiment hypothesis**: The red button can increase the click-through rate by 10% compared to the blue button.

**Experiment design**:

- **Control**: Blue button (current design)
- **Test Group** (Treatment): Red button (new design)
- **Allocation**: 50% vs 50%
- **Duration**: 2 weeks (to ensure a sufficient sample size)

**Target metrics**:

- **Primary metric**: Button click-through rate
- **Secondary metrics**: Add-to-cart conversion rate, product detail page dwell time

## Step 1: Create the Experiment

Create an experiment in the Sensors Wave console:

### 1.1 Navigate to the Experiment Management Page

1. Log in to the Sensors Wave console
2. Click **A/B Experiments** in the left menu
3. Click the **New Experiment** button in the top-right corner
### 1.2 Fill in Basic Information

**Experiment Key**:

```
cart_button_color_test
```

**Display name**:

```
Cart Button Color Experiment
```

**Experiment description**:

```
Test whether a red button attracts more user clicks than a blue button, improving the add-to-cart conversion rate
```

**Hypothesis**:

```
Red button can increase click-through rate by 10%
```

### 1.3 Configure Variant Groups

**Control**:

- Variant name: `control`
- Allocation: `50%`

**Test Group** (Treatment):

- Variant name: `treatment`
- Allocation: `50%`

### 1.4 Configure Dynamic Variables

Add a variable to control the button color:

- **Variable name**: `button_color`
- **Variable type**: String
- **Control value**: `blue`
- **Test Group value**: `red`

### 1.5 Save and Launch

1. Click the **Save** button
2. After verifying the configuration, click the **Launch** button
3. The experiment status changes to **Running**

## Step 2: Code Integration

Integrate the experiment code on the product detail page:

```javascript
import { useState, useEffect } from 'react';

function ProductDetail() {
  const [buttonColor, setButtonColor] = useState('blue'); // Default blue

  useEffect(() => {
    // Get experiment variables
    async function loadExperiment() {
      try {
        const experiment = await sensorswave.getExperiment('cart_button_color_test');
        const color = experiment.get('button_color', 'blue');
        setButtonColor(color);
      } catch (error) {
        console.error('Failed to get experiment:', error);
        // Use the default value on failure
        setButtonColor('blue');
      }
    }
    loadExperiment();
  }, []);

  // Handle button click
  const handleAddToCart = () => {
    // Track the click event with experiment info
    sensorswave.trackEvent('AddToCartClicked', {
      experiment: 'cart_button_color_test',
      button_color: buttonColor,
      product_id: 'PROD-001',
      product_name: 'iPhone 15 Pro'
    });
    // Execute the add-to-cart business logic
    addToCartLogic();
  };

  return (
    <div>
      <h1>iPhone 15 Pro</h1>
      <p>Price: ¥7,999</p>
      {/* Dynamically set the button color based on the experiment variable */}
      <button style={{ backgroundColor: buttonColor }} onClick={handleAddToCart}>
        Add to Cart
      </button>
    </div>
  );
}
```
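The example above assumes React. If your page uses plain JavaScript, the same fetch-then-fall-back flow applies. The sketch below stubs the `sensorswave` client locally so it runs standalone; treat the stub (and the hardcoded `'red'` assignment inside it) as placeholders for the real SDK client from your integration:

```javascript
// Illustrative sketch only: `sensorswave` is stubbed here so the snippet is
// self-contained; in a real page you would use the SDK client instead.
const sensorswave = {
  async getExperiment(key) {
    // Stub: the real SDK resolves the current user's variant remotely.
    const assignments = { cart_button_color_test: { button_color: 'red' } };
    return {
      get: (name, fallback) => assignments[key]?.[name] ?? fallback,
    };
  },
};

// Resolve the button color, falling back to the Control value ('blue')
// if the experiment cannot be fetched.
async function resolveButtonColor() {
  try {
    const experiment = await sensorswave.getExperiment('cart_button_color_test');
    return experiment.get('button_color', 'blue');
  } catch (error) {
    console.error('Failed to get experiment:', error);
    return 'blue';
  }
}

resolveButtonColor().then((color) => {
  // e.g., document.querySelector('#add-to-cart').style.backgroundColor = color;
  console.log(color);
});
```

The key pattern is the same as in the React version: always pass a default to `experiment.get()` and fall back to the Control value when the SDK call fails, so users still see a working button.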
## Step 3: Verify and Monitor

After the experiment is launched, verify that the integration is correct and continuously monitor the data.

### 3.1 Verify Exposure Logs

Check exposure logs in the Sensors Wave console:

1. Navigate to **Insights** > **Segmentation**
2. Select event: `$ABImpress`
3. Add filter: `Experiment name = cart_button_color_test`
4. View data for the last hour

**Verification checklist**:

- Are exposure events being reported normally?
- Does each user have only one exposure log?
- Are the Control and Test Group exposure counts roughly equal (50% vs 50%)?

### 3.2 Check Group Distribution

Verify that the allocation is even:

1. In Segmentation, select event: `$ABImpress`
2. Group by **Variant name**
3. View the user count for the Control and Test Group

**Expected results**:

- Control: ~50%
- Test Group (treatment): ~50%

If the group distribution deviates significantly (e.g., 60% vs 40%), there may be a configuration error or an insufficient sample size.

### 3.3 Monitor Key Metrics

Regularly check the trend of core metrics.

**Primary metric**: Button click-through rate

1. In Segmentation, select event: `AddToCartClicked`
2. Group by **Variant name**
3. View the event count and user count for each variant

**Calculate the click-through rate**:

```
Click-through rate = Clicking users / Exposed users
```

**Secondary metric**: Add-to-cart conversion rate

1. Create a Funnel
2. Define the funnel steps:
   - Step 1: Product detail page view (`PageView`)
   - Step 2: Click add to cart (`AddToCartClicked`)
   - Step 3: Successfully added to cart (`AddToCart`)
3. Group by **Variant name** to compare conversion rates

## Step 4: Analyze Results

After the experiment has run for 2 weeks (or reached a sufficient sample size), analyze the results.
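The analysis can be checked numerically. The sketch below applies the click-through-rate formula from Step 3.3 to the illustrative counts used in the Step 4.2 example, and adds a two-proportion z-test as one common way to compute a p-value; Sensors Wave's built-in statistical test may use a different method, so treat this as a back-of-the-envelope check only:

```javascript
// Click-through rate = Clicking users / Exposed users (Step 3.3)
function clickThroughRate(clickingUsers, exposedUsers) {
  return clickingUsers / exposedUsers;
}

// Error-function approximation (Abramowitz & Stegun 7.1.26),
// used for the standard normal CDF below.
function erf(x) {
  const sign = x < 0 ? -1 : 1;
  const ax = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * ax);
  const poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
    - 0.284496736) * t + 0.254829592) * t;
  return sign * (1 - poly * Math.exp(-ax * ax));
}

function normalCdf(x) {
  return 0.5 * (1 + erf(x / Math.SQRT2));
}

// Two-sided two-proportion z-test on clicking users vs exposed users.
function twoProportionZTest(clicks1, n1, clicks2, n2) {
  const pooled = (clicks1 + clicks2) / (n1 + n2);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2));
  const z = (clicks2 / n2 - clicks1 / n1) / se;
  return { z, pValue: 2 * (1 - normalCdf(Math.abs(z))) };
}

const controlCtr = clickThroughRate(1200, 5000);   // 0.24
const treatmentCtr = clickThroughRate(1400, 5000); // 0.28
const lift = (treatmentCtr - controlCtr) / controlCtr; // ≈ 0.167, i.e. +16.7%

const { z, pValue } = twoProportionZTest(1200, 5000, 1400, 5000);
// z is far above the 1.96 threshold, so p < 0.05 and the
// difference would be statistically significant.
```

With counts of this size, a 24% vs 28% split is strongly significant; with much smaller samples the same relative lift can easily fail the test, which is why Step 4.1 insists on a minimum sample size.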
### 4.1 Wait for Sufficient Sample Size

**Minimum sample size requirements**:

- At least 1,000 exposed users per group
- At least 100 converted users per group

**Risks of an insufficient sample size**:

- Unstable results that are easily influenced by random factors
- Invalid statistical significance testing
- Potentially incorrect conclusions

### 4.2 Compare Key Metrics

Use Segmentation to compare data between the two groups.

**Example data**:

| Variant | Exposed Users | Clicking Users | CTR | Lift |
|---------|---------------|----------------|-----|------|
| Control (blue) | 5,000 | 1,200 | 24.0% | - |
| Test Group (red) | 5,000 | 1,400 | 28.0% | +16.7% |

**Analysis**:

- Test Group CTR: 28.0%, Control CTR: 24.0%
- Relative lift: (28.0% - 24.0%) / 24.0% = 16.7%
- Hypothesis validation: the actual lift of 16.7% exceeds the expected 10%

### 4.3 Check Statistical Significance

Use Sensors Wave's statistical testing feature (or third-party tools) to calculate the p-value.

**Criteria**:

- **p < 0.05**: The difference is statistically significant
- **p ≥ 0.05**: The difference is not statistically significant

## Step 5: Full Rollout

If the Test Group wins and the result is statistically significant, stop the experiment and roll the winning variant out to all users. Remove the experiment logic and use the winning value directly:

```javascript
function ProductDetail() {
  return (
    <div>
      <h1>iPhone 15 Pro</h1>
      <p>Price: ¥7,999</p>
      {/* Use red button directly */}
      <button style={{ backgroundColor: 'red' }} onClick={handleAddToCart}>
        Add to Cart
      </button>
    </div>
  );
}
```

## FAQ

### Q: How long should an experiment run?

At least 2 weeks, to cover a complete cycle (weekdays and weekends). If traffic is low, more time may be needed to reach a sufficient sample size.

### Q: What if the sample size is insufficient?

There are several approaches:

- Extend the experiment duration
- Increase the experiment traffic (e.g., from 50% to 80%)
- Relax (increase) the minimum detectable effect (MDE), accepting that only larger effects can be detected

### Q: What if the experiment result is not significant?

Possible reasons:

- Insufficient sample size: extend the experiment or increase traffic
- Incorrect hypothesis: the new solution has no significant difference from the old one
- External interference: holidays, marketing campaigns, etc.

Recommendation: Extend the experiment duration. If the results remain insignificant, the new solution has limited impact and you may want to abandon this optimization direction.
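The sample-size answers above can be made concrete. The sketch below uses the standard normal-approximation formula for a two-proportion test; the defaults (two-sided alpha = 0.05, 80% power) are conventional statistical choices, not Sensors Wave settings:

```javascript
// Approximate users needed per group to detect a relative lift in a
// baseline rate with a two-proportion test. zAlpha = 1.96 (two-sided
// alpha 0.05) and zBeta = 0.8416 (80% power) are conventional defaults.
function sampleSizePerGroup(baselineRate, relativeLift, zAlpha = 1.96, zBeta = 0.8416) {
  const p2 = baselineRate * (1 + relativeLift);
  const pBar = (baselineRate + p2) / 2; // average rate across the two groups
  const delta = p2 - baselineRate;      // absolute minimum detectable effect
  const n = ((zAlpha + zBeta) ** 2 * 2 * pBar * (1 - pBar)) / delta ** 2;
  return Math.ceil(n);
}

// This guide's scenario: 24% baseline CTR, 10% relative lift expected.
const n = sampleSizePerGroup(0.24, 0.10); // on the order of 5,000 users per group
```

Because the required sample grows with 1/MDE², halving the detectable effect roughly quadruples the users needed, which is why relaxing the MDE is one remedy for an insufficient sample.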
### Q: Can the configuration be modified during the experiment?

Not recommended. Modifying the configuration (e.g., variable values, allocation) affects the reliability of the experiment results. If changes are necessary:

- Stop the current experiment
- Create a new experiment
- Re-collect data

## Next Steps

Congratulations on completing your first A/B experiment! Next, you can:

1. **[Core Concepts](core-concepts.mdx)**: Gain a deeper understanding of how A/B experiments work
2. **[Experiment Design](experiment-design.mdx)**: Learn how to design scientifically rigorous experiments
3. **[SDK Integration](sdk-integration.mdx)**: Explore more SDK usage methods
4. **[Metrics and Analysis](metrics-and-analysis.mdx)**: Master data analysis techniques

---

**Last updated**: January 29, 2026