Title: Core Concepts
Locale: en
URL: https://sensorswave.com/en/docs/experiments/core-concepts/
Description: Gain a deep understanding of the working principles and core concepts of A/B experiments

A deep understanding of the core concepts of A/B experiments helps you design more scientific experiments, correctly interpret results, and avoid common pitfalls. This article covers the basic components of an experiment, the split mechanism, dynamic configuration, exposure logs, and experiment statuses.

## Basic Components of an Experiment

A complete A/B experiment contains the following core elements:

### Experiment Identifier

**Experiment Key**:

- Unique identifier used in code
- Naming convention: use underscores, e.g., `cart_button_color_test`
- Cannot be modified once created, which ensures stability

**Display name**:

- Friendly name shown in the console, e.g., "Cart Button Color Experiment"
- Can be modified at any time without affecting code

**Description**:

- Detailed explanation of the experiment, including background, purpose, and expected outcome
- Helps team members quickly understand the experiment

### Hypothesis

A clear experiment hypothesis is the foundation of a scientific experiment:

**Good hypothesis examples**:

- "A red button can increase click-through rate by 10% compared to the blue button"
- "Simplifying to a 3-step checkout flow can raise payment conversion rate from 20% to 25%"
- "The deep learning recommendation algorithm has a 15% higher click-through rate than collaborative filtering"

**Poor hypothesis examples**:

- "The red button is better" (lacks quantified metrics)
- "The new flow can improve conversion rate" (no specific improvement target)
- "Users prefer the new design" (hard to measure)

### Variant Groups

Variants are the different solutions in an experiment, typically including:

**Control Group**:

- The current solution, serving as the baseline
- Usually named `control`
- Used as the comparison basis for the Test Group

**Test Group** (Treatment):

- The new solution to be validated
- Can have one or more Test Groups
- Usually named `treatment`, `treatment_v2`, etc.

**Multi-variant experiment example**:

```
Pricing strategy experiment:
- Control (control): ¥299
- Test Group A (treatment_a): ¥249
- Test Group B (treatment_b): ¥199
```

### Allocation

Allocation determines how much user traffic each Variant receives:

**Two-variant experiment**:

```
Control: 50%
Test Group: 50%
```

**Multi-variant experiment**:

```
Control: 34%
Test Group A: 33%
Test Group B: 33%
```

**Conservative strategy** (for higher-risk new features):

```
Control: 70%
Test Group: 30%
```

### Dynamic Configuration

Dynamic configuration defines the specific parameters for each Variant:

```javascript
// Control configuration
{
  button_color: "blue",
  button_text: "Add to Cart"
}

// Test Group configuration
{
  button_color: "red",
  button_text: "Buy Now"
}
```

---

## Split Mechanism

The split mechanism determines how users are assigned to different Variants; it is the core of A/B experiments.

### Split ID

Sensors Wave supports two types of split IDs:

#### Login ID

**Definition**: A unique identifier for the user after logging in, typically the user's primary key in the business system.

**Characteristics**:

- Cross-device consistency: the same user sees the same Variant across different devices
- Long-term stability: the user sees the same Variant after logging out and back in
- Applicable scenarios: applications requiring user login (e-commerce, SaaS, social, etc.)

**Examples**:

- User ID: `user_12345`
- Custom account name: `zhangsan`

**Recommendation**: For products requiring user login, prefer Login ID for splitting to ensure a consistent user experience.

#### Anonymous ID

**Definition**: A unique ID identifying a user's device or browser, automatically generated by the SDK on first visit.
**Characteristics**:

- Device-level: identifies a single device or browser
- SDK auto-managed: no manual setup required
- Regenerated after clearing data: a new Anonymous ID is generated after the user clears cookies or app data

**Storage methods**:

- Web: stored in cookies
- iOS: stored in the KeyChain, or a generated UUID
- Android: uses the Android ID, or a generated UUID

**Applicable scenarios**:

- Non-logged-in users
- Public websites or content platforms
- Landing page testing

#### Mixed Strategy

For applications with both logged-in and anonymous users, Sensors Wave automatically adopts a mixed strategy:

- **Logged-in users**: split using Login ID
- **Non-logged-in users**: split using Anonymous ID
- **Before and after login**: after login, the experiment assignment may change (switching from Anonymous ID-based to Login ID-based assignment)

**Note**: To avoid users seeing different experiment Variants before and after login, we recommend only splitting logged-in users for important experiments.

### Split Algorithm

Sensors Wave uses a stable hash algorithm for splitting:

```
Variant index = hash(user_id + experiment_key) % 100
```

**Algorithm characteristics**:

1. **Stability**: the same user in the same experiment is always assigned to the same Variant, regardless of when they visit
2. **Uniformity**: traffic is evenly distributed across Variants
3. **Independence**: splits across different experiments are independent; the same user may be assigned to different Variants in different experiments

**Split example**:

```
User A (user_001) in experiment cart_button_color_test:
hash("user_001" + "cart_button_color_test") % 100 = 23
23 < 50, so the user is assigned to the Test Group
```

### Sticky Assignment

**Definition**: Once a user is assigned to a Variant, they remain in that Variant throughout the experiment, even if the experiment configuration changes.
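The deterministic split that makes assignments sticky can be sketched as follows. This uses a toy FNV-1a hash purely for illustration; Sensors Wave's actual hash function and bucket-to-variant mapping may differ:

```javascript
// Toy 32-bit FNV-1a hash; the real SDK's hash function may differ.
function hash(str) {
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // unsigned 32-bit multiply
  }
  return h;
}

// Deterministic split: the same user + experiment always lands in the same
// bucket. Here the first `allocation`% of buckets map to the Test Group.
function assignVariant(userId, experimentKey, allocation = 50) {
  const bucket = hash(userId + experimentKey) % 100;
  return bucket < allocation ? "treatment" : "control";
}

const v1 = assignVariant("user_001", "cart_button_color_test");
const v2 = assignVariant("user_001", "cart_button_color_test"); // a later visit
console.log(v1 === v2); // true: no state needed, the hash itself is sticky
```

Because the assignment is a pure function of `user_id + experiment_key`, no per-user state needs to be stored to keep assignments stable across visits and devices.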
**Importance**:

- Ensures a consistent user experience, avoiding confusion
- Guarantees the reliability of experiment results
- Prevents the same user from switching Variants, which would distort the data

**Considerations**:

- Do not modify Allocation ratios during an experiment
- If changes are necessary, stop the current experiment and create a new one

---

## Exposure Logs ($ABImpress)

Exposure logs record information about users being assigned to experiments and are the foundational data for experiment analysis.

### Automatic Logging

The SDK automatically logs `$ABImpress` Events in the following scenarios:

**JavaScript SDK**:

```javascript
// Automatically logged when calling getExperiment or checkFeatureGate
const experiment = await sensorswave.getExperiment('experiment_key');
// SDK automatically logs a $ABImpress Event
```

**Go SDK**:

```go
// GetExperiment automatically logs exposure
result, err := client.GetExperiment(user, "experiment_key")
// SDK automatically logs a $ABImpress Event
```

### Exposure Deduplication

**Mechanism**: Only one exposure is logged per user per experiment.
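A minimal sketch of such deduplication, assuming a simple in-memory cache (the real SDK persists its cache, and the class and method names here are hypothetical):

```javascript
// Hypothetical exposure logger with per-user, per-experiment deduplication.
class ExposureLogger {
  constructor() {
    this.seen = new Set(); // keys of exposures already logged
  }

  // Returns true only when a $ABImpress event would actually be sent.
  logImpression(userId, experimentKey) {
    const key = `${userId}:${experimentKey}`;
    if (this.seen.has(key)) return false; // duplicate: suppressed
    this.seen.add(key);
    // ...send the $ABImpress event here...
    return true;
  }
}

const logger = new ExposureLogger();
console.log(logger.logImpression("user_12345", "cart_button_color_test")); // true
console.log(logger.logImpression("user_12345", "cart_button_color_test")); // false (deduplicated)
console.log(logger.logImpression("user_12345", "pricing_test")); // true (different experiment)
```

Deduplication matters for analysis: counting the same user's exposure repeatedly would inflate the denominator of the conversion rate.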
**Implementation**:

- The SDK caches exposed experiments locally
- Repeated calls to `getExperiment` do not log additional exposures

**Example**:

```javascript
// First call: logs exposure
const exp1 = await sensorswave.getExperiment('experiment_key');

// Second call: no exposure logged (deduplicated)
const exp2 = await sensorswave.getExperiment('experiment_key');

// User refreshes the page and calls again: no exposure logged (deduplicated)
const exp3 = await sensorswave.getExperiment('experiment_key');
```

### Exposure Properties

The `$ABImpress` Event contains the following properties:

```javascript
{
  event: "$ABImpress",
  properties: {
    experiment_key: "cart_button_color_test",        // Experiment key
    experiment_name: "Cart Button Color Experiment", // Experiment display name
    variant: "treatment",                            // Assigned Variant
    // User Property snapshot (User Properties at the time of split)
    $os: "iOS",
    $browser: "Safari",
    $city: "Beijing",
    // ...
  },
  distinct_id: "user_12345",
  time: "2026-01-29 10:30:00"
}
```

### Exposure vs Conversion

**Exposure**: The user was assigned to an experiment Variant and saw the experiment content.

**Conversion**: The user completed a target action within the experiment (e.g., clicking a button, completing a payment).

**Conversion rate calculation**:

```
Conversion rate = Converted users / Exposed users
```

---

## Experiment Status

An experiment goes through multiple statuses during its lifecycle:

### Draft

**Characteristics**:

- Experiment has been created but not released
- Configuration can be modified repeatedly (Variants, Allocation, variables, etc.)
- Experiment configuration cannot be retrieved in code

**Operations**:

- Modify experiment configuration
- Release experiment
- Delete experiment

### Running

**Characteristics**:

- Experiment has been released and is collecting data
- Experiment configuration can be retrieved in code
- SDK automatically logs exposure events

**Operations**:

- Pause experiment
- View experiment data
- Modifying configuration is not recommended (it affects experiment results)

**Considerations**:

- During a running experiment, avoid modifying Allocation or variable values
- If changes are necessary, stop the current experiment and create a new one

### Paused

**Characteristics**:

- Experiment is temporarily stopped
- Code may still return experiment configuration (depending on SDK cache)
- No new exposure logs are recorded

**Operations**:

- Restart experiment
- End experiment

**Use cases**:

- Issues discovered in the experiment requiring a temporary stop
- Waiting for issue resolution before restarting

### Ended

**Characteristics**:

- Experiment has completed and stopped collecting data
- Experiment configuration cannot be retrieved in code (returns default values)
- Historical data is preserved for analysis

**Operations**:

- View experiment results
- Archive experiment
- Export experiment report

**Note**:

- After an experiment ends, update the code to remove experiment logic
- Apply the winning solution to all users

---

## Experiment Lifecycle Diagram

```
Create experiment
        ↓
    [Draft]  ← Configuration can be modified repeatedly
        ↓ Release
   [Running] ← Collecting data
    ↓ Pause              ↓ Sufficient sample size
  [Paused] → Restart → [Running]
    ↓ End                ↓ End
    [Ended]  ← Analyze results, make decisions
        ↓
Apply winning solution, update code
```

---

## FAQ

### Q: Will a user be assigned to a different Variant each time they visit?

**A**: No. Sensors Wave uses a stable hash algorithm to ensure the same user is always assigned to the same Variant within the same experiment. This is the essence of sticky assignment.
### Q: Can a user's experiment Variant change before and after login?

**A**: It may. If the user is split by Anonymous ID before login and switches to Login ID after login, the assigned Variant may change. We recommend only splitting logged-in users for important experiments.

### Q: Can the same user participate in multiple experiments simultaneously?

**A**: Yes. Splits across different experiments are independent. The same user can be assigned to the Control Group in Experiment A and the Test Group in Experiment B.

### Q: Can the Allocation be modified while the experiment is running?

**A**: Not recommended. Modifying the Allocation disrupts sticky assignment, affecting the reliability of experiment results. If changes are necessary, stop the current experiment and create a new one.

### Q: Why are exposure logs not being recorded?

**A**: Check the following:

- Is the A/B testing feature enabled in the SDK (`enableAB: true`)?
- Has the experiment been released (status is "Running")?
- Does the user meet the experiment's Targeting Rules?
- Has an exposure already been recorded (exposure deduplication)?

---

## Related Documentation

- [A/B Experiment Overview](overview.mdx): Understand the value and core capabilities of A/B experiments
- [Quick Start](quick-start.mdx): Complete your first experiment in 20 minutes
- [Experiment Design](experiment-design.mdx): Learn how to design scientifically rigorous experiments
- [Targeting and Allocation](targeting-and-allocation.mdx): Deep dive into the split mechanism

---

**Last updated**: January 29, 2026