Title: A/B Experiment Overview
Locale: en
URL: https://sensorswave.com/en/docs/experiments/overview/
Description: Understand the value, core capabilities, and typical use cases of A/B experiments

A/B experiments are a scientific method for product validation. By randomly dividing users into groups and comparing the actual results of different solutions, they help you make data-driven decisions. Sensors Wave provides comprehensive A/B experiment capabilities — from experiment design, split tracking, to data analysis — empowering you to continuously optimize product experiences.

## What Problems Do A/B Experiments Solve

During product development and operations, we often face decision challenges:

**Avoid the risks of subjective decisions**:

- "Gut feeling" decisions may waste resources or steer the product in the wrong direction
- Disagreements within the team make it hard to reach consensus
- A new feature is discovered to underperform only after launch, when significant resources have already been invested

**Quantify the value of feature improvements**:

- Unable to precisely evaluate the actual benefits of product optimizations
- Difficult to choose the best option among multiple optimization plans
- Cannot prove the ROI of product improvements, which undermines future resource allocation

**Reduce the risks of full release**:

- Issues after a full-scale release affect the entire user base
- Unable to detect potential negative impacts in advance
- High rollback costs and a compromised user experience

Through **small-scale validation, data comparison, and scientific decision-making**, A/B experiments help you validate hypotheses before committing full resources — letting the data speak.
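The "letting the data speak" idea boils down to a significance test on the observed difference. The sketch below uses a standard two-proportion z-test with made-up traffic numbers; it is a generic statistical illustration, not part of the Sensors Wave SDK:

```python
# A minimal sketch of data-driven decision-making: a two-proportion z-test
# comparing conversion rates between a control group and a variant group.
# All numbers below are invented for illustration.
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Control: 500 conversions out of 10,000 users (5.0%)
# Variant: 575 conversions out of 10,000 users (5.75%)
p_value = two_proportion_z_test(500, 10_000, 575, 10_000)
print(f"p-value: {p_value:.4f}")  # significant at the 0.05 level
```

If the p-value falls below your chosen threshold (commonly 0.05), the observed lift is unlikely to be a chance fluctuation, which is exactly the kind of evidence that resolves "gut feeling" disagreements.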
## Core Value

### Scientific Decision-Making

Based on statistical significance testing rather than subjective judgment:

- **Random grouping**: Ensures fair comparison by eliminating selection bias
- **Statistical testing**: Uses p-values to determine whether observed differences are due to chance
- **Sample size calculation**: Ensures the experiment has sufficient statistical power to detect real differences

### Continuous Optimization

Rapid iteration for continuous product improvement:

- **Quick validation**: Reach experiment conclusions in 2–4 weeks
- **Incremental progress**: Optimize one variable at a time, accumulating optimization experience
- **Data accumulation**: Build an experiment knowledge base to guide future decisions

### User Segmentation

Discover preference differences across user groups:

- **Cohort experiments**: Run experiments targeting specific user groups
- **Differentiated strategies**: Provide personalized experiences for different users
- **Precision operations**: Develop targeted operational strategies based on experiment results

## A/B Experiment vs Feature Gate

A/B experiments and Feature Gates are related but distinct capabilities, suited to different scenarios:

| Dimension | A/B Experiment | Feature Gate |
|---------|---------|---------|
| **Primary purpose** | Validate feature effectiveness, optimize decisions | Control feature rollout, reduce risk |
| **When to use** | After a feature stabilizes, to validate its effectiveness | After feature development, ready for release |
| **Grouping method** | Random grouping to ensure fair comparison | Based on user properties, Cohorts, or percentage |
| **Data analysis** | Complete metric comparison and significance testing | Basic usage statistics |
| **Decision basis** | Business metrics (conversion rate, retention rate) | Technical metrics (stability, performance) |

> **Our recommendation**: Feature Gates and A/B experiments work best when used together.
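The sample size calculation listed under Scientific Decision-Making can be sketched with the standard two-proportion power approximation. This is a generic textbook formula with conventional defaults (alpha = 0.05, power = 0.80), shown for illustration only; it is not necessarily the exact computation Sensors Wave performs:

```python
# Rough sample-size estimate: users needed per variant to detect a given
# absolute lift in conversion rate. Generic two-proportion approximation.
from math import sqrt, ceil

Z_ALPHA = 1.96   # two-sided significance level, alpha = 0.05
Z_BETA = 0.84    # statistical power = 0.80

def sample_size_per_variant(base_rate: float, lift: float) -> int:
    """Users needed in each group to detect base_rate vs. base_rate + lift."""
    p1, p2 = base_rate, base_rate + lift
    p_bar = (p1 + p2) / 2
    numerator = (Z_ALPHA * sqrt(2 * p_bar * (1 - p_bar))
                 + Z_BETA * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / lift ** 2)

# Detecting a 5.0% -> 6.0% conversion lift: on the order of
# eight thousand users per variant
print(sample_size_per_variant(0.05, 0.01))
```

Note how quickly the required sample grows as the detectable lift shrinks; this is why experiments on low-traffic features need longer durations.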
First use Feature Gates to verify technical stability, then use A/B experiments to validate business impact. For a detailed comparison, see [Feature Gates vs A/B Testing](../feature-gates/gates-vs-experiments.mdx).

## Sensors Wave A/B Experiment Capabilities

Sensors Wave provides comprehensive A/B experiment capabilities covering the entire experiment workflow:

### Complete Experiment Workflow

**Design phase**:

- Define the experiment Hypothesis and target metrics
- Calculate the required sample size and experiment Duration
- Design experiment Variants and Allocation

**Split phase**:

- Stable hash-based split using user IDs
- Supports Login ID and Anonymous ID
- Supports targeting based on user properties

**Tracking phase**:

- SDK automatically logs experiment exposure events
- Tracks user behavioral Events within experiments
- Records conversion and key Metric data

**Analysis phase**:

- Use Segmentation to compare Variant metrics
- Use Funnel to compare conversion rates
- Statistical significance testing

**Decision phase**:

- Select the winning solution based on data
- Full release, or continue iterating

### Flexible Split Strategy

**Supports multiple split IDs**:

- **Login ID**: Logged-in users, consistent experience across devices
- **Anonymous ID**: Anonymous users, covering non-logged-in scenarios
- **Mixed strategy**: Automatically adapts to both logged-in and non-logged-in users

**Targeting based on user properties**:

- Device properties: Operating System, browser, Device Model
- Geographic location: Country, City, IP
- Marketing channels: UTM parameters, ad source
- App properties: App Version, SDK version

### Rich Dynamic Configuration

Modify experiment parameters without redeploying code:

- **String**: Copy, colors, URLs
- **Number**: Prices, discounts, weights
- **Boolean**: Feature Gates
- **Array**: Recommendation lists, tags
- **Object**: Complex configuration (JSON)

### Automatic Exposure Logs

SDK automatically logs experiment exposures — no manual
instrumentation needed:

- Automatically logs an `$ABImpress` Event when a user first hits an experiment
- Exposure deduplication: only one log per user per experiment
- Exposure property snapshot: records the Variant the user was assigned to and their User Properties

## Typical Use Cases

### UI/UX Optimization

**Goal**: Find the design that yields higher conversion rates

**Examples**:

- Compare two checkout flows (3-step vs 5-step) to verify which has higher conversion
- Test the impact of different button colors (red vs blue) on click-through rate
- Compare add-to-cart rates for two product detail page layouts

**Key metrics**: Click-through rate, conversion rate, average order value

### Algorithm Comparison

**Goal**: Validate whether a new algorithm outperforms the old one

**Examples**:

- Compare click-through rates between collaborative filtering and deep learning recommendation algorithms
- Test whether a new search ranking algorithm improves search conversion
- Validate the impact of a new pricing algorithm on revenue

**Key metrics**: Click-through rate, conversion rate, revenue, user satisfaction

### Pricing Strategy

**Goal**: Find the optimal price point to maximize revenue

**Examples**:

- Test which VIP annual fee (199 / 249 / 299) generates the highest total revenue
- Validate the impact of first-order discount levels (20% off vs 10% off) on new-user conversion
- Compare the impact of different shipping strategies on order volume

**Key metrics**: Purchase conversion rate, total revenue, customer lifetime value

### Copy Optimization

**Goal**: Improve marketing copy click-through and conversion rates

**Examples**:

- Compare click-through rates of two banner copies ("Limited-Time Offer" vs "New Arrivals")
- Test open rates of different email subject lines
- Validate click-through rates of different push notification copy

**Key metrics**: Click-through rate, open rate, conversion rate

## Feature Limitations

Sensors Wave's A/B experiment capabilities are inspired by Statsig's
design philosophy, but have been selectively tailored to the product's positioning.

**Currently supported core features**:

- ✅ Experiment split: Login ID, Anonymous ID
- ✅ User Property split: SDK preset properties
- ✅ Dynamic configuration: String, Number, Bool, Array, Object
- ✅ Multi-variant experiments
- ✅ Automatic exposure logs

## Next Steps

Now that you understand the value and core capabilities of A/B experiments, you can:

1. **[Quick Start](quick-start.mdx)**: Complete your first A/B experiment in 20 minutes
2. **[Core Concepts](core-concepts.mdx)**: Gain a deeper understanding of how A/B experiments work
3. **[SDK Integration](sdk-integration.mdx)**: Integrate A/B experiments in your code

---

**Last updated**: January 29, 2026