Title: Use Cases
Locale: en
URL: https://sensorswave.com/en/docs/experiments/use-cases/
Description: Learn experiment design and implementation through real-world examples

Learn how to design, implement, and analyze A/B experiments through real-world business cases. This article provides four typical examples covering e-commerce, recommendations, pricing, and marketing scenarios.

## Case 1: E-Commerce Checkout Flow Optimization

### Background

An e-commerce platform's checkout flow includes 5 steps:

1. Cart confirmation
2. Shipping address entry
3. Delivery method selection
4. Payment method selection
5. Order confirmation

Data shows user drop-off at every step, with an overall payment conversion rate of only 18%, below the industry average of 25%.

### Hypothesis

Simplifying the checkout flow by merging 5 steps into 3 can increase the payment conversion rate from 18% to 22% (a roughly 22% relative lift).

### Experiment Design

**Control**:

- 5-step checkout flow (current design)
- Steps: Cart → Address → Delivery → Payment → Confirmation

**Test Group**:

- 3-step checkout flow (simplified design)
- Steps: Cart → Address + Delivery + Payment → Confirmation
- Address, delivery, and payment method merged into one page

**Allocation**: 50% vs 50%

**Duration**: 3 weeks

### Key Metrics

**Primary Metric**:

- Payment conversion rate (Cart → Payment success)

**Secondary Metrics**:

- Average checkout duration
- Drop-off rate per step
- Average order value

**Guardrail Metrics**:

- Total revenue
- Payment success rate
- User complaint rate

### Experiment Results

| Metric | Control | Test Group | Lift | P-Value |
|------|--------|--------|---------|---------|
| Payment conversion rate | 18.0% | 22.5% | +25.0% | 0.001 |
| Average checkout duration | 4.5 min | 3.2 min | -28.9% | 0.003 |
| Cart → Address step drop-off | 25% | 15% | -40.0% | 0.002 |
| Average order value | ¥285 | ¥290 | +1.8% | 0.15 |
| Total revenue | - | - | +27.3% | 0.001 |

### Conclusion

**Success**: The Test Group's payment conversion rate improved by 25%, exceeding the expected 22%, and the result is statistically significant (p = 0.001 < 0.05). Average checkout duration was significantly reduced, and drop-off rates at each step decreased notably. Guardrail metrics showed no negative impact, with total revenue increasing by 27.3%.

### Actions

1. Roll out the 3-step checkout flow to all users
2. Monitor conversion rate and user feedback after full release
3. Continue optimizing: test one-click checkout functionality (single step only)

---

## Case 2: Recommendation Algorithm Comparison

### Background

A content platform uses collaborative filtering for content recommendations, with a recommendation click-through rate of 8%, below expectations. The engineering team developed a deep learning-based recommendation algorithm and wanted to validate its effectiveness.

### Hypothesis

The deep learning recommendation algorithm has a 20% higher recommendation click-through rate than collaborative filtering (from 8% to 9.6%).

### Experiment Design

**Control**:

- Collaborative filtering algorithm
- Parameters: k = 20, similarity threshold = 0.5

**Test Group**:

- Deep learning algorithm
- Model: Two-tower model (User Tower + Item Tower)
- Features: User historical behavior, content tags, real-time context

**Allocation**: 50% vs 50%

**Duration**: 4 weeks

### Key Metrics

**Primary Metric**:

- Recommendation click-through rate (clicks / impressions)

**Secondary Metrics**:

- Content consumption duration
- User engagement rate (likes, comments, shares)
- Content diversity (distribution of recommended content categories)

**Guardrail Metrics**:

- User retention (next-day retention, 7-day retention)
- User satisfaction (NPS)
- Algorithm latency (p99 < 100ms)

### Experiment Results

| Metric | Control | Test Group | Lift | P-Value |
|------|--------|--------|---------|---------|
| Recommendation CTR | 8.0% | 10.2% | +27.5% | < 0.001 |
| Content consumption duration | 25 min | 28 min | +12.0% | 0.005 |
| User engagement rate | 3.5% | 3.8% | +8.6% | 0.08 |
| Next-day retention | 42% | 43% | +2.4% | 0.12 |
| 7-day retention | 28% | 28.5% | +1.8% | 0.45 |
| Algorithm latency (p99) | 45ms | 85ms | +88.9% | - |

### Conclusion

**Success with concerns**: The Test Group's recommendation click-through rate improved by 27.5%, exceeding expectations, and the result is statistically significant. Content consumption duration also showed a significant improvement. However, algorithm latency increased from 45ms to 85ms; while still within the acceptable range (< 100ms), optimization is needed.

### Actions

1. Roll out the deep learning algorithm to all users
2. Optimize algorithm performance to reduce latency:
   - Model quantization and compression
   - Feature caching optimization
   - GPU acceleration
3. Continue monitoring retention and user satisfaction

---

## Case 3: Pricing Strategy Test

### Background

A SaaS product's VIP annual membership costs ¥299, with a 5% conversion rate. The product team believes the price is too high and wants to test whether lowering the price can increase conversion rate and total revenue.

### Hypothesis

Reducing the price to ¥249 can increase conversion rate, and total revenue will increase (the conversion rate lift exceeds the price reduction).
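Before running an experiment like this, the break-even point implied by the hypothesis can be worked out directly: at a lower price, revenue parity requires the conversion rate to rise by at least as much (relatively) as the price falls. A minimal sketch in plain Python (the function name is illustrative, not part of any product API):

```python
def breakeven_conversion(base_price: float, base_conv: float, new_price: float) -> float:
    """Conversion rate the new price must reach for revenue parity.

    Revenue per exposed user is conversion_rate * price, so parity
    requires new_conv * new_price >= base_conv * base_price.
    """
    return base_conv * base_price / new_price

# Case 3 numbers: ¥299 at 5% conversion, testing ¥249
needed = breakeven_conversion(299, 0.05, 249)
print(f"break-even conversion at ¥249: {needed:.2%}")      # ≈ 6.00%
print(f"relative lift required: {needed / 0.05 - 1:.1%}")  # ≈ +20.1%
```

The observed +40% conversion lift at ¥249 comfortably clears this roughly 20% bar, which is why total revenue rose rather than fell.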
### Experiment Design

This is a **multi-variant experiment** comparing 3 price points:

| Variant | Price | Traffic |
|------|------|------|
| Control | ¥299 | 34% |
| Test Group A | ¥249 | 33% |
| Test Group B | ¥199 | 33% |

**Duration**: 4 weeks

### Key Metrics

**Primary Metrics**:

- Purchase conversion rate
- Total revenue (conversion rate × price)

**Secondary Metrics**:

- Customer lifetime value (LTV)
- Repurchase rate

**Guardrail Metrics**:

- User satisfaction
- Churn rate
- Brand perception (too low a price may damage brand image)

### Experiment Results

| Metric | Control (¥299) | Test Group A (¥249) | Test Group B (¥199) |
|------|---------------|-----------------|-----------------|
| Exposed users | 3,400 | 3,300 | 3,300 |
| Purchasing users | 170 | 231 | 330 |
| Purchase conversion rate | 5.0% | 7.0% | 10.0% |
| Conversion rate lift | - | +40% | +100% |
| Total revenue | ¥50,830 | ¥57,519 | ¥65,670 |
| Total revenue lift | - | +13.2% | +29.2% |
| P-Value (conversion rate) | - | 0.003 | < 0.001 |

### Analysis

**Conversion rate comparison**:

- ¥249: conversion rate increased by 40%, total revenue increased by 13.2%
- ¥199: conversion rate increased by 100%, total revenue increased by 29.2%

**Long-term value (LTV) analysis** (6 months later):

| Metric | ¥299 | ¥249 | ¥199 |
|------|------|------|------|
| Initial purchase conversion rate | 5.0% | 7.0% | 10.0% |
| Renewal rate | 60% | 58% | 52% |
| 2-year LTV | ¥359 | ¥289 | ¥207 |

### Conclusion

**Short-term success, but long-term caution needed**:

- **Short-term**: ¥199 pricing has the highest total revenue (+29.2%) and the highest conversion rate (10%)
- **Long-term**: ¥199 pricing has a lower renewal rate (52% vs 60%) and the lowest 2-year LTV (¥207 vs ¥359)
- **Overall consideration**: ¥249 is the better choice, with a significant conversion rate lift (+40%), increased total revenue (+13.2%), and minimal renewal rate impact (58% vs 60%)

### Actions

1. Adopt ¥249 pricing and roll out to all users
2. Monitor renewal rate and user satisfaction
3. Design membership benefit improvements to increase renewal rate

---

## Case 4: Marketing Copy Optimization

### Background

An e-commerce platform displays a banner on the homepage. The current copy reads "New Arrivals" with a 2.5% click-through rate, below expectations.

### Hypothesis

Emphasizing urgency with "Limited-Time Offer" can increase banner click-through rate by at least 30% (from 2.5% to 3.25%).

### Experiment Design

**Control**:

```
Title: New Arrivals
Subtitle: Curated picks waiting for you
Button: View Now
```

**Test Group**:

```
Title: Limited-Time Offer
Subtitle: 20% off everything, today only
Button: Shop Now
```

**Allocation**: 50% vs 50%

**Duration**: 1 week

### Key Metrics

**Primary Metric**:

- Banner click-through rate

**Secondary Metrics**:

- Landing page dwell time
- Add-to-cart conversion rate
- Purchase conversion rate

**Guardrail Metrics**:

- User complaint rate (avoid clickbait)
- Return rate

### Experiment Results

| Metric | Control | Test Group | Lift | P-Value |
|------|--------|--------|---------|---------|
| Banner exposed users | 50,000 | 50,000 | - | - |
| Banner clicking users | 1,250 | 1,875 | +50% | < 0.001 |
| Banner CTR | 2.5% | 3.75% | +50% | < 0.001 |
| Landing page dwell time | 35s | 42s | +20% | 0.015 |
| Add-to-cart conversion rate | 8% | 10% | +25% | 0.008 |
| Purchase conversion rate | 3% | 3.8% | +26.7% | 0.01 |
| User complaint rate | 0.1% | 0.15% | +50% | 0.25 |

### Conclusion

**Success**: The Test Group's banner click-through rate improved by 50%, exceeding the expected 30%, and the result is statistically significant. Landing page dwell time, add-to-cart conversion rate, and purchase conversion rate all showed significant improvement. Although the user complaint rate increased slightly (0.15% vs 0.1%), the difference is not significant (p = 0.25) and the absolute value remains very low, within the acceptable range.

### Actions

1. Roll out the "Limited-Time Offer" copy to all users
2. Ensure promotional offers are genuine to avoid false advertising
3. Regularly update copy to maintain freshness
4. Test other copy directions (e.g., "Free Shipping", "Buy More Save More")

---

## Case Summary

### Success Factors

From the above cases, we can summarize the key factors for successful experiments:

1. **Clear hypothesis**: Formulate testable hypotheses based on data and insights
2. **Appropriate metrics**: Choose primary metrics directly related to business goals
3. **Sufficient sample size**: Ensure at least 1,000 users per group and run for at least 1 week
4. **Holistic observation**: Pay attention to secondary and guardrail metrics; avoid "vanity metrics"
5. **Long-term thinking**: Consider long-term effects (e.g., LTV, renewal rate) rather than just short-term metrics

### Common Pitfalls

The cases above also show how to avoid common pitfalls:

1. **Ending too early**: Case 1 ran 3 weeks and Case 2 ran 4 weeks, ensuring a sufficient sample size
2. **Ignoring guardrail metrics**: Case 2 monitored algorithm latency; Case 3 monitored renewal rate and LTV
3. **Single-metric focus**: All cases observed secondary and guardrail metrics
4. **Short-term thinking**: Case 3 analyzed long-term LTV rather than just short-term conversion rate

---

## Related Documentation

- [Experiment Design](experiment-design.mdx): Learn how to design scientific experiments
- [Metrics and Analysis](metrics-and-analysis.mdx): Learn how to analyze experiment results
- [Best Practices](best-practices.mdx): Master experiment design and execution best practices

---

**Last updated**: January 29, 2026
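As an appendix to the cases above: p-values like those quoted for conversion-rate metrics typically come from a two-proportion test, which can be reproduced without special tooling. A minimal sketch of a two-sided two-proportion z-test using Case 4's banner numbers (plain Python, standard library only; the function name is illustrative, not a sensorswave API):

```python
import math

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Two-sided z-test for the difference between two proportions.

    x1/n1: conversions and sample size in the control group
    x2/n2: conversions and sample size in the test group
    Returns (z statistic, two-sided p-value).
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                 # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))     # two-sided normal tail
    return z, p_value

# Case 4: 1,250 / 50,000 control clicks vs 1,875 / 50,000 test clicks
z, p = two_proportion_z_test(1250, 50_000, 1875, 50_000)
print(f"z = {z:.2f}, p = {p:.2e}")  # p is far below 0.001
```

With samples this large, even a modest absolute difference (2.5% vs 3.75%) yields an extremely small p-value, consistent with the "< 0.001" reported in the results table.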