How I saved my experiment from outliers
Blog post from Statsig
Experimentation is a powerful tool, but it is prone to errors, which is why experimentation platforms need health checks that surface potential issues early. In a case study on the Statsig homepage, an A/B/C test of CTA text showed a high observed lift but no statistical significance, because the test was underpowered. The initial decision to extend the test duration turned out to be flawed: outlier data was skewing the results.

Statsig's automated checks flagged those outliers, which traced back to a click metric configured without winsorization. Investigating in Metrics Explorer and filtering out the erroneous users restored the experiment's integrity. The experience underscores the value of robust analytics and of understanding root causes in experimentation, and it highlights why comprehensive metrics management and analytical tooling matter for accurate, actionable insights in product analytics.
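To make the winsorization point concrete, here is a minimal sketch of capping a per-user click metric at an upper percentile so a handful of extreme users cannot dominate the experiment's mean. This is an illustrative assumption, not Statsig's actual implementation: the function name, the 99th-percentile cap, and the toy data are all hypothetical.

```python
import numpy as np

def winsorize_upper(values, upper_pct=99.0):
    """Cap values above the given percentile.

    A sketch of one-sided winsorization: real platforms choose the
    percentile per metric and may also winsorize the lower tail.
    """
    cap = np.percentile(values, upper_pct)
    return np.minimum(values, cap)

# Hypothetical per-user click counts: one bot-like outlier
# with 10,000 clicks dwarfs every other user.
clicks = np.array([1, 0, 2, 1, 3, 0, 1, 10_000], dtype=float)

raw_mean = clicks.mean()                          # dominated by the outlier
wins_mean = winsorize_upper(clicks, 95.0).mean()  # outlier capped
```

Without the cap, the single outlier drags the mean three orders of magnitude above the typical user; with it, the metric again reflects ordinary behavior, which is the failure mode the missing winsorization caused in the case study.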