
How to calculate statistical significance

Blog post from Statsig

Post Details

Company: Statsig
Date Published: -
Author: Tim Chan
Word Count: 779
Language: English
Hacker News Points: -
Summary

Analyzing A/B test results relies on hypothesis testing to determine whether an observed difference between two groups, A and B, is statistically significant or merely due to chance. The process starts by formulating a null hypothesis, which posits that there is no difference between the groups, and an alternative hypothesis, which holds that a difference exists.

The core of hypothesis testing is calculating statistical significance, usually via a p-value: the probability of observing a difference at least as large as the one measured, assuming the null hypothesis is true. A p-value below a predetermined threshold (alpha) justifies rejecting the null hypothesis in favor of the alternative. The test's reliability depends on sample size, standard deviation, and effect size; larger samples and larger effects typically yield more robust results.

In practice, calculating significance means computing a test statistic such as a Z-score or T-statistic from the observed difference (delta) and its standard error, then comparing the resulting p-value against the significance threshold to make a binary decision about rejecting the null hypothesis.
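The calculation described above can be sketched as a two-sample Z-test using only Python's standard library. This is a minimal illustration, not Statsig's implementation; the function name, inputs, and example numbers are all assumptions for demonstration.

```python
from math import sqrt
from statistics import NormalDist

def z_test(mean_a, mean_b, var_a, var_b, n_a, n_b, alpha=0.05):
    """Two-sided two-sample Z-test (illustrative sketch).

    Returns the Z-score, the p-value, and whether the null
    hypothesis (no difference between A and B) is rejected.
    """
    # Observed difference between the groups (delta)
    delta = mean_b - mean_a
    # Standard error of the difference between the two means
    se = sqrt(var_a / n_a + var_b / n_b)
    # Test statistic: delta in units of standard errors
    z = delta / se
    # Two-sided p-value from the standard normal distribution
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    # Binary decision: reject the null if p falls below alpha
    return z, p, p < alpha

# Hypothetical example: group B's mean is 0.5 higher than group A's
z, p, significant = z_test(10.0, 10.5, 4.0, 4.0, 1000, 1000)
print(f"z = {z:.3f}, p = {p:.5f}, reject null: {significant}")
```

With large samples the Z-test is a reasonable approximation; for small samples, a T-statistic with the appropriate degrees of freedom (e.g. via `scipy.stats.ttest_ind`) would be used instead, as the summary notes.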