Why you should "accept" the null hypothesis when hypothesis testing
Blog post from Statsig
The statement "You can never accept the null hypothesis! You can only fail to reject it!" is a misunderstanding rooted in conflating two distinct frameworks: Fisher's significance testing and Neyman-Pearson's hypothesis testing.

Fisher's framework uses the p-value as a graded measure of evidence against a null hypothesis, without specifying an alternative. In this framework there is no "accepting" the null: a non-significant result is not proof that the null is true, only an absence of evidence against it.

Neyman-Pearson's framework, by contrast, treats the null and alternative hypotheses symmetrically and is built around decisions rather than evidence. You fix the error rates in advance, define a critical region for the test statistic, and then either reject the null in favor of the alternative or accept it. "Accepting" here does not mean believing the null is true; it means deciding to act as if it were, with known error probabilities attached to that decision.

The key difference, then, is Fisher's emphasis on assessing evidence without fixed significance levels versus Neyman-Pearson's structured decision-making, which controls error probabilities and uses critical regions instead of p-values. Treating the two frameworks as distinct, akin to different sports with different rules, clarifies why both "accepting" and "rejecting" hypotheses have their place in statistical analysis, depending on the framework employed.
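To make the contrast concrete, here is a minimal Python sketch (the simulated two-sample data, the t-test, and the choice of alpha are illustrative assumptions, not from the post). The same test statistic feeds both workflows: a Fisher-style report of the p-value as evidence, and a Neyman-Pearson-style binary decision against a critical region fixed before looking at the data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Illustrative data: a control group and a treatment group with a small shift
control = rng.normal(loc=0.00, scale=1.0, size=200)
treatment = rng.normal(loc=0.15, scale=1.0, size=200)

# --- Fisher-style significance testing ---
# Report the p-value as a graded measure of evidence against H0.
# No alternative hypothesis, no fixed cutoff, and no "accepting" H0.
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"Fisher: t = {t_stat:.3f}, p = {p_value:.4f} (evidence against H0)")

# --- Neyman-Pearson-style hypothesis testing ---
# Fix the Type I error rate alpha in advance, derive the critical region
# for the test statistic, then make a binary accept/reject decision.
alpha = 0.05
df = len(treatment) + len(control) - 2
critical_value = stats.t.ppf(1 - alpha / 2, df)  # two-sided critical region

if abs(t_stat) > critical_value:
    decision = "reject H0, accept H1"
else:
    decision = "accept H0 (act as if there is no effect)"
print(f"Neyman-Pearson: |t| = {abs(t_stat):.3f} vs "
      f"critical value {critical_value:.3f} -> {decision}")
```

In the Fisher-style branch you would publish the p-value itself and let readers weigh the evidence; in the Neyman-Pearson-style branch, which is closer to how controlled experiments are typically run, you act on the binary decision, and "accept H0" is a legitimate outcome with a known error rate.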