Hacker News

> you do have good reason to believe that the choice is harmless.

The issue you will run into here is that 95% confidence means you will only get a false positive 5% of the time when there is truly no effect. It does not mean a neutral finding is 95% likely to be truly neutral. The lever that controls that is statistical power, which is often ignored in conversations about A/B testing. The conventional target is 80% power, which means that when a real effect of the assumed size exists, the test will still miss it (a false negative) a full 20% of the time.
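As a rough illustration (not from the thread): here's a sketch of how power, and hence the false-negative rate, falls out of a test's design, using a two-proportion z-test under a normal approximation. The 10% baseline conversion, 1-point lift, and 10,000 users per arm are made-up numbers for the example.

```python
from statistics import NormalDist

def power_two_proportions(p1, p2, n, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test
    with n subjects per arm (normal approximation; ignores the
    negligible far-tail rejection region)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)          # critical value, e.g. 1.96
    se = ((p1 * (1 - p1) + p2 * (1 - p2)) / n) ** 0.5
    effect = abs(p2 - p1) / se                  # standardized true effect
    # P(reject H0 | the true effect is p2 - p1)
    return z.cdf(effect - z_alpha)

# Hypothetical design: 10% baseline conversion, true lift to 11%,
# 10,000 users per arm.
pw = power_two_proportions(0.10, 0.11, 10_000)
print(f"power = {pw:.2f}, false-negative rate = {1 - pw:.2f}")
```

With these numbers the test is underpowered (well below the conventional 80%), so a "neutral" result here is quite likely to be a missed real effect.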



That is very true. However, you do have much better reason to believe that the option is neutral than you have to believe that the option is beneficial. In an example like the one given in the article, you also likely have enough statistical power to be reasonably confident that the option is close to neutral, so even if the true effect is negative, it's probably not strongly negative.


Right. And you can also ratchet up the statistical power you want, at the cost of increased sample size requirements.
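To put a number on that trade-off (my own sketch, not from the thread): the standard sample-size formula for a two-proportion z-test shows how the required n per arm grows as you demand more power. The 10% vs. 11% conversion rates are hypothetical.

```python
from statistics import NormalDist

def n_per_arm(p1, p2, power, alpha=0.05):
    """Required sample size per arm for a two-sided two-proportion
    z-test (normal approximation): n = (z_a/2 + z_b)^2 * var / delta^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ((z_alpha + z_beta) ** 2 * var) / (p1 - p2) ** 2

# Hypothetical effect: 10% -> 11% conversion, 95% confidence.
for pw in (0.80, 0.90, 0.95, 0.99):
    print(f"power {pw:.0%}: ~{round(n_per_arm(0.10, 0.11, pw)):,} per arm")
```

Each step up in power costs real traffic: going from 80% to 95% power roughly 1.7x's the sample size for this effect.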


Which is still better than 50%. After a split test, your point estimate, whether significant or not, is still your best guess of the effect.



