> you do have good reason to believe that the choice is harmless.
The issue you will run into here is that 95% confidence only means you get a false positive 5% of the time. It does not mean a neutral finding is 95% likely to be truly neutral. The lever that controls that is statistical power, which is oft-ignored in conversations about A/B testing. A common target is 80% power, which means a full 20% of real effects go undetected and show up as neutral findings — false negatives.
That is very true. However, you still have much better reason to believe the option is neutral than you have to believe it is beneficial. In an example like the one given in the article, you also likely have enough statistical power to be reasonably confident that the option is close to neutral — so even if the true effect is negative, it's probably not strongly negative.
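The 80%-power point is easy to check by simulation. Here's a minimal sketch (all parameters are assumed for illustration, not from the thread): an A/B test on conversion rates sized for roughly 80% power at 95% confidence, where variant B genuinely lifts conversion from 10% to 12%. About a fifth of the runs should still come back "neutral" (non-significant), and every one of those is a false negative.

```python
import math
import random

random.seed(42)

P_A, P_B = 0.10, 0.12   # true conversion rates (B is genuinely better)
N = 3840                # per-arm sample size giving ~80% power for this effect
Z_CRIT = 1.96           # two-sided z threshold for 95% confidence
TRIALS = 500

def conversions(p, n):
    """Count successes in n Bernoulli(p) draws."""
    return sum(1 for _ in range(n) if random.random() < p)

missed = 0
for _ in range(TRIALS):
    ca, cb = conversions(P_A, N), conversions(P_B, N)
    pa, pb = ca / N, cb / N
    pooled = (ca + cb) / (2 * N)                    # two-proportion z-test
    se = math.sqrt(2 * pooled * (1 - pooled) / N)
    z = (pb - pa) / se
    if abs(z) < Z_CRIT:   # test reads "neutral" despite the real effect
        missed += 1

miss_rate = missed / TRIALS
print(f"false-negative rate: {miss_rate:.1%}")  # expect roughly 20%
```

Note that the miss rate depends entirely on sample size relative to effect size: halve N and the "neutral" rate climbs well above 20%, which is why a neutral finding from an underpowered test tells you very little.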