This marketer’s guide to understanding statistics isn’t for technical people. We are putting this together to demystify statistics because, like it or not, as a marketer you can’t avoid it.
The core element in statistics for a marketer is the concept of statistical significance.
Statistical significance tells the marketer whether her marketing experiment produced a random fluke or a reliable result.
For example, if your marketing experiment was tossing a coin and you got three heads in a row, you might be tempted to conclude that this is extremely rare. Only, it isn’t: because each toss is independent, there is actually a 1 in 8 chance of it happening (0.5 × 0.5 × 0.5 = 12.5%).
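If you want to sanity-check that coin-toss arithmetic yourself, a few lines of Python are enough (this is just an illustration, not something you need in order to follow the rest of the guide):

```python
# A fair coin lands heads with probability 0.5.
# Three tosses are independent, so multiply 0.5 three times.
p_three_heads = 0.5 ** 3
print(p_three_heads)  # 0.125, i.e. 1 in 8
```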
Our brains are bad at judging probability, so we can’t trust them on their own. But I’m a marketer, so I’ll say something most people will not: there is nothing wrong with instincts. In fact, following instincts is a central feature of how we, as a conversion optimization agency, design experiments.
Don’t let the technical people convince you that the marketing instinct you’ve honed over hundreds of hours of practice isn’t valuable.
But what a statistical calculation can do is prevent us from drawing marketing conclusions that could have disastrous business implications.
I’m sure you’ve seen case studies suggesting that minor site tweaks led to 30% conversion rate improvements. Many of those tests were interpreted incorrectly because the marketer ignored basic statistical facts. My job is to prevent you from making the same mistakes.
Understanding Statistical Significance
Let’s say you are running two Facebook ads that lead to two distinct landing pages. As a marketer, your job is to understand which ad is better so you can stop spending on the poor performer and dedicate your entire budget to the winning formula.
You track your performance every hour.
Hour 1, ad 1, 1 visitor, status = didn’t buy.
Hour 1, ad 2, 1 visitor, status = bought.
Hour 2, ad 1, 1 visitor, status = didn’t buy.
Hour 2, ad 2, 1 visitor, status = didn’t buy.
Hour 3, ad 1, 1 visitor, status = didn’t buy.
Hour 3, ad 2, 1 visitor, status = bought.
You might feel you are seeing a pattern but this is most likely random, just like our coin toss experiment.
What an A/B testing tool like VWO.com does is take those numbers and plot them across a probability distribution curve like this:
What the testing tool is doing is comparing the performance of those two ads to see if the difference in their performance is statistically significant.
The testing tool considers a number of signals (like how long the test has been running and how many conversions have been recorded). If a variation’s probability of being the best is greater than or equal to 95%, a winner is declared.
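For the curious, here is a rough sketch of one classical way a tool can compare two conversion rates: a two-proportion z-test. (VWO’s actual engine is more sophisticated and uses a Bayesian approach to compute “probability to be the best”; the numbers below are made up purely for illustration.)

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Classical check: is the gap between two conversion
    rates bigger than random noise would explain?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate, assuming the two ads truly perform the same
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical data: ad 1 converts 20/1000, ad 2 converts 35/1000
z, p = two_proportion_z_test(20, 1000, 35, 1000)
print(round(p, 3))  # a p-value below 0.05 means "significant at 95%"
```

The key takeaway isn’t the formula; it’s that the tool is asking one question for you: “could this gap be noise?”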
I know this is a simplified explanation. But I’m a marketer, and I don’t really need to understand terms like standard deviation to do my job, because the testing tool has been calibrated to do that for me.
I rely on my watch to tell me the time; I don’t fully understand how it calculates the time so accurately.
Two Statistics Levers
This is important for the marketer to understand. When it comes to A/B testing there are two levers:
– How long the test runs.
– The contrast between test ideas.
How long a test runs isn’t really in my control, because a testing tool like VWO.com determines that for me. I just let it run for as long as possible (always two weeks or more).
But what I can control is the contrast between the ideas being tested. This is an important detail. In the image above you noticed two curves; this is how the testing tool compares the difference between the two ideas. If the differences between the ideas are small, the test will need a lot more time to collect enough data.

It’s like sip-tasting sugar water where the amount of sugar has been changed only slightly. To know which batch is truly sweeter, you will need to sip-taste it many times, because it is hard for your tongue to pick up the difference between sweet and slightly sweeter.
But if I asked you to tell the difference between sugar water and salt water, you’d be able to do that on the first sip. This is because the contrast is massive.
If you can incorporate the same type of contrast between your test ideas it will allow the testing tool to declare a winner much faster.
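The sugar-water intuition maps directly onto sample-size arithmetic. Below is a standard back-of-the-envelope formula for how many visitors each variation needs (at 95% confidence and 80% power); the conversion rates are invented just to show the effect of contrast:

```python
from math import ceil

def visitors_needed(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors per variation required to detect
    the gap between conversion rates p1 and p2
    (95% confidence, 80% power)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Small contrast: 2.0% vs 2.2% conversion -> tens of thousands of visitors
print(visitors_needed(0.020, 0.022))
# Big contrast: 2.0% vs 4.0% conversion -> roughly a thousand visitors
print(visitors_needed(0.020, 0.040))
```

Halving the lift roughly quadruples the traffic you need, which is why bold, contrasting test ideas finish so much faster.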
There are two ways to introduce contrast:
1: Once your sales pitch has been constructed, have each test concept focus on a different Selling Angle.
2: Alternatively, keep the pitch the same but alter the tone of voice for each concept.