I did a quick calculation in Excel to determine how long an ad server should let an ad run before automatically shutting it off or making a bid decision. An ad might need to be shut off because the URL is down, the offer is being scrubbed, the landing page isn't converting, that particular ad isn't performing, or some other variable is off.
Let’s say that you have an offer that converts at 1%. Then you’d expect to have 1 conversion in every 100 clicks. What if 200 clicks go by and there are no conversions? Is that a sign of something bad or is that just noise?
The math of calculating statistical significance can get complicated, so I'm going to show you a shortcut that comes from probability theory. If you want to go straight to the formula, skip to the bottom.
Switching examples, let's say that you flipped a coin 100 times. What is the probability that you get at least 1 head? You might be tempted to calculate the probability of getting exactly 1 head in 100 tosses, add that to the probability of getting exactly 2 heads, and so forth, all the way up to 100 heads. That's a lot of math.
But did you know that the opposite of "at least one" is "none"? If you didn't get at least 1 head, you got none. The probability of not getting a head on any single flip is 50%, so the probability of getting no heads at all is just 50% to the 100th power. Subtract that from 100% and you have the probability of at least one head, with no long sum required.
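If you want to sanity-check the complement trick, here's a minimal sketch in Python (the original calculation was done in Excel, so the language choice here is mine):

```python
# Probability of getting no heads in 100 fair coin flips
p_no_heads = 0.5 ** 100

# The complement: probability of at least one head
p_at_least_one = 1 - p_no_heads

print(f"P(no heads in 100 flips)    = {p_no_heads:.2e}")  # ~7.89e-31
print(f"P(at least 1 head in 100)   = {p_at_least_one}")  # rounds to 1.0 in floating point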
Back to our offer that converts at 1%: the probability of a single click NOT converting is 99% (100 percent minus 1 percent). Thus, the probability of not getting a conversion in X clicks is just 99% to the Xth power. If 200 clicks go by, you would expect to see 2 conversions, but if there are none, what is the probability it's just random noise?
Plug those numbers in and you'll see that if an offer should convert at 1% and 200 clicks go by with no conversions, there's a 13.4% chance (0.99 to the 200th power) that it's just random noise, and therefore an 86.6% chance that something is wrong. If you change the conversion rate and number of observations, the probability changes, too; just plug in the numbers.
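In code, the whole "formula" is one line. Here's a sketch (the function name `prob_noise` is mine, not anything standard):

```python
def prob_noise(conv_rate, clicks):
    """Probability of seeing zero conversions in `clicks` clicks
    purely by chance, given the expected conversion rate."""
    return (1 - conv_rate) ** clicks

print(f"{prob_noise(0.01, 200):.1%}")   # 13.4% -- zero conversions could easily be noise
print(f"{prob_noise(0.05, 200):.4%}")   # 0.0035% -- almost certainly a real problem
```

The second line previews the 5% example further down: the higher the expected conversion rate, the faster a zero-conversion streak becomes damning.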
Now before you set up a script to alert you to changes in conversion rates, consider that if you set the thresholds for alerts too low, you’ll get inundated with false positives– the equivalent of “crying wolf”.
So in the above case, there's a 13.4% chance that a zero-conversion streak is just noise rather than a real problem (the URL being down, the offer sucking, the ad not performing, or otherwise). If you're running 20,000 clicks a day, then you are evaluating this test 100 times a day (200 x 100 = 20,000). Thus, you'd get alerted, on average, about 13 times a day even when nothing is wrong. Is that too many times for you? You decide the balance of sensitivity that's right for you.
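To see how noisy your alerting would be, you can extend the sketch above. The 20,000 clicks a day and the 200-click window are just the numbers from this example; plug in your own:

```python
daily_clicks = 20_000
window = 200          # clicks per test window
conv_rate = 0.01      # expected conversion rate

tests_per_day = daily_clicks / window              # 100 tests per day
false_alarm_rate = (1 - conv_rate) ** window       # 13.4% chance per test when all is fine
expected_false_alarms = tests_per_day * false_alarm_rate

print(f"Expected false alarms per day: {expected_false_alarms:.0f}")  # ~13
```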
If your expected conversion rate is 5%, for example on a dating offer, then you'd expect to see a conversion every 20 clicks. Thus, the probability you don't have any conversions after 200 clicks is far lower than at 1%. In fact, it's 0.95 to the 200th power, or about 0.004%.
If you're not a math guy or somehow got lost in all the numbers here, just use this rule of thumb: if you don't have a conversion in 3 times as many clicks as it should take to get one conversion, then something is probably wrong.
So if you expect to see 1 conversion every 25 clicks, then shut things down after 75 clicks.
If you expect 1 conversion in 100 clicks, then stop after 300 clicks.
That gives you roughly a 95% confidence level, since the chance of zero conversions by luck alone over that stretch is about 5%. It's another way of saying that you're reasonably sure it's something worth looking at.
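The rule of thumb works because (1 - p) raised to the 3/p power is roughly e^-3, about 5%, for any small conversion rate p. A quick check, again in Python (the sample rates are my own picks):

```python
import math

for conv_rate in (0.01, 0.02, 0.04, 0.05):
    cutoff = round(3 / conv_rate)            # 3x the expected clicks per conversion
    p_noise = (1 - conv_rate) ** cutoff      # chance the dry streak is still just noise
    print(f"rate {conv_rate:.0%}: stop after {cutoff} clicks, "
          f"noise probability {p_noise:.1%}")

print(f"limit: e^-3 = {math.exp(-3):.1%}")   # ~5.0%
```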
Set your confidence threshold too low and you get false positives all day.
Set it too high and you’ll burn way more inventory than you should to detect differences in conversion.
See the chart below: the percentages there are the chance that the alert is due to just statistical noise, and 100% minus each number is, therefore, the chance that there's a real problem. For example, if you are looking at a 2% conversion rate and 200 clicks with no conversions, there's a 1.76% chance nothing is wrong and a 98.2% chance that something is out of whack.
[Chart: probability that a zero-conversion streak is just noise, by conversion rate and number of clicks]
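In case the chart doesn't render for you, this sketch reproduces the same kind of table. The specific conversion rates and click counts are my assumptions, chosen to match the examples above; adjust them to your own offers:

```python
conv_rates = [0.01, 0.02, 0.03, 0.05]     # expected conversion rates
click_counts = [50, 100, 200, 300, 500]   # clicks observed with zero conversions

# Each cell: chance the zero-conversion streak is just statistical noise
print("clicks".rjust(8) + "".join(f"{r:>9.0%}" for r in conv_rates))
for n in click_counts:
    print(f"{n:>8}" + "".join(f"{(1 - r) ** n:>9.2%}" for r in conv_rates))
```

You can read the 13.40% (1% rate, 200 clicks) and 1.76% (2% rate, 200 clicks) figures from this article straight out of that table.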
If you want to discuss the formulas in more detail, just reply in the comments and I’ll do my best to get back to you.
Here's to more profits!