How can the Reverend Bayes help you find out if your campaign worked?

Let’s use BSTS models and causal inference to find out…

The R babynames package and causal inference

The code to accompany this article is on GitHub, so I’ll spare the details here, skipping to the pertinent pieces of interest.

Using the babynames package in R, we have access to the number of children registered with a particular name each year.

We’re interested in how the number of children called Anya changed after the character was introduced in 1998.

Let’s take a quick look at that:

Looks like there’s a spike (no Buffy-related pun intended) around 1980, but things really seem to kick off in the late nineties and into the 21st century.
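As a sketch of that first look (the article’s actual code is in the GitHub repo; this version assumes the standard babynames columns and the dplyr/ggplot2 packages):

```r
# Sketch: pull the yearly counts for girls named Anya and plot them.
library(babynames)
library(dplyr)
library(ggplot2)

anya <- babynames %>%
  filter(name == "Anya", sex == "F")

ggplot(anya, aes(x = year, y = n)) +
  geom_line() +
  geom_vline(xintercept = 1998, linetype = "dashed") +  # year the character appears
  labs(x = "Year", y = "Girls registered as Anya")
```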

This first plot looks convincing, but is this pattern significantly different to how we would have expected it to look had Buffy not graced our screens?

We can investigate that using the CausalImpact package.

To do this, we specify intervention start and stop dates (we’ll use the year that Anya was introduced and the year the series finished), and compare this to some names that shouldn’t be affected by the intervention.

In this case, we will use ten randomly selected common girls’ names that aren’t lead characters in Buffy, but this is a decision that takes some thought.

If you’re looking at the effect of paid search spend on revenue, what do you use as your controls?

While the CausalImpact function doesn’t require control series, they are useful, so it’s worth taking the time to think about what you could use.

You might think organic sessions would work, but do your organic clicks go up following increased PPC spend as customers get brand exposure, then search for your brand later?

In our case, we have built an xts time series of our data, with Anya as the first column (our variable of interest) and the following columns holding the names the algorithm will use for its calculations.
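Building that xts object might look something like the sketch below. The control names here are illustrative placeholders, not necessarily the ten used in the original analysis, and `series` is an assumed variable name:

```r
# Sketch: widen babynames into one column per name, Anya first,
# then convert to an xts time series indexed by year.
library(babynames)
library(dplyr)
library(tidyr)
library(xts)

controls <- c("Sarah", "Emily", "Laura", "Rachel", "Hannah")  # placeholder controls

wide <- babynames %>%
  filter(sex == "F", name %in% c("Anya", controls)) %>%
  select(year, name, n) %>%
  pivot_wider(names_from = name, values_from = n, values_fill = 0) %>%
  relocate(Anya, .after = year)  # variable of interest must come first

series <- xts(select(wide, -year),
              order.by = as.Date(paste0(wide$year, "-01-01")))
```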

Using these data, CausalImpact will plot an output that shows the predicted number of girls called Anya in each year in a dashed line (confidence intervals shaded in blue) and the observed number of babies called Anya as a solid line.

The period of intervention is shown as vertical dashed lines.
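The fit itself is a single call. Assuming the xts object described above is stored in `series` (an assumed name), and taking the pre-period as everything before Anya’s 1998 introduction and the post-period as the 1998–2003 run of the series, a minimal sketch is:

```r
# Sketch: fit the BSTS counterfactual model and inspect the output.
library(CausalImpact)

pre_period  <- as.Date(c("1880-01-01", "1997-01-01"))  # before the intervention
post_period <- as.Date(c("1998-01-01", "2003-01-01"))  # character introduced to series end

buffy_causal <- CausalImpact(series, pre_period, post_period)

plot(buffy_causal)     # observed vs. counterfactual, intervention window marked
summary(buffy_causal)
```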

In our case, we can see that the solid line is above not only the dashed line, but also the confidence interval.

Calling summary on the output gives us this:

> summary(buffy_causal)
Posterior inference {CausalImpact}

                         Average        Cumulative
Actual                   711            9955
Prediction (s.d.)        231 (194)      3237 (2718)
95% CI                   [-198, 566]    [-2778, 7922]

Absolute effect (s.d.)   480 (194)      6718 (2718)
95% CI                   [145, 910]     [2033, 12733]

Relative effect (s.d.)   208% (84%)     208% (84%)
95% CI                   [63%, 393%]    [63%, 393%]

Posterior tail-area probability p:   0.01005
Posterior prob. of a causal effect:  98.995%

For more details, type: summary(impact, "report")

This tells us that we would expect, on average, 231 girls each year to be called Anya, but we observed 711.

Our upper 95% confidence interval is 566, so, with our observed 711 being above that, we have a p-value of 0.01005 that the uplift in girls being called Anya is due to chance.

Helpfully, CausalImpact includes a useful argument to summary() that we can include to give us a written report:

> summary(buffy_causal, "report")
Analysis report {CausalImpact}

During the post-intervention period, the response variable had an average value of approx. 711.07. By contrast, in the absence of an intervention, we would have expected an average response of 231.21. The 95% interval of this counterfactual prediction is [-198.44, 565.88]. Subtracting this prediction from the observed response yields an estimate of the causal effect the intervention had on the response variable. This effect is 479.87 with a 95% interval of [145.19, 909.51]. For a discussion of the significance of this effect, see below.

Summing up the individual data points during the post-intervention period (which can only sometimes be meaningfully interpreted), the response variable had an overall value of 9.96K. By contrast, had the intervention not taken place, we would have expected a sum of 3.24K. The 95% interval of this prediction is [-2.78K, 7.92K].

The above results are given in terms of absolute numbers. In relative terms, the response variable showed an increase of +208%. The 95% interval of this percentage is [+63%, +393%].

This means that the positive effect observed during the intervention period is statistically significant and unlikely to be due to random fluctuations. It should be noted, however, that the question of whether this increase also bears substantive significance can only be answered by comparing the absolute effect (479.87) to the original goal of the underlying intervention.

The probability of obtaining this effect by chance is very small (Bayesian one-sided tail-area probability p = 0.01). This means the causal effect can be considered statistically significant.

Summarising marketing cause and effect with CausalImpact

Of course, this article is a quick five-minute introduction to marketing cause-and-effect attribution using Bayesian structural time series models, via the CausalImpact package for R.

The R package has more strings to its bow than I’ve discussed here, so it is well worth reading the paper and the documentation, as well as reading some of the other examples available online.

However, for many marketers who want to understand whether they are spending their money in the right places, this package is a great place to start, without requiring extensive knowledge of time series analysis and Bayesian statistics.

Indeed, I would say that the most challenging aspect to its use is not the mathematics, the syntax or the choice of function arguments, but the selection of appropriate time series that are unlikely to be affected by the intervention under study.

Additionally, chances are that you perform a range of marketing campaigns concurrently, across different media, different demographics and different geographies.

This will, of course, make your data analysis a bit more involved than the simple example given here.

However, while the creation of an appropriate dataset may present more challenges, the following analysis can be quite straightforward.

As a data-driven marketer myself, I owe a huge thanks to the folks at Google who developed, and now maintain, this very useful R package.

If you want to explore this technique and package in more detail, I recommend reading the paper and playing about with the package to get a feel for the additional options using an easy dataset like this one.

There’s still plenty of work to be done here; I haven’t even looked at Willow or Oz…

For further introductions into how you can quickly use data science tools in your business analytics, follow Chris on Twitter.
