29 April, 2020
Quantifying return-on-investment from marketing activity is always a challenge. The classic approach is to use a holdout / control group, but here we discuss an alternative: using accurate forecasts as a baseline against which to measure performance uplift.
Probably the hardest conversation for any marketing director is the one with the CFO. It goes something like this:
CFO: "What's the return on investment on campaign X?"
MD: "Well, you have to understand it's very complicated to measure..."
And it is. Measuring ROI on marketing activity, promotions and advertising is notoriously hard. Very, very few campaigns have such a large impact on performance that their effects are undeniable. Most campaigns either have only a modest impact, or their effects are felt over an extended period and are therefore only small at any given point in time. There is also the issue of effects being uneven across an organisation - perhaps certain products, services or geographies respond better than others.
Fundamentally, the question the CFO is asking is: what would have happened without the marketing campaign? The classical way to answer this is to use a control group - for example, not running adverts in a certain part of the country. This can be hard if a campaign is advertised on your organisation's website or in certain media (e.g. national newspapers), and has the obvious drawback that a sizeable chunk of your target audience doesn't get to see the message.
There is also the problem of trying to find a representative group to act as a control. This is particularly hard for offline media, where it is usually done geographically. This introduces some unavoidable bias: is region X really identical to the rest of the country? (Answer: probably not).
Online it is easier to remove a representative segment in a given channel (e.g. Google), but hard to replicate that segment across other channels (e.g. Facebook, Twitter). Not to mention the challenge of identifying the same person across different devices, or the interaction between online and offline...
An alternative approach is to use your performance forecast as a baseline and compare it to actual sales. If your forecast was created assuming NO marketing activity, and the only thing that has changed is the presence of the marketing campaign, then any over-performance vs. forecast can reasonably be attributed to the campaign.
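The arithmetic behind this approach can be sketched in a few lines. All the numbers below are made up for illustration - the key idea is simply that the forecast assumes no campaign, so the gap between actuals and forecast during the campaign window is the uplift:

```python
# Sketch of the forecast-as-baseline approach, with illustrative numbers.
# 'forecast' is the no-campaign baseline; any over-performance during
# the campaign window is attributed to the campaign.

forecast = [100.0, 102.0, 101.0, 103.0, 104.0]  # weekly sales, no-campaign baseline
actuals  = [ 99.0, 101.0, 110.0, 112.0, 113.0]  # observed sales; campaign ran weeks 3-5

uplift = [a - f for a, f in zip(actuals, forecast)]
campaign_uplift = sum(uplift[2:])    # weeks 3-5 only
campaign_cost = 15.0                 # hypothetical spend, in the same units
roi = (campaign_uplift - campaign_cost) / campaign_cost

print(f"Weekly uplift: {uplift}")
print(f"Campaign uplift: {campaign_uplift:.1f} units")
print(f"ROI: {roi:.0%}")
```

Note that the pre-campaign weeks (where uplift hovers around zero) double as a sanity check on the forecast itself: if the baseline is already drifting away from actuals before the campaign starts, the attribution cannot be trusted.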
Here is what the output might look like (based on a real-life example, even if it looks too good to be true):
For this to work, you clearly need a consistently accurate forecast that you trust. Only then will you have confidence that over-performance is 'genuine' and not just down to a dodgy prediction.
Such a forecast needs to take into account all the factors that have a material impact on performance - be that product range, staffing numbers, weather, high street footfall or whatever. It also needs to be clear what assumptions it has made in creating the forecast, so that you can compare those assumptions to what actually occurred.
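To make the idea of factor contributions concrete, here is a minimal sketch assuming a simple additive (linear) model - the factor names, coefficients and inputs are all hypothetical, not Skarp's actual model:

```python
# Sketch: decomposing a forecast into per-factor contributions,
# assuming a simple additive model. All numbers are illustrative.

baseline = 100.0                   # expected sales with all factors at zero
coefficients = {                   # hypothetical learned weights
    "temperature_c":   0.8,       # units per degree
    "footfall_index":  0.5,       # units per index point
    "campaign_week":  12.0,       # units per week the campaign runs
}
this_week = {"temperature_c": 5.0, "footfall_index": 10.0, "campaign_week": 1.0}

contributions = {k: coefficients[k] * this_week[k] for k in coefficients}
forecast = baseline + sum(contributions.values())

for factor, c in contributions.items():
    print(f"{factor}: {c:+.1f}")
print(f"Forecast: {forecast:.1f}")
```

A decomposition like this is what lets you compare assumptions to reality: if the model assumed footfall of 10 but actual footfall was 12, you can adjust the baseline before attributing the remaining gap to the campaign.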
This is exactly what Skarp provides to our customers - with the added benefit that it is a fully-managed service that needs zero in-house data science capability. Our algorithm will also tell you the relative contribution of each factor to the forecast performance, so if the model expects your new marketing campaign to increase sales you will know when and by how much. You can find out more here.
If you'd like to learn more about the ways Skarp could help your organisation (besides marketing ROI measurement), click here.
If you'd like to learn more about demand forecasting in general, these articles might be of interest:
Thanks for reading.
Skarp uses machine learning-powered predictive analytics to generate accurate, automated demand forecasts - and an explanation of what is actually driving performance.
By removing uncertainty and quantifying the impact of factors affecting performance, Skarp can reduce costs and improve customer satisfaction.
We offer a fully-managed service, designed for organisations with limited in-house data science resources.
There is no setup fee or minimum contract term with Skarp, and we offer all new clients a proof of concept free of charge. We believe the accuracy of our forecasts will speak for itself.