Effective marketing measurement is critical to assess whether your strategy is delivering in the short- and the long-term. But it is rarely a simple capability to stand up, with many players in the mix, different data sources, and the challenge of bridging analytics and business language. Here we explore eight pain points and how to approach them.
See the Ekimetrics Marketer’s Guide to Measurement here.
1. Data quality, accessibility and scalability
Strong measurement relies on data from multiple sources: channel and marketing data (e.g. TV, radio, search, CRM, social, price and promotions), customer and sales data (e.g. brand, purchase history, behavioural or attitudinal data), and external data (e.g. inflation, weather, GDP, footfall or competitor data).
For those in the earlier stages of a measurement journey, data is often hard to access, poorly tracked, of poor quality, or produced by manual processes. But investing in a strong data foundation can transform the value of the analysis, especially at scale.
The route to better quality, more accessible data is a programme of data transformation: automating inputs, ensuring data comes directly from source systems, and implementing a clear, consistent, global taxonomy.
The more manual ‘interventions’ you can remove, the better: standardisations that avoid human-introduced randomness in naming conventions, for example, or fewer data extracts that are manually transferred and translated into other systems. Spending the right effort up front to deliver a strong data layer will significantly increase the scalability of your solution, reduce the resources needed to keep it running, and increase the frequency and speed of your measurement.
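To illustrate the kind of standardisation involved, here is a minimal sketch of normalising free-text channel labels against a controlled taxonomy. The vocabulary, channel names and variants below are entirely hypothetical, not a prescribed schema:

```python
import re

# Hypothetical controlled vocabulary: canonical channel -> known variants.
TAXONOMY = {
    "paid_search": {"ppc", "sem", "google ads", "paid search"},
    "paid_social": {"facebook", "meta", "tiktok ads", "paid social"},
    "tv": {"television", "linear tv", "tv"},
}

def normalise_channel(raw: str) -> str:
    """Map a free-text channel label to its canonical taxonomy name."""
    cleaned = re.sub(r"[^a-z ]", "", raw.strip().lower())
    for canonical, variants in TAXONOMY.items():
        if cleaned == canonical or cleaned in variants:
            return canonical
    # Flag unmapped values rather than silently passing them through.
    return "UNMAPPED:" + raw

print(normalise_channel("Google Ads"))   # paid_search
print(normalise_channel("Linear TV"))    # tv
print(normalise_channel("Direct Mail"))  # UNMAPPED:Direct Mail
```

Flagging unmapped values explicitly, rather than guessing, is what surfaces the human-introduced randomness in naming conventions so it can be fixed at source.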
2. Frequency of measurement and the time to insight
Deciding how often to measure outcomes is tricky; measuring the impact of all your marketing daily embeds short-termism into your thinking, but measuring only to prepare for your annual budget cycle leaves a huge amount of potential upside on the table.
The key here is to match the frequency of measurement and time to insight to the task at hand and, more importantly, to the decisions you would take as a consequence.
For example, there is little to be gained from measuring brand outcomes on a daily basis. Not only does brand sentiment change slowly; there is also a significant question over what action you would take if you did measure it that frequently. Brand tracking can be ‘always on’ in terms of data gathering, but you’re typically looking for trends over a longer period.
Experiments tend to be focused on point-measurement of a limited number of actions. Repeating the same experiment frequently is a waste of time as the learning is unlikely to change, but performing many different tests constantly evolves and optimises activity. When you test, allow appropriate time to gather enough data and see outcomes emerge (depending on what you are testing), scale the learnings into your marketing plans, and then start a new experiment to continuously improve.
Attribution is a prime candidate for daily steering of digital campaigns as it’s highly automated with lots of granular, recent data. Historically, MMM involved lengthy timescales from execution to insight, but with the right data setup and machine learning, you can increase the speed of delivery and frequency of measurement. The faster you can react to new insight, the faster you realise the benefit.
3. Consistency of metrics
Different methods and different campaigns may have measures that sound like they mean the same thing, but they don’t. ROI, for example, is an overused term that isn’t comparable when quoted at an individual campaign level vs. as the output of an attribution model vs. as the result of an in-market experiment.
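To make the non-comparability concrete, here is a sketch using invented figures showing how three common ‘ROI’ readings for the same campaign and the same spend can differ substantially, purely because of how the revenue was attributed:

```python
spend = 100_000.0

# Invented figures for one campaign, measured three different ways.
platform_reported_revenue = 450_000.0   # platform's own (often last-click) attribution
attribution_model_revenue = 300_000.0   # multi-touch attribution model output
incremental_revenue       = 180_000.0   # lift measured against a holdout experiment

def roi(revenue: float, spend: float) -> float:
    """Return on investment: profit contribution per unit of spend."""
    return (revenue - spend) / spend

print(f"Platform ROI:    {roi(platform_reported_revenue, spend):.2f}")  # 3.50
print(f"Attribution ROI: {roi(attribution_model_revenue, spend):.2f}")  # 2.00
print(f"Incremental ROI: {roi(incremental_revenue, spend):.2f}")        # 0.80
```

Same campaign, same formula, three answers; without a label saying which measurement produced the revenue figure, the number alone is meaningless.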
One solution is to ensure the meaning of each measure is spelled out every time, though that may not stop marketers and CFOs from comparing one with the other and arriving at an incorrect conclusion.
Another is to employ methodologies such as Unified Measurement, or triangulation, which integrate all marketing measurement methods into a single framework, delivering consistency of meaning and a common language to support more effective decision-making. We’ve developed a guide to triangulation that you can find here.
4. Complexity of the landscape
The pace of change in the marketing landscape is constantly accelerating: media fragmentation, the continued rise of e-commerce and retail media, short-term vs. long-term, brand vs. performance, economic uncertainty, and more.
All of which multiply complexity, both in the questions being asked of measurement and the modelling approaches needed to answer them.
The solution here is to embed flexibility and agility into your measurement. Focus on specific questions that you can act on right now, rather than giant, complex models that no one understands and that are too difficult to maintain. Answer the key questions or use cases that will have the biggest impact in the shortest time to build confidence in the outcomes, and then use that as leverage to go deeper into granularity and complexity.
Also get comfortable that not all your measurement needs to happen at the same time; brand or long-term outcomes take time to emerge, so you don’t need to continually re-evaluate them. But assessing whether your evolving assets on TikTok are hitting the mark is something that should be done much more frequently, to learn from what is working and continually improve.
5. Modelling isn’t an exact science
Models can never be regarded as truth; they’re based on the best data available at the time. But they are usually better than no models, provided those building them have both the domain and modelling expertise to make the right choices.
It’s essential they know the data, its meaning, and how to make it available, as well as how media and marketing decisions and activities attempt to influence consumer behaviour and how that behaviour may change as a consequence.
Modelling methods can also have a significant bearing on the outcome. All methods are continually evolving as the landscape they’re applied to changes. New data can unlock new insight, which may change decisions and take you in new directions. But action based on your best knowledge today is almost always better than inaction or instinct alone.
6. Communication between data scientists and their customers
It is common to hear that analytics teams and business teams feel important information gets lost in translation. It can be difficult to translate a business need into an analytics approach. Sometimes, too much focus on the analytics – either in doing the work or presenting it – means the business need drops out of focus. At best, good work goes to waste. At worst, competitive advantages don’t materialise.
Data science teams need commercially focused measurement experts who understand the business, can commission the right analysis to answer the right questions, and can translate the results.
Ensuring insight is useful, usable and used very much depends upon the culture of data-led decision-making in an organisation. Organisations led by those who favour instinct may struggle to make the most of their data asset and fall behind more data-literate competitors.
7. Clarity of the question
Clarity over the expected outcomes and measurement requirements from the outset of marketing activity is essential to measurement success. It ensures data science teams are prepared to collect the right data to answer measurement questions (for example, the level of granularity in geo-tagging), set up experiments so the results are statistically valid and can be used in meaningful ways (for example, in test design and sample sizes), and select the right evaluation methods.
This requires that marketers brief with effectiveness in mind, involving their marketing science teams as early as possible. Otherwise learnings may be limited and future budget may be wasted on activity that doesn’t perform.
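As an example of what briefing with effectiveness in mind means in practice, here is a back-of-the-envelope sample-size calculation for a simple two-cell conversion experiment. The baseline rate and target lift are illustrative assumptions, and the formula is the standard two-proportion z-test approximation, not a substitute for proper test design:

```python
import math

def sample_size_per_cell(p_baseline: float, relative_lift: float) -> int:
    """Approximate users needed per cell for a two-proportion z-test,
    at 5% significance (two-sided) and 80% power."""
    z_alpha, z_beta = 1.96, 0.84  # standard normal critical values
    p1 = p_baseline
    p2 = p_baseline * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Illustrative brief: 2% baseline conversion, hoping to detect a 10% relative lift.
print(sample_size_per_cell(0.02, 0.10))  # on the order of 80,000 users per cell
```

Running a calculation like this before the campaign launches is exactly the conversation that avoids under-powered tests whose learnings are limited and whose budget is wasted.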
8. Things change
Competitors catch you off guard. Consumer habits shift. Politicians make decisions that impact the market. New channels take off. Your product goes out of stock and you can’t fulfil orders. Stuff happens.
All of which can have a significant impact on both consumer behaviours and model efficacy. Increasingly, insight and effectiveness teams are employing more agile approaches, with more in their toolkit to help them overcome fast moving change or shocks to the system. But even the best prepared team can be taken by surprise.
Embedding data and measurement within your BAU (business as usual) decision making processes, and using a wide range of methods from econometrics to experiments and attribution, can allow the flexibility and agility to respond to shocks. You can’t control the environment you operate in, but you can make sure you’re set up to respond quickly when things change.
In summary, effective marketing measurement involves a wide range of challenges to overcome, from analytics to people. But it is the skill (and job) of the marketing science teams to tackle these and build capabilities that support smarter decisions in the short- and the long-term.