
6 Questions to Ask a Potential MMM Provider

Selecting a good Marketing Mix Modelling (MMM) provider can be a daunting task.

Here are six questions to guide your choice of a potential MMM provider.

 

For more information, contact a reputable MMM provider, or visit the highly experienced Measure.Monks at https://media.monks.com/solutions/measurement or email them at measure@mediamonks.com

1

Will the measurement show the incremental uplift of media?

If an MMM measures media but doesn't include the impact of other factors (such as Covid, seasonality and economic effects), then it is not providing you with an incremental measure, and the media effects will be overstated.

Always ask what factors other than media will be included in the model, and the sources of the data they use, to ensure your results are as accurate as possible.

2

What period of time does the model cover?

MMM needs at least two, preferably three, years of data to ensure it is deriving an accurate measurement of media and not conflating it with factors such as seasonality, or other longer-term impacts such as economic movements. If you are being offered results with a lookback window of only three months, it is very unlikely to be MMM, and the measures you receive will not be incremental.

Ask how much historical data the provider will require. 

3

What is the KPI that is being modeled?

Ask what the “dependent variable” will be. This is the KPI that is being modeled, and it should be the metric on which your business success is judged. A sales metric—such as acquisitions, sales volume, revenue or similar—is ideal, as you can convert uplifts into revenue, then apply margin to get to profit, which enables you to assess true payback to the business bottom line.

If it is just web visits or digital conversions, alarm bells should be ringing! 
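The uplift-to-payback arithmetic described above can be sketched in a few lines. All figures here are purely illustrative, not from any real model.

```python
def media_payback(incremental_units, revenue_per_unit, margin_rate, media_spend):
    """Convert a modelled volume uplift into revenue, profit and payback (ROI)."""
    incremental_revenue = incremental_units * revenue_per_unit
    incremental_profit = incremental_revenue * margin_rate
    roi = incremental_profit / media_spend  # profit returned per unit of spend
    return incremental_revenue, incremental_profit, roi

# Illustrative figures: 10,000 incremental units at £20 each, 30% margin, £50,000 spend
revenue, profit, roi = media_payback(10_000, 20.0, 0.30, 50_000)
print(revenue, profit, roi)  # 200000.0 60000.0 1.2
```

A payback (ROI) above 1.0 means the media returned more profit than it cost, which is only meaningful if the modelled KPI is a genuine sales metric.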

4

How are you dealing with interactive channel effects?

Any model needs to reflect how things work in the real world. For example, brand media can drive consumers to search for your products or services, which then drives up paid search. This needs to be accounted for correctly in the model specification, as well as any synergistic effects between channels and media’s ability to drive online and offline sales. If these are not accounted for, it’s probably not proper MMM.

Ask how interactive media effects are taken into account.
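The brand-media-to-search pathway above can be sketched as a simple effect decomposition. The coefficients are made up for illustration; a real MMM would estimate them from data.

```python
# Illustrative decomposition of a brand channel's total effect when part of its
# impact flows through paid search (all coefficients assumed for the sketch).
direct_sales_per_tv_grp = 50.0   # sales driven directly by TV
searches_per_tv_grp = 120.0      # extra searches each TV GRP generates
sales_per_search = 0.25          # conversion of those searches into sales

# If the indirect pathway is ignored, TV is under-credited and search over-credited.
indirect_effect = searches_per_tv_grp * sales_per_search  # sales per GRP via search
total_tv_effect = direct_sales_per_tv_grp + indirect_effect

print(total_tv_effect)  # 80.0 sales per TV GRP once the search pathway is credited back
```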

5

How are you testing for causality, collinearity and significance?

These sound like complex terms, but they are not as scary as they seem.

Causality describes the direction of impact, i.e. which way something impacts something else. For example, does brand media drive consumers to search for a brand, or does the volume of searches impact brand media performance? Certain econometric tests can help determine this and validate your results.

 

Ask for a list of all the possible data variables they would like to include in the model as well as the processes they will use to determine causality. 
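One common family of tests for the directionality question above is Granger-style causality: does adding the lagged value of one series improve a model of the other beyond its own history? A minimal sketch on synthetic data, where x is constructed to lead y by one period, might look like this (a full test would also compute an F-statistic, as statsmodels' grangercausalitytests does):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic series where x genuinely leads y by one period (assumed for the sketch)
n = 200
x = rng.normal(size=n)
noise = 0.1 * rng.normal(size=n)
y = np.zeros(n)
y[1:] = 0.8 * x[:-1] + noise[1:]

# Granger-style comparison: does lagged x explain y beyond lagged y alone?
Y = y[1:]
restricted = np.column_stack([np.ones(n - 1), y[:-1]])    # intercept + y lag
full = np.column_stack([np.ones(n - 1), y[:-1], x[:-1]])  # ... + x lag as well

def rss(X, Y):
    """Residual sum of squares from a least-squares fit of Y on X."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return float(resid @ resid)

rss_r, rss_f = rss(restricted, Y), rss(full, Y)
print(rss_f < rss_r)  # True: lagged x carries real information about y
```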

Collinearity occurs when two factors move in a similar way and it becomes difficult to separate their impact, e.g. if TV and radio were planned with a constant weight over the same four weeks, an MMM would struggle to determine the impact of each of these separately. Occurrences of collinearity can be tested for and should be flagged by the modeler.

 

Ask what kind of tests the modeler will use to determine collinearity.
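One standard collinearity diagnostic is the variance inflation factor (VIF). A minimal sketch on two made-up spend plans that flight almost identically, which is exactly the situation described above:

```python
import numpy as np

# Two illustrative weekly spend series (made-up numbers) planned in near lockstep
tv    = np.array([10, 10, 10, 10, 0, 0, 0, 0], dtype=float)
radio = np.array([ 5,  5,  6,  5, 0, 0, 0, 1], dtype=float)

r = np.corrcoef(tv, radio)[0, 1]  # Pearson correlation between the plans
vif = 1.0 / (1.0 - r**2)          # variance inflation factor (two-variable case)

# A common rule of thumb is to flag VIF above ~10 as problematic collinearity.
print(r > 0.9, vif > 10)  # True True
```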

Significance tells the modeler how important each of the factors is in the model. You need to be careful when there is low significance (usually on low-spending media channels), as this is where the modeler cannot be confident in the result—which should be flagged to the client.

Ask at what level of statistical significance media effects are reported, and how the modeler will flag less reliable measures.
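The kind of flagging described above can be sketched with a simple t-statistic rule: a coefficient whose absolute t-value (coefficient divided by its standard error) is below roughly 2 is not distinguishable from zero at about the 95% level. The channel names and numbers here are invented for illustration.

```python
def flag_low_significance(channels, t_threshold=2.0):
    """Flag channels whose coefficient is not distinguishable from zero
    at roughly the 95% level (|t| = |coef / std_err| below the threshold)."""
    flagged = []
    for name, coef, std_err in channels:
        t_stat = coef / std_err
        if abs(t_stat) < t_threshold:
            flagged.append(name)
    return flagged

# Illustrative model output (made-up numbers): (channel, coefficient, standard error)
results = [("tv", 0.80, 0.10), ("radio", 0.12, 0.04), ("print", 0.03, 0.05)]
print(flag_low_significance(results))  # ['print']
```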

6

What is your verified forecasting error?

To establish a verified forecasting error, information about how the KPI has performed over a period of time is “held back” or not revealed to the modeler. The modeler needs to then use their analysis to “forecast” what they expect the KPI results to be. The forecast can then be compared to actual sales to verify the accuracy of the model.

The aim should be to have an error no greater than 8%, with a sensible range being between 2% and 8%. Non-incremental models (e.g. last click or attribution models) are poor at forecasting.
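A common way to express this forecasting error is the mean absolute percentage error (MAPE) over the held-back period. A minimal sketch, with invented holdout figures:

```python
def mape(actual, forecast):
    """Mean absolute percentage error over a held-back validation period."""
    errors = [abs(a - f) / a for a, f in zip(actual, forecast)]
    return 100.0 * sum(errors) / len(errors)

# Illustrative holdout: 6 months of actual sales vs the model's blind forecast
actual   = [100.0, 110.0, 105.0, 120.0, 115.0, 108.0]
forecast = [ 97.0, 112.0, 101.0, 124.0, 110.0, 111.0]

print(round(mape(actual, forecast), 1))  # 3.2 — inside the 2% to 8% range above
```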

 

Ask if they have any validated forecasts from previous clients.
 

