
Owning The Relationship: How To Get The Most Out Of Digital Marketing Vendor Reporting

by Hollis Bowman

If you’re someone making decisions about digital marketing, chances are you work with a number of different solution providers who were each hired in pursuit of the same fundamental goal — to help you improve your company’s digital performance.  Chances are also pretty good that each of these hard-working vendors provides performance reporting as part of their services. But if you’re like the marketers I work with, I bet a lot of those reports provide conflicting information and each of them tends to paint a particularly rosy picture of the performance of the particular service they’re selling. If everyone says they’re the primary driver of revenue in your marketing mix, they can’t all be right — right?

If you have a bunch of conflicting information, what’s the truth and how do you judge that?

I’ve been on the phone more times than I can count with our founder Jim Cain as he tells people that, as much as he loves his son, he wouldn’t let him write his own report card.  This same principle applies when you are working with vendors reporting their own performance. You can have nothing but respect for the individuals involved and still be better off with an independent evaluation of performance against goals.  

Here’s the thing: you’re in the driver’s seat when it comes to defining success and steering your vendors toward it, against the goals you have set out to achieve with your marketing budget. If you lay the groundwork for what ‘success’ looks like, the reports your vendors provide can be a really powerful accompaniment to your internal attribution practice, rather than a source of confusion when numbers don’t line up.

Today, we’re going to go over how to understand and manage conflicting information from your vendors to help you make more informed decisions and better understand the impact of those decisions. And, unfortunately, there are people out there who do act in bad faith. So we’ll touch on some red flags to watch for when you’re considering working with a vendor to make sure you can maintain confidence in what your budget is returning.

Understanding Your Reporting – Data Biases In Attribution Reporting From Multiple Sources

What convinced the customer to purchase? There are a lot of ways to tilt your head when you depict the influence any interaction had on the decision to convert, and a vendor often has a high estimation of the impact of the particular lever they control within the bigger scheme of your digital marketing stack. This isn’t because they’re trying to pull a fast one. Your vendors (anyone, for that matter) can only report on the data to which they have access, and that limited scope tends to bias results in favor of the information available.

So what can you do? Ultimately, you need to establish consistent standards for data markup and availability in a way that numerous platforms can feed into a larger data set. Check out this excellent post by my good friend Michelle on how to get this going for your organization. These are the fundamentals for how you can establish any attribution model as your source for truth at your company. With that in place, you can leverage your vendor reports while also judging their conclusions against your internal standard for truth.

That said, even the most sophisticated and dispassionate attribution models face very legitimate challenges for how you can determine fact from fiction (or just incomplete facts). Here are some of the challenges to judging the accuracy of vendor reporting, and how you can solve them.

Legitimate Challenges In All Attribution Reporting

1. Duplicated Users

Issue: User journeys fractured across numerous devices and platforms create multiple records for the same users, making it a challenge to tie all marketing touches to eventual conversions.

Details: The reality is that most people are connected on numerous devices and interact with your brand in different ways at different stages of their journey to conversion. Without a method to connect users across multiple devices, you are limited in your insight into user paths to conversion. Multi-device usage and data silos lead to multiple records for the same user. This is a limitation of cookie-based tracking when there is no data source that takes in consistent user tokens to deduplicate exposures and interactions for specific people. The result is a limited view of the effectiveness of specific paths and lost connections between top-of-funnel and conversion activities.

Solutions: User ID in Google Analytics, cross-device tracking deployed in Google Analytics, or a Customer Data Platform (CDP). Psst: our partner Tealium has a fantastic CDP called AudienceStream.
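To make the deduplication idea concrete, here is a minimal sketch (all cookie IDs, user IDs, and channels are invented) of how a durable user token lets you collapse device-level records into a single journey; without the shared ID, the same person would appear as several different users:

```python
# Hypothetical illustration: collapsing multiple device-level records into
# one user-level journey using a shared login ID (a durable "user token").
from collections import defaultdict

# Each touchpoint: (cookie_id, user_id, channel, timestamp)
touchpoints = [
    ("cookie_A", "user_1", "paid_search", 1),  # phone
    ("cookie_B", "user_1", "email", 2),        # laptop
    ("cookie_B", "user_1", "direct", 3),       # laptop, converts
    ("cookie_C", None, "display", 1),          # no login: cannot be joined
]

journeys = defaultdict(list)
for cookie_id, user_id, channel, ts in touchpoints:
    # Fall back to the cookie when no durable user ID exists
    key = user_id or cookie_id
    journeys[key].append((ts, channel))

for key, touches in journeys.items():
    path = " > ".join(ch for _, ch in sorted(touches))
    print(key, path)  # user_1 gets one stitched path across both devices
```

Note that the record with no login falls back to its cookie and stays a separate, anonymous journey; that gap is exactly what a User ID or CDP strategy is meant to close.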

2. Rules-based Models

Issue: Rules-based models are static and over/under value certain marketing efforts.

Details: Attribution models start from a very basic rules-based approach. These assign credit to marketing channels based on their position in the overall customer journey. The rules are applied to all situations equally and assign simplistic credit in a way that is easy to determine, depict, and validate. Cool story, right? But the challenge with these types of models is that the rules, being static, do not adapt to changes in your marketing mix or in consumer behavior. They also tend to skew results in favor of specific types of marketing, leaving other types under-credited for valuable contributions made at various points in the funnel. As an industry, digital marketing has increasingly been turning to algorithmic (or data-driven) models to attribute marketing credit. These are valuable because they analyse an enormous number of converting and non-converting paths to determine, in aggregate, the actual proportional credit that exposure to a given marketing effort at a specific time has on the decision to convert. These models are flexible and less susceptible to the influence of opinion, or to selective application that supports a desirable conclusion.

Solution: Deploy a data consistency and governance framework sufficient to deploy data-driven attribution models.
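As an illustration of how static rules skew credit, here is a small sketch (the channel names and the path are invented) of three common rules-based models applied to the same converting journey:

```python
# Minimal sketch: three rules-based attribution models assigning credit to
# the same (hypothetical) converting path. The rules are fixed positions,
# not learned from data.
path = ["display", "paid_search", "email", "direct"]  # ordered touchpoints

def last_click(path):
    # All credit to the final touch before conversion
    return {path[-1]: 1.0}

def first_click(path):
    # All credit to the touch that started the journey
    return {path[0]: 1.0}

def linear(path):
    # Equal credit to every touch
    share = 1.0 / len(path)
    credit = {}
    for ch in path:
        credit[ch] = credit.get(ch, 0.0) + share
    return credit

print(last_click(path))   # all credit to "direct"
print(first_click(path))  # all credit to "display"
print(linear(path))       # 0.25 to each channel
```

Each model is internally consistent, yet they disagree completely about which channel "drove" the conversion; a data-driven model replaces these fixed positional rules with proportional credit learned from many converting and non-converting paths.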

3. Poor Data Quality

Issue: Data on marketing and conversions is inconsistently labelled and deployed, or exists in siloed data sources that do not interact with each other, making it difficult to create a complete picture of user journeys.

Details: If you ever attend an intro lecture on machine learning, you’re going to hear the old nugget, “Garbage in, garbage out.” Models trained against incomplete or messy data can’t do a good job – simple as that. If your enterprise data is rife with poorly planned or maintained tagging on your digital assets, if you have incomplete conversion tracking, inconsistency across data sets, data sets that are difficult or impossible to blend, or missing cost data, you don’t have the raw materials for any fully-formed reporting on your marketing attribution. This leaves you at the mercy of vendor reports to determine what the truth is, and those reports are only going to be as good as the data that goes into them. It is vital that you provide clean, consistent data to any model before using it for decision making.

Solution: Creation, roll-out, and governance of a comprehensive data architecture to feed a data-driven attribution model. 

4. Timeliness

Issue: Algorithmic models take time to process data, so if you are looking at conversions that happened in the very near term you may not yet see all the associated marketing activities that led to an eventual conversion.

Details: For attribution reporting, especially when comparing your internal data against your vendors’, it is vital that you make sure you are looking at the same period for conversions and the same attribution lookback.

Solution: Establish internal best practices, based on your customer’s typical buying cycle, for when to review attribution data. For many businesses this means reporting on data at least one month behind the current date, with a lookback window appropriate to your business.
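A minimal sketch of that practice, with invented dates and a 30-day lookback and maturity period assumed purely for illustration:

```python
# Illustrative sketch: only report on conversions old enough that their full
# lookback window of marketing touches has been collected. All dates and the
# 30-day windows are invented assumptions, not recommendations.
from datetime import date, timedelta

LOOKBACK_DAYS = 30   # assumption: matches a ~30-day buying cycle
MATURITY_DAYS = 30   # wait this long before treating data as complete

today = date(2020, 3, 1)
conversions = [date(2020, 1, 10), date(2020, 2, 20), date(2020, 2, 28)]

# Anything newer than the cutoff may still be missing late-arriving touches
report_cutoff = today - timedelta(days=MATURITY_DAYS)
mature = [c for c in conversions if c <= report_cutoff]

for conv_date in mature:
    window_start = conv_date - timedelta(days=LOOKBACK_DAYS)
    print(f"conversion {conv_date}: credit touches from {window_start} onward")
```

The two most recent conversions are excluded from the report entirely: judging them today would understate the channels that influenced them.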

Red Flags

… Which is a much more polite way of saying illegitimate challenges. Like it or not, some people are not playing fair. This is what to watch for.

1. Mathmagical Conclusions

Will they show you their math and how it works? There are fundamental assumptions being made in any probabilistic model. Algorithms are not alchemy. There should be a good explanation for the moving parts and how they work. While it’s important for companies to protect their trade secrets, the assumptions made should be explicit and available for you to question and understand.

2. Limited Access To Your Own Data

Are you able to access and independently verify the data they generate as part of your campaigns? Vendors that will not divulge details to you about the data they are generating from their campaigns limit your ability to validate their conclusions.

3. Models That Do Not Limit The Impact Of The Vendor’s Efforts

Do they acknowledge the impact of context from other elements of your marketing mix when they report conclusions about their effectiveness and the return on your spend? Vendors who consider their efforts in absolute isolation are not providing a fair depiction of their efforts.

One way we test the theory that a vendor may be giving themselves multiplied credit for the same marketing activity is to check their conclusions against either the last click or the last non-direct click model in Google Analytics. This is a quick way to assess the delta between the model provided by your vendor and a reliable, well-understood set of rules.
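As a rough illustration (all channel names and numbers are invented), the comparison can be as simple as computing the delta between what a vendor claims and what a baseline model credits to the same channel over the same period:

```python
# Hypothetical sanity check: compare a vendor's self-reported conversions
# against a simple baseline model (e.g. last non-direct click) for the same
# reporting period. All figures are made up for illustration.
vendor_claimed = {"paid_social": 400}  # vendor's self-reported credit
baseline = {"paid_social": 250, "email": 180, "organic": 310}

for channel, claimed in vendor_claimed.items():
    base = baseline.get(channel, 0)
    delta = claimed - base
    pct = delta / base * 100 if base else float("inf")
    print(f"{channel}: vendor {claimed} vs baseline {base} ({pct:+.0f}%)")
```

A large positive delta doesn’t prove double counting on its own, but it tells you exactly where to start asking questions about the vendor’s model.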

4. Unwillingness To Work With Your Goals And Definitions

If you have provided a clear, well-defined primary goal for what you need to accomplish with your ad budget, a good vendor will work with you to tailor their service to pursue that goal, and provide feedback for how they are performing against your particular goals. If you are working with a vendor who is unwilling to present data against the guidelines you have provided, they are using your data against you.

So, Should I Ignore Vendor Reports?

Nope.  

Vendor reporting is often irreplaceable, as different platforms generate data points (especially when tied to user-specific details) that provide meaningful context and likely cannot be replicated in your analytics tool itself. There may be specific, granular user activities that are platform-specific and uniquely provided by the vendor reports. At the same time, all vendor data is subject to bias that extends only as far as the context it considers. It is far more important to address, as an organization, data quality and user duplication across devices so that you are powering data-driven reports that define the standard-bearing version of attribution truth. It’s important that this standard is owned by your organization, transparent, and well understood. Vendor reports are a fantastic accompaniment to a practice like that.

Things No Model Can Fix

No matter how sophisticated your attribution practice, and how well you understand and manage the data coming from your vendors, there are some things that attribution just can’t fix.

1. Data Silos And/Or Poor Governance

If the data from multiple platforms isn’t interoperable, or data standards are not being adhered to, attribution just isn’t going to work, and you won’t be able to confidently integrate or compare vendor data with your internal sources of truth. You can’t complete user journeys, and your attribution pathing will be by its very nature incomplete. I don’t personally believe data will ever be “perfect” at enormous scale, but in order to draw any conclusions from a data set it’s important that it is at least 90-95% blended (or blend-able, as it were), to ensure that your conclusions are being generated from quality inputs.
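A quick sketch of what a "blend-ability" check might look like in practice, using invented record IDs and the 90% threshold from the rule of thumb above:

```python
# Rough sketch: what share of records in one system can be joined to records
# in another on a shared key? IDs are invented; the 90% threshold follows the
# rule of thumb in the text, not any universal standard.
crm_ids = {"u1", "u2", "u3", "u4", "u5", "u6", "u7", "u8", "u9", "u10"}
analytics_ids = {"u1", "u2", "u3", "u4", "u5", "u6", "u7", "u8", "u9", "x99"}

matched = crm_ids & analytics_ids
blend_rate = len(matched) / len(crm_ids)
print(f"blend rate: {blend_rate:.0%}")

if blend_rate < 0.90:
    print("below threshold: fix governance before trusting conclusions")
```

Running a check like this before building reports tells you whether your conclusions will be drawn from a mostly complete picture or from a fraction of your users.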

2. Focusing On The Merely Interesting Instead Of Action

No matter how well formed your data is, if you don’t do anything with it, there’s no point. It is a significant resource ask to deploy and maintain an attribution practice (or a data practice in general, for that matter). When I do Google Analytics 101 training, this is one of the topics I insist on covering. If you can’t do anything with data once you’ve gathered it, then it is at best an interesting distraction. Attribution and deep analysis of vendor data are meaningless if you can’t change your marketing mix or spend.

3. Bad Storytelling

Reporting quality is vital. If you have good quality, trusted data, and people who are willing and able to act on that data, it is still necessary that the right data is presented in a way that tells the story with the absolute least amount of friction between the reader and the insight. This starts by having a plan for how to measure activity, a goal to achieve, and a visually compelling way to mark progress against those goals.

Bringing It All Together

This was a long road. If you’ve made it this far, it’s probably not a bad idea to recap. Vendor reporting is not only valuable, it can be an irreplaceable source of context and platform-specific insight. You can set both yourself and your vendors up to succeed by clearly defining your objectives, conversions, and attribution models. This is reinforced by having a centralized, governable, and trusted data set that your organization owns, so that you can confidently evaluate vendor performance on your terms and leverage contextual information to enrich those conclusions.

To summarize:

  1. Data is only as good as it is consistent and complete. You can’t usefully compare what is not understood or inconsistently implemented.
  2. Vendor reporting can provide valuable context, but should accompany explicit understanding of organizational goals, and consistent and governed marketing metadata.
  3. You are in charge of determining your goals. It is incumbent on vendors to play in your sandbox.
  4. Your vendor can only present the data that is available. Siloed data is biased by its very nature.
  5. Beware vendors who restrict your access to information or to their methodology and data flow, models that do not consider all available inputs, and models that count vendor activities more than once.
  6. No model can fix incomplete, siloed data, poor understanding and reporting, or unwillingness or inability to act.

Hey, since you’re here — Napkyn can help you with this stuff from top to bottom. Talk to us.  

Hollis Bowman

Senior Analyst and Google Analytics 360 Practice Lead

Hollis Bowman is Napkyn Analytics' Practice Lead for Google Analytics 360. As a senior member of our Analyst Team, Hollis' specialty is working with our clients to turn questions into data and data into answers.
