Making Better Marketing Decisions with Contextual Data

by Colin Temple

So much of digital analytics is an artefact of how it all began, and it began with some pretty serious limitations. Even if you’re a digital analyst by trade, to do your job well you need to start looking at the bigger picture.

And then there were hits

Digital analytics got its start when people realized that web server logs could be used to glean insight not just about how a server was technically performing, but about how the site it hosted was performing from a business perspective. Ever since websites have been employed to achieve business goals, there has been a desire to understand whether or not they are achieving those goals.

This started with counting hits, which was always a bit of an ambiguous term. Sometimes it meant any call to the server, which included web pages but also things like images, scripts, stylesheets, applets and other embedded objects. Sometimes it meant pageviews: a hit counter installed on a page tracked one view each time that page was loaded. And sometimes it meant visits, if your hit counter was installed on your home page only. This led to all kinds of confusion: why do the server logs show so many more hits than the counters we installed? What do you mean people sometimes don’t come in by the home page? How does the Internet even work?

If you were around long enough to experience these wondrous moments of an infant e-business’s self-discovery, it’s likely that the limitations of web analytics at the time have infected your thinking about how a business performs on the web.

The limited view of web interactions

If your organization is in the business of selling things online, there’s a single question that kind of stands out:

  • Did I make more money than I did before?

Even this gets muddled a bit by the average web analyst. Analysts who focus on the revenue a website earns are at least measuring how well the site books sales. But there is vital information, not captured by revenue alone, that can dramatically change the answer to that question.

  • How much of the revenue was returned to customers, or lost by fraud?

  • How much did it cost me to earn that revenue?

Typically, a web analytics tool isn’t going to tell you these things until you get fancy with your implementation, and if things like returns and costs happen outside of the Internet, you’re not likely to have that information handy at all.

This hardly makes the information that the average web analyst provides useless, but when it comes to making key decisions based on data, providing revenue numbers just isn’t good enough.

Web data must live in context

I harp on the concept of context in data quite a bit, and with good reason. It really can be the difference between making a good decision about merchandising and marketing spend, and making an ill-informed one.

At the top level, the difference between tracking revenue and tracking an adjusted net return is not catastrophically huge. We made $X according to the website and finance tells us we made $Y at the end of the day. So what? We can at least track the effect of our efforts on $X and assume that similar variance happens to $Y. That’s still effective, broadly speaking.

But when we break that information down by, say, marketing attribution, or by individual products sold, you might find that, for certain combinations of dimensions, $Y is significantly different from $X.

Consider this case: Let’s say we ran an email campaign and a display campaign, and we measured how much revenue came out of each.

Campaign               Revenue
Email blast #33        $30,541
Display campaign #2    $52,921
Total                  $83,462

So, by those numbers the display campaign looks like our winner; it made the most money. But, of course, we presumably paid more on the display campaign than we paid in sending the email blast. What happens when we factor in the marketing costs?

Campaign               Revenue    Marketing Cost    Result
Email blast #33        $30,541    $150              $30,391
Display campaign #2    $52,921    $24,236           $28,685
Total                  $83,462    $24,386           $59,076

Suddenly, things look a little different: the email blast may have brought in less revenue, but once marketing costs are factored in, the display campaign actually made less money overall. That’s an important consideration.
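If you want to sanity-check that arithmetic, it’s trivial to script. Below is a minimal Python sketch using the figures from the tables above; in practice the revenue would come out of your analytics tool and the costs from your ad platforms or finance team, not a hard-coded dictionary.

    # A minimal sketch of the arithmetic above. The campaign names and figures
    # are the ones from the example tables; real numbers would come from your
    # analytics and ad-platform exports.
    campaigns = {
        "Email blast #33":     {"revenue": 30_541, "marketing_cost": 150},
        "Display campaign #2": {"revenue": 52_921, "marketing_cost": 24_236},
    }

    for name, c in campaigns.items():
        result = c["revenue"] - c["marketing_cost"]
        print(f"{name}: revenue ${c['revenue']:,}, "
              f"marketing cost ${c['marketing_cost']:,}, result ${result:,}")

    # Email blast #33: revenue $30,541, marketing cost $150, result $30,391
    # Display campaign #2: revenue $52,921, marketing cost $24,236, result $28,685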

Marketing cost data is becoming a little easier to integrate into digital analytics tools. Google Analytics, for instance, provides automated integrations for the marketing costs you incur from Google’s advertising services through AdWords and DoubleClick. That tool also has an API for importing cost data for non-Google spend, and tools like Analysis Engine automate that process. So, you can start to get real return on advertising spend (ROAS) numbers like the above.
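The specifics of each tool’s import mechanism vary, and I won’t reproduce any particular API here, but once the cost data is sitting in an export somewhere, the join itself is simple. Here’s a hypothetical sketch with pandas, where the file names and column names are placeholders rather than any real schema:

    import pandas as pd

    # Hypothetical exports; file and column names are placeholders, not the
    # schema of any particular analytics tool.
    revenue = pd.read_csv("campaign_revenue_export.csv")  # columns: campaign, revenue
    costs = pd.read_csv("marketing_costs.csv")            # columns: campaign, marketing_cost

    report = revenue.merge(costs, on="campaign", how="left")
    report["result"] = report["revenue"] - report["marketing_cost"].fillna(0)
    report["roas"] = report["revenue"] / report["marketing_cost"]

    print(report.sort_values("result", ascending=False))

The point isn’t the tooling; it’s that once the two data sets share a campaign dimension, the adjusted numbers are a one-line calculation.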

But you’re still thinking web analytics. We spent money on the web, we made money on the web, we had these results on the web. But the business, even an ecommerce one, incurs costs offline, too. What happens when we start to consider those?

What about merchandising costs, for example? Let’s say we’re selling shoes that our company makes. Suppose that the email blast was focused on selling running shoes, but the display campaign was advertising summer sandals. Those two items are going to be priced differently but, perhaps more importantly, they also cost different amounts to make. Let’s say we tend to overprice our sandals, such that the profit margin on them is significantly higher than it is on runners. What happens when we factor in the cost of producing the goods that were sold?

Campaign               Revenue    Mktg. Cost    Merch. Cost    Result
Email blast #33        $30,541    $150          $20,192        $10,199
Display campaign #2    $52,921    $24,236       $17,321        $11,364
Total                  $83,462    $24,386       $37,513        $21,563

Suddenly we’re back to our original conclusion, but for different reasons. The display campaign may have cost more in terms of marketing, but the merchandising costs for the products it was selling were lower. So, overall, that campaign actually made more money for the business than the email campaign did.
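Extending the earlier sketch, each new piece of context is just another column to subtract. The figures below are the ones from the table; a returns or fraud adjustment would slot in the same way.

    # Extending the earlier sketch with merchandising cost. Figures are from
    # the example table; a returns or fraud adjustment would be one more
    # subtraction.
    campaigns = {
        "Email blast #33":     {"revenue": 30_541, "marketing_cost": 150,
                                "merch_cost": 20_192},
        "Display campaign #2": {"revenue": 52_921, "marketing_cost": 24_236,
                                "merch_cost": 17_321},
    }

    for name, c in campaigns.items():
        result = c["revenue"] - c["marketing_cost"] - c["merch_cost"]
        print(f"{name}: result ${result:,}")

    # Email blast #33: result $10,199
    # Display campaign #2: result $11,364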

This could have gone the other way; given the information in our first table, there’s no way to understand which campaign actually earned the most overall. Even with the information in the second one, we’re still not getting an accurate picture of the effect of those marketing campaigns on the business. The fact is, these two campaigns, even though they had significantly different revenue numbers, had very similar overall effectiveness, all other things being equal.

The problem is, when we look just at the revenue numbers or even after adjusting for marketing costs, we’re still assuming too many other things are equal. That’s what happens when you don’t have enough context, and why it’s critical to ensure that you do. I didn’t even go the last mile in the above example. What about returns? What if our sandals suck and most people returned theirs after a couple of weeks? That campaign would yield a negative return, and the decisions we made based on the above may lead to further losses for the business.

And if that can happen with a number like revenue, which seems to be such a cut-and-dried measure of how effective your site is, what about derivative metrics? If you’re basing the effectiveness of campaigns on conversion rate, or considering bounce rate, are you really getting down to what matters at the end of the day? Are the decisions you’re powering really improving the bottom line, or are you pumping up vanity metrics?

There was a time when conversion rates were the best you could do, but we are past that. They still play a role, but out of their proper context, most metrics can be misleading. We’re at a point where integrating your different sources of data is easier than ever and, even if your domain is the web, you just can’t do your job without the appropriate context.

In short, replace as many assumptions as possible with actual context. This will mitigate the risks they introduce, and help you make better decisions with your data.

Of course, if you need help getting the data all together, calling Napkyn Analytics is a good place to start.

Colin Temple

VP, Product

Colin serves as VP, Product for Napkyn Analytics. A diverse background in data, software and marketing and an education in logic and philosophy give Colin a unique perspective on where the analytics practice is, and where it should be.
