Data Disruption: What You Should REALLY Be Worried About


There are a lot of things happening right now that have a lot of people worried: GA4, the end of current GA, using or not using GA in Europe, the list goes on. But I don't think any of these are the real issue. The real issues are: why are things changing so quickly right now? How many more iterations of major change are on the horizon? How do we prepare the business so it isn't set back every time something changes? Or, better, how can businesses set themselves up to be proactive rather than reactive?

There’s been a lot of interesting news in digital analytics land over the last few months, especially if you’re a user of Google Analytics. I’m not going to talk about those issues at any length here; they’re being covered in detail elsewhere (I even contributed some thoughts).

What I do want to talk about is what I see as the real issue that businesses and marketers should be focusing on: data disruption.

The easiest way to think about data disruption is a tool in your adtech or martech stack breaking or being removed without proper planning (e.g. GA on French websites…). Because most of these tools are badly implemented, when one breaks or is removed, all of its associated data is turned off. That can be exceedingly painful for downstream activities like reporting, AI/ML initiatives and other data programs such as advanced marketing.

In 2022, businesses are looking down the barrel of a required migration to GA4, regulatory changes that are a moving target, and an adtech playing field where the rules change daily. That’s a heck of a lot of potential data disruption, and there are going to be many more changes coming down the pipe over the next few years.

So what can you do to promote data order and prevent future disruption?  Take a step back and think about your information supply chain.

I always think about restaurants when I’m visualizing a client’s information supply chain (but then, I do like to cook).

With rare exceptions, restaurants don’t grow all their own ingredients. Steakhouses don’t have a back lot full of cows, and vegan restaurants don’t press their own tofu. They source their ingredients from suppliers. It’s highly likely that the Mexican, French and fine dining restaurants in town all get their lettuce from the same produce distributor, even though they each put the product to very different uses. I’d go so far as to say that a restaurant that tried to produce 100% of its own ingredients would create inconsistent food and ultimately fail.

Each adtech or martech tool in your supply chain is a restaurant that produces a different kind of food. Google Analytics (or any reporting tool, for that matter) is the fine dining restaurant, the email marketing tool is the Mexican restaurant, and the media buying tool is the French restaurant. Each restaurant has the goal of producing a specific type of food, at an expected level of quality, on demand.

Earlier in this piece I mentioned that when we review deployments of martech and adtech tools, most of them are badly implemented. Most of the time, ‘badly’ means that instead of calling on a standardized data layer, these tools are scraping data from the page. Each tag for each tool is creating its own data on the fly. These are restaurants trying to grow their own ingredients: they produce inconsistent results, and they will fail. Inconsistent ingredients lead to inconsistent output, and a consumer base that eventually loses faith in the product altogether.
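To make that concrete, here’s a minimal sketch of the fragile pattern in TypeScript. The selector and parsing logic are purely illustrative assumptions about one hypothetical page, which is exactly the problem: every tag ends up carrying its own private copy of assumptions like these.

```typescript
// Fragile pattern: each vendor tag scrapes values out of the rendered page.
// A copy change, a redesign, or a new currency format silently breaks it,
// and every tag breaks in its own slightly different way.
function scrapeOrderTotal(): number | null {
  // ".order-total" is a hypothetical selector tied to today's markup.
  const el = document.querySelector(".order-total");
  const text = el?.textContent ?? "";
  const value = parseFloat(text.replace(/[^0-9.]/g, ""));
  return Number.isNaN(value) ? null : value;
}
```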

Major changes to the user experience? Tags fail, tools break down. Upgrading to GA4? New custom scripts are required for each call, data continuity breaks down, and new points of failure propagate through the site or app. Data disruption erodes trust in reporting and in the efficacy of active marketing activities.

At Napkyn we believe strongly that the easiest way to mitigate data disruption is to build a common data layer into the business, one that is both the source of information for the adtech and martech ecosystem and a valuable, owned asset of the business. When a user of your app or site clicks ‘Buy Now’, a single piece of accurate data is created and then shared with every relevant tool in the ecosystem. Each restaurant gets the same high-end ingredient, regardless of how it is going to prepare it.
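As a rough sketch of what that looks like in practice (assuming the `window.dataLayer` array convention popularized by Google Tag Manager; the event shape and field names below are illustrative, not a prescription for any particular stack):

```typescript
export {}; // treat this file as a module so the global augmentation below is valid

// One structured event, created by the application at the moment of truth.
interface PurchaseEvent {
  event: "purchase";
  transaction_id: string;
  value: number;    // order total in the transaction currency
  currency: string; // ISO 4217 code, e.g. "CAD"
  items: Array<{ item_id: string; quantity: number; price: number }>;
}

declare global {
  interface Window {
    dataLayer?: unknown[];
  }
}

// Called once from the "Buy Now" handler. Every downstream tool
// (GA4, email, media buying) reads this same object instead of the page.
function recordPurchase(order: Omit<PurchaseEvent, "event">): void {
  const event: PurchaseEvent = { event: "purchase", ...order };
  window.dataLayer = window.dataLayer ?? [];
  window.dataLayer.push(event);
}

// Illustrative usage from a checkout flow:
recordPurchase({
  transaction_id: "T-1001",
  value: 129.99,
  currency: "CAD",
  items: [{ item_id: "SKU-42", quantity: 1, price: 129.99 }],
});
```

The field names happen to mirror GA4’s recommended purchase parameters, but the pattern is the point: one owned, structured record per user action, pushed once and consumed everywhere, so swapping or upgrading a tool doesn’t mean rebuilding the data it runs on.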

Will businesses with proper data supply chains still be blindsided by major changes in the marketplace?  Yes. 

Will they be able to adapt, with minimal cost and disruption to the business?

Yes.

In short, GA4 is an issue for many businesses. Changes to regulations are a concern for all marketers. More major changes to how we use data and market to our customers are going to come down the pipe. The real issue businesses should be tackling right now, however, is how to ensure that managing change doesn’t lead to constant disruption to the business and an erosion of trust in the impact of digital.

Cheers,

Jim

P.S. Want to talk about getting some help building organic, free-range data in your business? Talk to someone on Team Napkyn; they’re really, really good at it.
