Drew Smith, Director of Growth, Revenue Operations at Directive
Many organizations do not have an intentional motion around data. This leaves GTM teams in situations where sometimes they’re not able to use their data to make decisions. Drew Smith discusses how to create a plan for building a data architecture and developing a measurement strategy.
Drew Smith is the Director of Growth, Revenue Operations at Directive Consulting. He has built a reputation in the niche of data and reporting with more than 20 years of experience in both Marketing and Sales. Drew is a leader in marketing data and attribution and always has an eye on ensuring that marketing is able to prove the value that they bring to an organization.
Why is it wrong to think that because some marketing channels can't be measured, you shouldn't measure marketing at all?
Since getting into Operations, my niche has always been data and reporting. I love data. I love statistics. I love numbers. I came from a sales background, so I always want to understand how my efforts as a marketer are influencing pipeline and deals. That’s because as a sales rep, that's what I was measured on. It’s also what the business cares about.
From a marketing standpoint, I always wanted to try to figure out how to measure things. How am I influencing the pipeline? How am I influencing deals? There are a lot of loud personalities out there who tell you that all the best marketing channels and all the best marketing tactics and strategies just can't be measured.
By their logic those things can't be measured, so you just shouldn't measure marketing at all. They feel that we should just stop measuring marketing, because measuring marketing is flawed. I disagree with that wholeheartedly. If you can't measure something, you can't optimize it and improve it.
You can measure those things. It’s just incredibly difficult. The best channels and the best tactics and the best strategies that can't be measured? Most of them can (most of them, not all). You just have to get creative.
Those who are trying to measure marketing grapple with some of the things that you can't measure. There's always going to be a category of things that are unmeasurable in many cases. There are proxy metrics that give you ways to generally look at things, but how do you balance the qualitative and quantitative signals?
Some people say you can balance the qualitative and quantitative signals by your “gut” or your experience, but it’s basically your perspective or your observations alongside the things that you can track.
Both quantitative and qualitative data have tremendous importance in terms of your ability to understand how marketing is performing.
Quantitative is where we talk about the number of leads we're creating, the number of MQLs, and stuff like that. Qualitative is where we're trying to understand whether those quantitative things are turning into the things that matter for the business. Are they turning into pipeline? Are they turning into deals?
If all you're measuring is the quantitative, you're missing the qualitative. If all you're measuring is qualitative, you're missing the quantitative. You need a healthy balance of both. There’s this overlooked aspect to qualitative data. A lot of times we look at qualitative data and we just look at it in terms of going into Marketo or Salesforce or whatever platforms we have at our disposal and pulling reports. While that's great, there's another aspect of qualitative data that a lot of people don't do that is super, super important as well: having a conversation with people. Talk with your SDRs. Tell them things like, “Hey, we sent you 25 MQLs last week. What are your thoughts on those? Are they any good?”
I can go look into Salesforce, but I want to hear it from their mouths. What was good? What was bad? That's also qualitative data and that's something that is just as valuable, if not more valuable, than just pulling the reports from the tools you have.
How can MOPs bridge the gap between tactical and strategic?
A lot of organizations do not have an intentional motion around data. What they have is an incidental motion around data. That means you have the data you have, and sometimes you're able to use it to make decisions. Sometimes you’re not able to use it to make decisions. You need an intentional motion around data. You need a plan for what you’re trying to achieve through data.
If you don't have a plan, you're going to get stuck in this box where you don't have the data you need to make strategic decisions, or to see what's going on at the 30,000 foot view. Or you won’t have the data to make tactical decisions or to see what's going down on the ground level.
One of the first ways you bridge this gap is by creating intentionality around your data collection and your data creation. Data can be both collected and created. You are not at the mercy of only what you can collect. You can create data as well. Both of those can have an intentional motion behind them.
If you want to be able to make strategic decisions and see that 30,000 foot view, you need to intentionally collect and create data that allows you to do that same thing. If you want to manage at ground level, you need to intentionally create and collect data that helps you see that ground level.
If you don't have that intentionality, you get stuck. That's where people say they can't see what's going on at this level. So they just make ground level decisions, which is fine. You're going to make a lot of ground level improvements, but you're not going to make a lot of high level improvements.
That’s why from a data standpoint, you have to have a plan for both the 30,000 foot view and the ground level view. This is an area that requires people to interpret the data. Operations is criminally underfunded in terms of people and talent to take on those roles.
We need a lot more people and talent and organizations that can understand what the data is saying. Not just surface level understanding, but understanding at a deep level, pulling the insights out of the data. You need somebody capable of analyzing data that can help narrate what's going on at this level and farther down levels as well as everywhere in between.
It's not that I don't think people can do it. I think there's plenty of people that can do it. Operations just doesn't have the butts in seats. You've got operations professionals doing four or five or six different jobs. They don't have time to devote to putting that effort in.
They might be really good at interpreting the data at the ground level, but they need extra time and extra bandwidth to be able to learn how to interpret data at the 30,000 foot level. We really just need more people to take on those roles.
What is the best version of how companies should be thinking about their long-term and short-term strategies?
It ties back to measuring campaigns and making decisions about campaign performance. Every campaign is different. When you go through the campaign planning process, you need to understand who the campaign targets along with the goals and objectives for the campaign.
Let’s say we're running a campaign targeting people to become an MQL. My measurement strategy around that campaign should not be around lead creation. We're not targeting that. We're not trying to create leads. We're trying to create MQLs from leads that already exist.
We may have an element of that campaign targeting net new leads and bringing them into the database for the first time, but we need to differentiate that in terms of our goals. The goal for that campaign may be to track MQLs, but we all know that campaigns also have a longer term goal of pipeline and deals.
We have to understand the length of our sales cycle. I just had this conversation the other day with a client who said they have a 12 month sales cycle, so we're running a specific campaign. They wanted to know how long before they could see pipeline and deals. The answer would be 12 months, because that's your sales cycle, right? If that's how long your sales cycle takes, that's how long it's going to take for you to get results.
You need to set proper expectations. Make sure people understand, “Hey, we're not going to see this type of result until this time period." It also means guarding against the tendency to make decisions too quickly, before things have had a chance to fully mature.
I always use the analogy that it's like cookie dough. We all love cookie dough. Lots of people love eating cookie dough raw. But you don’t automatically have cookies if you have cookie dough. It needs to bake first. Your results, your data, your measurement strategy have to get fully baked in the oven before you start making decisions.
Like cookies baking, you can check on things periodically and see how things are doing. Those are your leading indicators. But until the cookies are fully baked, you don't know how that batch of cookies turned out. You have to wait for it to be fully baked. We see a lot of folks trying to judge the performance of marketing and the performance of campaigns when they're still in the cookie dough stage or when they're still at the half-baked stage.
The clients that are the best at countering that are very intentional about their measurement strategy for their campaigns and their marketing activity. They create measurements and milestones at 30 days, for instance. They’ll say, “These are the KPIs we're going to measure at 30 days. And these are the goals that we're hoping to achieve in 30 days. Then we're going to do a 90 day check-in with 90 day milestones. These are the KPIs and leading indicators that we're going to check at 90 days. And these are the goals we're hoping to see in 90 days.”
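As a minimal sketch, a milestone plan like that can be captured in a simple structure. The KPI names, targets, and day counts here are illustrative, not values Drew prescribes:

```python
# Milestone check-ins for a hypothetical campaign with a 12-month sales cycle.
# Keys are days since launch; KPI names and goal numbers are made up.
milestones = {
    30: {"kpis": ["MQLs created", "meetings booked"],
         "goals": {"MQLs created": 25, "meetings booked": 5}},
    90: {"kpis": ["SQLs created", "pipeline opened ($)"],
         "goals": {"SQLs created": 10, "pipeline opened ($)": 150_000}},
    365: {"kpis": ["closed-won deals", "revenue ($)"],
          "goals": {"closed-won deals": 3, "revenue ($)": 300_000}},
}

def due_checkins(days_since_launch):
    """Return the milestone reviews that should have happened by now."""
    return [day for day in sorted(milestones) if day <= days_since_launch]

print(due_checkins(100))  # [30, 90] -- the 365-day review is not due yet
```

Writing the plan down this way makes it explicit which leading indicators get judged at each check-in, instead of judging full-cycle KPIs while the cookies are still dough.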
That doesn't mean that while the campaigns are running, you can’t optimize things. It doesn't mean you have to wait until the very end of an entire sales cycle to optimize. That's just far too long. Optimize against the leading KPIs, not the KPIs we're supposed to hit 12 months down the road at the end of a full sales cycle.
So you can put these things in place and check in on the progress of your cookies, but you're not measuring too soon. You're not shutting down the oven before everything's baked.
How can you create a data architecture if you haven’t been intentional with your tracking?
I need to be able to tell what the campaign name is. I need to be able to tell that they made it to MQL. I need to be able to tell that they made it past MQL or that they were disqualified. If they were disqualified, I need to know why they were disqualified. That's five data points that I know I need to be able to do an analysis about MQLs. What that means is, number one, I have to start with my lead sourcing strategy.
Do I have the ability to capture the lead source for every lead that comes in, so I can tell whether they’re from paid media, yes or no? You can answer that question pretty quickly and easily. Creating that strategy is actually the easy part, because all you have to do is understand that the lead came from paid media.
The next is, can I tell what the campaign name was? You should be able to, but if you can't, then it becomes a question of, “Am I using auto-tagging in AdWords to make sure that the campaign name comes through the URL? How am I capturing that campaign name through the URL?” When they fill out the form, I need a field for that. Then you go create your field and create your process to capture that UTM. Now you have the campaign name.
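For illustration, capturing the campaign name from the URL comes down to parsing the UTM parameters before writing them into hidden form fields. This is a minimal sketch; the URL and parameter names assume the standard `utm_*` conventions, and the example values are made up:

```python
from urllib.parse import urlparse, parse_qs

def extract_utm_fields(landing_url):
    """Pull UTM parameters out of a landing-page URL so they can be
    written to hidden form fields (e.g. in Marketo or Salesforce)."""
    params = parse_qs(urlparse(landing_url).query)
    # Standard Google Ads / Analytics parameter names.
    wanted = ("utm_source", "utm_medium", "utm_campaign")
    return {key: params[key][0] for key in wanted if key in params}

url = "https://example.com/demo?utm_source=google&utm_medium=cpc&utm_campaign=q3-mql-push"
print(extract_utm_fields(url))
# {'utm_source': 'google', 'utm_medium': 'cpc', 'utm_campaign': 'q3-mql-push'}
```

Once the values land in hidden fields on the form, the campaign name follows the lead into the CRM and is available for reporting.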
Next step is I need to know that they became an MQL. Can you currently tell what stage somebody has been in? Not what stage they're in now. What stage have they been in somewhere in your life cycle or buyer journey?
That means I need two data points. I need to know that they became an MQL and I need to know the date that they became an MQL. So now I have two more data points to add to this strategy for having to track this particular thing.
Then I need to know what the outcome of that MQL was. I need to know that next step. Do I have the ability to tell how the SDR dispositioned that lead? Can I tell that next step? If I can, great. If I can't, I need to create the mechanism that allows me to do that. Then if they're disqualified, I need to be able to know why they're disqualified. I need to be able to track the disqualified reason.
Just by going through the process of outlining those five key things I need in order to do this type of analysis for a campaign, you can see very quickly and easily how to create the data architecture to track this.
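A minimal sketch of those data points as a single lead record (the field names are illustrative, not a specific Marketo or Salesforce schema):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class LeadRecord:
    """The data points the MQL analysis above calls for."""
    lead_source: str                       # e.g. "Paid Media" -- from the lead sourcing strategy
    campaign_name: str                     # captured from the utm_campaign URL parameter
    mql_date: Optional[date] = None        # set when they reach MQL; None means never an MQL
    disposition: Optional[str] = None      # how the SDR dispositioned the MQL
    disqualified_reason: Optional[str] = None  # required only when disqualified

def is_complete(lead: LeadRecord) -> bool:
    """A record supports the campaign analysis only when the required
    data points are populated."""
    if not lead.lead_source or not lead.campaign_name:
        return False
    if lead.disposition == "Disqualified" and lead.disqualified_reason is None:
        return False
    return True
```

Auditing existing records against a checklist like `is_complete` shows quickly where the current architecture has gaps to build out.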
There’s no right or wrong way to set this up. There's a way that works for you. Levels of maturity are usually involved. Everybody always wants to get to the top level of maturity. You're not going to get to that level of maturity overnight. Take the first few steps and get those five things taken care of.
The next time you come up with a campaign, it might have a different set of objectives and goals. Ask yourself how to track that. Maybe the new campaign isn’t geared towards MQLs but specifically towards accelerating pipeline. Okay, how do you track that? Put together an intentional motion. Ask yourself, “What are the data points that we need to be able to track that?” Then build those into the system and build the process to capture that data automatically as much as possible. Get that data so that you can then go report on those things.
Most organizations don't take that moment to ask, “What do I need in order to report on this?” They just run the campaign and then see what data they have available at the end of the campaign. The best clients pause to see what the measurement strategy is for the campaign. They check to see if they have the data architecture to support it. If the answer is no, they know they have to build it.
What are some telltale signs of reporting up-leveling?
I have what I call a four dimensional reporting framework. As the name hints, there are four dimensions in the reporting.
The first dimension is campaigns. That’s being able to report on the performance of campaigns.
The second dimension is channels. That’s being able to report on the performance of channels.
The third dimension is the funnel. That’s being able to report on the performance of the funnel itself.
The fourth dimension is time. That’s being able to report on time and how time impacts these things.
A lot of our clients come in being able to report on the performance of campaigns. They may be able to report on the performance of channels. Fewer are able to report on the performance of the funnel. Even fewer still are able to report on how time is impacting those things.
I look at how many of the different dimensions you can report on at any given time. More so than that, can you report on these individual dimensions? Can you layer these dimensions on top of one another and relate them to one another?
The most mature organizations can layer all four of those dimensions on top of one another and do some really robust reporting that not only just looks at the campaign, but the channels within the campaign, the points of the funnel that they're interacting with, and compare that against different cohorts while adding different time contextualization to those reports and datasets. They can pull reports that say this compares favorably to a similar campaign we ran last year at this time, just generally quarter over quarter. That's where you truly understand how you’re performing as a marketing organization.
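As a toy illustration of layering those dimensions, each record can carry one value per dimension and then be rolled up by any subset of them. The campaign names, channels, and quarters below are made up:

```python
from collections import Counter

# Each record carries one value per reporting dimension:
# (campaign, channel, funnel stage, time period) -- all values illustrative.
records = [
    ("q3-mql-push", "paid-search", "MQL", "2023-Q3"),
    ("q3-mql-push", "paid-search", "SQL", "2023-Q3"),
    ("q3-mql-push", "paid-social", "MQL", "2023-Q3"),
    ("q3-mql-push", "paid-search", "MQL", "2022-Q3"),
]

def report(records, *dimensions):
    """Roll records up by any subset of the four dimensions.
    Dimension indices: 0=campaign, 1=channel, 2=funnel stage, 3=time."""
    return Counter(tuple(r[d] for d in dimensions) for r in records)

# One dimension at a time:
print(report(records, 2))        # counts by funnel stage
# Layering dimensions on top of one another:
print(report(records, 1, 2, 3))  # channel x funnel stage x quarter
```

The same rollup with the time dimension included is what enables the year-over-year comparison Drew describes: the 2023-Q3 and 2022-Q3 cohorts fall out as separate keys that can be compared directly.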