3 steps to determine the key activation event

Most people by now have heard of the “key activation event”. Facebook’s 7 friends in the first 10 days, Twitter’s 30 followers… these examples get lots of mentions in the Product and Growth communities, and they have helped cement the idea of statistically determining goals for the onboarding of new users. A few weeks ago, somebody from the Reforge network asked how to actually define this goal, and I felt compelled to dive deeper into the matter.

I love this topic. There have already been some solid answers on Quora from the likes of Uber’s Andrew Chen and AppCues’ Ty Magnin, and I wrote about the overarching concept a couple weeks ago (here), but I wanted to address a few additional tactical details.

Below are the three steps to identify your product’s “key activation event”.

Step 1: Map your events against the Activation/Engagement/Delight framework

This is done by plotting the impact on conversion of performing (or not performing) an event within the first 30 days. This is the core of what we addressed in our previous post.

To simplify, I will call “conversion” the ultimate event you are trying to optimize for. Agreeing on this metric in the first place can be a challenge in itself…

Step 2: Find the “optimal” number of occurrences for each event

For each event, you’ll want to understand the required occurrence threshold, i.e. how many occurrences maximize the chances of success without hitting diminishing returns. This is NOT done with a typical logistic regression, even though many people try it and believe so. I’ll share a concrete example to show why.

Let’s look at the typical impact on conversion of performing an event Y times (or not) within the first X days:

There are 2 learnings we can extract from this analysis:
– the more often the event is performed, the more likely users are to convert (Eureka, right?!)
– the higher the occurrence threshold, the closer the conversion rate of people who didn’t reach it gets to the average conversion rate (this is the important part)

We therefore need a better way to correlate occurrences and conversion. This is where the Phi coefficient comes into play.

Below is a quick set of Venn diagrams to illustrate what the Phi coefficient represents:

Using the Phi coefficient, we can find the number of occurrences that maximizes the difference in outcome, thus maximizing the correlation strength.
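To make this concrete, here is a minimal sketch in Python (pandas/NumPy) of that scan. It assumes a hypothetical dataframe with one row per user, a boolean `converted` column and per-user event counts within the first X days; none of these names come from a specific analytics export.

```python
import numpy as np
import pandas as pd

def phi_coefficient(a: pd.Series, b: pd.Series) -> float:
    """Phi coefficient between two boolean series (a 2x2 contingency table)."""
    n11 = float((a & b).sum())    # performed the event AND converted
    n10 = float((a & ~b).sum())   # performed, did not convert
    n01 = float((~a & b).sum())   # did not perform, converted
    n00 = float((~a & ~b).sum())  # neither
    denom = np.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

def best_threshold(df: pd.DataFrame, event_col: str, max_k: int = 20):
    """Scan occurrence thresholds k and keep the one with the highest phi."""
    scores = {k: phi_coefficient(df[event_col] >= k, df["converted"])
              for k in range(1, max_k + 1)}
    k_best = max(scores, key=scores.get)
    return k_best, scores[k_best]
```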

Step 3: Find the event for which “optimal” number of occurrences has the highest correlation strength

Now that we have our ideal number of occurrences within a time frame for each event, we can rank events by their correlation strength. This gives us, for each time frame considered, the “key activation event”.
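Reusing `df` and `best_threshold()` from the sketch above, the ranking itself is short (the event names here are hypothetical):

```python
# Hypothetical event columns; df and best_threshold() come from the sketch above.
events = ["created_project", "invited_teammate", "connected_integration"]
ranking = sorted(
    [(event, *best_threshold(df, event)) for event in events],
    key=lambda row: row[2],  # sort by phi, i.e. correlation strength
    reverse=True,
)
event, k, phi = ranking[0]   # the candidate "key activation event" for this time frame
```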

Closing Notes:

Because Data Science and Machine Learning are so sexy today, everyone wants to run regression modeling. Regression analyses are simple, interesting and fun. However, they lead to suboptimal results here, as they maximize the likelihood of the outcome rather than the correlation strength.

Unfortunately, this is not necessarily a native capability of most analytics solutions, but you can easily dump all of your data into Redshift and run an analysis to mimic this approach. Alternatively, you can create funnels in Amplitude and feed the data into a spreadsheet to run the required cross-funnel calculations. Finally, you can always reach out to us.

Don’t be dogmatic! The results of these analyses are guidelines, and it is more important to pick one metric to move; otherwise you might spiral down into analysis paralysis.

Analysis << Action
Remember, an analysis only exists to drive action. Ensure that the events you push through the analysis are actionable (don’t run this with “email opened”-type events). You should always spend at least 10x more time on setting up the execution around this “key activation event” than on the analysis itself. As a reminder, here are a couple of “campaigns” you can derive from your analysis:

  • Create a behavioral onboarding drip (case study)
  • Close more delighted users by promoting your premium features
  • Close more delighted users by sending them winback campaigns after their trial (50% of SaaS conversions happen after the end of the trial)
  • Adapt your sales messaging to properly align with the user’s stage in the lifecycle and truly be helpful


The “Lean Startup” is killing growth experiments

Over the past few years, I’ve seen the “Lean Startup” grow to biblical proportions in Silicon Valley. It has introduced a lot of clever concepts that challenged the old way of doing business. Even enterprises such as GE, Intuit and Samsung are adopting the “minimum viable product” and “pivoting” methodologies to operate like high-growth startups. However, just like any dogma, the “Lean Startup”, when followed with blind faith, leads to a form of obscurantism that can wreak havoc.

Understanding “activation energy”

A few weeks ago, I was discussing implementing a growth experiment with Guillaume Cabane, Segment’s VP of Growth. He wanted to be able to pro-actively start a chat with Segment’s website visitors. We were discussing what the MVP for the scope of the experiment should be.

I like to think of growth experiments as chemical reactions, in particular when it comes to the activation energy. The activation energy is commonly used to describe the minimum energy required to start a chemical reaction.

The height of the “potential barrier” is the minimum amount of energy required to get the reaction to its next stable state.

In Growth, the MVP should always be defined to ensure the reactants can hit their next state. This requires some planning, which at this stage sounds like the exact opposite of the Lean Startup’s preaching: “ship it, fix it”.

The ol’ and the new way of doing things

Before Eric Ries’s best seller, the decades-old formula was to write a business plan, pitch it to investors/stakeholders, allocate resources, build a product, and try as hard as humanly possible to have it work. His new methodology prioritized experimentation over elaborate planning, customer exposure/feedback over intuition, and iterations over traditional “big design up front” development. The benefits of the framework are obvious:
– products are not built in a vacuum but rather exposed to customer feedback early in the development cycle
– time to shipping is low and the business model canvas provides a quick way to summarize hypotheses to be tested

However, the fallacy that runs rampant nowadays is that, under the pretense of swiftly shipping MVPs, we reduce the scope of experiments to the point where they can no longer reach the “potential barrier”. Experiments fail, and growth teams slowly get stripped of resources (this will be the subject of another post).

Segment’s pro-active chat experiment

Guillaume is blessed with working alongside partners who are willing to provide the resources to ensure his growth experiments can surpass their potential barrier.

The setup for the pro-active chat is a perfect example of the amount of planning and thinking required before jumping into implementation. At the highest level, the idea was to:
1- enrich the visitor’s IP with firmographic data through Clearbit
2- score the visitor with MadKudu
3- based on the score decide if a pro-active sales chat should be prompted
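In Python-flavored pseudo-code, the whole flow fits in one function. The endpoints, payloads and the `crm` helper below are illustrative stand-ins, not the actual Clearbit, MadKudu or Drift APIs:

```python
import requests

CLEARBIT_KEY = "sk_live_xxx"  # placeholder credentials
MADKUDU_KEY = "mk_live_xxx"

def should_prompt_chat(visitor_ip: str, crm) -> bool:
    """Decide whether to open a pro-active sales chat for a website visitor.

    Endpoints, response fields and the `crm` helper are illustrative
    stand-ins, not the real Clearbit/MadKudu/Drift APIs.
    """
    # 1- Enrich the visitor's IP with firmographic data.
    resp = requests.get("https://reveal.example.com/v1/companies/find",
                        params={"ip": visitor_ip}, auth=(CLEARBIT_KEY, ""))
    if resp.status_code != 200:
        return False  # unknown visitor: stay quiet
    domain = resp.json()["domain"]

    # Guardrails (see the caveats below): never chat-prospect existing
    # customers or accounts already talking to Sales.
    if crm.is_customer(domain) or crm.has_open_opportunity(domain):
        return False

    # 2- Score the account.
    score = requests.post("https://score.example.com/v1/score",
                          json={"domain": domain},
                          headers={"Authorization": f"Bearer {MADKUDU_KEY}"}).json()

    # 3- Only prompt the pro-active chat for qualified visitors.
    return score["segment"] in ("good", "very good")
```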

Seems pretty straightforward, right? As the adage goes “the devil is in the details” and below are a few aspects of the setup that were required to ensure the experiment could be a success:

  • Identify existing customers: the user experience would be terrible if Sales were pro-actively engaging with customers on the website as if they were leads
  • Identify active opportunities: similarly, companies that are actively in touch with Sales should not be candidates for the chat
  • Personalize the chat and make the message relevant enough that responding is truly appealing. This requires some dynamic elements to be passed to the chat

Because of my scientific background, I like being convinced rather than persuaded of the value of each piece of the stack. In that spirit, Guillaume and I decided to run a one-day test with the MadKudu scoring turned off. During that time, any visitor Clearbit could find information on would be contacted through Drift’s chat.

The result was an utter disaster. The Sales team ran away from the chat as quickly as possible, and for good reason: about 90% of Segment’s traffic is not qualified for Sales, which means the team was swamped with unqualified chat messages…

This was particularly satisfying since it proved both assumptions:
1- our scoring was a core component of the activation energy, and an MVP couldn’t fly without it
2- shipping too early – without all the components – would have killed the experiment

This experiment is now one of the top sources of qualified sales opportunities for Segment.

So what’s the alternative?

Moderation is the answer! Leverage the frameworks from the “Lean Startup” model with parsimony. Focus on predicting the activation energy required for your customers to get value from the experiment. Define your MVP based on that activation energy.

Going further, you can work on identifying “catalysts” that reduce the potential barrier for your experiment.

If you have any growth experiment you are thinking of running, please let us know. We’d love to help and share ideas!

Recommended resources:
https://hbr.org/2013/05/why-the-lean-start-up-changes-everything
https://hbr.org/2016/03/the-limits-of-the-lean-startup-method
https://venturebeat.com/2013/10/16/lean-startups-boo/
http://devguild.heavybit.com/demand-generation/?#personalization-at-scale


3 reasons why B2B SaaS companies should segment trial users

99% of the B2B SaaS companies I talk to don’t segment their free trial users.

This is a shame because we all know our trial users can be very different from one another.

For example, have you heard of accidental users? They signed up thinking your product did something else and left soon after realizing their mistake (much more common than you might think!).

Or what about tire-kickers? Yes, a surprisingly large number of people like to try products with no intention of ever buying (more about this in this great post from Matt Pope).

There are also self-service users. They are actively evaluating your product but don’t want to talk to a human being, especially a sales person.

The enterprise buyer is an interesting profile. She will likely buy an expensive plan and will appreciate getting help from an account executive.

 

“Sure thing… why should I care now?”

Fair question. Here is what happens when little is done to identify the different types of trials.

1. The overall conversion funnel has little meaning

A SaaS company we work with was worried because their trial-to-paid conversion rate had decreased by 30%. Was it the new product feature they had just released? Or maybe an issue with the email drip campaign? The explanation was simpler: a large number of tire-kickers coming from ProductHunt had suddenly signed up, and their very low conversion rate dragged the overall conversion rate down.

Looking at the trial-to-paid funnels by customer segment is the best way to understand how your product and sales activities affect conversions, regardless of variations in customer signups.

2. You are selling and building the wrong product features

Understanding how your product is used is essential to effectively sell and improve your product.

But looking at overall product usage metrics is misleading. The accidental users and tire-kickers usually make up a large chunk of your customers. Looking at overall usage metrics means that you may well be designing your sales and product strategy to fit your worst customer segments!

When looking at product usage, make sure to focus on your core user segment. The features they care about are the features to sell and improve.

3. You are spending your time and money on the wrong trial users

There are lots of ways in which a lack of segmentation hurts your sales and customer success efforts:

  • Tire-kickers take away precious time from sales and customer success. This time could be spent on selling and helping core users.
  • Customers with high potential value don’t get extra love. Many sales teams spend huge amounts of time on tiny customers while underserving larger customers.
  • Trying to get buyers to use your product and trying to get users to buy is a waste of everybody’s time. In B2B, the buyer is often not a heavy user. For example, a CTO will pull out the credit card and pay for app monitoring software, but he or she will use the software only occasionally. Educating the CTO on the nuances of the alert analysis feature doesn’t help anyone!
  • Sales trying to engage self-service users hurts conversions. Some users appreciate having an account representative help them evaluate a product while others want to do their evaluation on their own. Knowing who’s who is critical for both customers and sales teams.

 

How to get started?

One way, of course, is to use MadKudu (passionate, self-interested plug). Otherwise, the key is to start simple. Talk to your best customers to get a qualitative feel for who they are, and look at your customer data to find out which characteristics your best customers share. Then put together a simple heuristic to segment your customers and implement this logic in your CRM and analytics solution.
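As an illustration, a first-pass heuristic can be a handful of rules. Every field name and cutoff below is invented; replace them with whatever your own data shows:

```python
def segment_trial_user(user: dict) -> str:
    """Toy first-pass segmentation; every field name and cutoff is invented."""
    if user["employee_count"] >= 500 or user["requested_demo"]:
        return "enterprise buyer"  # route to an account executive
    if user["sessions_last_14d"] <= 1 and not user["completed_setup"]:
        return "accidental user"   # signed up by mistake, likely gone
    if user["sessions_last_14d"] >= 5 and not user["replied_to_sales"]:
        return "self-service"      # evaluating, but wants to be left alone
    if user["days_since_signup"] > 30 and not user["invited_teammate"]:
        return "tire-kicker"       # poking around with no buying signal
    return "core"                  # default: treat as a genuine evaluation
```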

This effort will go a long way to increase your trial-to-paid conversion rates.

Now back to you. Do you have different segments for your trial users? If not, why not? If yes, what are those segments? Who is using them? Continue the conversation on Twitter (@madkudu) or email us at hello@madkudu.com!

How To Identify Your Ideal Customer Profile (Podcast)

Last week I had the pleasure of being invited to speak about B2B SaaS sales on Livestorm’s podcast. In the interview, I discussed how we led the research for our ideal customer at MadKudu, how we’ve kept refining it, and how it helped shape our business.

Here’s the full interview:

 

And here’s the transcript (a big thank you to Livestorm):

Hi Francis, first, could you tell us what MadKudu is and how you help other SaaS businesses improve their sales process and grow?

MadKudu is a predictive analytics solution; we help sales teams prioritize leads. We focus solely on B2B SaaS and work with companies like Segment, Mattermark and Pipedrive.

Those companies love us because they have come to realize that, in order to be successful, their sales teams need to be helpful, and in order to be helpful, they need context.

We provide that context on who’s talking to them and why. We provide all the customer data that is available on the behavioral side, as well as third-party data from systems like Clearbit.

We provide the triggers for sales teams to reach out properly and maximize their efficiency.

From what I understand, you are one step ahead of traditional lead scoring, where all sales interactions are based on a specific lead-scoring activity such as, for example, “has downloaded a PDF”.

If you think about it, lead scoring is more of a methodology to make sure you have your leads prioritized. The traditional way of doing this is to pick certain events and certain criteria and assign points to them based on your preconception of how important each one is.

Where predictive comes into play is in figuring out how many points should be allocated to certain events or behaviors.

The three founders of MadKudu have backgrounds in engineering and mathematics, and we saw a huge opportunity to stop relying on preconceived ideas of what criteria make a lead qualified.

We use historical data to find out what truly is important.

The predictive side is one way of doing lead scoring. It is more tailored to every business out there.

Right, but in order to get predictive, you need to have a certain amount of historical data, including “win moments” such as an upgrade, as well as “lose moments” such as churn events.

Not every company has enough data on the conversion side to run statistical models. So either you have a sufficient volume of conversion events, or you can use “proxies”.

Basically, you can pick other events higher up the funnel. Companies with less data can look at their activation rate. So, if you are a CRM, it could be uploading your contacts. This becomes your “win event”, and you can base your model on that.

Then, as you get more volume, you can iterate on that “win event” and pick other criteria.

So, companies with a certain amount of data can use MadKudu, but if younger companies can also use your predictive analytics based on their activation rate, does that mean that all companies can use MadKudu?

It’s a very relevant question to the topic today. Not every company is a good fit for MadKudu.

We define a very narrow customer profile to make sure we execute well and deliver maximum value to them.

First, if you have a low volume of data, our statistical model is maybe not the way to go.

Maybe you should first make assumptions, test them, and then refine your process, up until the point where the volume of leads requires more complex statistical modeling.

That’s why our typical sweet-spot customers have 5–30k new leads coming in every month, which is a pretty high volume, where statistical modeling starts shining.

What are the other parameters that you look at for your Ideal Customer Profile (ICP)? Do you have empirical data that helped you shape your ICP based for example on deal velocity?

Defining your ideal customer profile is the most important thing for an early-stage startup. If you think about it, if you aggregate all your ideal customer profiles, you have your target market, that is, the market you want to deliver your product to.

You have to define your product based on the market you are going for. And that’s a pretty big change lately.

200 years ago, your local butcher knew exactly how you wanted your meat: a 1:1 personalized approach where the product was defined by your needs.

Then came the industrial revolution, where we became able to mass-produce, and it was all about how to ship and distribute the product. That’s where the marketing standards such as the 4 Ps come from. It was all driven by “how do I ship this product”.

Today, with all the data that is available and the ability to create and distribute a product at a very low price, we’re back at that initial stage of building products for specific targets. It’s all about the customer. It puts the ideal customer back at the center of every single strategy.

So, you should start with early assumptions about who your ideal customer is, the one whose problem you want to solve. You want that definition to be narrow very early on.

If you take the BANT framework (Budget, Authority, Need and Timing), you want to focus on Budget and Need first. Those are the two parameters that will help you build a company.

Need is what will help you generate traffic to your website. If you have the right need you will be able to have a message that resonates and engage people. Once they are engaged you will be able to talk about budget.

When we were at Techstars, our managing director told us: “Call a hundred of these companies that you define as your ideal customer profile. Don’t try to sell them anything; see if the need you are trying to solve is actually there.”

That started generating traffic and people got interested. Then we were able to look at the data and see how the message resonated with narrower categories than what we had defined in the ICP.

Then we closed our first clients, and we refined our definition of the ICP more and more, to the point where it was super precise.

We started aiming at B2B SaaS companies that had raised an A round in the past 6 months, had an Alexa rank lower than 100, and had integrations such as Mixpanel, KISSmetrics or Segment on their website.

So, when we reached out to them it was really relevant and often on point. We had a huge reply rate.

So everything started from those hundred calls; then you refined your ICP until you reached this level of precision. What specific data points did you focus on?

At that time, we were focused on improving our trial conversion rate, and selling to B2B SaaS companies appeared to be extremely important. Also, you had to use a technology that we could connect to (e.g. Segment, Mixpanel, or KISSmetrics).

Behavioral data and declarative data must be tied together. They bring different kinds of information.

I recommend you watch the TED talk from Hans Rosling called “The best stats you’ve ever seen”. The main point is that, in this world, all the data is available; the big issue is that we drive our decisions based on preconceptions.

We have this customer, very similar to Clearbit, that monitors companies’ growth. Their definition of their ICP was mostly VCs. The sales team was trained to deal with those profiles; they knew the playbook to convert them.

What we found in the data is that they had a huge number of conversions in the recruiting space. They did not understand it, and the sales team was constantly rejecting those leads. We realized that those HR companies were interested in spotting companies that were not growing, in order to find sources of engineers for their own clients.

There was a great use case and they had not trained the sales team to sell to those companies.

Then, this is where behavioral data comes into play. You want to make sure people get a successful experience. Those are events you monitor through behavioral data. For this company, we were able to determine which personas were getting the most successful experience.

So, it’s really important to combine the demographic and the behavioral together.

And how do you integrate the sales feedback to complete that empirical approach and close the loop?

Usually, marketing teams have a budget: they find leads, qualify them, mark them as MQLs, and send them to the sales team. Then, on the sales side, there are SALs (Sales Accepted Leads): the sales team takes the marketing leads, sees if they are qualified enough, and accepts them or not.

So, it’s super important to have this interface between sales and marketing and for any MQL there should be only two options: either it’s accepted or it’s rejected.

Being able to monitor those rejected leads is where you are going to gather a great amount of feedback, feedback that can actually correct historical patterns that could be misleading.

It’s also important to have regular meetings with the sales team to go over the list of rejected leads and discuss why they were rejected. That’s where you can optimize your MQLs.

photo credit: Francis Brero

Make the right “build versus buy” decision with 3 simple steps

A couple weeks ago I attended a Point Nine and Algolia happy hour in Dublin. The premise matched a recurring question we deal with on a daily basis when it comes to software: should you buy, or build internal solutions?

Many at the event shared the story of an in-house solution turning into a big, costly distraction for their team. The main culprit? The decision to build in house was taken lightly, without the hypotheses behind it being written down and communicated.

 

I’d like to share here a simple framework I’ve used and have seen work, in one form or another, at some of the SaaS rising stars (Algolia, Intercom, Mention…).

This framework helps support data-driven, and thus dispassionate, decisions on the topic of building vs. buying.

The high level structure is:

Step 1: Validate the business need
Step 2: Get a rough but realistic estimate of the cost for the “build” option
Step 3: Decide and review your hypotheses in a given timeframe.

Step 1: Validate the business need

Even Chris Doig, in his analysis of the problem, writes that everything starts with well-defined requirements. However, as most founders know only too well, every decision to even think about doing something starts with a hypothesis of how much positive impact the company can get from a new set of functionality.

Make sure to always go through the exercise of determining how much value you will get from this feature/product you’re considering.

Let’s take the example of building a lead scoring mechanism to help the sales team know which leads to de-prioritize. The assumption is that the sales team is wasting time on leads that are unlikely to purchase your product at a high-enough price point. Seems fair. But how much value can we expect from implementing such a solution? Keep it simple. Let’s assume you have 10 SDRs, each at a base salary of $50k. If 20% (1 out of 5) of the leads they are reaching out to are unqualified, you are essentially wasting ~$100k annually. And this is without considering the opportunity cost of not spending that time on higher-value leads.
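As a sanity check, here is that back-of-the-envelope math:

```python
sdrs = 10
base_salary = 50_000      # annual base per SDR
unqualified_share = 0.20  # 1 out of 5 leads is unqualified

wasted_spend = sdrs * base_salary * unqualified_share
print(f"${wasted_spend:,.0f}/year")  # $100,000/year, before opportunity cost
```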

With this rough estimate in mind, let’s proceed with evaluating the cost.

Step 2: Define basic requirements and compute an estimate of the cost for the “build” option

Whenever we consider building a solution internally, I like to approach it as I would if I were writing an RFI. This is a great forcing function to decompose the problem and identify the different required functionalities, along with their impact on the expected value (aka their criticality). The individual costs are always higher than you initially thought, and the estimates for each item add up quickly!

For example, sticking with lead scoring, decomposing the problem could bring us to the following set of critical features:

– Build a programmatic way to fetch information about new leads from LinkedIn
– Define a heuristic to score leads based on the data obtained
– Build a scoring mechanism
– Build a pipeline to feed this score back into your CRM
– Add the score in a workflow to route leads appropriately
– Set up reports to measure performance in order to make adjustments if necessary

Once you have those listed, get an estimate from the engineering team for building each feature. This will enable you to have an idea of the cost of the “build” option.

You can use a simple spreadsheet to estimate the annual cost of building and maintaining a solution based on your team’s size, current MRR…

Download this calculator here.

For an early-stage company (6 engineers, $100k MRR), the cost of such a solution over the course of a year would be about $80,000.

This may seem high, but the truth is that we all have a hard time estimating opportunity cost and maintenance cost (typically twice that of initial development)…
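If you want a starting point without the spreadsheet, a deliberately rough model might look like this; the fully-loaded payroll, the working days and the “maintenance is ~2x initial development” multiplier are all assumptions to tune:

```python
def annual_build_cost(dev_days: float,
                      payroll_per_engineer: float = 150_000,  # fully loaded, assumed
                      working_days_per_year: int = 220,
                      maintenance_multiplier: float = 2.0) -> float:
    """Rough yearly cost of building and then maintaining an internal tool."""
    daily_cost = payroll_per_engineer / working_days_per_year
    initial_build = dev_days * daily_cost
    return initial_build * (1 + maintenance_multiplier)

print(annual_build_cost(dev_days=40))  # ~$82k/year, in the ballpark of the $80k above
```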

In parallel, look around to see what SaaS solutions are available to solve your problem and how much they would cost. A lot of them offer free trials and/or demos. I recommend going through at least a demo, as you will get some valuable information from others who have worked on solving the problem you’re addressing. On the pricing side, if no pricing is shown and the product requires a demo, you can be fairly certain the cost will be at least $999/month.

Step 3: Decide and review your hypotheses in a given timeframe

You are now armed with all the data points to make a data-driven decision! If you’ve decided to build in house, set a “gate” with your team to revisit the hypotheses you’ve made. For example, decide with your team to have a quick check-in in 90 days to discuss the learnings from building the solution in house and decide whether to continue or re-evaluate.

 

Notes
I want to emphasize that no answer can be right without context. What is initially true might very well become wrong. We have ourselves built software to determine which critical components we should look for when shopping around. In those cases it was always essential to timebox the build phase and to constantly remind ourselves that the objective was to reduce uncertainty and unknowns.

Secondly, there is a hidden cost to buying that comes from the rigidity of the SaaS product you buy and its possible mismatch with your problem. This is why trials and POCs are so popular nowadays (and why we offer one).

Lastly, the example picked seems like a no-brainer because the solution is for the “business” team. The level of rigour required to go through this exercise for tools used by dev teams is much greater. The main fallacy lies in the illusion that an internal tool will always be a better fit for all the company-specific requirements. This is not only highly inaccurate; it also leads to ill-defined features. Going through step 1 can save hours of wasted time and headaches.

Use predictive analytics to reduce churn by 20% in 2 days – with 3rd-grade math

Most SaaS companies have 3 misconceptions about churn:

  1. They don’t realize how much churn is costing them.
  2. They think they know why customers churn.
  3. They think predicting churn with data is too hard.

If you’re not using predictive analytics to prevent churn this hack will help reduce your churn by about 20%. It takes about 2 days of work over a few weeks and you can do it in Microsoft Excel.

We used similar techniques to help Codeship retain 72% of their at-risk users.

 

Download the spreadsheet to follow the example below.

You need to predict churn with data

Your customers cancel for lots of different reasons. Projects get scrapped. Users get stuck and bail. The key user takes a sabbatical to breed champion goldfish.

Quite often you can intervene before this happens and prevent it – but the primary predictors of churn are not always obvious.

For instance, many SaaS marketers assume last_login_at > 30 days ago predicts churn. We almost always identify better predictors, such as changing patterns in user behavior.

Let me re-phrase this point a little stronger:

If you’re not looking at data to predict churn you are almost definitely missing the fastest, easiest way to increase your MRR.

Why this hack is effective

You don’t need a data scientist. Or developer time.

As long as you have access to metrics in Mixpanel, Intercom, etc. even junior members of your marketing team can do it.

Credit card companies invest massively in predicting churn because slight improvements generate millions of dollars. You’re not Capital One; you’re a SaaS company. You don’t need to know what “entropy” is to start predicting churn.

You don’t need statistics

Can you add? That’s the only math skill you need. There is one equation, but we’ve already put it into the spreadsheet for you.

If addition is too complex consider outsourcing to a 3rd-grader. They’ll work for peanuts (or at least cookies).

The results are immediately actionable

We’re going to start with the data you already have in your analytics or marketing automation platform – so you can use the results to send churn-prevention emails or generate alerts for your sales team.

Step-by-Step: find the best predictors of customer churn

Download the spreadsheet

Click here to download.

The examples are easier to understand if you spend a few minutes looking at the spreadsheet. I break down each step below.

PR Power! – our example company

I’m going to walk you through each step using examples from PR Power!, a fictitious SaaS startup we introduced in a previous post.

PR Power! helps media managers in mid-sized businesses do better PR by generating targeted media lists. Customers pay $50-$5,000/month after a free trial. Marketing Mark, the CMO, is charged with reducing monthly churn from 5% to 4%.

Step 1 – Identify predictors of churn

Try to identify predictable reasons why customers cancel.

Mark’s team spent a few hours looking at the last 20 customers who canceled and identified a few predictors. He also interviewed the sales and customer success teams about these customers.

They came up with the following events that are likely to predict that a customer will cancel their PR Power! account:

Champion departs – Usually the PR manager leaves the customer’s company.

Project canceled – Customer signs up for a specific PR campaign and then decides not to run the campaign.

No journalists – Customer can’t find a good journalist in PR Power! to cover a story.

Support fails – Customer contacts support a few times and the problem isn’t solved, usually indicated by support tickets staying open a long time.

Stale list – Customer’s media list is less useful because journalists are no longer available or active.

Step 2 – Translate the churn predictors to data rules – or eliminate them

Mark’s team took these qualitative events and tried to identify existing data in Mixpanel that might predict them. Three were straightforward; two took a bit of investigating.

No journalists required identifying customers who had searched for journalists but didn’t add them to the media list.

Support fails was simply too hard: the support desk data on tickets isn’t in Mixpanel, so they decided to skip it.
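For instance, the No journalists rule might translate into a 0/1 flag like this (a sketch assuming a per-customer export with hypothetical column names):

```python
import pandas as pd

# One row per customer; 30-day event counts (column names are hypothetical).
df = pd.read_csv("mixpanel_export.csv")
df["no_journalists"] = (
    (df["journalist_searches_30d"] > 0) & (df["journalists_added_30d"] == 0)
).astype(int)  # the 0/1 flag that goes into the spreadsheet
```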

Step 3 – Count the occurrences of each predictor

Mark put the predictors at the top of his spreadsheet and identified every customer who matched a data rule yesterday.

For instance, User 80374’s last_login_at > 30 days ago is TRUE, so he entered a 1 for Project canceled.

Step 4 – Track every customer who churns until you hit 100

Mark adds a “Canceled?” column to the spreadsheet. Each day he identifies every customer who cancels until 100 customers cancel. This takes 2 ½ weeks.

Step 5 – Count the matching events for each predictor

Now for the 3rd-grade math…

For each predictor, count every customer where the churn predictor is TRUE and the customer canceled.


Mark starts with the Project canceled rule and counts the number of times last_login_at > 30 days ago is TRUE and YES, the customer canceled.

For instance, customers 80374 and 89766 fit these criteria. He counts 22 instances.

Step 6 – Enter the results into the spreadsheet

Enter the total in the appropriate block of the 3×3 matrix to calculate the Prediction Score (this is an implementation of the Phi coefficient).

Mark enters 22 and calculates a Prediction Score for Project canceled of 0.009.
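For reference, the one equation baked into the spreadsheet is the Phi coefficient of the 2×2 counts of predictor (TRUE/FALSE) against cancellation (YES/NO); the spreadsheet’s 3×3 block is that table plus its margin totals. Writing n11 for “predictor TRUE and canceled”, n10 for “predictor TRUE, didn’t cancel”, and so on:

```latex
\phi = \frac{n_{11}\,n_{00} - n_{10}\,n_{01}}
            {\sqrt{(n_{11}+n_{10})\,(n_{01}+n_{00})\,(n_{11}+n_{01})\,(n_{10}+n_{00})}}
```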

Step 7 – Identify the biggest predictors of churn

Rules with a higher Prediction Score are better predictors of churn.

Mark compares the Prediction Score for each rule and sees an obvious pattern.


Two observations immediately jump out at Mark:

First, last_login_at > 30 days ago doesn’t tell him much about Project canceled. Since PR Power! has long-term customers who use the product periodically, this isn’t surprising.

Second, No journalists is the clear winner. In hindsight, this makes sense – customers who try to find a journalist and can’t are getting no value from the product.

Step 8 – Take steps to prevent churn

Mark creates 2 rules in Mixpanel for the No journalists predictor.

Small accounts

When a customer has total_searches > 5 within the last 30 days AND media_list_updated_at > 30 days ago, Mark creates an auto-message inviting the customer to watch a webinar on “How to search for a journalist”.

Large accounts

When a customer has total_searches > 5 within the last 30 days AND media_list_updated_at > 30 days ago, Mark creates an alert for the sales team notifying them about a customer at risk of churning.

An easier way – ask us to do this for you

You don’t even need 3rd-grade math.

Just take a free trial of MadKudu and let us run these calculations for you.

Cancel anytime if you don’t like it – keep whatever you learn and all the money you make from reducing your churn.

 

Want to learn more? Sign up for our new course.

 

Photo credit: Rodger Evans