Leading, lagging and proxy metrics
A worked example
This short article uses a worked example to explain leading, lagging and proxy metrics.
TL;DR There is a reason that Netflix spend up to 6 months identifying a metric. Metrics can have an enormous impact on your business. Leading metrics inform your decisions quickly enough to power shorter feedback loops. They require more work to identify, but they create far more usable insights.
Although the base concepts of OKR are simple, making them work for your organisation is another matter. Bad advice is one of the reasons: while researching my recent post on KPI v KRs I came across plenty of poor articles. This is part of what makes OKR so difficult.
My job here is to create some clear and concise articles that I improve over time.
Why metrics?
The purpose of metrics in business is to inform decision making. Good metrics tell us whether the actions we are undertaking are driving the outcomes we desire. The challenge is to identify metrics that provide insight into future success in a timeframe that allows timely course correction.
Let’s start with some basic definitions:
Types of metric
Lagging metric: a measure of something that reacts slowly to our activities. Usually something of high importance to the organisation.
Leading metric: a measure of something that changes more rapidly as a result of our activities. Useful because it predicts subsequent changes to a lagging metric.
Proxy metric: a metric used instead of the thing you really want to observe. Used because the desired metric is expensive or very difficult to measure.
These definitions are useful, but the relationships between metrics are often complex and multi-faceted. I’ll cover some of that in this simplified example.
The Learning Centre - An example
For the purpose of this illustration I use an online learning platform which helps students pass public exams. The students can find lessons, pay to add them to their library and use them. They can also purchase a monthly or annual subscription for all lessons.
Success for The Learning Centre (TLC) is predominantly measured in sales and subscriptions. The organisation needs a continuous flow of new students as current students move out of the education system.
The most important commercial metrics are usually lagging
Subscriptions and lessons purchased are the key metrics for TLC’s success. These are both lagging. They react slowly to our activities.
The problem with lagging measures is that you have little or no idea whether your current activities are improving them. It can take many months for changes to take effect, and by then the money and time are gone.
This is where leading measures come in
The usual purpose of a leading measure is to predict a lagging measure. So let’s drill down on TLC’s goal of subscriptions.
What leads to a subscription? The cross-functional team looked at their data and found that most users complete two lessons successfully before subscribing. Great, but lessons completed is still a lagging measure. It doesn’t change quickly.
Taking a deeper look at the metrics, they discovered a leading metric for completing lessons. They noticed that before registering for their first lesson, most non-referred users undertook a successful topic search: successful meaning that they clicked through to the lesson details.
This is less lagging. The product team can focus on getting more users to have a successful search. Many factors may decide whether a search is successful, and the team can build hypotheses and run experiments to decide what they need to change.
The team also carried on with their analysis and identified site visits as a precursor to a topic search. One further step back was achieving a good marketing contact.
The marketing team can test their ideas to create great marketing contacts, particularly in target markets.
Side note, there is a useful simple metric framework called the AARRR (or pirate) Framework. It can help your thinking about the job of metrics. Several of the metrics in my example relate to it. I think the HEART Framework is more useful, but that’s a longer discussion.
Using this working-backwards approach the team found a couple of key leading metrics to work on this quarter. Neither marketing contacts nor topic searches are meaningful outcomes for the TLC business, but they are important leading metrics for the things they value.
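The working-backwards chain can be sketched as a simple conversion funnel. All of the numbers and rates below are invented for illustration; the point is only that lifting an upstream leading metric (successful topic searches) propagates through to the lagging subscription count:

```python
# Hypothetical conversion funnel for TLC's working-backwards metric chain.
# Every rate and volume here is an invented illustration, not real data.

def expected_subscriptions(marketing_contacts,
                           visit_rate=0.40,       # contacts -> site visits
                           search_rate=0.50,      # visits -> successful topic searches
                           lesson_rate=0.30,      # searches -> completed lessons
                           subscribe_rate=0.20):  # completed lessons -> subscriptions
    """Walk the chain: contact -> visit -> search -> lesson -> subscription."""
    visits = marketing_contacts * visit_rate
    searches = visits * search_rate
    lessons = searches * lesson_rate
    return lessons * subscribe_rate

baseline = expected_subscriptions(10_000)
# Lift only the leading metric (successful searches) and observe the lagging effect.
improved = expected_subscriptions(10_000, search_rate=0.55)
print(baseline, improved)
```

The feedback-loop advantage is that search_rate can be measured and experimented on within days, while the subscription count at the end of the chain takes months to respond.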
What about proxy metrics? We’ve looked at acquiring users via marketing. Another route that TLC use is what we call a viral loop. A big source of new users for TLC is referral from one student to another. A referred user who completes a lesson short-cuts our acquisition journey.
In their search to understand how many of their subscribers are likely to refer them, the TLC team runs an NPS study. The NPS score is a proxy for referrals: it measures intent, not behaviour. In that respect its value is limited, like many proxy metrics.
Note: NPS receives a lot of valid criticism. It is a good example of a proxy metric and their limitations.
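For concreteness, the standard NPS formula is the percentage of promoters (scores 9 to 10) minus the percentage of detractors (scores 0 to 6), with passives (7 to 8) counted in the total but neither bucket. The survey responses below are invented for illustration:

```python
# Standard NPS calculation: % promoters (9-10) minus % detractors (0-6).
# The response data is invented for illustration only.

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

responses = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10]
print(nps(responses))  # 5 promoters, 2 detractors out of 10 -> NPS = 30
```

Note that two very different distributions of answers can produce the same score, which is one of the common criticisms of NPS as a metric.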
A general rule, when thinking about metrics: Those that represent behaviour are more reliable and useful than those that measure intent.
The complexity
As with KPIs and KRs, the status of a metric can change. The border between a lagging and leading measure is a matter of perspective. Clearly things like sales and churn are always lagging, but other metrics might be considered lagging at the team level, but leading at the organisation level. Teams work in much shorter feedback loops than organisations.
I hope this was helpful. Please connect on twitter or https://www.linkedin.com/in/enrvuk/ if you’d like to discuss the article or tell me where it can be improved.
Next: The OKR Check In