Startup literature is full of ardent advice on how to measure activation and engagement, and how users interact with your product.
These concepts might help your business in the short term, but can leave you blinded by the very data you hoped would open your eyes in the long term.
So how do you go from simply plugging your numbers into cookie cutter formulas to answering key questions about your product and business? And are you even asking the right questions in the first place?
Not all products are the same
In the early days of the analytics team at Intercom, our tracking mostly consisted of typical SaaS company finance metrics, such as the conversion rate of our customers from trial-to-paid and Monthly Recurring Revenue. While these metrics are essential to understanding the overall health of a business and its strategy, they fall short for product teams because they don’t answer all of the questions that are important to them, such as, “Do customers find this feature valuable?” or “How simple is our product to use?”. Without additional metrics focusing on user experience, the analytics team will have a diminished impact on the decisions the product team makes.
We started by reading about what other companies had done. We found that a lot of the “best practice” advice on product metrics was useful for introducing concepts, but that the rules can’t be uniformly applied. So much depends on the type of business you’re running. A $99 B2B SaaS app will define engagement very differently to an eCommerce website.
A key value of the analytics team at Intercom is “start with the right question”. If we simply applied well-worn frameworks blindly, we would be starting with someone else’s question, not our own. And if the metrics these frameworks produce don’t start with the right question, they don’t influence how a product is built or the direction a business goes in. These metrics become false proxies that might look good on paper, but won’t give you real insight on where to take the product next or what to improve upon.
“Most people use analytics the way a drunk uses a lamppost, for support rather than illumination.” – David Ogilvy
What’s more, a single set of metrics to serve an entire company becomes less and less effective as the company grows in size. Teams tend to diverge in terms of the metrics they care about. Although they all may share a common high-level mission, they contribute to the mission in different ways and so their success must be measured differently. The growth team are focused on engagement in one part of the product; the marketing team on an entirely different part.
The right metrics start with the right questions
Where most startups trip up is that they don’t know how to ask the right questions before they start measuring. Asking those questions requires a collaborative partnership between analyst and product team, rather than a more traditional stakeholder-resource relationship.
To guide this partnership, we took inspiration from Google’s HEART framework, which gives advice on defining metrics that follow from product goals. For example, here are a few questions we ask our product teams to help us understand their goals so that we can help them define meaningful metrics:
- If we imagine an ideal customer who is getting value from our product, what actions are they taking?
- What are the individual steps a user needs to take in our product in order to achieve a goal?
- Is this feature designed to solve a problem that all our users have, or just a subset of our users?
To help our product partners answer these questions, we use product usage concepts that, over time, have become well understood and relied upon. These terms can be directly related to key points during a customer’s journey within our product:
- Intent to use: The action or actions customers take that tell us definitively they intend to use the product or feature.
- Activation: The point at which a customer first derives real value from the product or feature.
- Engagement: The extent to which a customer continues to gain value from the product or feature (how much, how often, and over how long a period of time, etc.).
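The three concepts above can be made concrete by mapping them onto product events. The sketch below is a minimal illustration, not Intercom's actual event schema: the event names, the choice of which event signals intent versus activation, and the event log itself are all hypothetical assumptions.

```python
from datetime import datetime

# Hypothetical event log: (user_id, event_name, timestamp).
# The event names here are invented for illustration only.
EVENTS = [
    ("u1", "article_created", datetime(2024, 3, 1)),
    ("u1", "article_viewed_by_end_user", datetime(2024, 3, 3)),
    ("u1", "article_viewed_by_end_user", datetime(2024, 3, 10)),
    ("u2", "article_created", datetime(2024, 3, 2)),
]

INTENT_EVENT = "article_created"                  # signals intent to use
ACTIVATION_EVENT = "article_viewed_by_end_user"   # first real value delivered


def showed_intent(user_id):
    """Did this user ever take the action that signals intent to use?"""
    return any(u == user_id and e == INTENT_EVENT for u, e, _ in EVENTS)


def activation_date(user_id):
    """The first time this user derived real value, or None if never."""
    dates = [t for u, e, t in EVENTS if u == user_id and e == ACTIVATION_EVENT]
    return min(dates) if dates else None


def engagement_count(user_id):
    """One crude engagement measure: how often value-delivering events recur."""
    return sum(1 for u, e, _ in EVENTS if u == user_id and e == ACTIVATION_EVENT)
```

In this toy log, user "u2" showed intent but never activated, which is exactly the kind of gap between concepts these definitions are meant to surface.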
Armed with these simple concepts, we can look to answer questions like the ones posed above. The next step is to look for signals that are specific to the product or feature which map to these concepts. We have found that open collaboration with people across our product teams – managers, designers, researchers and engineers – yields many useful signals that we can use to develop impactful product success metrics.
As long as you’re asking the right questions, you’re going to get valuable insights that you can act upon. In short, start with the problem, not with the data.
Developing your key metrics
Much like our philosophy of “ship to learn”, defining product success metrics is just the beginning. To ensure their own success, they need to be advocated for, communicated, and even critiqued. Just like with the product they measure the success of, we must measure the success of the metric. Are the metrics giving us a true picture of product success? Are they influencing how we think about the product? Are they motivating the team who build the product? These are questions an analyst must constantly ask of the metrics they produce, advocate for and report on.
This collaborative approach to metrics definition has led to a much more seamless relationship between analytics and product teams. Equally, using a common, consistent way of working means anyone in the organisation can easily understand any product metrics, what they mean and why they are important. This means changes to the product can be informed – or even led by – the insights gained from exploring the data framed by these metrics.
A recent example of this is with our newest product, Educate. Our analytics and product teams partnered on this project from idea right through to launch. Metrics were decided upon at the start, using the simple, well-understood questions-first approach outlined above. This meant that early betas of the product already had a robust set of metrics in place so we could test our assumptions.
For example, we hypothesised that it was important to understand how long it took a customer to get from actively creating articles (showing intent to use) to getting their customers’ eyes on those articles (activating). If customers could see the value from the product quickly, they would be more compelled to convert from a trial to a paying user.
Beta customers’ interactions with early versions of the product indicated it was taking customers a long time to reach this point. As a result, the product was simplified to allow for a more seamless experience. At launch we could see improved time-to-activation across the board, and this had a positive effect on overall conversion of customers from trial to paying.
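A time-to-activation metric like the one described above can be computed from first-intent and first-activation timestamps per customer. This is a sketch under assumptions: the customer names, timestamps, and the choice of the median as the summary statistic are all hypothetical, not Intercom's actual reporting.

```python
from datetime import datetime
from statistics import median

# Hypothetical per-customer timestamps: (first intent event, first
# activation event). None means the customer never activated.
customers = {
    "acme":    (datetime(2024, 3, 1), datetime(2024, 3, 8)),
    "globex":  (datetime(2024, 3, 2), datetime(2024, 3, 4)),
    "initech": (datetime(2024, 3, 5), None),
}


def days_to_activation(intent_at, activated_at):
    """Whole days from first intent to first activation, or None."""
    if activated_at is None:
        return None
    return (activated_at - intent_at).days


durations = [
    d for d in (days_to_activation(i, a) for i, a in customers.values())
    if d is not None
]

# The median is more robust than the mean to a few very slow activators.
print(median(durations))
```

Tracking this one number release over release is enough to see whether a simplification to the product actually shortened the path to first value.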
We are keen to explore data-informed design more. This is only the beginning, and we have many areas to improve. But starting with a partnership-based approach has put us on a good trajectory towards becoming a truly data-informed organisation.