Tuesday, December 2, 2025

Measuring Impact: Let's make it easy

My Documentation checklist says, "Why Business Success Depends on Being Thorough."

Being thorough - the easiest part. Think of more scenarios, corner cases, first-time experiences, peak experiences, and end experiences. 

Depends on - harder. Most people don't put enough effort into thinking this through, and most of the time there are few critical dependencies, so you're saved. Still, think about dependencies on upstream and downstream systems.

Business - easier. It's usually straightforward to identify who is asking for it, or what this feature can help with. Subjectivity is acceptable here.

Why - Hard. Most people get the "What" right but skip the "Why". E.g. a user wants to filter the list based on a date range. Ok, but why? Why would they want to do that? Why would they want to see the filtered list? If you don't know - ask! Don't imagine. If you imagine why a user would need something because you've already been told to build it, you're likely to imagine a problem that fits well with your version of the solution but doesn't really exist in the real world.

Bad reason: the absence of the feature. The absence of a feature is not a great reason to build it. E.g. "We need a search function because the user is unable to search for <xyz>." That doesn't explain why a user needs to search for <xyz>.

Bad reason: it's a standard expectation. E.g. "We need sorting on columns in the tabular view." Sorting is great, I like it. But calling it a basic expectation is running away from thinking about the scenarios in which a user will want to sort the data. Thinking through those scenarios makes you figure out what types of sorting will be needed. What should the default sort order be? How does sorting interact with pagination? And so on.

What you need to identify, analyze, and then document is the "Why". And it doesn't end there. You should now quantify the problem. How many users have this problem? How frequently will a user need such a thing?

Success - Hardest. This is where you define the success metric, or the measure of a feature's impact. It's what most people find the hardest to determine. Rich Mironov has highlighted this as well.

Two approaches:

1. Derive it from the "Why". People struggle here mostly because they've done a poor job of defining the why. If you've defined the "Why" really well, you'd know what success will look like.

2. Quantify Usage: If the impact is tough to quantify, you can try quantifying usage. How many clients will adopt this? How many users in each client will use this?
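For instance, a quick back-of-envelope sketch of quantifying usage (every number below is made up for illustration, not from any real feature):

```python
# Back-of-envelope usage estimate; all numbers here are hypothetical.
adopting_clients = 10        # clients expected to adopt the feature
users_per_client = 50        # users per client who will use it
uses_per_user_per_week = 2   # how often each user needs it

weekly_uses = adopting_clients * users_per_client * uses_per_user_per_week
print(f"Expected usage: {weekly_uses} uses/week")  # 1000 uses/week
```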

Quantify using General Metrics 

Try to put your outcomes under one of the following categories - ACME. A rough sketch of how to record this follows the list.

Acquisition - how many new clients or new users will this help acquire/adopt?

Conversion - how many new clients or new users will convert due to this?

Monetization - how much of an uptick in revenue can we expect due to this?

Engagement - how much will the frequency and/or quality of usage increase because of this feature?
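Here is a minimal sketch of what tagging an expected outcome with an ACME category could look like in practice. This is my own illustration; the feature and the estimate are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class ACME(Enum):
    ACQUISITION = "acquisition"    # new clients/users acquired or adopting
    CONVERSION = "conversion"      # clients/users converting
    MONETIZATION = "monetization"  # uptick in revenue
    ENGAGEMENT = "engagement"      # frequency/quality of usage

@dataclass
class ExpectedOutcome:
    feature: str
    category: ACME
    estimate: str  # forces you to quantify, not just categorize

outcome = ExpectedOutcome(
    feature="date-range filter on the list view",
    category=ACME.ENGAGEMENT,
    estimate="weekly list views per active user go from 3 to 5",
)
```

The point of the `estimate` field is that picking a category is only half the work; each outcome should still carry a number you can check later.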


Quantify using the Value System 

Most relevant in B2B: the Value Matrix, the main categories of customer benefits. Based on the work of James Anderson et al. in the book "Value Merchants". [I have read the book, and this is roughly what it says. But the matrix is not called out in the book, so don't sweat looking for it.]

Increase revenue

Reduce cost

Improve brand (virality, user delight)

Minimize risk (future costs/losses)


Setting up the Metrics (copied from an internal memo)

Get the Goals right! -- this is the most critical thing to get right. Be honest about why you are doing this. At this point, typically, we are mostly doing stuff to unblock sales, implementation, and retention (by building what was promised). List that out first. If you think the feature has the potential to do more, add another goal about how it can be upsold, convert other clients, or engage clients more frequently.

Good Goals vs. Bad Goals

  • Whatever gives one-time value is a bad goal from the product's point of view.
  • If the goal doesn't lead to a multifold ROI, it is not a good goal.

Get the Success Indicator right! -- this should answer the question "What would indicate the Goal is being achieved?" If your goal is improving the experience, the Success Indicator will be something like an improvement in NPS or more frequent usage. In most cases, you'd have to make it quantitative. Don't put down hypothetical metrics that you'd never be able to calculate: if you aren't already measuring NPS regularly, how likely are you to measure it for this feature specifically?

Get the Current Values right! -- the current value is the baseline. If you say "faster implementation", you should state how much time it takes to implement right now. If you say "increase retention", you should state what retention is right now. If you get the baseline wrong, you won't be able to measure the change later on. If something doesn't exist yet, look for a proxy in potential usage. E.g. there is no favorites option in the Team Calendar; a good proxy can be that adding favorites should increase views on the Team Calendar. Of course, the best metric would be how many users actually mark any favorites.

Get the Milestone right! -- the Milestone is the estimate of impact. If you don't get this right, you haven't really understood the impact of the feature, which means you might have gotten the priority wrong as well. So: how much revenue will be unblocked? How much do you expect NPS or feedback ratings to improve? How much faster would implementation become? The Milestone should have an exact date on which you'll measure, and exact targets that you'll measure against.
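Putting the four steps together, here is a minimal sketch of what one feature's metric block could look like. This is my own illustration; the field names, the Team Calendar numbers, and the date are all made up:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FeatureMetric:
    goal: str               # be honest about why you are building this
    success_indicator: str  # what would indicate the goal is being achieved
    current_value: str      # the baseline, measured today
    milestone_target: str   # exact target to measure against
    milestone_date: date    # exact date on which to measure

favorites_metric = FeatureMetric(
    goal="Engage clients more frequently via the Team Calendar",
    success_indicator="Team Calendar views; users marking favorites",
    current_value="1,200 weekly views; favorites don't exist yet",
    milestone_target="1,800 weekly views; 20% of users mark a favorite",
    milestone_date=date(2026, 3, 31),
)
```

Writing it down in this shape makes the gaps obvious: if you can't fill in the current value or the milestone date, you haven't finished the exercise.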



