Tuesday, December 2, 2025

Measuring Impact: Let's make it easy

My Documentation checklist says, "Why Business Success Depends on Being Thorough."

Being thorough - the easiest part. Think of more scenarios, corner cases, first-time experiences, peak experiences, and end experiences. 

Depends on - harder. Most people don't put enough effort into thinking this through, and most of the time there are few critical dependencies, so you're saved. Still, think about dependencies on upstream and downstream systems. 

Business - easier. Define who is asking for it, or what this feature can help with. Subjective answers come easily here. 

Why - Hard. Most people get the "What" right; they skip the "Why". E.g. a user wants to filter the list based on a date range. Ok, but why? Why would they want to do that? Why would they want to see the filtered list? If you don't know, ask! Don't imagine. If you imagine why a user would need something because you've already been told to build it, you're likely to imagine a problem that fits well with your version of the solution but doesn't really exist in the real world. 

E.g. Bad reason: the absence of a feature. The absence of a feature is not a great reason to build it. "We need a search function because the user is unable to search for <xyz>" doesn't explain why a user needs to search for <xyz>.

E.g. Bad reason: "It's a standard expectation." We need sorting on columns in the tabular view. Sorting is great, I like it. But calling it a basic expectation is running away from thinking about the scenarios in which a user will want to sort the data. Thinking it through makes you figure out what types of sorting will be needed. What should the default sort order be? How does it work with pagination?

What you need to identify and analyze is "Why" and then document it. And it doesn't end here. You should now quantify the problem. How many users have this problem? How frequently will a user need such a thing?

Success - Hardest. This is where you define the success metric, or the measure of the impact of a feature. It is what most people find the hardest to determine, a fact Rich Mironov has highlighted as well.

Two approaches:

1. Derive it from the "Why": Most people struggle here because they've done a poor job of defining the why. If you've defined the "Why" really well, you'd know what success will look like. 

2. Quantify Usage: If the impact is tough to quantify, you can try quantifying usage. How many clients will adopt this? How many users in each client will use this?

Quantify using General Metrics 

Try to put your outcomes under one of the following categories - ACME

Acquisition - how many new clients or new users will this help acquire?

Conversion - how many new clients or new users will convert because of this?

Monetization - how much uptick in revenue can we expect due to this?

Engagement - how much will frequency of usage and/or quality of usage increase because of this feature?


Quantify using the Value System 

Most relevant in B2B: the Value Matrix, the main categories of customer benefits. Based on the work by James Anderson et al. in the book “Value Merchants”. [I have read the book, and this is roughly what it says, but the matrix is not called out in the book as such, so don't sweat looking for it.]

Increase revenue

Reduce cost

Improve brand (virality, user delight)

Minimize risk (future costs/losses)
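
To make these buckets concrete, here is a minimal sketch of how a feature's expected outcomes could be tagged against both lists. This is purely illustrative: the category names come from the ACME and Value Matrix lists above, while the feature, class names, and numbers are made up.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative only: category names mirror the ACME and Value Matrix
# lists above; the feature and numbers below are hypothetical.

class Acme(Enum):
    ACQUISITION = "acquisition"
    CONVERSION = "conversion"
    MONETIZATION = "monetization"
    ENGAGEMENT = "engagement"

class CustomerValue(Enum):
    INCREASE_REVENUE = "increase revenue"
    REDUCE_COST = "reduce cost"
    IMPROVE_BRAND = "improve brand"
    MINIMIZE_RISK = "minimize risk"

@dataclass
class ExpectedOutcome:
    description: str      # what we expect to happen
    acme: Acme            # which ACME bucket it falls under
    value: CustomerValue  # which customer-value bucket it falls under
    estimate: str         # rough quantification, even if it's a guess

# Hypothetical example: a date-range filter on a report
outcomes = [
    ExpectedOutcome(
        description="Ops managers check last week's data without exporting",
        acme=Acme.ENGAGEMENT,
        value=CustomerValue.REDUCE_COST,
        estimate="~30% of weekly active users apply the filter",
    ),
]
```

Forcing every outcome into one ACME bucket and one value bucket also exposes features that have no plausible entry, which is usually a sign the "Why" was never nailed down.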


Setting up the Metrics (copied from internal memo)

Get Goals right! -- most critical to get this right. Be honest about why you are doing this. At this point, typically, we are mostly doing stuff to unblock sales, implementation, and retention (by building what was promised). First list that out. If you think the feature has the potential to do more, add another goal about how it can be upsold, how it can convert other clients, or how it can engage clients more frequently.

Good Goals v/s Bad Goals 

  • Whatever gives one-time value is a Bad Goal from the product's pov. 
  • If the goal doesn't lead to a multifold ROI, it is not a good goal.  

Get the Success Indicator right! -- this should answer the question "What would indicate the Goal is getting achieved?" If your goal is improving experience, the Success Indicator will be something like an improvement in NPS or more frequent usage. In most cases, you'd have to make it quantitative. Don't put down hypothetical metrics that you'd never be able to calculate. If you aren't already measuring NPS regularly, how likely are you to measure it for this feature specifically? 

Get Current Values right! -- the current value is the baseline. If you say Faster Implementation, you should state how much time it takes to implement right now. If you say Increase Retention, you should state what the Retention is right now. If you get the baseline wrong, you won't be able to measure the change later on. If something doesn't exist yet, look for potential usage. E.g. there are no favorites in the Team Calendar. A good proxy could be that adding favorites should increase views on the Team Calendar. Of course, the best metric would be how many users actually mark any favorites.

Get the Milestone right! -- the Milestone is the estimate of impact. If you don't get this right, you haven't really understood the impact of the feature, which means you might have got the priority wrong as well. So, how much revenue will be unblocked? How much do you expect NPS or the feedback rating to improve? How fast would the implementation become? It should have an exact date when you want to measure and exact targets that you'd measure against.
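
Here is a minimal sketch of these four fields captured as a single template. Again, this is purely illustrative: the field names follow the memo structure above, and the example values (borrowing the Team Calendar favorites example) are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative only: field names follow the memo structure above
# (Goal, Success Indicator, Current Value, Milestone); the values
# below are hypothetical.

@dataclass
class SuccessMetric:
    goal: str               # why we are building this, stated honestly
    success_indicator: str  # what would indicate the goal is being achieved
    current_value: str      # the baseline, measured today
    milestone_target: str   # the exact target we will measure against
    measure_by: date        # the exact date on which we will measure

metric = SuccessMetric(
    goal="Increase repeat usage of the Team Calendar",
    success_indicator="Weekly Team Calendar views per active user",
    current_value="1.2 views per active user per week",
    milestone_target="2.0 views per active user per week",
    measure_by=date(2026, 3, 31),
)
```

If any of the four fields is hard to fill in, that is usually the signal that the goal, the baseline, or the expected impact hasn't actually been thought through yet.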




PMs, Problem Discovery is where you are losing out

Product Management gets a lot of flak for not gathering the requirements correctly. The prime reason is that problem discovery is Hard, and PMs take it lightly.


This is what makes it hard: 
1. Customers are confused about what they need versus what they want versus what they are asking for
2. Customers aren't able to articulate clearly - often not their core skill 
3. Most customers describe solutions they want you to build instead of the problem they want you to solve
4. Product guys are biased by their limited product and domain knowledge. They look at new things with the old lens. 

Understand the problem-solution continuum 

If your vision and roadmap for the product are not clear, all your work, including client communication, requirement gathering, and problem discovery, will be superficial. You need to have a very clear understanding of: 

  • What are the general problems in the domain
  • What are the general problems of your customers and of your users? 
  • What specific problems is your product solving 
  • What specific adjacent problems is your product not solving
  • What are the alternatives being used today (direct competition (Coke v/s Pepsi) and indirect competition (SaaS v/s Excel), and alternatives (Netflix v/s Uber v/s Wakefit))

When you understand the problems and solutions in the domain, you'll be able to decide whether to push back on a request or add it to the roadmap. It will help you know if you should dive deeper into it and start requirement gathering, or ask the clients to gather their thoughts and schedule a meeting when their thinking is more mature.

Get answers to “Why” without asking Why. 

  • The 5 Whys are super helpful in lab conditions. However, in our conversations, just as in our brains, we need to prioritise emotion before logic. Asking "Why" and getting a useful answer is super hard. Asking "why" forces people to think through things they haven't thought through earlier. Thinking creates friction. It may also make them feel stupid, or they may feel challenged. It causes cognitive dissonance when their initial reasons don't justify the original ask. Some clients feel you should not even ask "Why"; since they asked for it, it is your duty to build it. This is mainly because that's how they may operate in their own role. They may not have a clear answer to your "Why".

Alternatives to asking a direct WHY question

  • How would this help you? 
  • What will you achieve with this?
  • What is the problem you are trying to solve with this?
  • What kind of decisions would you take if this was available?  
  • Let’s assume we have this today. How will you use it, and what will you do next?

Listen to understand, not to respond. 

  • Clients may have very wrong assumptions about your product. It’s ok. Listen completely and take notes.
  • They might ask for a feature that you already have, but they have never used. It’s ok. Blame the discoverability of your feature.
  • When they complain (or rant), try to understand why YOU are not able to feel the same pain. 
  • Error Messages: When they don’t read the message or take corrective action, think about how you can make the error more understandable, instead of just more readable. Can you suggest an alternative action?

Solve Problems.

 Your product is an enabler, not a constraint. Clients look up to you as a problem solver. 
  • Don’t explain your limitations and constraints. They may be important, but they don’t sound interesting and do not encourage the client to convey more.
  • Ignore your mental screams that “this is not possible”, “this is not what you created the product for”, “they never asked for it earlier”. All of it may be true, but it hardly matters.
  • If you cannot immediately think of a way to enable them to solve this, just listen and take notes. Tell them you’ll work on it and come back to them later. This helps you give a better response and avoid conflict.
