Figure out how RISKY your next product update is
I recently shared a simple decision tree on LinkedIn.
I wanted to help product managers decide if they had done “enough” customer discovery before building.
But the decision tree begins with RISK.
And determining the right risk level for upcoming product updates can be the most complex part of deciding how much discovery work to do.
There were a lot of comments from product managers on that post and in messages. For most, risk is not clear, straightforward, or easy to measure and act on.
Because defining your risk is not an exact science. There are many ways of doing it, and all have significant bias.
It’s pretty damn subjective.
But making good decisions - consistently - requires us to get out of emotions and into logic.
Whether you’re making investment choices, playing poker, or choosing which product to develop, you have to find a way to evaluate the risk as objectively as possible.
I’ve spent years looking for a way to identify the risk of building and launching that consistently works for product teams.
No process or framework is perfect for every case, but I’ve seen this one work for most.
Every move your team makes already has a “risk profile.”
You probably just haven’t labeled it yet.
Every feature update and discovery you run is low risk, high risk, or some shade of gray between.
But if you can’t place your product plans on the risk spectrum, you’re likely wasting time - on delivery or discovery, or both.
For Product teams, Customer Discovery should always be about mitigating risk.
Uncovering risks, and reducing risks.
But we can’t figure out what amount of discovery is “enough” unless we first identify the risks we’re facing each time.
Categorizing risk
Marty Cagan says the “Four Big Risks” are (in his words):
Value risk (whether customers will buy it or users will choose to use it)
Usability risk (whether users can figure out how to use it)
Feasibility risk (whether our engineers can build what we need with the time, skills and technology we have)
Business viability risk (whether this solution also works for the various aspects of our business)
But I see a few more risks that often leave a Product Manager panicking if we don’t address them. I like to be more explicit than the “big four” alone:
5. Reputation risk
6. Financial risk
What % of existing customers and revenue will this change affect?
How much will the targeted product update cost?
What’s the cost of delivery alone vs. customer discovery + experimentation cost?
In the model I use to weigh risk, the big four risk types cover one question, and the two additional risk forms above get extra questions, because the risks can build on top of each other.
Identifying the risk level of a product update
STEP I: What’s the potential business risk?
The biggest and potentially longest-lasting impacts are usually felt as a result of business risk and value risk.
Ask a few key questions:
What type of change are you making? (Pricing, UI, New Product, New Feature, Update feature,…)
What type of risk is greatest in this change? (Value, Business, Usability, Feasibility)
How much of the business’s revenue might this product update impact? (All, Most, Some)
How many existing users might this product update impact? (All, Most, Some)
How many non-users might this product update impact? (Many, Few)
What is the cost to deliver the product update (without discovery)? (High, Med., Low)
STEP II: Can you “undo” it?
Jeff Bezos has famously used one measure of risk in Amazon decisions, and I’ve seen it work well for product teams.
Ask: “Is the decision reversible or irreversible?”
Reversible —> Low(er) risk
Irreversible —> High risk
In other words, will it be impossible to step back and fix the negative impact of a bad build and launch experience? When the answer is yes, we are dealing with high risk.
Will the product update affect users in a way that cannot be undone?
Ex: Drastic change to the pricing model (ex: flat rate per tier to usage based pricing)
Will the product update involve such high costs and time commitments that they will be completely “lost” if this doesn’t succeed?
Ex: Building a new product from scratch
STEP III: What assumption needs to be true, and is there already “proof”?
All product updates and decisions are based on an assumption.
We’re assuming someone wants the new feature. We’re assuming they’re willing to pay per seat instead of a flat fee per account. We’re assuming that rearranging the navigation makes it easier for users to find the right actions.
Whether some of those assumptions are wrong or right can have big impacts on the business and the user.
We need to acknowledge what core assumption we’re building based on, and whether it’s backed by proof (or not).
Ask: What would be the greatest negative effect if our assumption is wrong?
Have you explored/tested this assumption before?
If yes, did the test strongly support or confirm your assumption?
If you have an assumption with a big potential negative impact if wrong, plus no existing evidence to support the assumption, you’re dealing with high risk.
Weighing the risk
I apply weights to each risk question and its possible answers, then tally up a total score. Teams I’ve worked with find it helpful to see it all laid out in a spreadsheet as a visual map of the overall risk.
It’s not perfect, but it does the job better than anything else I’ve seen.
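To make the weighing concrete, here is a minimal sketch of the idea in Python. The specific questions, weights, and thresholds below are illustrative assumptions for this example - the article doesn’t publish the actual spreadsheet values - but the mechanics (answers map to weights, weights sum to a score, the score buckets into a risk level) are the same.

```python
# Each question maps its possible answers to a weight.
# NOTE: questions, weights, and thresholds are illustrative assumptions,
# not the author's actual spreadsheet values.
QUESTIONS = {
    "revenue_impacted":  {"all": 3, "most": 2, "some": 1},  # Step I
    "users_impacted":    {"all": 3, "most": 2, "some": 1},  # Step I
    "delivery_cost":     {"high": 3, "med": 2, "low": 1},   # Step I
    "reversible":        {"no": 3, "yes": 0},               # Step II
    "assumption_tested": {"no": 2, "yes": 0},               # Step III
}

def risk_score(answers: dict) -> int:
    """Tally the weights for a set of answers."""
    return sum(QUESTIONS[q][a] for q, a in answers.items())

def risk_level(score: int) -> str:
    """Bucket the total score into a rough risk level (thresholds assumed)."""
    if score >= 9:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

# Example: a drastic pricing-model change - touches most revenue and all
# users, is costly to deliver, hard to undo, and rests on an untested
# assumption.
answers = {
    "revenue_impacted": "most",
    "users_impacted": "all",
    "delivery_cost": "high",
    "reversible": "no",
    "assumption_tested": "no",
}
score = risk_score(answers)
print(score, risk_level(score))  # 2+3+3+3+2 = 13 -> high
```

A spreadsheet does the same job with one column per question and a SUM at the end; the code form just makes the mapping from answers to weights explicit.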
I hope this can offer a starting point for anyone struggling to identify risk, weigh it for your next product update, and decide how much customer discovery to do as a result.