
Playing Whack-A-Mole With Risk

Assumptions are interesting things – we all make them all the time, and we rarely acknowledge that we’re doing it.  When it comes to developing a product strategy, or even deciding how best to create a product, one of these assumptions is likely to be what causes us to fail.  We can, however, reduce the chance of that happening.

Being Wrong

What does it feel like to be wrong?  Watch about 25 seconds of this TED talk from Kathryn Schulz, starting at 4:09.

Go back later and watch her entire talk – it is really worth it.  But stay with me for now.  All you need for this article is the 25 seconds, and the realization that you don’t know you are wrong until you know you’re wrong.

Hidden in Plain Sight

Assumptions are like being wrong, but with an added degree of difficulty.  Not only do you not know you’re wrong – you never realized you were asserting something in the first place, then betting on it being right.

Every strategy, every product idea, every design approach, and every planned implementation is built upon a pile of assumptions.  Those assumptions are there, if you just look at them.  But you have to look for them in order to see them.  They are hidden in plain sight.

The only question is whether they are going to cause you any trouble.  You might not be wrong about the assumptions that really matter.

Wouldn’t it be nice to know when you are wrong?  Before it’s too late?  Before it’s really expensive?  Before your window of opportunity closes?

Identifying Risky Assumptions

Laura Klein spoke at the Lean Startup Conference about identifying risky assumptions, and her talk was published in December 2014.  Laura is also rapidly becoming one of my favorite gurus.  I just wish I’d become aware of her work sooner.

Laura identifies that every product has at least three different classes of assumptions.

  1. Problem Assumptions – we assume there is a market-viable problem worth solving.
  2. Solution Assumptions – we assume our approach to addressing the problem is the right one.
  3. Implementation Assumptions – we assume we can execute to make our solution a reality, such that it solves the problem and we succeed.
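
To make these classes concrete, here is a minimal sketch of one way to record assumptions alongside their class so you can track them later.  The class names mirror Laura’s three categories; the fields and example statements are just illustrative scaffolding of my own, not anything from her talk.

```python
from dataclasses import dataclass
from enum import Enum

class AssumptionClass(Enum):
    PROBLEM = "problem"                # is there a market-viable problem worth solving?
    SOLUTION = "solution"              # is our approach to the problem the right one?
    IMPLEMENTATION = "implementation"  # can we execute well enough to succeed?

@dataclass
class Assumption:
    statement: str        # the belief, said out loud
    cls: AssumptionClass  # which class of risk it represents

backlog = [
    Assumption("There is a market willing to pay to solve this", AssumptionClass.PROBLEM),
    Assumption("A two-sided marketplace is the right model", AssumptionClass.SOLUTION),
    Assumption("We can onboard both sides of the market this year", AssumptionClass.IMPLEMENTATION),
]
```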

Hold onto this thought – I need to segue and dust off a tool I found five years ago, and some work I’ve done with clients over the last couple of years.  We’ll look at how to incorporate some of those ideas with the ones Laura shared.  And eventually, the whack-a-mole reference will make sense.

Hypotheses and Assumptions

With a client last year, I ran a workshop to elicit assumptions on our project.  We were working to develop what Harry Max calls the theory of our product.  Basically, we were working to develop the vision, the value propositions (for a two-sided market problem), the business model that would enable a credible market entry strategy given the company’s current situation, and a viable solution approach.  Essentially, product strategy and product ideation.

My assertion in that workshop was that assumptions and hypotheses, practically speaking, are risks.

Product strategy and product design form a plan of action, built upon a set of beliefs – assumptions and hypotheses.  The risk is that those beliefs are wrong, and we don’t realize it.  Materially, the only difference between an assumption and a hypothesis is that the assumption is something no one has said out loud.  It represents an implicit risk.  Once you acknowledge the assumption, you can treat it explicitly – and explicitly decide whether or not to do something about it.
In the workshop I prompted the participants (senior executives, domain experts, product stakeholders and team members) to identify their assumptions and hypotheses.  I started by presenting several hypotheses and assumptions that had been part of conversations prior to the workshop.
This helped elicit ideas from the group, but it wasn’t really enough.  What did get things moving were some prompts from Harry, such as the suggestion to complete the sentence “It will never work because…” or “The only way it will work is if…”
We were then able to elicit and organize the inputs (via affinity mapping) into a collection of testable hypotheses.

What To Do With a Pile of Hypotheses?

Now, armed with a list of hypotheses, and limited time and resources to go test them all, we were faced with the challenge of determining which risk to address first.  Remember – hypotheses and assumptions are risks.  Risks of being wrong (and not knowing it).  Risks of product failure.
I’ve historically used potential impact and likelihood of occurrence to manage risks.  I first learned to assign a score from 1 to 3 for the likelihood of the risky thing happening, and a score from 1 to 3 for how bad it would be if it did.  Multiply the two together, and you get a score from 1 to 9 (1, 2, 3, 4, 6, 9).  I learned this from PMO-trained people in the late 1990s.  Maybe their thinking has evolved since then.  There are two problems with creating a score like this.
  1. Likelihood of occurrence and potential impact are treated as equally important factors.  An unlikely but major impact risk would be “as important” as a likely risk with minimal impact.  Each particular approach to risk management will value these differently.
  2. Combining the two pieces of information into a single number discards useful information.  If I tell you one risk is a “3” and another is a “4”, you cannot know which risk is more important to you.  The “4” is something that reasonably could happen, and would be “bad.”  Would that be more important than an unlikely but company-ending risk?  Would it be more important than a very likely annoyance – one which may cause death by a thousand cuts as large volumes of support costs absorb your profits?
That’s why I’ve treated this as a two-dimensional space – visualizing a graph of likelihood vs impact.
Laura proposed my now-favorite labels for this graph, relabeling my vertical axis.  I’m shamelessly stealing this from Laura.  It seemed fitting, as Laura credits part of her presentation to Janice Fraser.  Maybe one of the ideas I’m adding to the mix will be stolen by the next person to add to our blog-post conga line.
As a team, you can reach consensus around the relative placement of all of the risks.  We then began tracking against our top 10.
As Laura would say – you start with the “uppiest and rightiest.”  What you are doing is asking the question: which risk is most likely to kill your product, damage your stock price, get your CEO fired, etc.?
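As a sketch of one way to find the “uppiest and rightiest” in code, you can run a dominance check: keep the risks that no other risk beats on both dimensions at once.  That reading is mine, not a formula from Laura’s talk, and the names and scores below are invented.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (unlikely) .. 3 (likely) -- an estimate, not a measurement
    impact: int      # 1 (annoying) .. 3 (company-ending)

def uppiest_and_rightiest(risks):
    """Return the risks that no other risk beats on both dimensions at once."""
    return [
        r for r in risks
        if not any(
            o.likelihood >= r.likelihood
            and o.impact >= r.impact
            and (o.likelihood, o.impact) != (r.likelihood, r.impact)
            for o in risks
        )
    ]

risks = [
    Risk("support costs absorb profits", likelihood=3, impact=1),
    Risk("a regulator shuts us down", likelihood=1, impact=3),
    Risk("buyers won't switch tools", likelihood=3, impact=3),
]

for r in uppiest_and_rightiest(risks):
    print(f"test first: {r.name} ({r.likelihood}, {r.impact})")
```

Notice that with the multiplied score, the first two risks would both collapse to a “3” – the dominance check keeps them distinct, which is exactly the information the single number throws away.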
There’s another dimension which makes treating risks this way difficult – uncertainty.  You don’t actually know that this risky thing is likely to happen.  You’re incept-assuming – making assumptions about your assumptions.
The easiest way to think about this is to acknowledge that your impact and likelihood “measurements” are not measurements – they are estimates.  They may be calibrated estimates, à la Hubbard’s How to Measure Anything, or they may be guesses based on which way the wind is blowing.  Treat them as estimates, and then plot either your “most likely” or your “worst case” point of view – that’s a stylistic call, I think.
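One way to keep yourself honest about this is to record each estimate as a range and decide explicitly which point of view you plot.  A minimal sketch, with invented field names and values:

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    most_likely: int  # your best guess, 1..3
    worst_case: int   # the value you would still defend at high confidence, 1..3

@dataclass
class Risk:
    name: str
    likelihood: Estimate
    impact: Estimate

def plot_point(risk, stance="most_likely"):
    """Pick which point of view to plot -- a stylistic call."""
    return (getattr(risk.likelihood, stance), getattr(risk.impact, stance))

r = Risk(
    "users reject the onboarding flow",
    likelihood=Estimate(most_likely=2, worst_case=3),
    impact=Estimate(most_likely=2, worst_case=3),
)

print(plot_point(r))                       # (2, 2)
print(plot_point(r, stance="worst_case"))  # (3, 3)
```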

Removing Risks

The reason you test a hypothesis is to reduce a risk.  I think Laura used the phrase “to de-risk” the risk.
To de-risk the risk, the first thing you need to do is remove the uncertainty you have about how bad things could really be.  You need to run an experiment.  In the example above, you would prefer to test hypothesis 7 first if you can – it is the uppiest and rightiest.  You would not be far wrong if you tested 4 or 8 first (assuming it is easier, faster, or cheaper to test one of those).  If you were to first test anything other than 4, 8, or 7, you really should have a good reason.
Once you run your experiment and determine that the risk is not a risk, go back and address the next-most-important risk.  This is a game of whack-a-mole (there’s a sketch of the loop below).  You will never run out of testable risks.  You will only eventually reach a point where the economic value of delaying your product to keep testing risks no longer makes sense.
Note that an experiment could result in multiple outcomes and next steps.  Here are a couple:
  • This risk is not as impactful as we thought.  We won’t address it with product changes; we will absorb those costs into our profitability model and revisit pricing to assure the business case still holds up.
  • This risk is every bit as likely as we feared.  Let’s determine a problem restatement (or solution design approach) where this risk no longer has the impact or likelihood it did before.  As an example – a risk of users not adopting a product with an inelegant experience may justify rethinking the approach and investing to improve the user experience.
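Putting the loop and these outcomes together, here’s a minimal sketch of the whack-a-mole cycle.  The stopping rule, the dollar figures, and the stubbed experiment are placeholders of my own; the point is the shape of the loop, not the numbers.

```python
def value_of_derisking(risk):
    # Placeholder: expected loss avoided by learning the truth. The 1..3
    # scores and the dollar multiplier are invented for illustration.
    return risk["likelihood"] * risk["impact"] * 10_000

def run_experiment(risk):
    # Placeholder for the cheapest test that could disprove the hypothesis.
    return "not_a_risk"  # or "still_risky"

def whack_moles(ranked_risks, cost_of_delay_per_test=15_000):
    for risk in ranked_risks:  # uppiest-and-rightiest first
        if value_of_derisking(risk) <= cost_of_delay_per_test:
            print("stop: delaying the product now costs more than the remaining risk")
            break
        if run_experiment(risk) == "still_risky":
            print(f"restate the problem or redesign around: {risk['name']}")
        else:
            print(f"absorb or dismiss: {risk['name']}")

whack_moles([
    {"name": "buyers won't switch tools", "likelihood": 3, "impact": 3},
    {"name": "support costs absorb profits", "likelihood": 3, "impact": 1},
    {"name": "a regulator shuts us down", "likelihood": 1, "impact": 1},
])
```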

Trying to tackle all the ways you can respond to risks (and de-risked risks) would make this overly long article ridiculously long.

Validation Board

In 2012 I came across the hypothesis board from leanstartupmachine.com.  At the time, it was free for use by consultants :)  I don’t believe it has gained widespread adoption – at least, people look at me funny when I mention it.  Maybe now, more people will know about it.
I personally never used it, because something about it felt not quite helpful enough for the problems I was helping my clients to solve.  I could never figure out why, however.  The board has many of the important components.  In hindsight, this is an indicator that the validation board is likely solving a problem I don’t have (as opposed to being a bad solution to a problem I do have).
The validation board is structured more for early-startup customer-discovery work, with three categories of hypotheses to track (customer, problem, and solution):
  • How big is the potential market?
  • How valuable is the problem we would solve?
  • Are we able to solve the problem for these people?
The tool was positioned as something to help you pivot as you discover that you have the wrong customers, or problems, or solutions.
What I need is to know which hypothesis to test next.  I think that is best done with a simple graph like the ones Laura and I use – but with her labels.

Whack Some Moles

Instead of debating about implementation details, consider assessing the risks to your product.  Determine if those risks warrant making an investment to reduce them.  Form a measurable hypothesis and validate it.
Then go after the next risk.  Until the remaining risks are no longer big enough for you to pursue.

