Bayesian Product Management: Getting Out of Your Own Head

Written by joshbendavid | Published 2019/05/22
Tech Story Tags: bayesian-statistics | product-roadmap | bayesian-pm | product-management | cognitive-bias



Reality is complex, but when humans are involved it is incomprehensibly complex. Just think about physics… Now imagine if electrons had opinions.

This poses a unique challenge when building a product because you need to deeply understand users, your market, competitors, your business needs, and your team’s ability to execute. In short, a multitude of human factors.

Oh yeah, you probably have limited time and resources, too.

So how do you sift through a sea of often conflicting data points and transform them into a coherent product roadmap that will lead you in the right direction?

Not your standard “lean” advice

You might say, that’s what lean methodology is for! In an environment of uncertainty and incomplete information, we build minimum viable products to get feedback from users quickly with little risk. If one idea fails, no worries. On to the next experiment.

I’m a strong advocate for lean methodology in many situations. But the most significant wins — the ones that give your product a major competitive advantage — are rarely the result of trial and error alone. To achieve the kind of impact that really moves the needle, you have to make strategic bets.

Strategic bets come in various shapes and sizes, but they all involve an element of risk. There’s the cost of planning, implementation, maintenance, and the less tangible but equally important cost of adding complexity.

To mitigate risk and maximize your chances of success, you need to somehow predict the future.

The Bayesian crystal ball

Bayesian reasoning is a statistical method for estimating the probability of a specific outcome and updating that estimate as new evidence arrives.

It is essentially a forcing function that helps us overcome the cognitive biases that so often distort our view of reality.

Cognitive biases are the result of our brain’s attempt to simplify data processing. They’re mental shortcuts that allow us to avoid information overload, make sense of the world, and reach decisions quickly. In other words, they drive our instincts and intuitions.

Hundreds of thousands of years ago, cognitive biases actually served our ancestors well. If there was a rustling in the bush, they didn’t want to sit around thinking, “It’s a probabilistic world, what are the chances it’s a lion?” Even if the noise wasn’t an actual lion 95% of the time, the benefit of running every single time outweighed the potential cost.


But today, cognitive biases are not very helpful for effective decision-making because in a business environment, we can’t afford to be wrong 95% of the time.

This is where Bayesian reasoning can play a critical role.

The Bayesian method requires us to ask: What is the relevant prior knowledge that we can apply to a problem? What new evidence should we consider? And how do they affect the probability of achieving our desired results?

These questions might sound banal, but explored properly they can shed light on the likelihood of success for a new product in the market, the business value of a new feature, or anything else involving a high level of uncertainty and risk.
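
To make the mechanics concrete, here is a minimal sketch of a single Bayesian update in Python. The scenario and every number in it are invented purely for illustration; plug in whatever prior knowledge and evidence you actually have.

```python
# Hypothetical example: estimating the chance that a new feature will lift retention.
# All numbers below are made up for illustration.

prior = 0.30  # P(feature lifts retention), based on prior knowledge
              # such as past launches of similar features

# New evidence: a positive signal from user interviews.
p_signal_if_true = 0.80   # P(positive interviews | feature would lift retention)
p_signal_if_false = 0.25  # P(positive interviews | feature would not lift retention)

# Bayes' theorem: P(true | signal) = P(signal | true) * P(true) / P(signal)
p_signal = p_signal_if_true * prior + p_signal_if_false * (1 - prior)
posterior = p_signal_if_true * prior / p_signal

print(f"Prior: {prior:.0%}, posterior after interviews: {posterior:.0%}")
# Prior: 30%, posterior after interviews: 58%
```

The point is not the precision of the numbers. It is that writing down a prior and asking how much each new piece of evidence should move it is exactly the discipline that counters our biases.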

For a more in-depth look at Bayesian reasoning I highly recommend reading “Predicting the Future with Bayes’ Theorem”.

Product roadmapping Bayesian style

Bayesian reasoning is especially useful for building products at scale because complex and imperfect information is thrown at you from all directions.

The framework I like to use — inspired by Intercom’s RICE model — incorporates Bayesian principles into the product roadmapping process using four basic parameters:

1. Goal
2. Impact
3. Confidence
4. Effort

This model is based on Bayesian principles, not the full Bayes’ theorem, which isn’t exactly tractable for non-statisticians. It captures much of the value of Bayesian reasoning while remaining usable for the average product manager, striking a good balance between effectiveness and simplicity.

In this model, the goal represents whatever you want to achieve, be it market adoption, user engagement, retention, revenue, etc.

Impact is a score of 1–10 and represents the amount of improvement you expect to gain toward your goal. So for example if your goal is retention and you expect this particular initiative to increase retention significantly, you might give it an impact score of 9 or 10.

Effort is also a score of 1–10, but here it represents the amount of work necessary to complete an initiative.

Impact and effort should be relative to all of your other potential initiatives. There’s no absolute formula to derive them. A good approach is to start from the extremes, benchmark what you would consider the highest impact or effort as a 10 and the lowest as a 1, and work out the other scores from there.

The confidence score is where it gets interesting. This number is a percentage that represents your level of confidence in your impact score. Theoretically this can be anywhere from 0 to 100%, but I recommend setting the minimum at 50% so you don’t waste your time thinking about anything with less certainty than a coin toss.

To calculate confidence, you’ll need a set of criteria that represents various types of evidence that you can collect to strengthen your knowledge around this particular initiative. Some examples of evidence are market research, user surveys, usability tests, user interviews, and behavioral data. By the way, intuition can also be considered evidence for your confidence score. Just be careful not to succumb to overconfidence bias.

After you’ve listed all of the types of evidence you could collect, give each of them a weighting in your confidence score so that combined, they would make you 99% confident in your impact score (I never give 100% out of principle).
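
One way to make this concrete is to keep the evidence weights in a small table and sum the weights of whatever you’ve actually collected. The categories, weights, and helper function below are hypothetical, just a sketch of how the bookkeeping might look:

```python
# Hypothetical evidence weights that together add up to 99% confidence.
# The categories and numbers are illustrative; define your own.
EVIDENCE_WEIGHTS = {
    "market research": 0.15,
    "user surveys": 0.15,
    "usability tests": 0.20,
    "user interviews": 0.20,
    "behavioral data": 0.24,
    "intuition": 0.05,
}
assert abs(sum(EVIDENCE_WEIGHTS.values()) - 0.99) < 1e-9

def confidence(collected, floor=0.50):
    """Sum the weights of the evidence collected so far.
    Anything below the 50% 'coin toss' minimum is floored to 0.50."""
    score = sum(EVIDENCE_WEIGHTS[e] for e in collected)
    return round(max(score, floor), 2)

# Early on you might only have intuition and some behavioral data:
print(confidence({"intuition", "behavioral data"}))  # 0.5
# After user interviews and usability tests, confidence climbs:
print(confidence({"intuition", "behavioral data",
                  "user interviews", "usability tests"}))  # 0.69
```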

Now comes the fun part — go out and collect your evidence!

This is an ongoing process. As you form new insights, you should always be adjusting your impact and confidence scores up or down to match the most accurate picture of reality. The continuous adjustment of scores based on prior knowledge and new evidence is the crux of Bayesian reasoning.

Once you’ve listed all of your potential initiatives and filled in your goals and impact, confidence, and effort scores, you’re ready to start prioritizing.

Calculate the return on investment of each initiative using this simple formula:

ROI = Impact * Confidence / Effort.

The idea of the formula is that impact increases ROI to the extent that you’re confident in your estimates, while effort decreases ROI.

You can then sort your initiatives by ROI for each goal.
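
As a rough sketch, the whole exercise fits in a few lines of Python. The initiatives and their scores below are invented, and the ROI property simply applies the formula above:

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    goal: str          # e.g. "retention", "revenue"
    impact: int        # 1-10, expected improvement toward the goal
    confidence: float  # 0.50-0.99, confidence in the impact score
    effort: int        # 1-10, relative amount of work

    @property
    def roi(self) -> float:
        # ROI = Impact * Confidence / Effort
        return self.impact * self.confidence / self.effort

# Invented examples, for illustration only.
backlog = [
    Initiative("Onboarding checklist", "retention", impact=8, confidence=0.70, effort=3),
    Initiative("Mobile app rewrite",   "retention", impact=9, confidence=0.55, effort=9),
    Initiative("Annual billing plan",  "revenue",   impact=6, confidence=0.85, effort=2),
]

# Rank initiatives by ROI within each goal.
for goal in sorted({i.goal for i in backlog}):
    ranked = sorted((i for i in backlog if i.goal == goal),
                    key=lambda i: i.roi, reverse=True)
    print(goal, [(i.name, round(i.roi, 2)) for i in ranked])
```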

The resulting list should not be viewed as your finalized roadmap, but rather as a pair of Bayesian glasses to see the future in a more objective light.

Talk the walk

As a product manager, it’s not enough to employ Bayesian principles on your own. Getting everyone on your team to use Bayesian language is also important because it invites collaboration.

When we make declarations in absolute terms, others may be reluctant to share potentially important and relevant information. To prevent that from happening the next time you’re having a discussion, try talking about the issues in terms of confidence levels.

For example, instead of saying “everyone does x” or “no one does x”, you could say you think it is “likely” or “unlikely” that people do x in scenario y. That shows reality is not black and white, and that you’re willing and able to dive into all the nuanced shades of gray.

And don’t assume that if you don’t come off as 100% confident, others will value your opinions less. The opposite is actually true. Most people appreciate open-mindedness.

Process is king

Even if you build a killer Bayesian model and follow it to a T, you’ll be wrong sometimes. Many times. That’s because good decisions don’t always lead to good outcomes.

Here’s a poker analogy to illustrate. Say you’re dealt a 2–7 offsuit, arguably the worst hand in poker. The odds of winning this hand are very low, and most players would fold. But say you play it, place a huge bet, and win! Does that mean you made a good decision? Does it mean you should bet big on 2–7 offsuit every time? Of course not. You’d lose a ton of money.


The point is that since you can never judge a decision by its outcome, the best course of action is to focus on your process. It might be hard at first, especially if like most product managers you’re constantly being pulled in multiple directions. But if you manage to carve out even 10% of your week for this, I guarantee you’ll see a difference in the quality of your decisions.

Give it a try

So invest the time. Allocate the resources. Build yourself a process that forces you to get out of your own head. Because a good pair of Bayesian glasses will exponentially improve your judgement and your product in the long run.

