Double Diamond Discovery - Part 4 and final

Written by NickO | Published 2020/01/19
Tech Story Tags: discovery | product-discovery | double-diamond | product | design | product-management | dual-track | double-diamond-discovery

TLDR This is the fourth and final post in my series on an approach to product discovery - Double Diamond Discovery. In the diagram above there are two steps after discovery - “Delivery” and “Learn”. Most of the learning is done in D2, where learning is cheap and fast, but shipping is only half the job. Most of our discovery work will have a handful of users and mostly qualitative feedback, so we need to set up a dashboard in the analytics tool to easily view and draw understanding from the usage behaviour.

This is the fourth and final post in my series on an approach to product discovery - Double Diamond Discovery (again, thanks to the British Design Council’s super awesome framework). So far I have posted an overall view and then expanded on Diamond 1 (D1) and Diamond 2 (D2). This final post discusses the delivery track (https://www.jpattonassociates.com/dual-track-development/) and how it feeds back into the discovery track. In the diagram above there are two steps after discovery - “Delivery” and “Learn”. Although we want to have a ‘ship to learn’ (https://www.intercom.com/blog/intercom-product-principles/) mindset, I need to emphasize that most of the learning is done in D2, where learning is cheap and fast. There are always many new things to build/fix/improve, so it’s easy to ship and forget. But shipping is only half the job.
Shipping is only half the job.
Most of our discovery work will have a handful of users and mostly qualitative feedback, so although we may have done a thorough job in discovery and are ready to ship with confidence, most of the time we are not shipping with certainty. No matter how much discovery you do, even A/B testing with statistically significant results, shipping is the only way to answer the question “Do our customers use the feature?”.
Delivering production quality software (https://svpg.com/discovery-vs-delivery/) provides feedback at scale - but only if appropriate instrumentation to capture the metrics (e.g. our key results) is implemented. We need to set up a dashboard in the analytics tool to easily view and draw understanding from the usage behaviour. I suggest spending 15 minutes every day looking at the dashboard and sharing the interesting metrics with the team.
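A minimal sketch of the kind of instrumentation meant here: emitting a structured usage event that an analytics tool can aggregate into a dashboard. The function name, event name, and fields are hypothetical, not from any particular analytics SDK.

```python
import json
import time

def track(user_id, event, **props):
    # Build a structured event record; in practice this would be sent to
    # your analytics backend rather than printed to stdout.
    record = {"user": user_id, "event": event, "ts": time.time(), **props}
    print(json.dumps(record))
    return record

# Example: a user exports a report, and we capture the format they chose.
track("alice", "report_exported", format="csv")
```

The key design choice is that events carry properties (here, `format`), so the dashboard can later slice usage by any attribute without new instrumentation.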
Do a deep dive
Do a deep dive into the usage data with the full team - design, engineering, product, analytics and data science. What is the data telling us? What can we learn? How can we take these learnings back into discovery? What is the adoption? The engagement? The retention? Are the key results moving?
And then... how does the adoption, engagement, retention vary between users?
And then... is there a segment of users that is highly engaged? What are the characteristics of that segment? Which of those characteristics makes the software more useful? If the segment is large enough, focus your future efforts on that segment (but be careful not to create custom solutions for each segment - although this is great consultancy work, it’s lousy product management).
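The adoption, engagement, retention and segment questions above can be sketched in a few lines. This is a toy example on a hypothetical in-memory event log; the user names, the `plan` attribute, and the engagement cut-off are all made up for illustration.

```python
from collections import Counter

# Hypothetical event log: (user_id, week_number), where week 0 is launch week.
events = [
    ("alice", 0), ("alice", 0), ("alice", 1),
    ("bob", 0),
    ("carol", 1), ("carol", 1),
    ("erin", 0), ("erin", 1),
]
total_users = 10  # everyone who could have used the feature
plan = {"alice": "pro", "bob": "free", "carol": "pro", "erin": "pro"}

week0 = {u for u, w in events if w == 0}
week1 = {u for u, w in events if w == 1}
adopters = week0 | week1

adoption = len(adopters) / total_users       # did they try it?
engagement = len(events) / len(adopters)     # events per adopting user
retention = len(week0 & week1) / len(week0)  # week-0 users who came back

# Segment: who is highly engaged, and what do they have in common?
usage = Counter(u for u, _ in events)
engaged = [u for u in adopters if usage[u] >= 2]  # hypothetical cut-off
profile = Counter(plan[u] for u in engaged)
```

In this toy data the highly engaged segment is all on the “pro” plan - exactly the kind of shared characteristic worth digging into before deciding where to focus future effort.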
Fiddle with the dashboard, look for patterns, look for the “hm… that seems a bit odd”. Dig in and become one with the data {-_-}.
As mentioned in step 5 of the groundwork, we need to set up a recurring meeting to review the outcome of what we have shipped and reflect on, share and capture what we have learned.
Take these insights from the data and the lessons learned back into discovery. Rinse and repeat.
Good luck.

Written by NickO | Father. Husband. Hyperactive. Creative. Enthusiastic. Loyal. Passionate. Funny.
Published by HackerNoon on 2020/01/19